ePoster
Presentation Description
Institution: Townsville University Hospital - Queensland, Australia
Purpose
Large language models (LLMs), most prominently ChatGPT, have experienced a surge in popularity among the public, prompting medical professionals to question their utility in clinical practice. Cleft palate repair is most commonly performed in infants aged 9-18 months, and carers typically have a range of questions about the procedure and draw on a variety of resources for information. This study aims to explore the extent of ChatGPT's knowledge of cleft palate repair as a resource for carer education.
Methodology
ChatGPT was asked five different questions about cleft palate repair. In addition, it was asked to provide five high-level evidence references to support its answers. Responses were evaluated for accuracy, detail, and comprehensiveness. A simulated doctor-carer consultation was then conducted to assess whether ChatGPT could offer safe and accurate information regarding cleft palate repair.
Results
ChatGPT provided generalised, superficial information about cleft palate repair that was comprehensible to non-medical individuals. Multiple references were either non-existent or inaccurate. In the simulated doctor-carer interaction, it was able to describe the general steps of cleft palate repair but unable to provide specific advice tailored to a patient's individual needs, given the variety of cleft repair types.
Conclusion
ChatGPT can outline general information about cleft palate repair for carer education. However, given its production of non-existent and inaccurate references, healthcare professionals need to exercise caution when using ChatGPT in practice. Further refinement of ChatGPT is needed before future use in patient education.
Speakers
Authors
Dr Daphne Wang, Dr Sheramya Vigneswaran, Dr Atul Ingle