ePoster
Presentation Description
Institution: Royal Prince Alfred Hospital - NSW, Australia
Introduction
ChatGPT is a large language model developed by OpenAI, an artificial intelligence (AI) laboratory co-founded by Elon Musk. The role of AI programs like ChatGPT in healthcare and health education remains largely undefined, with considerable untapped potential. Passing the Generic Surgical Sciences Examination (GSSE) is a pre-requisite for all junior doctors in Australia wishing to pursue formalised surgical training with the Royal Australasian College of Surgeons (RACS). AI systems such as ChatGPT may be of great use to candidates preparing to sit the GSSE by acting as a free adjunct study tool alongside more traditional resources, including textbooks and subscription services and courses. The present study aims to determine how reliably ChatGPT answers questions tailored to help GSSE candidates study and prepare for the exam.
Methods
Randomly selected sample GSSE questions from each of the anatomy, physiology and pathology subsections were taken from the widely distributed study resource 'The Bank'. Questions from all question subtypes (Type A, Type B and Type X) were included. Two authors (LB and TW) independently asked ChatGPT (Version 3.0), using identical proformas, to answer the same 120 anatomy, 100 physiology and 100 pathology questions. Responses were collated, and any disparities between the two sets of responses were resolved by consensus.
Results
ChatGPT answered 65.4% of anatomy questions correctly (all Type X). In the physiology section, it correctly answered 71.4% of Type A, 55% of Type B and 72.4% of Type X questions. In the pathology section, it answered 40% of Type A and 75% of Type X questions correctly.
Conclusion
In our study, ChatGPT failed to demonstrate a satisfactory degree of accuracy to be reliably used as a study tool for GSSE preparation.
Speakers
Authors
Dr Luca Borruso - , Dr Thomas Warburton - , Dr Jack Loa -