ePoster
Presentation Description
Institution: Townsville University Hospital - Queensland, Australia
Purpose
The increasing popularity of large language models (LLMs) such as ChatGPT has healthcare experts questioning their utility in medical practice. Australia has one of the world's highest melanoma rates, and this prevalence makes early detection and urgent management pivotal. This study aims to explore the extent of ChatGPT's knowledge of melanoma identification and management.
Methodology
Five questions relevant to melanoma detection and management were posed to ChatGPT. In addition, ChatGPT was asked to provide five high-level evidence references to support its answers. Responses were evaluated on their accuracy in aligning with current Australian melanoma guidelines. A simulated doctor–patient consultation was also conducted to assess whether ChatGPT could offer safe and accurate advice regarding melanoma identification and management.
Results
ChatGPT provided a relatively superficial level of information on melanoma identification and management that is easily comprehensible to non-medical individuals. In the simulated doctor–patient interaction, it was able to offer advice on identifying a melanoma based on appearance and recommended further consultation with a healthcare professional. References provided by ChatGPT were relevant to Australia only when prompted, and were either non-existent or inaccurate.
Conclusion
ChatGPT has the potential to provide the public with a general overview of melanoma identification and management. Due to the inaccuracy of its information and its production of non-existent references, healthcare professionals need to exercise caution when using it in practice. Further refinement of ChatGPT is needed before future utilisation in medicine.
Speakers
Authors
Dr Daphne Wang - , Dr Sheramya Vigneswaran - , Dr Atul Ingle -