Laura Brennan

et al.

Introduction

Ensuring that patients are well informed when making health decisions has become increasingly pressing, particularly in light of the resource constraints faced by the NHS. The emergence of artificial intelligence (AI) and natural language processing technologies, such as ChatGPT, offers potential solutions for delivering accessible patient information. This study explores the application of ChatGPT as a patient information tool, focusing on patients undergoing Functional Endoscopic Sinus Surgery (FESS) in the UK.

Methods

To evaluate the effectiveness of ChatGPT in providing patient information, the authors devised three common patient queries relating to FESS. Each question was presented to both ChatGPT and three of the authors (with responses validated by a supervising Consultant) to generate a 150-word response. Twenty qualified clinicians, blinded to the source of each response, then evaluated each response using a 5-point Likert scale questionnaire.

Results

Comparing mean scores between author and ChatGPT responses, there was no statistically significant difference in Accuracy, Completeness, Clarity or Appropriateness for any of the three questions asked. When explaining FESS, ChatGPT responses scored ≥50% on accuracy, clarity and appropriateness. ChatGPT responses scored lower in all areas when asked to describe the alternatives to surgery. When explaining the risks of surgery, ChatGPT's responses scored higher on average.

Conclusions

This study establishes a foundational assessment of ChatGPT's potential utility as a source of patient information within UK ENT departments. Notably, it finds no significant disparities between ChatGPT-generated responses and those crafted by medical experts in the evaluated domains of accuracy, completeness, clarity and appropriateness.
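The abstract does not name the statistical test used to compare the Likert ratings. As a hedged illustration only, the sketch below applies a two-sided Mann-Whitney U test with a normal approximation, one common choice for comparing ordinal Likert data between two groups; all scores are invented and do not reflect the study's data.

```python
# Hedged sketch, not the authors' analysis: the test choice and all ratings
# below are assumptions made for illustration.
from statistics import NormalDist

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test; tied values share their average rank."""
    combined = sorted((value, idx) for idx, value in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1                      # extend over a run of tied values
        avg_rank = (i + j) / 2 + 1      # 1-based average rank of the tie run
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(a), len(b)
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2    # U statistic for sample a
    mu = n1 * n2 / 2                            # mean of U under H0
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (u1 - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided p-value
    return u1, p

# Invented 5-point Likert "Accuracy" ratings from 20 blinded clinicians.
chatgpt_scores = [4, 5, 4, 3, 4, 5, 4, 4, 3, 5, 4, 4, 5, 3, 4, 4, 5, 4, 3, 4]
author_scores  = [4, 4, 5, 4, 3, 5, 4, 4, 5, 4, 3, 4, 4, 5, 4, 3, 4, 4, 5, 3]

u, p = mann_whitney_u(chatgpt_scores, author_scores)
print(f"U = {u}, p = {p:.3f}")  # p > 0.05 here: no significant difference
```

With these invented, identically distributed samples the test unsurprisingly finds no significant difference, mirroring the pattern the abstract reports for all three questions.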