Both Patients and Plastic Surgeons Prefer Artificial Intelligence–Generated Microsurgical Information

Abstract

Background With the growing relevance of artificial intelligence (AI)-based patient-facing information, microsurgery-specific online information provided by professional organizations was compared with that produced by ChatGPT (Chat Generative Pre-Trained Transformer) and assessed for accuracy, comprehensiveness, clarity, and readability.

Methods Six plastic and reconstructive surgeons blindly assessed responses to 10 microsurgery-related medical questions written either by the American Society of Reconstructive Microsurgery (ASRM) or by ChatGPT, rating each for accuracy, comprehensiveness, and clarity. Surgeons were also asked to choose which source provided the overall highest-quality microsurgical patient-facing information. Additionally, 30 individuals with no medical background (ages 18–81, mean 49.8 years) were asked to indicate a preference when blindly comparing the materials. Readability scores were calculated and analyzed using the following seven readability formulas: Flesch–Kincaid Grade Level, Flesch Reading Ease, Gunning Fog Index, Simple Measure of Gobbledygook (SMOG) Index, Coleman–Liau Index, Linsear Write Formula, and Automated Readability Index. Statistical analysis of the microsurgery-specific online sources was conducted using paired t-tests.
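For context, the Flesch–Kincaid Grade Level, one of the seven formulas, maps average sentence length and syllables per word onto a U.S. school grade: FKGL = 0.39 × (total words / total sentences) + 11.8 × (total syllables / total words) − 15.59. Below is a minimal, illustrative Python sketch of how such a per-question readability comparison could be run, assuming the open-source textstat and SciPy libraries; the placeholder texts and variable names are hypothetical, and this is not the study's actual analysis code.

```python
# Illustrative sketch (not the authors' code): score two parallel sets of
# patient-facing answers with the seven readability formulas named in Methods,
# then compare the sources with a paired t-test per formula.
import textstat
from scipy import stats

FORMULAS = {
    "Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade,
    "Flesch Reading Ease": textstat.flesch_reading_ease,
    "Gunning Fog Index": textstat.gunning_fog,
    "SMOG Index": textstat.smog_index,
    "Coleman-Liau Index": textstat.coleman_liau_index,
    "Linsear Write Formula": textstat.linsear_write_formula,
    "Automated Readability Index": textstat.automated_readability_index,
}

def score(texts):
    """Return {formula name: [score for each answer text]}."""
    return {name: [fn(t) for t in texts] for name, fn in FORMULAS.items()}

# Placeholder data: in the study there would be 10 answers per source,
# paired by question.
asrm_answers = ["...ASRM answer to question 1...", "...answer to question 2..."]
chatgpt_answers = ["...ChatGPT answer to question 1...", "...answer to question 2..."]

asrm_scores, gpt_scores = score(asrm_answers), score(chatgpt_answers)
for name in FORMULAS:
    # Paired t-test: scores are matched question-by-question across sources.
    t, p = stats.ttest_rel(asrm_scores[name], gpt_scores[name])
    print(f"{name}: t = {t:.2f}, p = {p:.3f}")
```

A paired test is the natural choice here because each ASRM answer and each ChatGPT answer address the same underlying question, so the scores form matched pairs rather than independent samples.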

Results Statistically significant differences in comprehensiveness and clarity were observed in favor of ChatGPT. Surgeons blindly chose ChatGPT as the source providing the overall highest-quality microsurgical patient-facing information 70.7% of the time, and nonmedical individuals selected the AI-generated materials 55.9% of the time. Neither the ChatGPT- nor the ASRM-generated materials were found to contain inaccuracies. Readability scores for both ChatGPT and ASRM materials exceeded recommended levels for patient proficiency across all seven readability formulas, with the AI-based material scoring as more complex.

Conclusion AI-generated patient-facing materials were preferred by surgeons for comprehensiveness and clarity when blindly compared with online material provided by ASRM, and the AI-generated material studied was not found to contain inaccuracies. Both surgeons and nonmedical individuals consistently indicated an overall preference for the AI-generated material. Readability analysis suggested that materials from both ChatGPT and ASRM surpassed recommended reading levels across all seven readability formulas.

Keywords artificial intelligence - accuracy - comprehensiveness - clarity - readability - quality - online resources - American Society of Reconstructive Microsurgery

Author Contributions

C.E.B., A.Z.F., and D.C.W. led conception and design, collection and assembly of data, data analysis and interpretation, and manuscript writing for this project. All authors assisted with administrative support, provision of study materials or patients, and final review and approval of the manuscript.


*These authors contributed equally to this work.


Publication History

Received: 09 September 2023

Accepted: 15 February 2024

Accepted Manuscript online:
21 February 2024

Article published online:
26 March 2024

© 2024. Thieme. All rights reserved.

Thieme Medical Publishers, Inc.
333 Seventh Avenue, 18th Floor, New York, NY 10001, USA
