Comment on: Artificial intelligence chatbot and Academy Preferred Practice Pattern® Guidelines on cataract and glaucoma

The possibility of bias in the chatbot's recommendations is a topic that is not frequently covered in the literature. AI chatbot algorithms are frequently trained on biased or insufficient data, which can lead to patients receiving unfair or erroneous advice. This is especially problematic in domains such as ophthalmology, where accurate and trustworthy counsel is essential to patient outcomes.

Furthermore, as the title of the study by Mihalache et al. notes, there are no practice recommendations in ophthalmology that are expressly designed for AI chatbots, indicating a gap in the literature that needs attention.1 To guarantee that patients receive safe and adequate care, explicit and established rules are needed on the use of AI chatbots in the diagnosis and treatment of illnesses such as glaucoma and cataract.

Moreover, the lack of an abstract in the study hinders readers' ability to rapidly comprehend the main conclusions and research implications. An abstract offers a succinct synopsis of a study's goals, procedures, findings, and conclusions; its absence can make it more difficult for scientists and medical professionals to judge the study's applicability and validity. Finally, because ChatGPT relies entirely on human user input, a code of conduct governing human user behavior must be established.2

1. Mihalache A, Huang RS, Popovic MM, Muni RH. Artificial intelligence chatbot and Academy Preferred Practice Pattern® Guidelines on cataract and glaucoma. J Cataract Refract Surg 2024. doi:10.1097/j.jcrs.0000000000001317
2. Kleebayoon A, Wiwanitkit V. ChatGPT, critical thing and ethical practice. Clin Chem Lab Med 2023;61:e221
