Performance of ChatGPT on the MCAT: The Road to Personalized and Equitable Premedical Learning

Abstract

Despite an increasingly diverse population, the field of medicine faces an unmet demand for undergraduates from underrepresented racial and ethnic minority (URM) backgrounds, a shortfall driven by the financial hurdles and insufficient educational support URM students face on the premedical journey. With the capacity to provide highly individualized, accessible, and low- or no-cost dynamic instruction, large language models (LLMs) and their chatbot derivatives are poised to change this dynamic and, in turn, help shape a more diverse future physician workforce. While studies have established the passing performance and insightful explanations of one of the most accurate LLM-powered chatbots to date, Chat Generative Pre-trained Transformer (ChatGPT), on standardized exams such as medical licensing exams, the role of ChatGPT in premedical education remains unknown. We evaluated the performance of ChatGPT on the Medical College Admission Test (MCAT), a standardized 230-question multiple-choice exam that assesses a broad range of competencies in the natural, physical, social, and behavioral sciences as well as critical analysis and reasoning. Depending on its visual item response strategy, ChatGPT performed at or above the median performance of 276,779 student test takers on the MCAT. Additionally, ChatGPT-generated answers demonstrated both a high level of agreement with the official answer key and insight in their explanations. Based on these promising results, we anticipate two primary applications of ChatGPT and future LLM iterations in premedical education. First, such models could provide free or low-cost access to personalized, insightful explanations of MCAT competency-related questions for students of all socioeconomic backgrounds, including those from URM groups. Second, they could be used by test-makers to generate additional test questions or by premedical students for targeted preparation. These applications of ChatGPT in premedical education could be an invaluable, innovative path forward to increase diversity and improve equity among premedical students.

Competing Interest Statement

The authors have declared no competing interest.

Funding Statement

This study did not receive any funding.

Author Declarations

I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.

Yes

I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.

Yes

I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).

Yes

I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.

Yes
