Can Electronic care planning using AI Summarization Yield equal Documentation Quality? (EASY eDocQ)

Abstract

Importance. Data, information, and knowledge in health care have expanded exponentially over the last 50 years, leading to significant challenges with information overload and complex, fragmented care plans. Generative AI has the potential to summarize and integrate data, information, knowledge, and wisdom rapidly, enabling efficient care planning. Objective. Our objective was to understand the value of AI-generated summarization through short synopses at the care transition from hospital to first outpatient visit. Design. Using a de-identified data set of recently hospitalized patients with multiple chronic illnesses, we used the data-information-knowledge-wisdom framework to train clinicians and an open-source generative AI large language model system to produce summarized patient assessments after hospitalization. Both sets of synopses were judged by blinded clinician judges in random order. Participants. De-identified patients had multiple chronic conditions and a recent hospitalization. Raters were physicians at various levels of training. Main outcome. Accuracy, succinctness, synthesis, and usefulness of synopses on a standardized scale, with scores > 80% indicating success. Results. AI and clinicians summarized 80 patients with 10% overlap. In blinded trials, AI synopses were rated as useful 75% of the time versus 76% for human-generated ones. AI had lower succinctness ratings for the Data synopsis task (55-67%) than humans (84-86%). For accuracy and synthesis, AI had near-equal or better scores in the other domains (AI: 72%-79%; humans: 68%-84%), with AI's best scores in Wisdom. Interrater agreement was moderate, indicating differing rater preferences for synopsis content, and did not vary between AI- and human-created synopses. Discussion. AI-created synopses were nearly equivalent to human-created ones; they were slightly longer and did not always synthesize individual data elements as well as humans did.
Given the rapid introduction of these tools into clinical care, our framework and evaluation protocol provide strong benchmarking capabilities for developers and implementers.

Competing Interest Statement

The authors have declared no competing interest.

Funding Statement

This study did not receive any funding.

Author Declarations

I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.

Yes

The details of the IRB/oversight body that provided approval or exemption for the research described are given below:

The Oregon Health & Science University Institutional Review Board waived ethical approval for this work.

I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.

Yes

I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).

Yes

I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.

Yes

Data Availability

All data produced in the present study are available upon reasonable request to the authors. The underlying data set is available on PhysioNet:

https://physionet.org/content/mimiciii/1.4/
