ChatGPT and academic publishing: Potential and perils

Dear Editor,

ChatGPT (Chat Generative Pre-trained Transformer) is a recently launched for-profit artificial intelligence (AI) technology developed by OpenAI Incorporated (California, United States). At this watershed moment of technological advancement, the medical community is poised to be significantly impacted and transformed by the AI revolution. Potential applications of AI in medicine include disease diagnosis, risk assessment, precision medicine, drug discovery, electronic health record maintenance, robotic surgery and academic publishing.

In the field of academic publishing, ChatGPT has already been utilised to identify literature gaps, summarise the literature, draft and edit manuscripts (Supplementary Material), prepare patient information sheets and perform statistical analyses.1 ChatGPT can assist users in various aspects of manuscript creation, including brainstorming ideas, outlining and structuring manuscripts, fact-checking, providing language assistance and feedback, and helping overcome writer's block (Supplementary Material). Potential future applications include designing clinical trials, writing complete manuscripts, conducting peer review and aiding editorial decisions.
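
By way of illustration, the sketch below shows how an author might request a plain-language summary of an abstract through OpenAI's public programming interface. It is a minimal example only, assuming the openai Python client (version 1.x), a valid API key in the environment and an illustrative model name; any output produced in this way still requires the verification and disclosure discussed later in this letter.

# Minimal sketch: asking ChatGPT for a plain-language summary of an abstract.
# Assumptions: the `openai` Python package (v1.x) is installed and the
# OPENAI_API_KEY environment variable is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

abstract = "Paste the study abstract here."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are assisting with medical writing. Summarise accurately and flag uncertainty."},
        {"role": "user",
         "content": f"Write a 100-word plain-language summary of this abstract:\n\n{abstract}"},
    ],
)

draft_summary = response.choices[0].message.content
# The authors must verify all facts and references in the draft and declare the AI assistance.
print(draft_summary)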

Although the potentials are immense, the technology is not without its perils [Table 1]. While ChatGPT has demonstrated impressive abilities in generating fluent and seemingly rational text, it is important to remember that it relies on the data it was trained on and may not always produce accurate or reliable information. It is improbable that the programme has access to all available literature in the field; incorrect and fabricated references have consequently been a recurring issue in AI-generated manuscripts.2 Moreover, the algorithm lacks the ability to discern the credibility of the sources it uses to generate responses. In the context of medical publishing, this could result in the spread of misinformation.3 Therefore, AI-generated responses should be carefully evaluated before relying on them for medical decision-making or the dissemination of medical information.

Table 1: Potentials and Perils of ChatGPT in Medical Publishing

Potentials of ChatGPT
- Preparation of manuscripts
- Generation of literature reviews
- Provision of patient education material
- Improvement of manuscript language and grammar
- Enhancement of appropriate medical terminology
- Performance of statistical analyses
- Citation formatting
- Aiding the peer review process and editorial decisions
- Generation of plain-language summaries of scientific writing

Perils of ChatGPT
- Misinformation due to inaccuracies and biases
- Inability to discern the credibility of source data
- Plagiarism due to inadvertent replication of existing work
- Limited awareness of developments beyond 2021
- Compromised data privacy
- Challenge of determining the rightful owners of the content generated
- Lack of accountability
- Decline in expertise and critical thinking on the part of researchers

It is apparent that the emergence of AI technology has presented ethical dilemmas that the medical community is largely unprepared to handle. The crediting of ChatGPT as an author of publications has sparked heated debate within the medical community.4 There is general consensus that ChatGPT cannot be considered a legitimate author, as it lacks accountability for the content it generates.5 Completely banning AI-based algorithms from academic publishing is neither feasible nor sensible, as it would deprive the medical community of valuable and constructive tools. Furthermore, there are currently no reliable tools to differentiate between AI-generated and human-generated text, which makes it difficult to exclude AI-generated text from medical writing altogether. It is therefore essential to establish clear ground rules for the use of AI in academic publishing so that the benefits of AI language models are harnessed responsibly and ethically. Transparency, integrity and accountability on the part of authors are vital. One way these aims can be achieved is by obtaining author declarations on the extent to which AI was used in manuscript preparation. Rigorous human oversight at every step of publication is paramount to safeguard the medical literature from the errors, biases and inaccuracies that AI-generated information can introduce.

Intellectual property rights pose another challenge, as it is currently difficult to define who has ownership of AI-generated text. OpenAI's terms of use specify that OpenAI assigns to users all of "its right, title and interest in and to output".6 However, the terms also place on users the responsibility of ensuring that their use of ChatGPT's responses complies with relevant laws and regulations. Due diligence on the part of authors is critical to determine the ownership of AI-authored text, to avoid potential legal disputes and to ensure that the creators and authors of the source data are appropriately compensated. While preparing manuscripts with ChatGPT, authors may end up feeding their own data to the AI algorithm. As per OpenAI's privacy policy, certain data from users' interactions with the algorithm are retained in its database,7 and these may be at risk of being accessed or used without permission. Researchers should therefore take appropriate measures to protect the privacy and confidentiality of their data, and carefully consider the potential risks and benefits of sharing their data in any form. Patient consent forms should explicitly include permission for the sharing of confidential data across AI platforms.

AI has thus shown immense potential in the field of medical writing and is expected to be integrated into the publishing system in the years to come. However, the existing technology is limited by biases, misinformation and inaccuracies. While ChatGPT can serve as a valuable complement to one's work, it is essential to maintain the central role of human expertise and critical thinking in the formulation of manuscripts. In order to safeguard the sanctity of academic publishing, ethical guidelines to govern the use of these technologies need to be established.
