AI, Parenting, and Child Development

To most people, generative artificial intelligence (AI) is a hyperobject: something so complex and far-reaching that it is hard to wrap our minds around. We may think of the mutinous HAL in 2001: A Space Odyssey or the helpful Jarvis in Iron Man. We may hope that AI will learn to answer all our emails and teach our students—or worry that it will spread misinformation and disrupt our society. Two articles in this issue of the Journal of Developmental and Behavioral Pediatrics help make sense of what AI might mean for parenting and for developmental-behavioral advice.

It is worth noting that AI feels particularly disruptive right now because of the rapid growth in AI model capability over the past 2 to 3 years. Older AI models took years of training to play games such as chess or Go, far longer than our pediatric patients take to learn a new skill. Compared with the remarkable process by which children learn from their physical and social worlds, AI models seemed like simple trial-and-error pattern recognition.

But then came large language models (LLMs) such as GPT-4, trained on trillions of data points scraped from the internet. These LLMs rapidly learned from themselves and are now estimated to solve problems at the grade level of college students.1 Since OpenAI made the Chat Generative Pre-trained Transformer (ChatGPT) application available for public use in November 2022, any human with a browser can ask it to write a love letter, compose misinformation, or provide medical advice.

Building on a growing literature examining ChatGPT's medical knowledge, Kim et al2 analyzed ChatGPT's responses to almost 100 developmental and behavioral case scenarios drawn from the Journal of Developmental and Behavioral Pediatrics and other sources. Compared with the judgments of an expert panel of clinicians, ChatGPT's diagnosis was accurate two-thirds of the time, and its treatment plans were mostly correct. Interestingly, its treatment plans were often excessive, suggesting that it throws the “kitchen sink” of what it finds online at a problem. As clinicians know, this is not ideal—it can exhaust or overwhelm caregivers. Of additional concern is whether bias in LLM training data will lead to biased recommendations based on a patient's race or ethnicity. And if unproven treatments make it into LLM training data, how will they be prevented from appearing in ChatGPT's recommendations?

ChatGPT is only 1 AI-enabled application, but as Kucirkova and Zuckerman3 point out, there will be a multitude of AI-enabled medical devices, toys, and educational products that families will need to navigate in the coming years. They propose that clinicians and caregivers use the POWER rubric, developed for teachers to use when deciding whether to adopt educational technology. This rubric encourages use of evidence-based products centered around goals of child progress and well-being. When the rubric is applied to commercialized AI-enabled technology outside of schools, I would suggest that the “R” stand for “regulation” rather than companies deciding what counts as “responsible.” Without guardrails, AI will only widen the power differential that exists between technology companies and families, who struggle to understand how AI works, which products contain it, and what its risks are. Children will use AI products in unpredictable ways, which will increase the likelihood of unintended consequences—like those already observed with social media algorithms.4 Will a sponsored AI chatbot convince my patients to vape?

As with any new technology harnessed for commercial purposes, it is unclear how well children's and families' unique needs are being considered in the design and testing of these powerful tools. Therefore, AI regulation will need a governance structure that includes experts who understand children and families—including caregivers and youth themselves. Accountability and safety testing will need to take place with an eye to what children might experience and what biases marginalized families will encounter.

I asked Bing's AI chatbot to write a concluding paragraph for this commentary “about AI and pediatrics,” and it wrote a vague summary about risks and benefits—which must be the most common information about this topic on the internet. It made me wonder: why would we want to use AI to keep parroting ourselves?

Instead, I will conclude with a framing that does not dominate discourse on the internet but that our field knows well: relational health. We cannot be replaced by a chatbot that provides medical advice or tells bedtime stories, because AI has no coherence or mind-mindedness about its users. It is pretty good at facts, not good at meaning-making. That is what we do as clinicians when we remember a child's unique fears and superpowers; notice how hard they are working at a challenging task; or act as a holding environment for a stressed parent. AI can certainly be our helper, but there always needs to be a human in the loop.

1. The AI Dilemma. Center for Humane Technology; 2023. Available at: https://www.humanetech.com/podcast/the-ai-dilemma. Accessed December 28, 2023.
2. Kim R, Margolis A, Barile J, et al. Challenging the chatbot: an assessment of ChatGPT's diagnoses and recommendations for DBP case studies. J Dev Behav Pediatr. 2024;45:e8-e13.
3. Kucirkova NI, Zuckerman B. Generative AI for children's digital health: clinician advice. J Dev Behav Pediatr. 2024;45:e86-e87.
4. Murthy V. Social Media and Youth Mental Health: The U.S. Surgeon General's Advisory; 2023. Available at: https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf. Accessed December 28, 2023.
