“I’m a new mom. Do I need to worry if my 1-week-old is sleeping more than three hours straight?” I typed into my phone.
I received the following response from ChatGPT: “In the first few weeks, newborns often have irregular sleep patterns, and it's quite normal for them to sleep longer stretches at times. However, it's important to ensure your baby is getting enough nourishment. If your 1-week-old is sleeping longer than three hours consistently, it's a good idea to wake them up for feedings during the day to ensure they are getting the nutrition they need. If you have any concerns about your baby's sleep or feeding habits, don't hesitate to reach out to your pediatrician for guidance and reassurance.”
Not a bad answer. I’ve heard my nurses give the same advice over the phone, and I’ve offered similar guidance to new parents myself.
The rapid development of artificial intelligence (AI) such as generative language models (e.g., ChatGPT) has many of us questioning how our work may change.
Nearly all of the major electronic health records (EHRs) are incorporating AI into their programs. AI will be able to help us write our subjective, objective, assessment and plan (SOAP) notes as it learns from patient conversations what belongs in the note and what does not. This should help improve our EHR experience and could make writing notes in the evening (“pajama time”) a thing of the past!
The ability to scan a patient’s medical record in a blink could help ensure that we do not miss an obscure result. For example, if an adolescent’s records include newborn pharmacogenetic testing results that indicated the patient metabolized sertraline differently, we would know to adjust the dosing for optimal response now that we are treating the patient for depression. So many possibilities exist for streamlining and augmenting our work.
Despite these benefits, we should approach AI with caution, using lessons learned from social media.
Purposefully designed social media algorithms keep us attached to our devices so we are exposed to ads or other sources of revenue for app creators. The monetization of the time we spend using a social media app has been called the “attention economy.”
Algorithms that keep children on their screens may purposely or inadvertently expose them to harmful content that leads to unhealthy behaviors; the digital ecosystem was not designed with children in mind.
The AAP Center of Excellence on Social Media and Youth Mental Health recognizes the need to give parents and caregivers, educators, clinicians and teens themselves the tools to navigate social media in a way that supports children’s healthy mental development, leveraging the benefits and reducing the risks of social media. Resources like a searchable Q&A portal, tip sheets and other materials are available at www.aap.org/socialmedia.
While the Academy works to provide young people meaningful guidance to lead healthy lives online, the onus cannot fall entirely on families. We must use legislation to address this unregulated digital environment, which has allowed industry practices that are incompatible with what young people need to thrive. For this reason, the AAP is advocating vigorously for Congress to pass new privacy, design and safety protections for young people online, such as the bipartisan Kids Online Safety Act and the Children and Teens’ Online Privacy Protection Act. Bills like these would ensure technology companies build a healthier digital ecosystem for children and teens.
To elevate and advance our shared priorities, we also are working with federal officials, like the surgeon general, who share our concern about the impact of digital media on youth mental health.
With AI, the focus is slightly different but just as concerning. The goal is for users to prefer one AI “entity” over another through an emotional connection, a dynamic that has been called the “relationship economy.” The AI entity becomes something like a friend or close acquaintance.
You may have experienced this already if you have a smartwatch or smart device that uses Siri, Google Assistant or Alexa. We personify these devices by giving them names, and they learn about our routines and preferences. In addition, features in apps like Instagram and Facebook allow users to create an AI “friend.”
So how can we guide the development of AI so that we are not looking back in 10 years wondering what happened to children and adolescents who are using this technology?
Designing AI “friends” and resources that are safe for children will require a concerted effort from legislators, developers, corporations, parents, pediatric experts and others.
AI absolutely will change how we interact with technology. The Academy is convening an AI workgroup of internal leaders with expertise across key domains like digital media and clinical information technology. The group will identify steps we can take to articulate what young people will need in this rapidly developing area and will be prepared to weigh in with policymakers and other decision-makers influencing the trajectory of AI development.
By speaking up for the needs of children and adolescents, we can play a crucial role in advocating for changes that better society. I look forward to advocating with AAP members to improve the safety of AI for our patients and our profession.
As always, thank you for continuing to keep children as the center of all that you do and for your hard work taking care of families in your community.