The widespread availability of artificial intelligence (AI) has introduced not only extensive benefits to society but also new dangers. One disturbing consequence of AI usage is the production and dissemination of virtual child sexual abuse material (VCSAM), which poses imminent risks to pediatric and adolescent populations. This discussion aims to shed light on the dangers and implications of VCSAM for pediatric populations, along with the cautionary measures needed to combat them.

VCSAM, also known as AI-driven CSAM or “deepfakes,” encompasses 2 main subcategories: AI-generated CSAM refers to entirely new sexual images of fictional children, whereas AI-manipulated CSAM alters images and videos of real children into sexually explicit content. In 2022, the National Center for Missing & Exploited Children’s CyberTipline received ∼32 million suspected reports of online CSAM alone.1 Moreover, studies have reported significant upticks in the amount of circulating VCSAM, and researchers foresee cases rising dramatically in the coming months as AI becomes increasingly integrated into society.1,2

Recent AI advancements allow anyone with a digital device to type simple descriptions and output photorealistic new or modified CSAM content in seconds, often bypassing filters and detection systems. In July 2023, a study from Carnegie Mellon University and the Center for AI Safety showed that anyone can circumvent AI safety measures and generate a limitless number of such images.3 Predators also adapt rapidly and go to great lengths to evade VCSAM safeguards, making it increasingly complex for law enforcement and technology companies to combat this issue.

Although AI-generated CSAM may be considered “fake,” its potential dangers are very real; any virtual content that introduces the sexualization of minors compromises children in the “real world.” Children can experience long-term psychological consequences, such as depression, posttraumatic stress disorder, and a diminished sense of self-worth, from accidental exposure to AI-generated CSAM.4 Additionally, the photorealistic nature of AI-generated CSAM complicates efforts to identify and protect real victims of child sexual abuse, because law enforcement may be unable to distinguish between real and AI-generated images. This poses obstacles to finding and prosecuting those who consume and distribute such content.

Accordingly, technology companies, including both AI and social media companies, have formed an alliance known as the Tech Coalition, which maintains a classification system that categorizes suspected CSAM by the victim’s apparent age and the nature of the depicted acts. Stanford AI researchers recommend that these classifications also indicate whether images were AI-generated, to help law enforcement identify real victims among millions of fake images.1 They also urge AI companies to train their models not to create CSAM and to embed watermarks in AI-generated images to aid the justice system in finding and prosecuting predators.

Predators can use AI to manipulate publicly available images of minors, typically taken from social media, into sexual content that bears a realistic likeness to the victim. The AI-manipulated content is then circulated on public forums and the dark Web, where victims can face significant challenges in stopping its continual sharing or removing it from the Internet. Malicious actors can also send AI-manipulated content directly to victims for sextortion or harassment. Sextortion is the act of coercing victims by threatening to publicly share their sexually explicit images. Predators may use AI-manipulated CSAM to extort victims for money or compliance with other demands (eg, sending real sexual content), and/or to harass children for their own pleasure. In June 2023, the Federal Bureau of Investigation (FBI) reported a significant increase in sextortion and harassment victims whose AI-manipulated CSAM was created from content posted on their social media and elsewhere on the Internet.2 Such cases are known to cause depression, hopelessness, shame, and, in some instances, suicide among victims, especially if they are not adequately supported.4 As such, VCSAM is an urgent issue that requires action to stop existing predators and to implement preventive measures that protect future youth.

Given the relatively recent and dramatic increase in AI availability and its usage in everyday life, the legal space regarding VCSAM, particularly AI-generated CSAM, is currently a gray area. In 2003, the US Congress passed a law banning “computer-generated child pornography.” At that time, however, creating such materials was exorbitantly expensive and time-consuming. Although the law did attempt to account for future methods of generating CSAM, it must be updated for the current technological age.

As such, a bipartisan inquiry was submitted to the US Department of Justice on July 25, 2023, regarding AI-generated CSAM.5 The senators’ letter served as a call to action for the US Department of Justice to identify current efforts and legal hurdles in prosecuting such cases, along with its future plans to prevent further creation and distribution of AI-generated CSAM. Law enforcement, lawmakers, and technology companies must work together to protect children from exploitation and to hold perpetrators accountable under the law.

A recent legal case involving AI-manipulated CSAM occurred in April 2023, when a 61-year-old Quebec man was sentenced to >3 years in prison for using AI to produce synthetic CSAM videos, including by superimposing the faces of real children onto the bodies of other people.6 This was the first Canadian case involving deepfakes of child sexual exploitation, and it emphasizes how crucial it is for governments around the world to be vigilant about regulating AI-manipulated content.

As technology increasingly permeates children’s lives, caregivers and pediatricians play a vital role in safeguarding them from the dangers of VCSAM. Parents should be aware of the basics of digital safety so they can warn their children of potential risks, promote safe Internet usage, and encourage them to openly communicate any concerns or uncomfortable experiences they may encounter online. Regularly monitoring children’s online activity, utilizing parental controls on digital devices, and applying privacy settings on social media accounts are all recommended safeguards to prevent accidental exposure to inappropriate sites or content.2 Additionally, parents should consider limiting their own posting of images of their children, because these images can be easily manipulated. In the face of this novel threat, it is imperative for pediatricians and parents to unite in their commitment to preserving the safety of the next generation, ensuring that the digital world remains a space of wonder and learning, not exploitation.

As the first point of contact, pediatricians have the opportunity to detect early signs of potential exploitation and exposure to harmful content. During checkups, pediatricians can use psychosocial risk tools such as the Home, Education, Activities, Drug Use and Abuse, Sexual Behavior, Suicidality, and Depression (HEADSS) assessment7 to detect potential issues in the child’s digital activities. Additionally, pediatricians should provide appropriate guidance and resources to caregivers regarding VCSAM, such as the American Academy of Pediatrics’ handout on Internet safety.8 If a caregiver or patient expresses any concerns, pediatricians should instruct them to immediately save any evidence and cease further communication with the perpetrator, and should strongly advise them to report the incident to the FBI’s hotline (1-800-CALL-FBI) and/or the National Center for Missing & Exploited Children’s CyberTipline hotline (1-800-THE-LOST).

Because the current legal space regarding VCSAM is unclear, this article is a call to action for pediatricians to work with legal experts and advocate for robust legislation and regulations that address the dangers of AI-driven CSAM. Within these efforts, creating and enforcing strict penalties for those who produce or distribute such material should be a top priority. Moreover, the pediatrics community should urge lawmakers to require or incentivize technology companies to develop AI tools that can prevent, detect, and swiftly remove circulating VCSAM. As society becomes increasingly digital, pediatricians have the unique opportunity to inspire policies and technological measures that protect children from this novel online threat.

Ms Krishna conceptualized the piece, conducted the initial literature review, drafted the manuscript, and revised the manuscript; Ms Dubrosa conceptualized the piece, conducted parts of the literature review, assisted with drafting the manuscript, and assisted with revision of the manuscript; Dr Milanaik conceptualized the piece, and critically reviewed and revised the manuscript; and all authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work.

FUNDING: No external funding.

CONFLICT OF INTEREST DISCLOSURES: The authors have indicated they have no conflicts of interest relevant to this article to disclose.

ABBREVIATIONS: AI, artificial intelligence; FBI, Federal Bureau of Investigation; VCSAM, virtual child sexual abuse material

1. Lapowsky I. The race to prevent “the worst-case scenario for machine learning.” The New York Times.
2. Federal Bureau of Investigation. Malicious actors manipulating photos and videos to create explicit content and sextortion schemes. Available at: https://www.ic3.gov/Media/Y2023/PSA230605. Accessed August 3, 2023.
3. Metz C. Researchers poke holes in safety controls of ChatGPT and other chatbots. The New York Times.
4. Ali S, Paash AS. A systemic review of the technology enabled sexual abuse (OCSA) & its impacts. JLORI. 2022;25(5S):1–18.
5. Marsha Blackburn, US Senator for Tennessee. Blackburn, Ossoff launch bipartisan inquiry to address AI-generated child sex abuse material online.
6. Serebrin J. Quebec man who created synthetic, AI-generated child pornography sentenced to prison. CBC.
7. Cohen E, Mackenzie RG, Yates GL. HEADSS, a psychosocial risk assessment instrument: implications for designing effective intervention programs for runaway youth. J Adolesc Health. 1991;12(7):539–544.
8. Pediatric Patient Education. Beyond screen time: a parent’s guide to media use. Available at: https://doi.org/10.1542/peo_document099. Accessed July 29, 2023.