Despite efforts to lower the age of diagnosis for autism spectrum disorder (ASD) and to reduce disparities in the identification of children with ASD from diverse backgrounds, we have only recently begun to move the needle.1,2 We acknowledge the formidable task pediatricians face in recognizing ASD, a condition with relatively low prevalence and signs that may not be apparent during a brief clinical encounter.3 Despite the promise of the Modified Checklist for Autism in Toddlers (M-CHAT) for aiding pediatricians and improving ASD identification, emerging research suggests that the tool is less accurate in detecting ASD in clinical practice than previously thought. In this issue of Pediatrics, Carbone et al4 performed a retrospective study using electronic health record data to evaluate screening practices and ASD diagnostic outcomes for children aged 16 to 30 months seen between 2013 and 2016 in one large health system in Utah. This important work closely parallels a similar study published by our research group in Pediatrics in 2019.5 We applaud the editors for publishing 2 articles with such similar methods and findings, because replication is particularly critical when findings contrast with previous results and suggest the need to reconsider current clinical practice.
Importantly, these 2 articles reveal the value of real-world clinical data. Research conducted under ideal conditions (with the assistance of research assistants, strict adherence to screening protocols, and free, quick access to diagnostic evaluations) previously indicated that the positive predictive value (PPV) of the M-CHAT and the M-CHAT, Revised, was high.6,7 However, these 2 studies, conducted without researcher intervention in screening or diagnosis, yielded substantially lower estimates of PPV, indicating that far fewer children who screened positive went on to be diagnosed with ASD than those earlier studies would suggest. In addition, because both studies were able to follow children who screened negative through their electronic health records to ascertain diagnostic outcomes, the methods employed by Carbone et al4 and by our group allowed researchers to provide some of the first estimates of the M-CHAT's sensitivity and specificity in the United States. This systematic follow-up revealed poor sensitivity and a false-negative problem: in both studies, more children with ASD had negative screen results on the M-CHAT than positive results.
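For readers weighing these results, it may help to recall the standard definitions of these metrics in terms of true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN); the notation here is ours, not the studies':

\[
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{specificity} = \frac{TN}{TN + FP}, \qquad
\text{PPV} = \frac{TP}{TP + FP}.
\]

A low PPV thus means that many screen-positive children are never diagnosed with ASD, whereas low sensitivity means that many children with ASD screen negative and are missed.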
In fact, results across these 2 studies were strikingly similar, despite differences in location, practices, patient characteristics, and screening processes: sensitivity was 33% in the Intermountain Healthcare clinic and 39% in our population, and PPV estimates were 18% and 15%, respectively. Both studies also reported that (1) ASD diagnostic rates were higher among children whose screen results were positive and (2) children whose screen results were positive tended to receive a diagnosis at younger ages. In these common findings, there is good and bad news for current ASD screening practice. Results from Carbone et al4 reveal that children whose screen results were positive received a diagnosis an average of 12 months earlier than those whose screen results were negative and 10 months earlier than children who were not screened, suggesting a key role for screening in lowering the age of diagnosis for the children whom the M-CHAT helped identify. However, this work also revealed that the majority of children with ASD were in fact missed by the M-CHAT and thus did not receive the benefit of universal screening.
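As a rough, illustrative reading of these figures (approximate proportions, not the studies' exact counts):

\[
\text{sensitivity} \approx 0.33 \;\Rightarrow\; \text{roughly 1 of every 3 children with ASD screened positive;}
\]
\[
\text{PPV} \approx 0.18 \;\Rightarrow\; \text{fewer than 1 in 5 screen-positive children were later diagnosed with ASD.}
\]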
In all, these findings indicate that a paradigm shift is needed to meaningfully improve early identification. Although the M-CHAT appears to aid detection of ASD at an early age, results from Carbone et al4 suggest that current screening methods may not be enough to detect all children with ASD, and the solution may not simply be to improve adherence to existing screening guidelines. In fact, there appears to be a ceiling on the number of children with ASD who can be detected with existing screening tools. Instead, work from Carbone et al,4 our group, and others signals that we need new ways of thinking about ASD screening to identify the substantial number of children who are missed by current practices. The way forward may be examining developmental trajectories over time, much as we examine physical growth charts, or using more objective measures that can overcome some of the limitations inherent in parent-completed tools.
Importantly, as we develop these solutions, we need to think about implementation early in the process and move toward testing in real-world settings sooner rather than later. Pediatricians should be involved in crafting the solutions; if they are expected to identify ASD risk, we must ensure that solutions are feasible within their workflow. We must consider how strategies work for heterogeneous populations, in diverse health care settings, and for those with differential access to diagnostic and intervention services. We should expect that solutions developed in research settings may not translate to clinical practice if the health care context is not accounted for. Without studies of real-world practice, we may put our faith in solutions that ultimately do not meet the needs of pediatricians and our patients.
Opinions expressed in these commentaries are those of the authors and not necessarily those of the American Academy of Pediatrics or its Committees.
FUNDING: Supported in part by the National Institute of Mental Health (R03MH116356). Funded by the National Institutes of Health (NIH).
COMPANION PAPER: A companion to this article can be found online at www.pediatrics.org/cgi/doi/10.1542/peds.2019-2314.
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.