Human papillomavirus (HPV) is the most common sexually transmitted infection worldwide, with >79 million people currently infected in the United States alone.1 HPV’s more serious sequelae (cervical and other anogenital cancers, genital warts, high-grade cervical dysplasia, and possibly HPV-positive head and neck cancers) are largely preventable through vaccination. HPV vaccines thus have remarkable potential to dramatically reduce the incidence of HPV-related cancers and other diseases. Yet the vaccine remains largely underused in many countries. In the United States, as of 2016, only 43.4% of adolescents ages 13 to 17 years had completed the recommended number of doses.2 This is well below national goals and reveals an urgent need to identify interventions that can improve vaccine uptake.
Many interventions have been developed to this end, but only a minority have been tested in rigorously designed trials, and even fewer have been found to actually improve vaccination rates.3,4 Thus, it is exciting to see the study by Dixon et al5 because it is 1 of the few in which these benchmarks have been reached. In their study, the authors randomly assigned 5 pediatric clinics within a safety-net health system to receive a tablet-based educational intervention for parents that was designed to improve adolescent HPV vaccination rates (versus usual care). The intervention itself was innovative in that it consisted of a digital video about the vaccine, viewed by parents in the clinic waiting room, whose content varied depending on the parents’ current vaccine acceptance and on whether the adolescent had received any previous doses. These innovations are important for 2 reasons. First, among parents who already favor the vaccine, providing unsolicited “extra” information can raise concerns or issues that the parents had not previously considered, thereby potentially creating problems that did not exist before. Second, the barriers to initiating the vaccine series differ from the barriers to getting subsequent doses, and thus require different messages to overcome. Both issues are nicely addressed in the Dixon et al5 intervention.6
The main outcome assessed in this study was the individual-level change in HPV vaccination status (receipt of the first, second, or third dose, as needed) in the 2 weeks after the clinic visit. The authors found that adolescents seen in intervention clinics had nearly twice the odds of receiving a needed vaccine dose compared with those in control clinics. Not surprisingly, within the intervention clinics, adolescents whose parents had actually interacted with the intervention had threefold the odds of vaccination compared with adolescents whose parents had not.
This study has many strengths, including the randomized controlled trial design, generally balanced demographic characteristics across study arms, a relatively large sample size (n = 1596), and analyses that accounted for potential clustering of results at the clinic level. An additional strength not highlighted in the article is that the authors still saw a significant effect from their intervention despite the fact that baseline vaccination rates appeared to be substantially higher in intervention clinics than in control clinics. Although not reported on specifically, their figures reveal that intervention clinics started the study with ∼39% of patients having received no doses of the vaccine (∼210 of 537) compared with ∼56% (∼600 of 1059) in the control clinics. Taking the complement of these values means that at baseline, 61% (100% − 39%) of adolescents in the intervention clinics had at least started the vaccine series compared with only 44% in the control clinics. Past vaccine implementation research reveals that increases in vaccination rates are not linear; clinics with high levels of vaccination at baseline generally see a smaller rise in vaccination with interventions than clinics with low baseline vaccination rates. Thus, the apparent difference in baseline vaccination rates between the intervention and control clinics in the Dixon et al5 study would likely have blunted any effect of the intervention. The fact that an intervention effect was still found suggests that had both sets of clinics started with the same low baseline vaccination rates, the effect size from the intervention may have been even larger.
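The baseline arithmetic above can be double-checked with a quick calculation. Note that the dose counts used here are approximate values read from the published figures, as stated above, not exact trial data:

```python
# Back-of-the-envelope check of the baseline vaccination rates.
# Counts are approximations read from the study figures, not exact data.
int_no_dose, int_total = 210, 537      # intervention clinics, zero doses
ctl_no_dose, ctl_total = 600, 1059     # control clinics, zero doses

frac_int = int_no_dose / int_total     # ~0.39 had received no doses
frac_ctl = ctl_no_dose / ctl_total     # ~0.57 had received no doses

# Complement: fraction that had at least started the series at baseline
started_int = 1 - frac_int             # ~0.61
started_ctl = 1 - frac_ctl             # ~0.43 (cited as ~44% in the text)

print(f"Started series, intervention clinics: {started_int:.1%}")
print(f"Started series, control clinics: {started_ctl:.1%}")
```

The small discrepancy between the computed ∼43% and the cited 44% for control clinics reflects rounding the no-dose fraction to 56% before taking the complement.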
Although these results are exciting, they should be tempered by a substantial caveat regarding the feasibility of implementing such an intervention more broadly. As the authors aptly point out, the logistics of delivering the intervention were substantial.5 Only 1 in 4 patients who were eligible to receive the intervention actually did so. This low level of engagement arose from difficulties in integrating an iPad-based intervention into the clinics’ workflow, a challenge that has been reported by many others attempting to integrate mobile health technologies into primary care.7 It is critical that future researchers address this formidable challenge because our current lack of understanding of how to implement mobile and technology-based interventions in busy clinical settings prevents many effective interventions from realizing their full potential.
Opinions expressed in these commentaries are those of the author and not necessarily those of the American Academy of Pediatrics or its Committees.
FUNDING: No external funding.
COMPANION PAPER: A companion to this article can be found online at www.pediatrics.org/cgi/doi/10.1542/peds.2018-1457.
POTENTIAL CONFLICT OF INTEREST: Dr Dempsey serves on advisory boards for Merck, Sanofi, and Pfizer and is a consultant for Pfizer, but she does not receive any research support from these companies.
FINANCIAL DISCLOSURE: Dr Dempsey serves on advisory boards for Merck, Sanofi, and Pfizer and is a consultant for Pfizer, but she does not receive any research support from these companies.