Evidence on repeating vaccination misinformation or "myths" in debunking text is inconclusive; repeating myths may unintentionally increase agreement with myths or help discredit myths. In this study we aimed to compare the effect of repeating vaccination myths and other text-based debunking strategies on parents’ agreement with myths and their intention to vaccinate their children.
For this online experiment we recruited 788 parents of children aged 0 to 5 years; 454 (58%) completed the study. We compared 3 text-based debunking strategies (repeating myths, posing questions, or making factual statements) and a control. We measured changes in agreement with myths and intention to vaccinate immediately after the intervention and at least 1 week later. The primary analysis compared the change in agreement with vaccination myths from baseline, between groups, at each time point after the intervention.
There was no evidence that repeating myths increased agreement with myths compared with the other debunking strategies or the control. Posing questions significantly decreased agreement with myths immediately after the intervention compared with the control (difference: −0.30 points, 99.17% confidence interval: −0.58 to −0.02, P = .004, d = 0.39). There was no evidence of a difference between other debunking strategies or the control at either time point, or on intention to vaccinate.
Debunking strategies that repeat vaccination myths do not appear to be inferior to strategies that do not repeat myths.
What’s Known on This Subject:
Vaccination misinformation may fuel hesitancy and refusal and is a factor in vaccine-preventable disease outbreaks. Evidence on repeating vaccination misinformation or “myths” in debunking text is inconclusive; repeating myths may unintentionally increase agreement with myths or help discredit myths.
What This Study Adds:
This online experiment was conducted in parents of children aged <5 years, a group of key public health significance. We found that written debunking interventions that repeat myths do not appear to be inferior to other strategies.
Childhood vaccination raises questions and concerns for ∼40% of parents in Australia.1 In addition to practical barriers to vaccination, vaccine concerns among parents can lower childhood vaccination rates and are associated with outbreaks of measles and pertussis.2,3 Vaccine misinformation (information not supported by evidence) can exacerbate parental concerns: shared in social networks or spread by those seeking to oppose vaccination,4,5 misinformation may reduce confidence in vaccination by increasing perceptions of risk.6,7 Misinformation provides an underpinning for misperceptions such as vaccines overwhelming children’s immune systems and the dangers of giving too many vaccines too early, the preference for natural rather than vaccine-induced immunity, and the association of vaccines with autism.1,8
Countering misinformation is key to avoiding negative effects on vaccination attitudes.9 Parents of young children are at high risk of misinformation exposure10 and are important targets for interventions to counter misinformation. Encouragingly, parents indicate receptiveness to trusted sources that address their concerns and provide accurate, evidence-based information.11 Global health agencies, like the World Health Organization and United Nations Children’s Fund (UNICEF), health care providers, other advocates of vaccination, and the media all play a key role in addressing misinformation, especially in online settings, where it is most easily spread.12
Commonly used strategies to address misinformation, however, have been shown to have adverse rather than positive effects in parent populations.13 One frequently used strategy to counter misinformation is to prominently repeat specific examples of misinformation or “myths” before debunking them. With this strategy, myths are often presented as headings, followed by corrective, evidence-based text.14,15 Reviews of evidence and some recent studies, however, suggest this approach may be flawed: repeating a myth may backfire by rendering it memorable and thus likely to be recalled as true on the basis of recall and familiarity, a phenomenon known as familiarity bias.16–19 Hence, recommendations for debunking misinformation have emphasized providing factual information over repeating myths to avoid triggering familiarity backfire effects.20 Authors of a recent review, however, have questioned whether backfire effects reliably occur,21 while other research has failed to reveal evidence that repeating myths is counterproductive.22–25 The literature on debunking vaccination misinformation is limited. This gap in evidence is important to address, especially in view of the deployment of coronavirus disease of 2019 (COVID-19) vaccines, which are subject to a range of claims made by opponents of vaccination.
With this study, we sought to assist health communicators addressing misinformation about childhood vaccination with evidence on the effectiveness of various debunking strategies. The aim of this study was to compare how different text-based debunking strategies affect parents’ agreement with vaccination myths and their intention to vaccinate their children.
Methods
Participants
This was a prospective online experiment testing a communication intervention aimed at reducing agreement with vaccination myths in parents of young children. Participants were parents of children aged 0 to 5 years. Eligible participants were 18 years or older, residing in Australia, and competent at reading and responding in English. Participants gave written informed consent. The Macquarie University Human Research Ethics Committee granted ethics approval (ref. 5201954658790).
Procedure
Research company Quality Online Research recruited participants from its accredited online panel, the representativeness of which is achieved by using quota controls according to Australian Bureau of Statistics Census data. The company recruited participants between September 16 and October 30, 2019, inviting them via e-mail or survey technology and offering between A$1.00 and A$3.00 as an incentive for participation. The company stopped recruitment when predetermined targets were achieved.
At baseline, participants responded to myth agreement, intention to vaccinate, and vaccine confidence items (see Materials for definitions). Participants were randomly assigned to receive 1 of 3 debunking interventions or a control text. Immediately after the intervention, participants responded to myth agreement and intention to vaccinate items again. Participants were also asked to provide demographic information. For quality control, participants were asked to summarize the intervention text in a free-response text box. After 1 week, participants were invited to complete a follow-up survey responding to myth agreement and intention to vaccinate items. They had up to 3 weeks to respond. At the close of study, participants were given a debriefing statement with credible information correcting vaccination myths used in the intervention.
The study aimed to recruit 452 participants to ensure a sample size of 376 participants (with an expected 17% loss to follow-up), calculated to allow detection of an effect size of d = 0.5 when comparing change in myth agreement (primary outcome) between groups (see Supplemental Information for sample size calculations). This study was powered at 80% to be confirmatory for the primary outcome. Participants with incomplete surveys or poor-quality free responses (off-topic, unclear, unanswered) or who responded too quickly (determined a priori by the research company as per their quality control measures) were excluded by the research company.
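The sample size reasoning above can be sketched with the standard normal-approximation formula for a two-sided, two-sample t test. This is an illustrative sketch only: the exact figures reported in the article depend on the precise significance level and software the authors used, and the specific alpha and attrition values below are assumptions for illustration.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float, power: float) -> int:
    """Normal-approximation sample size per group for a two-sided,
    two-sample t test detecting standardized effect size d."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

# Illustrative inputs: d = 0.5 at 80% power, with a Bonferroni-adjusted
# alpha for the 6 pairwise group comparisons (.05 / 6 ≈ .0083).
n = n_per_group(d=0.5, alpha=0.05 / 6, power=0.80)
total = 4 * n                        # 4 study arms
recruit = ceil(total / (1 - 0.17))   # inflate for an expected 17% attrition
```

With an unadjusted alpha of .05, this formula gives the familiar ~64 participants per group for d = 0.5; the Bonferroni-adjusted alpha pushes the per-group figure considerably higher, which is why multiplicity must be decided before powering a study.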
Of the 788 parents of children aged 0 to 5 years who consented and were randomly assigned, 454 (58%) completed the follow-up survey and were included in the analysis (Fig 1). Of the 454 participants included, 63% were female (284 of 454), 56% (255 of 454) were aged between 30 and 39 years, 60% (272 of 454) had a household income of $80 000+ per year, and 61% (275 of 454) had university qualifications.
Flow diagram revealing progress of participants through the online experiment.
Mean response time between baseline and follow-up survey was 16 (SD = 5.55) days. Of the 788 randomly assigned participants, 14% (107 of 788) were excluded because of their poor-quality responses, while 29% (227 of 788) failed to respond to the invitation to complete a follow-up survey; this attrition was higher than the expected 17%. There was no significant difference in exclusion across intervention groups (χ2[N = 788, df = 3] = 1.70, P = .64).
Attrition analysis compared sex and measures of vaccine confidence, myth agreement, and intention to vaccinate at baseline. Participants who did not complete the follow-up survey were more likely to be female (χ2 = 9.91, P = .007) and have a lower vaccine confidence score at baseline (P = .046, Cohen’s d = 0.15). There was no evidence of a difference in myth agreement or intention to vaccinate at baseline between participants who did and did not complete the follow-up survey. There was no significant difference in attrition across intervention groups (χ2[N = 681, df = 3] = 2.85, P = .42).
Materials
Intervention
Participants were asked to read a short piece of text (∼350 words) debunking 3 vaccination myths. The 3 myths were “It’s better for children to develop immunity from diseases”; “It’s safer to vaccinate babies and young children when they are older”; and “Vaccines overwhelm a baby's immune system.” The text was modified from a resource addressing common vaccine misperceptions developed to support health care providers’ consultations with parents.26
Each intervention (myth, question, or statement) used a different debunking strategy to counter the myths. The myth intervention repeated the vaccination myths (“Myth: Vaccines overwhelm a baby’s immune system”) in the headings before providing corrective text. The question intervention posed questions (“Can vaccines overwhelm a baby’s immune system?”) in the headings before providing corrective text. The statement intervention made factual statements (“A baby’s immune system is able to respond to a vaccine and fight germs at the same time”) in the headings before providing corrective text. The corrective text was the same for each intervention (Fig 2); only the headings differed between interventions. Participants in the control group were given unrelated text of similar length and structure about parenting strategies. Survey software required participants to view this page for a minimum of 30 seconds (see Supplemental Information for full intervention texts).
Intervention texts, comprising 3 vaccination myths, followed by corrective text.
Survey Items
Myth agreement was assessed with 3 items, by using a 5-point scale (1 = strongly disagree, 5 = strongly agree). The responses to each of the 3 individual vaccination myths described above were averaged to create a myth agreement score, which revealed high internal consistency at baseline (α = .84), immediately after the intervention (α = .85), and 1+ weeks after the intervention (α = .84). Intention to vaccinate was assessed with a single item, by using a 0 to 100 scale (0 = definitely not, 100 = definitely). Myth agreement and intention to vaccinate items were consistent with survey questions used in studies with similar parent populations.8,27 Vaccine confidence was measured by using the 4-item short form of the Vaccine Confidence Scale (benefits factor)28,29 (5-point scale, 1 = strongly disagree, 5 = strongly agree; α = .85); item responses were averaged to create a vaccine confidence score.
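As a sketch of how such composite scores and their internal consistency are conventionally computed (the article does not give the authors' exact code, so the function names here are illustrative), averaging the items and checking Cronbach's alpha could look like:

```python
from statistics import variance

def myth_agreement_score(responses: list[float]) -> float:
    """Average one participant's responses to the 3 myth items (1-5 scale)."""
    return sum(responses) / len(responses)

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for k items; each inner list holds one item's
    responses across all participants, in the same participant order."""
    k = len(items)
    total_scores = [sum(vals) for vals in zip(*items)]  # per-participant sums
    sum_item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - sum_item_var / variance(total_scores))
```

When items are perfectly correlated, alpha equals 1; values around .84, as reported here, indicate the 3 myth items move together closely enough to justify a single composite score.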
Data Analysis
The primary outcome measure was the change in myth agreement, calculated as the difference from baseline at each time point after the intervention. The primary analysis compared mean change in myth agreement between groups, at each time point after the intervention. For this analysis, independent samples t tests were used, adjusted for multiple comparisons between groups with Bonferroni correction (P < .0083; confidence intervals [CIs] of 99.17%). Cohen’s d was calculated to describe the magnitude of intervention effects.30 Observational within-group changes in myth agreement from baseline were also analyzed by using repeated measures analysis of variance (ANOVA). The findings of the difference-in-difference analyses were confirmed with a repeated measures analysis of covariance (ANCOVA) (see Supplemental Information).
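A minimal sketch of the between-group comparison machinery (pooled-SD independent-samples t test, Cohen's d, and a Bonferroni-adjusted confidence interval) is shown below. The authors used SPSS; this sketch uses a normal approximation to the t quantile, since the Python standard library lacks an inverse t distribution, which is adequate at these sample sizes but not identical to SPSS output.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def compare_change(group_a: list[float], group_b: list[float],
                   n_comparisons: int = 6) -> tuple[float, float, float, tuple[float, float]]:
    """Compare mean change scores between two groups.
    Returns (mean difference, t statistic, Cohen's d, Bonferroni-adjusted CI)."""
    na, nb = len(group_a), len(group_b)
    diff = mean(group_a) - mean(group_b)
    # Pooled SD across the two groups
    sp = sqrt(((na - 1) * stdev(group_a) ** 2 +
               (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2))
    se = sp * sqrt(1 / na + 1 / nb)
    t_stat = diff / se
    d = diff / sp                                   # Cohen's d
    alpha = 0.05 / n_comparisons                    # Bonferroni: .05 / 6 ≈ .0083
    z = NormalDist().inv_cdf(1 - alpha / 2)         # normal approx to t quantile
    ci = (diff - z * se, diff + z * se)             # 99.17% CI
    return diff, t_stat, d, ci
```

The 99.17% CI corresponds exactly to the Bonferroni-adjusted significance threshold of P < .0083 used in the primary analysis: a difference is significant at that level precisely when the adjusted CI excludes zero.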
A secondary outcome measure was the change in intention to vaccinate, calculated as the difference from baseline at each time point after the intervention. Changes between groups were compared by using independent samples t tests. Observational within-group changes from baseline were analyzed by using repeated measures ANOVAs. All analyses were conducted by using SPSS (version 25; IBM SPSS Statistics, IBM Corporation).
Subgroup Analysis
A prespecified subgroup analysis included data from 217 moderate-low vaccine confidence participants only (48%; 217 of 454). Participants were categorized as moderate-low vaccine confidence if their vaccine confidence score measured at baseline (5-point scale, 1 = strongly disagree, 5 = strongly agree) was ≤4.38. High vaccine confidence participants (score >4.38) (52%; 237 of 454) were excluded. These categories are based on results of a previous study in parents of young children, in which a score of ≤4.38 (converted from a score by using an 11-point scale) was associated with delay of any vaccine.29 Mean changes in myth agreement and intention to vaccinate were compared between groups, at each time point after the intervention. Independent samples t tests were used for this analysis, adjusting for multiple comparisons between groups with Bonferroni correction (P < .0083).
Preregistration
The study aims and hypotheses, methods, and data analysis plan were preregistered with the Open Science Framework (https://osf.io/jthn2). A minor variation to the preregistration was to analyze myth agreement as a composite score, with the aim of presenting simple and straightforward results in the article. An analysis of change in myth agreement for each individual myth is retained in the Supplemental Information.
Results
At baseline, mean myth agreement scores were between neutral (3) and slightly disagree (2) (see Table 1). Within-group observational changes in myth agreement, both immediately after the intervention and 1+ weeks after the intervention, are shown in Table 2 and Fig 3.
Mean change in myth agreement after the intervention for all groups. The light bars reveal change in myth agreement immediately after the intervention; the dark bars reveal change in myth agreement 1+ weeks after the intervention. Error bars indicate 95% CIs.
Vaccination-Specific Characteristics of Participants by Intervention Group at Baseline, Immediately After the Intervention and 1+ Week After the Intervention
Values are mean score (SD).

| Group | Myth Agreement,^a Baseline | Myth Agreement, Immediately After | Myth Agreement, 1+ wk After | Intention to Vaccinate,^b Baseline | Intention, Immediately After | Intention, 1+ wk After | Vaccine Confidence,^c Baseline |
|---|---|---|---|---|---|---|---|
| All (n = 454) | 2.39 (1.06) | 2.20 (1.04) | 2.19 (1.02) | 92.00 (15.26) | 92.29 (15.94) | 91.90 (16.06) | 4.33 (0.65) |
| Myth (n = 127) | 2.41 (1.00) | 2.20 (1.03) | 2.14 (1.00) | 92.61 (12.58) | 92.81 (14.82) | 91.76 (15.80) | 4.32 (0.63) |
| Question (n = 118) | 2.37 (1.18) | 2.03 (1.08) | 2.12 (0.96) | 93.23 (14.28) | 93.57 (13.17) | 91.88 (16.76) | 4.37 (0.69) |
| Statement (n = 103) | 2.36 (1.07) | 2.20 (1.03) | 2.17 (1.00) | 88.91 (19.67) | 90.74 (17.66) | 90.99 (16.50) | 4.34 (0.70) |
| Control (n = 106) | 2.42 (0.97) | 2.38 (1.02) | 2.36 (1.12) | 92.89 (14.12) | 91.75 (18.19) | 92.99 (15.28) | 4.30 (0.60) |
^a Five-point scale, 1 = strongly disagree, 5 = strongly agree; higher scores indicate more agreement with vaccination myths.
^b Scale of 0–100, 0 = definitely not, 100 = definitely; higher scores indicate stronger intention to vaccinate.
^c Five-point scale, 1 = strongly disagree, 5 = strongly agree; higher scores indicate more positive beliefs about vaccination.
Within-Group Observational Mean Change in Myth Agreement and Intention to Vaccinate From Baseline, Immediately After the Intervention and 1+ Week After the Intervention
| Group | Immediately After: M Diff (SD) | df, Error | F | P | np2 | 1+ wk After: M Diff (SD) | df, Error | F | P | np2 |
|---|---|---|---|---|---|---|---|---|---|---|
| Change in myth agreement^a from baseline | | | | | | | | | | |
| Myth | −0.20 (0.87) | 1, 126 | 17.18 | <.001 | 0.12 | −0.27 (0.70) | 1, 126 | 18.18 | <.001 | 0.13 |
| Question | −0.35 (0.87) | 1, 117 | 18.97 | <.001 | 0.14 | −0.25 (0.97) | 1, 117 | 8.05 | .005 | 0.06 |
| Statement | −0.16 (0.65) | 1, 102 | 5.95 | .016 | 0.06 | −0.19 (0.74) | 1, 102 | 6.62 | .012 | 0.06 |
| Control | −0.04 (0.69) | 1, 105 | 0.44 | .510 | <0.001 | −0.06 (0.82) | 1, 105 | 0.56 | .456 | <0.001 |
| Change in intention to vaccinate^b from baseline | | | | | | | | | | |
| Myth | 0.20 (10.09) | 1, 126 | 0.05 | .826 | <0.001 | −0.86 (13.69) | 1, 126 | 0.50 | .481 | <0.001 |
| Question | 0.34 (7.06) | 1, 117 | 0.27 | .603 | <0.001 | −1.35 (14.67) | 1, 117 | 0.99 | .321 | 0.01 |
| Statement | 1.83 (11.58) | 1, 102 | 2.56 | .113 | 0.02 | 2.08 (18.74) | 1, 102 | 1.27 | .263 | 0.01 |
| Control | −1.14 (11.78) | 1, 105 | 1.00 | .321 | <0.001 | 0.10 (8.51) | 1, 105 | 0.02 | .900 | <0.001 |
df, degrees of freedom; M Diff, mean difference; np2, partial eta squared.
^a Change calculated from scores by using a 5-point scale, 1 = strongly disagree, 5 = strongly agree; a negative change means less agreement with the myths compared with baseline.
^b Change calculated from scores by using a 0–100 scale, 0 = definitely not, 100 = definitely; a positive change means a stronger intention to vaccinate compared with baseline.
The primary analysis compared the change in myth agreement from baseline, between groups, at each time point after the intervention. The results of this analysis are shown in Table 3. The null hypothesis for the comparison of posing questions with the control immediately after the intervention was rejected: compared with the control group, the question group showed a significant decrease in agreement with vaccination myths of medium effect size immediately after the intervention (difference between groups: −0.30 points, 99.17% CI: −0.58 to −0.02, P = .004, Cohen’s d = 0.39). We found no clear evidence of a difference in change in myth agreement between any other pair of groups, at this time point or at 1+ weeks after the intervention. This includes the myth group, which did not increase myth agreement compared with the other groups or the control at any time point (see Table 3). There was no evidence of a difference between groups in intention to vaccinate at any time point. The results of the repeated measures ANCOVA aligned with those of the difference-in-difference analyses (see Supplemental Information).
Comparing Mean Change in Myth Agreement and Intention to Vaccinate Between Groups
| Comparison | Immediately After: M Diff | t (df) | P | 99.17% CI Lower | 99.17% CI Upper | 1+ wk After: M Diff | t (df) | P | 99.17% CI Lower | 99.17% CI Upper |
|---|---|---|---|---|---|---|---|---|---|---|
| Comparing change in myth agreement from baseline | | | | | | | | | | |
| Myth versus question | 0.15 | 1.579 (243) | .116 | −0.10 | 0.39 | −0.01 | −0.101 (243) | .920 | −0.30 | 0.28 |
| Myth versus statement | −0.05 | −0.593 (228) | .554 | −0.26 | 0.16 | −0.08 | −0.812 (228) | .418 | −0.33 | 0.18 |
| Myth versus control | −0.16 | −1.954 (231) | .052 | −0.37 | 0.06 | −0.21 | −2.058 (231) | .041 | −0.47 | 0.06 |
| Question versus statement | −0.19 | −1.846 (219) | .066 | −0.47 | 0.09 | −0.07 | −0.565 (219) | .572 | −0.38 | 0.25 |
| Question versus control | −0.30* | −2.884 (222) | .004 | −0.58 | −0.02 | −0.19 | −1.606 (222) | .110 | −0.52 | 0.13 |
| Statement versus control | −0.11 | −1.207 (207) | .229 | −0.36 | 0.13 | −0.13 | −1.181 (207) | .239 | −0.42 | 0.16 |
| Comparing change in intention to vaccinate from baseline | | | | | | | | | | |
| Myth versus question | −0.14 | −0.127 (243) | .899 | −3.12 | 2.84 | 0.49 | 0.270 (243) | .787 | −4.33 | 5.31 |
| Myth versus statement | −1.63 | −1.139 (228) | .256 | −5.44 | 2.18 | −2.94 | −1.371 (228) | .172 | −8.64 | 2.76 |
| Myth versus control | 1.34 | 0.934 (231) | .351 | −2.48 | 5.15 | −0.96 | −0.629 (231) | .530 | −5.03 | 3.11 |
| Question versus statement | −1.49 | −1.168 (219) | .244 | −4.88 | 1.90 | −3.43 | −1.522 (219) | .129 | −9.42 | 2.57 |
| Question versus control | 1.48 | 1.154 (222) | .250 | −1.94 | 4.90 | −1.45 | −0.892 (222) | .373 | −5.78 | 2.88 |
| Statement versus control | 2.97 | 1.836 (207) | .068 | −1.34 | 7.27 | 1.97 | 0.985 (207) | .326 | −3.37 | 7.31 |
df, degrees of freedom; M Diff, mean difference.
* Mean difference significant at the .0083 level; 99.17% CI is Bonferroni adjusted.
Comparing changes in myth agreement between groups for each myth individually indicated differences between myths. For the “Vaccines overwhelm immune systems” myth, the question and myth groups showed a significant decrease in myth agreement of a medium size compared with the control (difference between question and control: −0.45 points, 99.17% CI: −0.81 to −0.09, P = .001, Cohen’s d = 0.45; difference between myth and control: −0.31 points, 99.17% CI: −0.59 to −0.03, P = .004, Cohen’s d = 0.38). There were no significant differences between groups for the “Disease-acquired immunity is better” myth or the “Delaying vaccines is safer” myth (see Supplemental Table 6 for full results of analysis per myth).
In the subgroup analysis of moderate-low vaccine confidence participants, there was no evidence of significant differences between groups in the change in myth agreement from baseline, at either time point. There was no difference in intention to vaccinate between groups at either time point (see Supplemental Table 5).
Discussion
The results provide no evidence that debunking strategies that repeat myths alongside corrective text differ from strategies that do not repeat myths. This is inconsistent with findings from a recent study and reviews assessing the effects of debunking strategies, which suggest that repeating myths may inadvertently increase myth agreement.16–19 The study revealed that repeating vaccination myths did not perform more poorly than the other debunking strategies. There was clear evidence of a reduction in myth agreement relative to the control group for the debunking strategy that posed questions. Because this result survived appropriate statistical adjustment for multiple comparisons, it is unlikely to be a false-positive. No differences in parents’ intention to vaccinate between groups were observed with any strategy.
In an analysis of an important subgroup of moderate-low vaccine confidence parents, the study also revealed no evidence that repeating myths performed more poorly than other strategies. Research comparing groups with low and high vaccine confidence would be worthwhile.
These findings on repeating myths are consistent with conclusions from a recent meta-analysis, which suggests corrective attempts can reduce misinformation effects.31 Research comparing message formats for debunking influenza vaccination misinformation has also found that accurate knowledge increases after debunking, regardless of the message format, and that repeating misinformation does not inadvertently increase inaccurate knowledge.22 Repeating misinformation with corrective text may in fact draw attention to the correction, thereby enhancing, rather than impairing, correct recall.23,24 In addition to the specific debunking strategies, parents’ agreement with myths in this study may have been influenced by other characteristics of the debunking text, such as succinctness, ease of comprehension, and provision of detailed corrective information.17,24
Analysis by individual myth suggests that some misperceptions about vaccines, such as the superiority of “natural” disease-acquired immunity and time-based safety concerns, are more persistent than other myths. Future research may reveal why this is the case. There may be various factors at play, such as the association of an individual’s beliefs about “unnatural” (vaccine-acquired) immunity with moral values related to bodily purity versus degradation.32 Equally, the novelty of a myth to an individual may render corrections ineffective.33
This study has implications for how health professionals, global health authorities, and other advocates of vaccination debunk vaccine misinformation in written text. In contrast with existing debunking guidelines, this study’s findings suggest that repeating vaccination myths is not necessarily harmful. Any repetition of myths, however, should be done judiciously and be accompanied by corrective text, because familiarity may still lead to myths being misremembered as facts. Furthermore, it has been proposed that repeating new or obscure misinformation may unintentionally help disseminate it to broader audiences.24,34 This is a particular consideration on social media platforms, where attempts to counter novel vaccination myths may increase their prominence by amplifying their reach, increasing audience exposure. Recent experiments repeating novel misinformation about a range of topics found no evidence of repetition leading to stronger misconceptions.35 Experiments with vaccination misinformation specifically would be worthwhile conducting, as would further investigations of the relationship between novel vaccine misinformation and social media amplification.
This research was conducted in parents of children aged <5 years. This is a key strength of the study because it supports generalizability to a population of key public health significance, especially given the high risk of exposure to vaccine misinformation parents experience online and the limited understanding of how they respond to various debunking strategies in this context.
Measuring responses to just 3 myths may have limited the ability to draw conclusions about effective strategies to debunk other myths, such as new myths (like those about COVID-19 vaccines), prominent and persistent myths like the alleged link between vaccines and autism, or those about vaccines for specific diseases like measles, pertussis, or human papillomavirus. Not attributing the debunking text to a credible source may have lessened participants’ willingness to accept the debunking information. Although the sample was intended to be representative of the population, the respondents analyzed were not, which may limit the external validity of the findings. Parents’ vaccination intention, rather than uptake, was measured as an outcome. Although this approach is in keeping with similar studies, uptake would provide a more accurate measure of vaccination behavior. Furthermore, no significant findings for parents’ vaccination intentions were observed. Further investigation of how parents’ agreement with vaccination myths is associated with intentions and behavior is warranted, as is research into what types of debunking interventions may improve intentions. Finally, in this study, myth agreement was analyzed as a continuous variable. Informal analysis of the data as categories of agreeing and disagreeing parents (not included here) suggests exposing parents to vaccination myths without corrective text may increase myth agreement. This effect is worth investigating further in future research.
Conclusions
In this study, repeating myths as a debunking strategy did not appear to be inferior to strategies that do not repeat them. Posing myths as questions may be an effective debunking strategy when paired with corrective text. Further research should elucidate why some myths are more persistent than others, evaluate debunking strategies for novel vaccination myths, and identify strategies that change behavior.
Acknowledgments
We acknowledge the contributions of Noel Brewer and Ullrich Ecker in critiquing and improving this article.
Full article can be found online at www.pediatrics.org/cgi/doi/10.1542/peds.2020-049304
FUNDING: Supported by Macquarie University Research Training Program Scholarship 2017438 and National Health and Medical Research Council project grant APP1128968. The funder/sponsor did not participate in the work.
Dr Steffens performed the literature search, developed the study design and protocol, conducted the statistical analysis and interpretation of the data, developed the figures and tables, drafted the initial manuscript, and reviewed and revised the manuscript; Drs Dunn, Danchin, and Leask contributed to the literature search, developed the study design and protocol, contributed to the statistical analysis, conducted interpretation of the data, contributed to the development of figures and tables, and reviewed and revised the manuscript; Dr Marques developed the study design and protocol, contributed to the statistical analysis and interpretation of the data, contributed to the development of figures and tables, and reviewed and revised the manuscript; Dr Witteman developed the study design and protocol and reviewed and revised the manuscript; and all authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work.
Competing Interests
POTENTIAL CONFLICT OF INTEREST: During the conduct of the study, Dr Steffens reports funding from Macquarie University; Dr Dunn reports grants from the National Health and Medical Research Council; Dr Leask reports grants from the National Health and Medical Research Council and funding from the World Health Organization; Dr Witteman reports funding from the Canada Research Chairs program and grants from the Canadian Institutes of Health Research; and Drs Marques and Dr Danchin have indicated they have no potential conflicts of interest to disclose.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.