It should come as no surprise to regular readers of Evidence eMended that the placebo effect is very real, and it can be particularly dramatic in disorders with prominent neuropsychiatric features. Should we now provide a sugar pill to prevent migraines in children?
Source: Powers SW, Coffey CS, Chamberlin LA, et al. Trial of amitriptyline, topiramate, and placebo for pediatric migraine. N Engl J Med. 2017;376(2):115-124; doi:10.1056/NEJMoa1610384. See AAP Grand Rounds commentary by Dr. David Urion (subscription required).
This multicenter study looked at 2 migraine prevention drugs, amitriptyline and topiramate, both of which had shown modest benefit in adult trials. Neither, however, showed benefit over placebo in this pediatric trial. Specifically, in the analysis of 328 children randomized in double-blind fashion to 1 of the 3 treatments, the primary outcome, the percentage of children experiencing a relative reduction of 50% or more in the number of headache days (for example, going from 10 headache days to 5 or fewer over the same period), was essentially the same across groups: 52% with amitriptyline, 55% with topiramate, and 61% with placebo. Furthermore, the amitriptyline and topiramate groups had higher rates of adverse events. This suggests that empathy, reassurance, and close follow-up are likely to be as effective as medications for pediatric migraine prevention. Of course, that's not as easy as giving a pill; empathy and reassurance require time.
The editorial accompanying this article took a dig at the outcome measure used in the study, pointing to a 2000 publication by the International Headache Society suggesting that the primary outcome for migraine prevention trials should be headache event frequency per month, which is quite different from the number of headache days. However, a more recent (2012) guide from the same organization suggests that both headache frequency and headache days be measured. The current study had headache days per 28 days as a secondary outcome (all 3 arms had 4-5 days per 4 weeks), but I don't know why the authors chose to track it that way rather than as the number of migraine attacks.
This study included pre-planned data reviews by a data monitoring committee to provide interim assessments, particularly for futility of treatment, and in fact the study was stopped early when an interim analysis met the pre-defined futility criteria. Stopping early can occasionally impair a study's ability to detect small differences in treatment outcomes, but I think it was appropriate here.