When it comes to my own health, I try to explore all available evidence to guide my decisions. Should I drink more or less coffee or wine? Should I try to adhere to the Mediterranean diet? What types of exercise are best? Sadly, I have to admit that I don't know the answers to any of those questions.
VanderWeele TJ, Ding P. Sensitivity analysis in observational research: introducing the E-value. Ann Intern Med 2017; 167:268-274. doi: 10.7326/M16-2607.
The reason I'm somewhat clueless is that most of the studies examining those issues are observational studies. As I've noted many times in these postings, even prospective observational studies can only suggest an association, not cause and effect, because there are too many potential confounding variables that can't be measured. So, I was quite intrigued by an article I saw last summer, and I've been waiting for a time to let you know about it.
Sensitivity analysis has to do with assessing how "durable" a study's findings are. A simple way to look at this is to examine confidence intervals around a particular study result. Rather than just looking at the single point estimate, look at the statistical extremes of the result as a sort of worst (or best) case scenario. If, even assuming the worst extreme of the study result, the intervention still seems worthwhile, then it's more likely that the finding is true.
Still, unmeasured confounders can trip us up. These investigators propose a better way to perform a sensitivity (bias) analysis: the E-value. They define the E-value as "...the minimum strength of association ... that an unmeasured confounder would need to have with both the treatment and outcome ... to fully explain away a specific treatment-outcome association." In other words, a researcher could use this methodology to suggest how likely it is that an unmeasured confounder could explain the study outcomes. As suggested in the accompanying editorial, I'll be looking for E-values in any study reporting exposure-outcome associations, such as relative risks, rate or hazard ratios, and odds ratios.
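For readers who want to try this themselves, the E-value for a point-estimate risk ratio has a simple closed form given in the VanderWeele and Ding paper: E = RR + sqrt(RR × (RR − 1)), with protective effects (RR < 1) converted by taking the reciprocal first. A minimal sketch in Python (the function name is my own, not from the paper):

```python
import math

def e_value(rr: float) -> float:
    """E-value for a point-estimate risk ratio (VanderWeele & Ding, 2017).

    For protective associations (RR < 1), the reciprocal is taken first,
    as the authors recommend, so the formula applies symmetrically.
    """
    if rr <= 0:
        raise ValueError("risk ratio must be positive")
    if rr < 1:
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# Example: an observed risk ratio of 2.0 yields an E-value of about 3.41,
# meaning an unmeasured confounder would need an association of at least
# 3.41 with both the exposure and the outcome to fully explain away the
# observed association.
print(round(e_value(2.0), 2))
```

Note that this covers only the point estimate; the paper also describes applying the same formula to the confidence-interval limit closest to the null, which is the more conservative check.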
It's important to remember, as I've said in prior postings, that the traditional study hierarchy (the evidence pyramid) placing randomized controlled trials above observational studies isn't that simple. Although meta-analyses of randomized controlled trials and observational studies of the same clinical question sometimes have conflicting results, that isn't necessarily due to inherent features of those study designs. A well-done Cochrane review has shown that these differences are also affected by the degree of heterogeneity in the meta-analyses themselves.
As for what I do with my dietary habits, I'll continue my current coffee, wine, and Mediterranean-type foods consumption until I see more convincing evidence of risks and benefits.