As much as I poke fun at contrived acronyms, I confess to favoring this one. I felt like I was in Nerd Nirvana after reading this early release article:
Evans SR, Pennello G, Pantoja-Galicia N, et al. Benefit-risk evaluation for diagnostics: a framework (BED-FRAME). Clin Infect Dis 2016 May 18. pii: ciw239. [Epub ahead of print]
I struggled over whether to use this article for my precious 5th Tuesday posting, where I've freed myself from the confines of AAP Grand Rounds to comment on any article I want. I finally decided that I loved this article too much, so I'm indulging myself.
The article will appeal only to true EBM nerds. I promise not to bore you with the mathematical minutiae, but I really think these authors' approach, or something similar to it, represents a leap forward in how we use diagnostic tests.
We all know that no diagnostic test is perfect, but beyond that fact lies the dilemma of how these inaccuracies affect clinical outcomes in different patient scenarios. BED-FRAME is an attempt at a graphical framework for understanding how to use test results, based on a test's diagnostic performance and incorporating all those delightful terms like sensitivity, specificity, likelihood ratios, and disease prevalence.
BED-FRAME has a 5-step approach:
1. A visual display of the "expected clinical impact" of the "expected diagnostic yield." In their example of antibiotic susceptibility testing, this would vary both with the prevalence of disease and with the susceptibility rate in that particular time and place (see the first sketch after this list).
2. If comparing 2 diagnostic tests, another graphical display of the expected between-test differences in false and true negatives as they vary with disease prevalence.
3. A number-needed-to-test, analogous to number-needed-to-treat: how many patients would need to be evaluated with a new test, rather than the comparator test, to find 1 additional true positive result? (Also covered in the first sketch below.)
4. A graph of "weighted accuracy" versus "relative importance." The accuracy of a test (i.e., how often it is correct, both when positive and when negative) always depends on disease prevalence, so one can't really use a single accuracy number as a stand-alone figure for all clinical settings. Weighted accuracy corrects for the relative importance of different errors; for example, incorrectly finding a bacterium susceptible to an antibiotic usually is a bigger problem than incorrectly designating it as resistant (see the second sketch below).
5. A visual display of the difference in weighted accuracy as a function of relative importance and of prevalence. This is the big-picture summary view.
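To make steps 1 through 3 concrete, here's a minimal sketch of the arithmetic involved. The sensitivity and specificity figures are invented for illustration, and this is my own simplification of the authors' displays, not their actual calculations:

```python
# Sketch of the expected diagnostic yield in steps 1-3, using made-up numbers.
# For n patients at a given disease prevalence, a test's sensitivity and
# specificity determine the expected counts of true/false positives/negatives.

def expected_yield(n, prevalence, sensitivity, specificity):
    diseased = n * prevalence
    healthy = n - diseased
    return {
        "TP": diseased * sensitivity,        # true positives
        "FN": diseased * (1 - sensitivity),  # false negatives
        "TN": healthy * specificity,         # true negatives
        "FP": healthy * (1 - specificity),   # false positives
    }

# Two hypothetical tests: a new assay vs. a comparator (numbers invented).
new = expected_yield(n=1000, prevalence=0.20, sensitivity=0.95, specificity=0.90)
old = expected_yield(n=1000, prevalence=0.20, sensitivity=0.80, specificity=0.95)

# Step 3: number-needed-to-test -- how many patients must be evaluated with
# the new test instead of the comparator to find 1 additional true positive.
extra_tp_per_patient = (new["TP"] - old["TP"]) / 1000
number_needed_to_test = 1 / extra_tp_per_patient
print(f"Extra true positives per 1000 patients: {new['TP'] - old['TP']:.0f}")
print(f"Number needed to test: {number_needed_to_test:.0f}")
```

With these made-up numbers, switching to the new test nets 30 extra true positives per 1,000 patients, for a number-needed-to-test of about 33. Change the prevalence and the whole picture shifts, which is exactly why the authors plot these quantities across a range of prevalences.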
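And for steps 4 and 5, one simple way to express weighted accuracy (again my own illustration; the authors' exact weighting scheme may differ) is as a weighted average of sensitivity and specificity, where the weight captures the relative importance of the two kinds of error:

```python
# Sketch of weighted accuracy for steps 4-5 (my simplification, not
# necessarily the authors' exact formula). The weight w (0 to 1) encodes the
# relative importance of the two error types: w = 0.5 is the plain average
# of sensitivity and specificity; higher w penalizes false negatives more.

def weighted_accuracy(sensitivity, specificity, w):
    return w * sensitivity + (1 - w) * specificity

# Same two hypothetical tests as in the previous sketch.
for w in (0.2, 0.5, 0.8):
    wa_new = weighted_accuracy(0.95, 0.90, w)
    wa_old = weighted_accuracy(0.80, 0.95, w)
    winner = "new test" if wa_new > wa_old else "comparator"
    print(f"w={w}: new={wa_new:.3f}, comparator={wa_old:.3f} -> prefer {winner}")
```

With these invented numbers, the preferred test flips at a weight of about 0.25; that kind of crossover, mapped across prevalence as well, is what the big-picture summary view in step 5 displays.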
OK, I said I wouldn't bore you with the math, but I suspect I've now confused a large swath of readers. I'll use the authors' example of resistant Acinetobacter infections to try to clear the fog. They wanted to compare 2 rapid molecular assays (designated PCR/ESI-MS and MB; don't sweat these acronyms!) for detecting carbapenem resistance, and here's their summary graph:
The green-shaded areas indicate that the PCR/ESI-MS test should be used in regions where the susceptibility rates and relative importance lie within that zone, while MB is preferred if your numbers fall within the orange area. So there is no one best test for all circumstances; one needs to assess all those variables (disease prevalence, susceptibility rates, relative importance) to choose which Acinetobacter rapid susceptibility test to use. Pretty complicated, but much better for our patients.
Of course this whole idea is a work in progress, and physicians would likely have electronic decision aids to choose the correct test. Just remember, you heard it first here!
And thanks to all of you for indulging my bit of Nerd Nirvana following the holiday weekend.