Seven hours into her overnight shift, Dr Carter opens the computer to complete her sixth admission of the evening. “BEEP! BEEP! BEEP!” her pager interrupts with “Sepsis Alert,” the fourth in the past 48 hours for a 12-year-old boy with metabolic disease, technology dependence, and paroxysmal sympathetic hyperactivity who was admitted 1 week ago for dehydration and increased storming. She leaves her computer to attend the mandatory bedside huddle. The result is the same as the 3 previous ones: storming caused his fever and tachycardia, not sepsis; no change to management. As Dr Carter writes a progress note documenting the huddle, her pager erupts again, requesting orders for her other patient.
Pediatric severe sepsis in the United States costs $7.3 billion annually, one-fifth of pediatric hospitalization costs.1 Over the past 2 decades, professional organizations have built awareness, developed guidelines, and driven efforts to recognize sepsis and intervene early in its course.2,3 Early initiatives began in emergency departments (EDs), although state legal mandates and national quality improvement collaboratives have led to implementation hospital-wide, including in general inpatient units.4,5
Most guidelines frame their recommendations around sepsis response systems that incorporate clinical decision support tools (hereinafter referred to as sepsis scores) to identify suspected sepsis and trigger time-sensitive bundles of laboratory evaluation, intravenous fluid resuscitation, and broad-spectrum antibiotic administration.2,3 Sepsis response systems may increase value by improving outcomes and reducing costs associated with delayed recognition of pediatric sepsis.6,7 Conversely, they may decrease value if they overidentify sepsis, resulting in unnecessary interventions and care team dissatisfaction.8–10 Maintaining value in sepsis care in pediatric inpatient units presents unique challenges. In this commentary, we outline these challenges and encourage hospitals caring for children to thoughtfully consider them when implementing sepsis response systems in inpatient units.
Challenge 1: Sepsis Scores Are Used To Detect Something We Cannot Practically Define
Consensus pediatric definitions use the systemic inflammatory response syndrome (SIRS), sepsis, severe sepsis, and septic shock continuum11,12; however, SIRS has poor sensitivity for critical illness and high prevalence among children who are mildly ill.13 Other definitions use an intellectual construct: dysregulated host response to infection resulting in organ dysfunction.3,14 However, strategies for identifying a dysregulated immune system are nebulous, with multiple criteria developed to assess for organ dysfunction (ie, Sequential [Sepsis-related] Organ Failure Assessment, pediatric Sequential [Sepsis-related] Organ Failure Assessment, and PEdiatric Logistic Organ Dysfunction-2 scores).14–16 These SIRS and organ dysfunction criteria differ in their reliance on laboratory values, validation in children, and generalizability across care settings,16,17 complicating efforts to use a common definition to guide clinical decision-making, research, and quality improvement.15
Lacking accepted clinical definitions, researchers turned to intervention-based definitions that relied on provider behavior (eg, administering a sepsis bundle) rather than patient characteristics alone. Intervention-based definitions inherently overestimate sepsis score test performance because of confounding by medical intervention (positive alerts trigger interventions sufficient for the case definition regardless of whether the patient actually has sepsis) and inflate observed incidence. Balamuth et al18 found the rate of severe sepsis cases increased 1.5 times after implementation of a sepsis score when including an intervention-based definition in their analysis. Even with non–intervention-based definitions, screening with sepsis scores may trigger closer monitoring and laboratory evaluations that reveal otherwise insignificant abnormalities sufficient to meet consensus definitions of pediatric sepsis, resulting in higher observed incidence.
Challenge 2: Identifying Sepsis at a Single Point in Time in the ED Is Different From Identifying Sepsis Longitudinally During a Patient’s Hospitalization
Sepsis scores in the ED improve sensitivity compared to physician judgment, but they also generate numerous false-positives, with reported positive predictive values between 4% and 25%.6,18–21 Hospitalized children have less variable illness severities, which makes discriminating between patients with and without sepsis on the basis of subtle changes more challenging and error-prone. Furthermore, repetitive screening throughout an entire hospitalization creates even more opportunities for increased testing and the discovery of mild, self-correcting abnormalities (eg, leukocytosis) that are sufficient in the setting of suspected infection for the care team to diagnose sepsis. Such overdiagnosis represents a true-positive epidemiologically, although the resultant interventions are unlikely to benefit the patient. Conversely, repetitive screening may increase false-positives, with one study finding positive predictive values 3 times lower in inpatient units than in the ED.21 Another study evaluating an exclusively inpatient sepsis score found that whereas only 4% of patients had possible or confirmed sepsis, 17% of all patients triggered alerts, 40% of whom alerted multiple times.22 Frequent and misdirected alerts from repetitive screening would undoubtedly contribute to alert fatigue.23
Challenge 3: Performance Measures Incentivize Rapid Treatment, Introducing Diagnostic Error
Hospitalized children do not present de novo; an initial evaluation is completed, and their responses to interventions are observed, with multiple opportunities to reassess and individualize treatment plans on the basis of a known illness trajectory. Yet sepsis performance measures incentivize rapid treatment relative to a time zero, defined by scores agnostic to the underlying etiology (ie, 2 children with fever, tachycardia, and leukocytosis trigger identical alerts despite one having bronchiolitis and the other meningitis).24 Clinicians experience tension when seeking to reconcile recommendations for prompt and aggressive treatment with the difficulty of identifying sepsis in real-time while also pursuing other facets of the art of inpatient medicine, such as observation over time, safely doing less, and antimicrobial stewardship.25 When sepsis scores misfire, these incentives contribute to diagnostic error, overtreatment, and a feeling among some that sepsis response systems promote an inflexible, “treat first, ask questions later” approach.25–27 Paradoxically, misfired sepsis alerts could delay care if care teams mistakenly anchor to a diagnosis of sepsis.
Approaches for Successful Implementation
Given the difficulties in defining sepsis and developing effective sepsis scores, hospitals caring for children must use caution when implementing and evaluating sepsis response systems. The following approaches may help address the above challenges.
Unbiased Case Detection
Sepsis scores should be developed and validated prospectively within populations not directly affected by their implementation. This can be accomplished by running the score “silently” in the electronic health record during development so that care teams do not see and act on it. After implementation, teams should monitor incidence rates unconfounded by medical intervention using recently developed pediatric sepsis surveillance definitions to assess their impact.28,29
Improved Score Performance
To address problems with repeated screening, hospitals may suppress alerts for a prespecified duration after each positive alert. This balances the need to reduce false-positives with the recognition that a patient’s condition may evolve, necessitating reevaluation of the diagnosis and plan. However, as demonstrated by Dr Carter’s patient, this may be insufficient over long hospitalizations. To improve the performance of sepsis scores for inpatient units, they should be developed using longitudinal data from hospitalized children (incorporating patterns, eg, vital sign trends, and working diagnoses) rather than directly applying ED-derived, threshold-based scores. Newer machine-learning models may improve accuracy but have yet to be validated in practice.30–32
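As a minimal illustration of the suppression approach described above, the logic reduces to tracking the time of each patient’s last positive alert and withholding repeat alerts within a fixed window. This is a sketch only: the 6-hour window, class name, and patient identifiers are assumptions for illustration, not a validated or recommended configuration.

```python
from datetime import datetime, timedelta

# Hypothetical suppression window; a real system would tune this locally
# and weigh it against the risk of missing true clinical deterioration.
SUPPRESSION_WINDOW = timedelta(hours=6)

class AlertSuppressor:
    """Suppress repeat sepsis alerts for a fixed window after each positive alert."""

    def __init__(self, window: timedelta = SUPPRESSION_WINDOW) -> None:
        self.window = window
        self.last_alert: dict[str, datetime] = {}  # patient_id -> time of last fired alert

    def should_fire(self, patient_id: str, now: datetime) -> bool:
        """Return True (and record the alert) only if the patient is outside
        the suppression window from their last fired alert."""
        last = self.last_alert.get(patient_id)
        if last is not None and now - last < self.window:
            return False  # still within suppression window: do not re-alert
        self.last_alert[patient_id] = now
        return True
```

Note that suppression is per patient, so one frequently alerting patient (like the boy in the opening vignette) does not silence alerts for others; the window restarts with each fired alert, preserving periodic reevaluation over long hospitalizations.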
Another benefit of model-based approaches is the ability to report the likelihood of sepsis as a continuous probability, enabling graded rather than dichotomized responses.33 For example, low-risk patients could receive increased monitoring and supportive therapies (eg, antipyretics), with complete bundles reserved for the sickest patients with the highest likelihood of sepsis. Similar approaches to neonatal sepsis have reduced antibiotic use without adverse effects,34 and the most recent pediatric sepsis guidelines recommend 1-hour versus 3-hour timelines for antibiotic delivery on the basis of the presence or absence of shock.3 Defining acceptable risk thresholds and aligning performance measures with this approach will be important to ensure that prospective decisions made under uncertainty are not adjudicated against retrospectively assigned diagnoses lacking clinical context.35
A Systems Approach
Sepsis response systems are complex interventions.36 Early identification and early response depend not on sepsis scores alone but also on clinical intuition and care team communication, both of which are bolstered by formalized tools and processes. Even highly accurate sepsis scores may fail in practice if frontline users find them untrustworthy, struggle applying them to individual patients, and fail to integrate alerts into standard workflows.37 Successful implementation acknowledges this complex interplay of interdependent components, ensures that technology supports (not overrides) clinical judgment, is mindful of challenges encountered by frontline users, and progresses iteratively while adapting to local context.35,36,38 This requires a multidisciplinary team (eg, nurses, physicians, phlebotomists, and pharmacists), training and feedback processes, and a functioning data infrastructure. Finally, implementation plans should, at the outset, consider processes for deimplementation if efforts reveal no improvement or harm.36,39
Dr Carter’s experience reveals the burdens of alert fatigue and task switching associated with poorly performing sepsis scores. The care team in this case avoided the trap of automation bias and unnecessary interventions by thoughtfully considering the differential diagnosis and clinical implications of the alert before reflexively ordering tests and interventions. Overcoming these challenges requires sepsis response systems designed specifically for the patient populations and workflows found in inpatient settings. As such, sepsis scores should leverage longitudinal patient data to improve detection rather than treating those data as a nuisance to overcome. The focus of preimplementation testing must be not only on achieving acceptable score performance but also on usability for frontline clinicians, and unbiased monitoring of postimplementation outcomes is necessary to ensure these efforts represent high-value care.
Dr Harrison conceptualized and designed the study, analyzed and interpreted the data, and drafted the initial manuscript; Dr Workman contributed to analysis and interpretation of the data and critically reviewed and revised the manuscript; Dr Bonafide contributed to analysis and interpretation of the data and critically reviewed the manuscript; Dr Lockwood conceptualized and designed the study and contributed to analysis and interpretation of the data, and critically reviewed and revised the manuscript; and all authors approved the final manuscript as submitted.
FUNDING: Supported by the National Center for Advancing Translational Sciences of the National Institutes of Health (Award Number UL1TR002538), as well as by the Primary Children’s Hospital Foundation. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Funded by the National Institutes of Health (NIH).
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.