Following development and validation of a sepsis prediction model described in a companion article, we aimed to use quality improvement and safety methodology to guide the design and deployment of clinical decision support (CDS) tools and clinician workflows to improve pediatric sepsis recognition in the inpatient setting.
CDS tools and sepsis huddle workflows were created to implement an electronic health record-based sepsis prediction model. These were proactively analyzed and refined using simulation and safety science principles before implementation and were introduced across inpatient units during 2020-2021. Huddle compliance, alerts per non-ICU patient days, and days between sepsis-attributable emergent transfers were monitored. Rapid Plan-Do-Study-Act (PDSA) cycles based on user feedback and weekly metric data informed improvement throughout implementation.
There were 264 sepsis alerts on 173 patients with an 89% bedside huddle completion rate and 10 alerts per 1000 non-ICU patient days per month. There was no special cause variation in the metric days between sepsis-attributable emergent transfers.
An automated electronic health record-based sepsis prediction model, CDS tools, and sepsis huddle workflows were implemented on inpatient units with a relatively low rate of interruptive alerts and high compliance with bedside huddles. Use of CDS best practices, simulation, safety tools, and quality improvement principles led to high utilization of the sepsis screening process.
Systematic sepsis screening is recommended in the evaluation and treatment of sepsis and has been shown to reduce morbidity and mortality.1,2 Sepsis screening tools integrated into the electronic health record (EHR) have been successfully implemented in emergency department (ED) settings2–4 but have been difficult to develop in the inpatient setting because of the lower incidence of sepsis in this environment, variable performance of the prediction models, and lower acceptance by clinicians.5–8 Following a prior failed implementation of an ED-based screening tool in the inpatient setting, our institution developed a 2-tiered EHR-based sepsis prediction model described in a companion article,9 which was integrated into clinical decision support (CDS) and clinical workflows specific to the acute care floors. The 2-tiered system was designed to improve situational awareness with a high-sensitivity “aware” threshold and to minimize false alerts leading to unnecessary bedside huddles with a low number-needed-to-alert “alert” threshold. Shared situational awareness is a proven quality improvement (QI) intervention for reducing emergency ICU transfers.10–12 The prediction model, CDS tool, and clinical workflows required evaluation and refinement before and during implementation in the clinical setting.
The team hypothesized that the use of Failure Modes and Effects Analysis (FMEA) and simulation before implementation followed by rapid, data-driven Plan-Do-Study-Act (PDSA) cycles could improve implementation of our sepsis prediction model, CDS tools, and clinical workflows. Following deployment, our team set a process metric goal of at least 85% clinician compliance with sepsis huddle attendance and set an outcome metric goal of increasing the number of days between sepsis-attributable ICU transfers.
Our institution is a 364-bed, quaternary-care, freestanding children’s hospital. Resident teaching teams care for most hospitalized patients and, less frequently, hospitalist or nurse practitioner models are used for patient care. Acute care floors are mixed medical and surgical units and include oncology patients. Patients with complex cardiac disease or requiring intensive care, including vasoactive medications and new positive-pressure ventilation, are cared for in our cardiac, neonatal, and pediatric ICUs. When there is concern for clinical deterioration or a need for ICU care, anyone can call the Critical Assessment Team for an assessment by the ICU. This team consists of an ICU advanced practice nurse or physician, charge nurse, and respiratory therapist.
During this study, our hospital was participating in the Improving Pediatric Sepsis Outcomes (IPSO) Collaborative hosted by the Children’s Hospital Association. Our institution has a multidisciplinary sepsis QI team including physicians, nurses, data analysts, pharmacists, QI experts, and clinical informatics specialists, with oversight from a team of executive sponsors.
Human Subjects Protection
Our sepsis QI work was reviewed by the Institutional Review Board and deemed exempt systems improvement (IRB 2018-2143, approved 06-27-2018). Medical record review was performed using a secure password protected internal database and our hospital’s EHR. The reporting of this study followed the Standards for Quality Improvement Reporting Excellence guidelines.13
Planning the Intervention
A team of clinicians, data scientists, QI experts, and clinical informatics experts was formed. Team leaders first met with multidisciplinary groups to obtain feedback about a previous CDS implementation in 2017 based on the ED sepsis screening tool that was perceived as a failure and decommissioned. Key themes from focus groups included the need for a data-driven model specific to our inpatient setting to minimize alert fatigue, a process that leveraged, rather than added to, existing documentation, simple CDS tools accompanied by improved training, and a postimplementation monitoring system that included avenues for clinician feedback. Key drivers were identified and interventions mapped to each driver (Fig 1). Our team then consulted with other institutions in the IPSO Collaborative to learn more about existing acute care sepsis models. A 2-tiered sepsis prediction model was then developed and validated in silent mode as described in the companion paper.9
Design of CDS Tools and Clinical Workflows
The sepsis inpatient team developed CDS tools to implement the data-driven prediction model using established best practices for CDS development.14 The CDS tools included a sepsis screening score visible on Patient Lists, a Storyboard banner, and an automated alert or Best Practice Advisory (BPA) to prompt a sepsis huddle (Fig 2).
After implementation, the sepsis screening scores updated in real time, and a patient’s score was visible on Patient Lists. Clinical workflows were developed for both the “aware” and “alert” tiers. When a patient’s score exceeded the “aware” threshold (score ≥ 15), the score was highlighted on Patient Lists so that these patients could be discussed during rounds, handoffs, and twice-daily huddles led by the unit charge nurse; if at any point there was a clinical concern for sepsis in an “aware” level patient, any member of a patient’s care team could request a bedside sepsis huddle. A score exceeding the “alert” threshold (score ≥ 30) was highlighted in a brighter color in the Patient Lists and triggered a BPA upon opening the patient EHR chart, which prompted a sepsis huddle. After a BPA was triggered and a huddle performed, the alert was silenced for the following 48 hours. During the lockout period for the interruptive alert or BPA, the sepsis score and sepsis storyboard notifications remained available to clinicians and huddles could be reconvened for clinical concern.
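The tiering and lockout logic above can be expressed as a minimal sketch. The thresholds (≥ 15 “aware,” ≥ 30 “alert”) and the 48-hour lockout come from the workflow description; the function names and in-memory representation are illustrative assumptions, not the actual EHR build:

```python
from datetime import datetime, timedelta
from typing import Optional

AWARE_THRESHOLD = 15               # high-sensitivity "aware" tier (score >= 15)
ALERT_THRESHOLD = 30               # low number-needed-to-alert "alert" tier (score >= 30)
BPA_LOCKOUT = timedelta(hours=48)  # interruptive alert silenced after a completed huddle

def tier(score: int) -> str:
    """Map a sepsis screening score to its display tier on Patient Lists."""
    if score >= ALERT_THRESHOLD:
        return "alert"
    if score >= AWARE_THRESHOLD:
        return "aware"
    return "none"

def should_fire_bpa(score: int, last_huddle: Optional[datetime], now: datetime) -> bool:
    """Fire the interruptive BPA only in the alert tier and outside the lockout window."""
    if tier(score) != "alert":
        return False
    if last_huddle is not None and now - last_huddle < BPA_LOCKOUT:
        return False  # within the 48-hour lockout; passive cues remain visible
    return True
```

Note that the lockout suppresses only the interruptive alert: the score and storyboard cues stay visible, and a huddle can still be convened on clinical concern.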
A successful sepsis huddle was defined as a timely, structured, bedside evaluation for sepsis by the existing clinical care team. An acute care sepsis recognition clinical care guideline was created (Fig 3). The team additionally created templated event notes and a sepsis navigator with links to existing sepsis care guidelines and order sets.
Before implementation, the CDS tools and workflows were tested with front-line clinicians through 19 in situ simulations and structured debriefing sessions across all acute care units over a 1-week period. Simulation cases were representative of typical patients on each unit; clinical informaticists built cases into an EHR training environment that used the new CDS tools, and mannequins were used for each case. Simulation participants included charge nurses, bedside nurses, advanced practice providers, residents, attending physicians, and medical students. Participants were able to view the simulated patient’s vital signs, history, medications, and laboratory data. The sepsis BPA alerted at the beginning of the case, prompting the nurse to contact the provider for a huddle at the bedside. CDS tools including the sepsis navigator, documentation tools, and order sets were available within the EHR training environment. Participants were provided with paper copies of draft huddle guidelines and resources to use during the simulation. The goal of each simulation was for the nurse and provider to use the new EHR-based tools to conduct a huddle, diagnose sepsis, and implement the first steps in medical management. A sepsis team member observed each simulation and recorded on a checklist if desired aspects of care were met. Simulation staff then concluded each simulation with a structured debrief using the Promoting Excellence And Reflective Learning in Simulation (PEARLS) framework.15
Using information from the simulation checklists and structured debriefings, we aggregated a list of how and when the new tools and associated workflows might fail and categorized them into 4 primary themes: communication, patient assessment, situational awareness, and CDS dependencies. Each theme encompassed several specific failure modes related to the new CDS tools and huddle process. The sepsis team then conducted an FMEA to prioritize these potential failure modes.
The FMEA session was an interactive, multidisciplinary session and included simulation participants, other front-line clinicians, and clinical informatics experts. Participants were divided between the 4 identified primary failure mode themes. Trained FMEA facilitators led each small group in a discussion of potential failure modes, failure effects, causes, and controls for each step. Groups used a scoring tool to determine the risk associated with each potential failure mode in the categories of severity, probability, and detection: “severity” indicating how large an impact the failure would have if it occurred, “probability” indicating the likelihood of a failure occurring, and “detection” indicating how difficult it would be to identify the failure if it occurred. The group assigned a score of 1 to 4 for severity, probability, and detection for each failure mode, with 1 representing the lowest risk and 4 the highest. These scores were then multiplied together to calculate the Risk Priority Number.16 The highest scoring failure modes included EHR functionality, huddle communication, huddle documentation, and timely vital sign documentation. Each group reported their findings to the larger group, stimulating a discussion about possible solutions to prevent or mitigate potential failures in this new workflow.
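The Risk Priority Number described above is simply the product of the three 1-to-4 scores, yielding values from 1 to 64. A minimal sketch (function name and input validation are illustrative):

```python
def risk_priority_number(severity: int, probability: int, detection: int) -> int:
    """Risk Priority Number: product of three 1-4 scores; higher = higher priority."""
    for name, score in (("severity", severity),
                        ("probability", probability),
                        ("detection", detection)):
        if not 1 <= score <= 4:
            raise ValueError(f"{name} must be between 1 and 4, got {score}")
    return severity * probability * detection

# Example: a failure mode rated severity 3, probability 2, detection 4
# yields an RPN of 24 out of a possible 64.
```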
The sepsis team addressed the highest priority potential failures before pilot implementation of the tool and huddle process. Most improvements focused on creating a more robust sepsis huddle toolkit, which included communication and workflow guides for the sepsis huddle process as well as clinical pearls for sepsis care. Communication guidance included standardized paging verbiage to contact necessary huddle members, identification of roles and responsibilities in the huddle, huddle scripting, and scripting for discussion with families. Clinical pearls included specific guidance for how to evaluate for sepsis, including physical exam findings, laboratory findings, and signs of organ dysfunction. This toolkit was embedded within the sepsis clinical care guideline (Supplemental Fig 6).
The intervention was piloted on a single 48-bed general pediatrics medical inpatient unit starting on August 25, 2020. Multimodal education for faculty and staff on the new sepsis tools for the pilot unit included didactic presentations, 1-on-1 teaching, unit signage, e-mail updates, and a video demonstrating the sepsis CDS tools and sepsis huddle process. After each huddle, the unit charge nurse completed a paper form documenting the huddle outcome, team members present, and time of the huddle. An e-mail was sent to huddle attendees to solicit feedback on the sepsis alert and huddle process. In addition to soliciting general feedback, the team specifically asked attendees if they felt huddle members were able to be readily identified and assembled, if there was clear communication during the huddle, and if they had sufficient education and resources to use the CDS and participate in the huddle. The sepsis team met weekly, reviewed each BPA event and any sepsis-attributable emergent transfers that occurred, and used data to inform PDSA cycles on the pilot unit. This included examining huddle compliance, clinician feedback, and patient outcomes. An early PDSA cycle addressed feedback that having hard copies of huddle materials readily available would be helpful, so the team created a sepsis binder that contained all huddle materials, workflows, and resources and enlisted charge nurses to bring the binder to each sepsis huddle. Initial data analysis revealed poor EHR documentation compliance and confusion regarding where to document, thus another PDSA cycle simplified the documentation for both nurses and providers within the EHR. To further increase huddle documentation compliance, a subsequent PDSA cycle included sending documentation tips in e-mail messages sent to providers following sepsis huddles.
As EHR documentation improved, the sepsis huddle paper form was discontinued to minimize redundant documentation. To address a trend of false alerts occurring shortly after transfer from our ED or ICU, where the patient had already been treated for sepsis and whose condition had improved, a 6-hour BPA lockout was enacted following internal transfers. Clinicians were encouraged to use the sepsis score on their Patient List in conjunction with their clinical judgment to determine the need for a sepsis huddle during the lockout period for the BPA.
Following stability in the process on the pilot unit, the intervention was spread to all non-ICU inpatient units on February 15, 2021. Hospital-wide clinician education included didactic presentations, 1-on-1 teaching, e-mail reminders, simulations, and mandatory online education. During implementation, the facilitation team continued to meet weekly to review data, discuss clinician feedback, and determine subsequent PDSA cycles. PDSA cycles resulted in a BPA lockout for patients with stem cell transplants within the past 100 days because of a high alert burden in this patient population, primarily because of bandemia that occurs during the engrafting process. This population is cared for by a small team with experience identifying sepsis in their patient population; they continued to use the sepsis screening score and call huddles for clinical concern for sepsis despite the BPA lockout. There were no BPA lockouts associated with sepsis-attributable emergent transfers in this patient population during the study period. Unit champions continued to provide feedback to clinical teams and shared data with their units. As these processes became more ingrained in routine clinical workflows, e-mail follow-up was sent only after huddles with incomplete documentation.
The sepsis team identified outcome, process, and balancing measures. The primary outcome measure was days between sepsis-attributable emergent transfers to the ICU; days between emergent ICU transfers is a validated outcome metric for critical deterioration on acute care units.10–12,17 A sepsis-attributable emergent transfer was defined as a patient who transferred from an inpatient unit to the ICU, who required intubation, vasopressors, or 3 or more fluid boluses in the first hour of, or before, ICU admission, and who met our sepsis intention-to-treat criteria. The primary process metric was compliance with the bedside sepsis huddle. Huddle compliance was initially measured by tracking paper forms completed by charge nurses; starting in October 2020, huddle occurrence was charted within the EHR and thus the metric was tracked in this manner. A balancing measure of alerts per 1000 non-ICU patient days was developed.
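The balancing measure is a simple normalized rate; a one-line sketch (the function name is illustrative):

```python
def alerts_per_1000_patient_days(alert_count: int, non_icu_patient_days: int) -> float:
    """Balancing measure: interruptive alerts normalized per 1000 non-ICU patient days."""
    return 1000 * alert_count / non_icu_patient_days

# Example: 26 alerts over 2600 non-ICU patient days in a month gives a rate of 10.0.
```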
Study of the Intervention
During our observational time series study, data were collected on process measures to ensure consistency of sepsis huddles. During this time, data collection shifted from a checklist-based form completed by unit-based charge nurses to solely EHR-based documentation. The validity of process data was evaluated through weekly meetings with unit-based sepsis champions and project leaders. Sepsis-attributable emergent transfers were identified from the EHR and validated with physician chart review.
Primary analysis of both process and outcome measures was performed using statistical process control charts. Process metrics were tracked using p charts. Sepsis-attributable emergent transfers were tracked using days between events on a t chart because of the rarity of events. Alerts per 1000 non-ICU patient days were monitored using a u chart. We employed established rules for identifying special cause variation.18
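For the u chart, the centerline and control limits follow the standard formulas: the centerline is total events divided by total exposure, and the limits for each subgroup i are the centerline ± 3·sqrt(centerline / n_i), where n_i is that subgroup's exposure. A minimal sketch assuming monthly subgroups, with illustrative names and example data:

```python
import math

def u_chart_limits(counts, exposures):
    """Centerline and 3-sigma limits for a u chart (events per unit of exposure).

    counts:    events per subgroup (e.g., alerts per month)
    exposures: exposure per subgroup in the chosen unit
               (e.g., non-ICU patient days / 1000)
    """
    u_bar = sum(counts) / sum(exposures)  # centerline: overall rate
    limits = []
    for n in exposures:
        sigma = math.sqrt(u_bar / n)      # subgroup-specific standard error
        lcl = max(0.0, u_bar - 3 * sigma)  # a rate cannot fall below zero
        ucl = u_bar + 3 * sigma
        limits.append((lcl, ucl))
    return u_bar, limits
```

Because the limits depend on each subgroup's exposure, months with fewer patient days get wider limits, which is why the u chart (rather than a fixed-limit c chart) suits a varying non-ICU census. The t chart for days between rare events uses a different, transformation-based construction not shown here.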
Across the pilot implementation phase on 1 medical unit and the 6-month hospital-wide implementation study period, there were 264 sepsis electronic alerts on 173 unique patients. Thirty-nine percent of alerts occurred on general medicine services, 33% on hematology and oncology services, 25% on medical subspecialty services, and 3% on surgical services. Seventy-seven percent of patients with alerts had 1 alert during their inpatient encounter, 15% had 2 alerts, 5% had 3 alerts, and 4% had 4 or more alerts. The maximum number of alerts per encounter was 8, which occurred in a patient with a stem cell transplant before the BPA exclusion for this patient population.
Completion of bedside sepsis huddles occurred 89% of the time and was stable over time (Fig 4). The mean number of alerts per 1000 non-ICU patient days per month was 10; range 4 to 18 (Supplemental Fig 7). During our 6-month implementation period, there was no special cause variation in our primary outcome metric, days between sepsis-attributable emergent transfers (Fig 5). The average sepsis screening score at time of ICU transfer for emergent transfers was 31 (range 18–55), that is, all patients were either in the “aware” or “alert” level before transfer.
Following implementation of our CDS tools and clinical workflows, we observed high compliance with sepsis huddle processes and minimal alert fatigue based on clinician feedback and average alerts per patient days per month. We attribute this high compliance to the combination of rigorous preimplementation testing, pilot testing with rapid cycle PDSAs, and a dedicated team with executive sponsor support.
The design and deployment of our CDS tools leveraged several best practices and lessons learned that may be generalizable to other settings and clinical problems. As we built the CDS tool and associated clinical workflows, we focused on best practices for optimizing clinician utilization and perception of value. This included leveraging existing EHR data and using combined active and passive alerts with suggested actions for the user.19,20 Further examples of CDS best practices that we employed included engagement of clinicians in the design process and preimplementation testing of the CDS tools, through simulation, within the specific local context of each unit.21,22 The combination of simulations and FMEA to prospectively analyze sepsis CDS tools and clinical workflows allowed for rigorous testing within the specific context of each unit, analysis, and improvement in the process before implementation, as well as widespread staff engagement. This was particularly beneficial given the low overall incidence of sepsis on the inpatient floors. The combination of simulation and FMEA has been described in the prospective evaluation of a new construction healthcare space as well as in the evaluation of emergency response procedures, but to our knowledge has not previously been used to proactively evaluate how clinicians interact with CDS tools.23–25 Finally, using several sources of real-time data to inform frequent PDSA cycles was critical to successful implementation and uptake of the huddle process across the institution. We used EHR BPA and huddle documentation data, clinician feedback, and data collected by charge nurses to drive rapid-cycle improvements. By regularly meeting as a multidisciplinary team, we were able to monitor and improve our processes in real time, incorporating clinician feedback as well as providing rapid targeted education as needed.
The involvement of clinical informatics team members allowed for proactive identification of EHR issues as well as rapid implementation of EHR-based interventions as needed.
Our study has several limitations. First, this project was completed at an urban, single-center, quaternary children’s hospital with data science, QI, and clinical informatics expertise that supported development and implementation, which may limit its reproducibility in other settings. We did not see special cause variation in the number of days between sepsis-attributable emergent transfers during the 6 months of house-wide implementation, but this may be attributed to the overall low incidence of sepsis-attributable emergent transfers. However, at the conclusion of the implementation period there was a period of 4 months with no sepsis-attributable emergent transfers. We also acknowledge that factors outside the focus of this study may impact this metric, such as components of timely treatment of sepsis like antibiotic and fluid administration. Furthermore, emerging sepsis guidance encourages earlier consideration of vasopressors, which may impact this outcome metric.1 It is possible that our CDS tool promoted earlier transfer to the ICU and aggressive therapy for patients with potential sepsis, thereby inadvertently affecting the validity of our primary outcome. This earlier transfer to the ICU and more aggressive early therapy may ultimately result in better outcomes, but it could also lead to unnecessary resource utilization and overtreatment and remains a topic that needs further study. Although mortality is a commonly reported outcome for sepsis,26 it is impractical as an outcome metric for this study because of the rare nature of the event in our inpatient population.
As with any QI intervention, our prediction model and CDS tools need to be reevaluated regularly postimplementation to assess opportunities to improve the performance of our model and processes. Further monitoring is required to assess the longitudinal impact of our sepsis prediction model and screening on patient outcomes. Longitudinal multicenter evaluation of the impact of sepsis screening practices in acute care settings is also warranted given current QI and legislative recommendations for sepsis screening and the effort required to create and maintain these tools.
Using CDS best practices, simulation, safety tools such as FMEA, and data-driven rapid PDSA cycles allowed for successful deployment of a novel 2-tiered sepsis prediction model, CDS tools, and clinical workflows, with high compliance across our institution. Further study is required to assess the longitudinal clinical impact of sepsis screening in the acute care setting.
COMPANION PAPERS: Companions to this article can be found online at www.hosppeds.org/cgi/doi/10.1542/hpeds.2022-006964 and www.hosppeds.org/cgi/doi/10.1542/hpeds.2023-007285.
Drs Stephen, Lucey, and Sanchez-Pinto conceptualized and designed the study, coordinated and supervised data collection, and drafted the manuscript; Drs Carroll and Sanchez Pinto, and Mr Jones contributed to the acquisition of data, statistical analyses, and interpretation of the data; Mr Hoge, Ms Maciorowski, Ms O’Connell, Ms Schwab, and Ms Rojas contributed to the improvement project leadership and execution and the design of data collection instruments; and all authors reviewed and revised the manuscript and approved the final manuscript as submitted.
FUNDING: No external funding.
CONFLICT OF INTEREST DISCLOSURES: The authors have indicated they have no conflicts of interest relevant to this article to disclose.