Pediatric hospitalists frequently interact with clinical decision support (CDS) tools in patient care and use these tools for quality improvement or research. In this methodology paper, we provide an introduction and practical approach to developing and evaluating CDS tools within the electronic health record. First, we define CDS and describe the types of CDS interventions that exist. We then outline a stepwise approach to CDS development, which begins with defining the problem and understanding the system. We present a framework for metric development and then describe tools that can be used for CDS design (eg, 5 Rights of CDS, “10 commandments,” usability heuristics, human-centered design) and testing (eg, validation, simulation, usability testing). We review approaches to evaluating CDS tools, which range from randomized studies to traditional quality improvement methods. Lastly, we discuss practical considerations for implementing CDS, including the assessment of a project team’s skills and an organization’s information technology resources.

While caring for patients, pediatric hospitalists frequently interact with clinical decision support (CDS), such as order sets, data displays, and alerts. CDS tools are increasingly used in quality improvement (QI) and research studies to support the implementation of evidence-based practice and improve care processes. The complexity of CDS tools is also growing, with the application of predictive modeling and artificial intelligence in patient care.

Although CDS has been shown to enhance care processes and has the potential to improve outcomes, there remains significant variation in outcomes targeted by CDS and the availability of CDS for common conditions across pediatric hospitals.1,2  In addition, like any poorly designed or inadequately evaluated intervention, CDS can lead to negative outcomes and contribute to clinician burnout (eg, through alert fatigue).3  For hospitalists engaged in QI and research work, understanding the principles and science underpinning CDS is essential.

CDS is a term that describes tools designed to provide “clinical knowledge and patient-related information, intelligently filtered or presented at appropriate times, to enhance patient care.”4  Fundamentally, CDS must (1) provide clinical knowledge that is (2) targeted to the right users, patients, and times in the workflow and (3) presented in a user interface that ideally makes the right decision easy. The presence of these 3 components distinguishes CDS from other forms of information technology.

CDS can be categorized in several ways, including the type of intervention employed:

  1. Documentation forms or templates

  2. Relevant data displays

  3. Order suggestions or order sets

  4. Support for care protocols or pathways

  5. Reference information and guidance

  6. Reactive alerts and reminders

CDS can also be categorized by whether it interrupts a clinician’s workflow. Although interruptive CDS tools (eg, “pop-up” alerts) may be most recognizable, CDS tools are often subtler and non-interruptive, sometimes using behavioral economic strategies like default selections and choice reduction to nudge clinicians to do the right thing.

Whether a CDS tool is employed as part of QI, clinical research, or implementation science, the early consideration of design elements, implementation, and evaluation is crucial to achieving the best outcomes. What follows is an introduction and practical approach (Fig 1) to developing and evaluating CDS tools within the electronic health record (EHR).

FIGURE 1

Practical stepwise approach to CDS development and evaluation. SEIPS, Systems Engineering Initiative for Patient Safety; SPC, statistical process control.


An essential precursor step to CDS design is defining the problem. Is your project targeting poor shared situational awareness across a multidisciplinary team? Are you tackling a lack of clinician uptake of a new evidence-based best practice? An effective CDS tool to improve shared situational awareness will look different from one that nudges an individual clinician to make a particular clinical choice. Traditional QI tools like the “5 Whys,” fishbone diagrams, and Pareto charts can help define the problem.5 

Effective CDS design begins with understanding the system. Project leaders should identify (1) the people doing the work and how they function within their environmental setting and the organizational structure, and (2) the desired or ideal process by which the work will get done. These 2 components then inform (3) the design of technology to support the work. Different approaches can be used to understand the work system, including process maps and the Systems Engineering Initiative for Patient Safety model.6  Notably, after evaluating the people and processes involved, a project team might determine that an EHR-based CDS tool may not be the best solution.

Like any other intervention, the success of a CDS tool should be measured against metrics that the project team defines early in the development process. Traditional CDS evaluation uses CDS use–implementation measures that describe how users interact with the CDS (Fig 2). Examples include order set utilization and alert responses. These measures are generally easy to develop because many CDS tools log timestamped events in EHRs, and data are retrievable through self-service applications. Once the desired immediate action for an alert or order set is defined (eg, removing an offending medication), the acceptance or override rate can be calculated and displayed over time. However, these metrics cannot easily distinguish the clinical appropriateness of user actions and do not measure whether the CDS is achieving its intended purpose. For example, a QI team may design an interruptive alert recommending the influenza vaccine that adds the vaccine as a pending order. When a clinician sees this alert, they may select “accept” to get out of the alert but then delete the vaccine order and never administer it to the patient. In this case, the alert will appear to have been “accepted” without achieving the intended outcome of influenza vaccine administration.
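As a sketch of how such a measure might be derived, consider computing a monthly alert acceptance rate from a timestamped event export. This is illustrative only: the `alert_time` and `action` columns are hypothetical and do not correspond to any specific EHR vendor’s schema.

```python
import pandas as pd

# Hypothetical export of timestamped alert events from a self-service
# reporting tool; column names and values are illustrative only.
events = pd.DataFrame({
    "alert_time": pd.to_datetime([
        "2024-01-03", "2024-01-10", "2024-01-17",
        "2024-02-05", "2024-02-12", "2024-02-19",
    ]),
    "action": ["accept", "override", "accept",
               "accept", "accept", "override"],
})

# Acceptance rate per month, suitable for display over time.
monthly = (
    events
    .assign(accepted=events["action"].eq("accept"))
    .groupby(events["alert_time"].dt.to_period("M"))["accepted"]
    .mean()
)
print(monthly)
```

As the influenza example illustrates, a rate computed this way describes only how users interacted with the alert, not whether the intended clinical action occurred.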

FIGURE 2

Metric development for CDS. If a QI team develops an order set aligned with bronchiolitis evidence-based practice,17  CDS use–implementation measures, such as order set use, may serve as a marker for implementation reach. Process measures, such as albuterol use or continuous pulse oximetry monitoring, serve as the anticipated mechanism of impact, with length of stay as the potential outcome measure of interest. Linking these measures together can provide the insight needed to adjust interventions. If continuous pulse oximetry rates in bronchiolitis (process measure) remain elevated while bronchiolitis order set usage rates (CDS use measure) are low, the next step may be to increase adoption of the order set. By contrast, if order set usage rates are high, the best next step may be to reexamine the theory of improvement and determine whether the CDS needs to be redesigned to promote the appropriate behaviors, or whether an alternative driver is responsible for the persistently high monitoring rates. RA, room air.


By contrast, process–outcome measures of CDS effectiveness focus on the care processes and clinical outcomes for which the CDS was originally designed. In the example above, influenza vaccine administrations per eligible visit would more appropriately capture the intended goal of the CDS. Most process–outcome measures are specific to an individual project and often require greater resources to collect, such as the custom development of EHR data queries. Thus, institutional data and analytics resources may be a primary barrier restricting the pace and effectiveness of QI and the learning health system.7 
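The gap between a use measure and a process–outcome measure can be made concrete by linking CDS event logs to downstream clinical records. The sketch below uses made-up visit identifiers and column names to show how an alert can be “accepted” at most visits while the vaccine is administered at far fewer.

```python
import pandas as pd

# Illustrative only: eligible visits with the alert response, and a
# separate record of vaccine administrations (identifiers are made up).
visits = pd.DataFrame({
    "visit_id": [1, 2, 3, 4],
    "alert_accepted": [True, True, True, False],
})
vaccines_given = pd.DataFrame({"visit_id": [1, 3]})

# Link the CDS log to the administration record per eligible visit.
merged = visits.merge(vaccines_given.assign(vaccinated=True),
                      on="visit_id", how="left")
merged["vaccinated"] = merged["vaccinated"].fillna(False).astype(bool)

acceptance_rate = merged["alert_accepted"].mean()    # CDS use measure
administration_rate = merged["vaccinated"].mean()    # process-outcome measure
print(acceptance_rate, administration_rate)
```

Here the alert appears successful by its acceptance rate (0.75), but administrations per eligible visit (0.50) reveal that acceptance did not reliably translate into the intended outcome.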

The tracking of balancing measures during CDS implementation can help teams identify unintended and potential negative effects.3  Balancing measures might include alert burden, inappropriate clinical decisions, or impact on clinical workflow. In the example above, the number of alerts per clinician or alerts per shift could serve as a balancing measure to assess alert burden.

CDS use–implementation measures, process–outcome measures, and balancing measures are necessary to evaluate complex interventions like CDS.8  Creating operational definitions ahead of interventions and establishing how the needed data will be obtained over time will enable iterative adjusting of CDS design.

CDS tool design is informed by the problem definition and understanding of the system. To systematically think through how to design a CDS tool that can best achieve the desired outcome, clinical informaticians often use a model known as the “5 Rights of CDS” (Table 1).4  Another frequently used framework is the “Ten Commandments” for effective CDS,9  which include the principles that (1) users most value the speed of an information system, (2) the success of CDS depends on integration into clinician workflows, (3) simple interventions are better than complex ones, and (4) redirecting clinicians is easier than stopping them. Other approaches, such as Nielsen’s 10 usability heuristics for user interface design,10  can also assist with the initial design, but the design decisions and CDS tool formats chosen will depend entirely on the specific problem. Additionally, there may be multiple people and points in a process that would benefit from CDS, each calling for a different design; these can be identified by engaging diverse and relevant stakeholders during the design process.

TABLE 1

The 5 Rights of CDS

| The Right | The Question | Examples |
| Right information | What information do people need to know to make the best decision? | A new evidence-based guideline; risk score assessment; new hospital protocol; next steps for a disease process |
| Right person | Who needs to know this information? | A singular role, such as nurse, respiratory therapist, social worker, or physician; a group of roles, like all members of the care team |
| Right channel | How should they receive the information? | EHR; e-mail; printed report |
| Right time | When in their workflow is this information relevant? | After the patient has met clinical criteria; after certain medications are completed; at a certain point in a disease process; every time the clinician is in the chart |
| Right format | What format should the information take to make the right thing(s) the easiest thing to do? | A pop-up interruptive alert that suggests next steps via a series of orders; order set suggestions with preselected orders that adhere to best practices; gentle nudges to the user to fill out certain documentation; links to allow for easier information retrieval |

Human-centered design (HCD) is an International Organization for Standardization standard11  that, when incorporated into the design of CDS, can maximize the probability of effective CDS. Briefly, the steps of HCD as it relates to health care are to (1) understand and specify the clinical context, (2) clearly state the clinical user and organizational requirements, (3) produce multiple design solutions, and (4) iteratively test the design solutions against the requirements until (5) the design solution meets the stated requirements. The most important concept to take away from this approach is that the CDS design process should always be iterative and responsive to how people actually use CDS rather than how it is intended to be used. If a design solution does not meet the organization’s or user’s requirements, then a new design solution should take its place. This iterative process can be operationalized as plan–do–study–act cycles in QI.

Iterative testing and redesign should address both the user interface and the targeting of users, patients, and points in the workflow. Interfaces (eg, data displays, order sets) can be evaluated through usability testing or simulation to test the ease of use and user experience. Interface testing should be performed on a representative group of users. Targeting can be validated on real patient data, with CDS turned on in “background mode” to test behavior and performance without directly impacting patient care.12  One approach to assessing test results is to verify that each aspect of the 5 Rights has been satisfactorily met. Is the information conveyed in a clear manner? Is the tool targeting too many people and causing alarm fatigue? Is the tool poorly timed such that it interrupts the user’s workflow unnecessarily?

A potential barrier to iterative testing is a project team’s ability to iterate on the EHR if the team is relying on information technology resources for the build. The use of lower-fidelity prototypes, including drawings and wireframes, and testing of the CDS in an EHR testing environment before implementing the CDS in production are strategies to overcome this barrier. Even 1 round of testing and redesign can significantly improve CDS function.

Most CDS interventions do not undergo evaluation. Even when rigorous evaluation takes place, CDS has variable effectiveness. A recent meta-analysis of 122 CDS implementations revealed heterogeneity in the improvement of process metrics and no clear benefit for clinical outcomes, although pediatric CDS tended to be more effective.1  Thus, achieving the potential of CDS requires planned evaluation to determine if the implemented tools need further optimization or perhaps were ineffective and should be abandoned.

Once a project team has a fair degree of belief that the CDS design is having the intended effect after the iterative design and testing stage outlined above, the CDS tool should be implemented and studied. The usefulness of CDS evaluation depends on the rigor of the study design. A detailed explanation of CDS evaluation approaches is outside the scope of this article but, as with other interventions in the clinical setting, numerous study designs can be used. We recommend employing the most rigorous evaluation design possible with available resources.

Randomized trials of CDS are the highest-quality evidence for CDS effectiveness and have been applied at either the patient, clinician, or care setting level.13,14  Even in settings without the resources to conduct randomized trials, CDS interventions lend themselves to quasi-experimental evaluation; for example, most CDS formats, such as order sets or alerts, can be restricted by hospital or department, allowing for comparisons between intervention and control groups. Although patient or clinician populations across units within a health system may be intrinsically different, difference-in-difference analyses can help isolate the effect of the CDS on outcomes.15  The intervention can then be expanded in a stepwise fashion if successful. If sequential implementation is not practical, then traditional QI approaches using run charts or statistical process control charts remain viable and are a substantial improvement over no evaluation at all.
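The logic of a difference-in-difference comparison can be sketched with made-up numbers: the control unit’s change over time estimates the secular trend, and subtracting it from the intervention unit’s change isolates the effect attributable to the CDS. A full analysis would use patient-level data and regression with appropriate covariates; this is only the core arithmetic.

```python
# Illustrative pre/post rates of a target process measure (made-up values)
# for an intervention unit that received the CDS and a control unit that
# did not.
pre_intervention, post_intervention = 0.40, 0.65
pre_control, post_control = 0.42, 0.47

# Difference-in-differences: change in the intervention unit minus the
# change in the control unit (which approximates the secular trend).
did_estimate = ((post_intervention - pre_intervention)
                - (post_control - pre_control))
print(f"Estimated CDS effect: {did_estimate:.2f}")
```

In this toy example, a naive pre/post comparison in the intervention unit would credit the CDS with a 25-percentage-point improvement, whereas accounting for the control unit’s 5-point background trend yields an estimated effect of 20 points.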

An example of CDS evaluation in the pediatric setting is a cluster-randomized trial conducted at 12 primary care sites to assess the effectiveness of CDS in improving adherence to asthma guidelines.16  Half of the primary care sites were randomly assigned to receive the CDS. The study team evaluated controller medication prescriptions, spirometry use, and up-to-date asthma care plans and found improved adherence to asthma guidelines in the CDS group.

Most CDS tools can be thought of as complex interventions in that they have multiple components, target multiple groups, and require behavior change.8  Like any complex implementation project, it is essential to consider organizational, environmental, and cultural barriers to implementation, as well as employ change management principles. Regardless of the approach, a new CDS tool will affect people in many roles across the health system, and it is important to ensure that your team and approach reflect these stakeholders, especially patients and families.

The next step is to determine if your improvement or research team already has the expertise to both design and implement CDS interventions. Although this article provides a useful overview of designing an effective CDS, there are decades of research on CDS interventions that a clinical informatician can help you apply to avoid pitfalls and more efficiently identify an effective solution. Human factors engineers or human–computer interaction specialists can add further design knowledge, especially when the CDS tools must support a complex cognitive task or decision. Data analysts can provide the data needed to evaluate project measures.

Lastly, the actual resourcing to build your CDS intervention may be a challenge. The time of the information technology workforce is limited, and every organization has a different approach to prioritization, so organizational learning is key. Understanding the alignment of project goals with organizational strategic priorities could make a difference in whether your CDS build gets resourced. Information technology leaders often benchmark their organization with peer institutions, so engaging peers to show the areas in which local CDS may be lagging or to obtain content from EHR vendor “community libraries” may accelerate the resource request.

CDS tools have great potential to improve the care of hospitalized children and, like any clinical intervention, require planning, thoughtful design, and rigorous evaluation. We hope this practical approach will equip pediatric hospitalists to leverage CDS and achieve desired outcomes.

Readers interested in learning more are encouraged to build a relationship with their local informatician or visit the American Medical Informatics Association, Healthcare Information and Management Systems Society, and EHR vendor Web sites.

  • CDS provides clinical knowledge that is targeted to the right users, patients, and times in the workflow and presented in a user interface that ideally makes the right decision easy.

  • Defining the problem and understanding the work system are important precursor steps to CDS design.

  • CDS use–implementation measures, process–outcome measures, and balancing measures are needed to evaluate the effectiveness of CDS tools.

  • CDS design is an iterative process and employs tools such as the 5 Rights of CDS and HCD.

  • Multiple study designs can be used to evaluate CDS tools.

  • Practical considerations to CDS implementation include change management needs, project team skillset, and organizational information technology resources.

We would like to acknowledge the input and feedback from members of the Pediatric Clinical Decision Support Collaborative, including Drs Lauren Hess, Swaminathan Kandaswamy, Lindsey Knake, and Allison McCoy.

Drs Molloy, Muthu, Orenstein, Shelov, and Luo conceptualized and drafted the initial manuscript and critically reviewed and revised the manuscript; and all authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work.

FUNDING: Funded by the National Institutes of Health (NIH). The contributions of Drs Shelov and Luo to this publication were supported in part by a cooperative agreement with the National Heart, Lung, and Blood Institute of the NIH under award U01HL159880. Drs Muthu and Molloy also serve as consultants to this award, but their effort contributing to this publication was not supported by the award. The funder had no role in any of the following: study design, the collection, management, analysis, or interpretation of data, the writing of the report, or the decision to submit the report for publication, nor did the funder have ultimate authority over any of these activities. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

CONFLICT OF INTEREST DISCLOSURES: Drs Muthu and Orenstein are cofounders and equity holders in Phrase Health, a clinical decision support analytics company. They receive no direct revenue but have served as principal and coinvestigators on a Phase 1 and Phase 2 STTR grant with Phrase Health from the National Library of Medicine (NLM) and National Center for Advancing Translational Science (NCATS). They received salary support from the NLM and NCATS. The other authors have indicated they have no potential conflicts of interest to disclose.

1. Kwan JL, Lo L, Ferguson J, et al. Computerised clinical decision support systems and absolute improvements in care: meta-analysis of controlled clinical trials. BMJ. 2020;370:m3216
2. Carr LH, Oluwalade B, Muthu N, et al. Between-hospital variation in clinical decision support availability for common inpatient pediatric conditions: results of a national Pediatric Research in Inpatient Settings (PRIS) Network survey. J Hosp Med. 2023;18(7):617-621
3. Orenstein EW, Kandaswamy S, Muthu N, et al. Alert burden in pediatric hospitals: a cross-sectional analysis of six academic pediatric health systems using novel metrics. J Am Med Inform Assoc. 2021;28(12):2654-2660
4. Osheroff JA, Pifer EA, Sittig DF, et al. Clinical Decision Support Implementers’ Workbook. Healthcare Information and Management Systems Society; 2004
5. Carroll AR, Smith CM, Frazier SB, et al. Designing and conducting scholarly quality improvement: a practical guide for improvers everywhere. Hosp Pediatr. 2022;12(10):e359-e363
6. Holden RJ, Carayon P. SEIPS 101 and seven simple SEIPS tools. BMJ Qual Saf. 2021;30(11):901-910
7. Woodcock T, Liberati EG, Dixon-Woods M. A mixed-methods study of challenges experienced by clinical teams in measuring improvement. BMJ Qual Saf. 2021;30(2):106-115
8. Reese TJ, Liu S, Steitz B, et al. Conceptualizing clinical decision support as complex interventions: a meta-analysis of comparative effectiveness trials. J Am Med Inform Assoc. 2022;29(10):1744-1756
9. Bates DW, Kuperman GJ, Wang S, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc. 2003;10(6):523-530
10. Nielsen J; Nielsen Norman Group. 10 usability heuristics for user interface design. Available at: https://www.nngroup.com/articles/ten-usability-heuristics/. Accessed August 14, 2023
11. International Organization for Standardization. Ergonomics of human-system interaction - Part 210: human-centered design for interactive systems. ISO standard no. 9241-210:2019. Available at: https://www.iso.org/standard/77520.html. Accessed August 14, 2023
12. Chaparro JD, Beus JM, Dziorny AC, et al. Clinical decision support stewardship: best practices and techniques to monitor and improve interruptive alerts. Appl Clin Inform. 2022;13(3):560-568
13. Horwitz LI, Kuznetsova M, Jones SA. Creating a learning health system through rapid-cycle, randomized testing. N Engl J Med. 2019;381(12):1175-1179
14. Austrian J, Mendoza F, Szerencsy A, et al. Applying A/B testing to clinical decision support: rapid randomized controlled trials. J Med Internet Res. 2021;23(4):e16651
15. Lee B, Mafi J, Patel MK, et al. Quality improvement time-saving intervention to increase use of a clinical decision support tool to reduce low-value diagnostic imaging in a safety net health system. BMJ Open Qual. 2021;10(1):e001076
16. Bell LM, Grundmeier R, Localio R, et al. Electronic health record-based decision support to improve asthma care: a cluster-randomized trial. Pediatrics. 2010;125(4):e770-e777
17. Schondelmeyer AC, Dewan ML, Brady PW, et al. Cardiorespiratory and pulse oximetry monitoring in hospitalized children: a Delphi process. Pediatrics. 2020;146(2):e20193336