Diagnostic uncertainty may be a sign that a patient’s working diagnosis is incorrect, but literature on proactively identifying diagnostic uncertainty is lacking. Using quality improvement methodologies, we aimed to create a process for identifying patients with uncertain diagnoses (UDs) on a pediatric inpatient unit and communicating about them with the interdisciplinary health care team.
Plan-do-study-act cycles were focused on interdisciplinary communication, structured handoffs, and integration of diagnostic uncertainty into the electronic medical record. Our definition of UD was as follows: “you wouldn’t be surprised if the patient had a different diagnosis that required a change in management.” The primary measure, which was tracked on an annotated run chart, was percentage agreement between the charge nurse and primary clinician regarding which patients had a UD. Secondary measures included the percentage of patient days during which patients had UDs. Data were collected 3 times daily by text message polls.
Over 13 months, the percentage agreement between the charge nurse and primary clinician about which patients had UDs increased from a baseline of 19% to a median of 84%. On average, patients had UDs during 11% of patient days.
We created a novel and effective process to improve shared recognition of patients with diagnostic uncertainty among the interdisciplinary health care team, which is an important first step in improving care for these patients.
Diagnostic accuracy is emerging as a national patient safety priority. The impact of diagnostic errors is widespread, with estimates suggesting that 10% to 15% of diagnoses are incorrect.1 Building on early work that identified safety and quality gaps in health care systems leading to poor outcomes, the National Academies of Sciences, Engineering, and Medicine published an expert committee report titled “Improving Diagnosis in Health Care” in 2015.2,3 This report asserts that uncertainty is inherent in the diagnostic process and must be appropriately managed to prevent harms such as diagnostic error from premature closure or overtesting due to discomfort with uncertainty.4–8
Diagnostic uncertainty lacks a widely accepted definition, but the authors of a recent review proposed the following: “a subjective perception of an inability to provide an accurate explanation of the patient’s health problem.”9 It was additionally noted that although clinicians’ perceptions of uncertainty have been explored in many reports, there have been few studies in which an attempt to measure diagnostic uncertainty in real patient encounters has been made. Most of these publications identify diagnostic uncertainty through clinicians’ expressions of uncertainty, whether directly to the patient, in the electronic medical record (EMR) or billing data, or in surveys after the clinical encounter.10–15 Although one group measured uncertainty about the diagnosis of heart failure during a hospitalization by physician survey, to our knowledge there is no research on integrating the identification of diagnostic uncertainty into the ongoing care of hospitalized patients.16
Diagnostic uncertainty may pose a particular risk for hospitalized patients. Often, patients are assigned a working diagnosis on admission that represents the clinician’s best understanding of the presentation. Ideally, this diagnosis evolves through a complex and iterative process that integrates observation and testing over time and is coproduced by the health care team, patients, and families. This team-based process of diagnosis is especially true in the inpatient setting, where different clinical staff care for a patient over multiple shifts.17 However, systems and environmental factors, such as time pressures, provider workload, and shift work, can lead the health care team to bypass this cognitive framework and falsely equate a patient’s working diagnosis with their final diagnosis during critical periods of communication, such as daily rounds or change-of-shift handoffs.18,19 When diagnostic uncertainty is not communicated with the entire health care team, including patients and families, team members may unknowingly fail to communicate relevant clinical information or fail to reconsider the working diagnosis despite new findings.
Our institution noted that gaps in communication of diagnostic uncertainty among health care team members may have contributed to safety events. Therefore, a quality improvement (QI) team was chartered to develop a standardized process to identify and communicate diagnostic uncertainty across the health care team on an inpatient pediatric unit. Given the precedent for measuring diagnostic uncertainty as a clinician’s subjective perception, we set out to measure agreement about diagnostic uncertainty across the health care team as a proxy for the process of communicating this uncertainty. Our specific aim was to increase the percentage of inpatients for whom the charge nurse and clinician agree that the diagnosis is uncertain from 19% to 80% within 6 months of the project launch.
Setting and Context
This QI work took place at the freestanding satellite campus of a quaternary care pediatric medical center in the Midwest. The project was focused on the hospital medicine (HM) and surgical teams on the 42-bed inpatient unit. The HM team includes residents, pediatric advanced practice providers (APPs), and a supervising attending physician. Surgical patients are cared for by APPs under the supervision of an attending surgeon. The HM attending physicians and surgical APPs provide 24-hour coverage over 3 shifts. The charge nurses work in 12-hour shifts and have a global perspective of the acuity, assessment, and plan for each patient; manage nursing assignments and patient flow; and speak with families who have patient care concerns.
Planning the Interventions
The QI team included 3 HM physicians, 1 pediatric emergency medicine physician, 1 surgical APP, a registered nurse (RN) who works at the bedside and as charge nurse, 2 clinical nurse managers who also serve as charge nurses, a clinical research coordinator, and a unit medical director.
The study team recognized 5 key drivers for appropriate identification and communication of diagnostic uncertainty (Fig 1). Observations of physician and nursing handoffs, as well as focus groups and interviews conducted with nurses, physicians, and APPs, informed the team’s interventions, which were tested in plan-do-study-act (PDSA) cycles.
Define Uncertain Diagnosis
There is no broadly accepted definition for diagnostic uncertainty, and our working definition evolved over the study period. Initially, we asked providers to use their best judgment; if they asked for guidance, they were offered several criteria, including whether patients showed “signs/symptoms that you don’t expect with their diagnosis, ambiguity in their history, high degree of complexity, outside the norm, or not responding as you expect to interventions.” Clinician feedback and a review of patients labeled with diagnostic uncertainty led our team to clarify our definition. The refined description of uncertain diagnosis (UD), which we presented ∼2 months after the start of data collection, was as follows: “you wouldn’t be surprised if the patient had a different diagnosis that required a change in management” (PDSA cycle 4).
Posters in clinician and nursing break areas asked providers to maintain skepticism about the diagnosis and encouraged interdisciplinary communication (PDSA cycle 1; Supplemental Fig 4). We also raised awareness of the project with e-mails and presentations for physicians and nurses.
RN on Rounds
Nurses on the unit are encouraged to participate in daily rounds, but attendance varies. We attempted to increase bedside nurse presence on rounds, with the goal of engaging nurses in the diagnostic conversation (PDSA cycle 2). This intervention was difficult to maintain because of other tasks competing for nurses’ attention and was ultimately abandoned.
We instituted huddles between the unit charge nurse and the clinicians, focusing on identifying and formulating contingency plans for patients with UDs (PDSA cycle 3). Different times of day were trialed, with highest attendance and satisfaction in a meeting that occurred during the HM change-of-shift handoff at 10:30 pm.
Emergency Department–Inpatient Handoff
We attempted to increase awareness of diagnostic uncertainty earlier in the hospital stay by targeting the admission phone call to HM providers. We posted a script for the conversation by the phones that most emergency department (ED) physicians used to call HM providers about admissions (PDSA cycle 5). The script instructed ED physicians to state the diagnosis and differential and state whether there was any diagnostic uncertainty.
Charge RN Documentation
Charge nurses document patient information on index cards, which are referred to during their shift and used for handoff. We modified the card template, adding an area for diagnosis and a checkbox for UD to facilitate communication of this information (PDSA cycle 6).
Our HM physician verbal and EMR handoffs follow the I-PASS (illness severity, patient summary, action list, situation awareness and contingency plans, and synthesis by receiver) rubric.20 We initially attempted to improve the frequency with which a diagnosis and diagnostic uncertainty were discussed during verbal handoffs using posters and e-mail reminders (PDSA cycle 7). It was challenging to remind physicians to include a new piece of data (diagnosis or diagnostic uncertainty) in this often-busy exchange. It was also difficult for us as a study team to measure whether these components were stated in the verbal handoff. Therefore, we redirected our efforts and modified the EMR handoff template. After “illness severity,” we added a dropdown selection for either a specific diagnosis or UD (PDSA cycle 9). If UD was selected, there was a prompt to include a differential diagnosis. A similar modification was made to the handoff document used for surgical patients.
Our unit already used “situation awareness” labels in the EMR to communicate critical patient information, for example, flight risk or at risk for needing escalation in care.21 This framework was leveraged to create a UD label to be placed in the chart when any member of the health care team felt that a patient had a UD (PDSA cycle 8). The UD label was entered into the EMR by the charge nurse after a collaborative discussion with the clinician team. This label was visible to all team members and could be removed when appropriate.
Our primary measure was agreement between the charge nurse and clinician about which patients had a UD. This was calculated as the number of patients with UDs identified by both the charge nurse and clinician divided by the total number of patients with UDs identified by either party. Polls in which neither party identified any patient with a UD did not contribute to this measure.
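For illustration, the agreement measure described above can be sketched in code: it is the share of flagged patients identified by both parties out of all patients flagged by either party. The patient identifiers below are hypothetical, and this is a minimal sketch of the calculation, not the authors’ actual analysis code.

```python
def ud_agreement(nurse_uds: set, clinician_uds: set):
    """Percentage agreement on uncertain-diagnosis (UD) patients:
    patients flagged by BOTH the charge nurse and the clinician,
    divided by patients flagged by EITHER. Returns None when neither
    party flags anyone, since such polls did not contribute to the
    measure."""
    either = nurse_uds | clinician_uds
    if not either:
        return None
    both = nurse_uds & clinician_uds
    return 100 * len(both) / len(either)

# Hypothetical poll: each party flags 2 patients, agreeing on 1 of
# the 3 patients flagged overall, so agreement is about 33%.
example = ud_agreement({"pt_A", "pt_B"}, {"pt_A", "pt_C"})
```

Note that this is an all-or-nothing overlap measure (a Jaccard-style index), so a single patient flagged by one party but not the other lowers agreement for that poll.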
Text messaging via our enterprise secure messaging platform (Voalte Inc, Sarasota, FL) was used to collect baseline and intervention data from April 28, 2017, to May 20, 2018. Text message polls were distributed by the manager of patient services, who oversees patient flow at our satellite campus. Polls were sent 3 times daily, simultaneously, to the charge nurse, HM attending, and surgical APP, who were asked, “Which (if any) of your patients have an uncertain diagnosis?” (Fig 2). Text messaging was chosen because of ease of polling and its limited interruption of the health care team’s workflow. The polling times (noon, 9 pm, and midnight) were chosen such that 1 poll was conducted during each of the HM attending, surgical APP, and charge nurse shifts, allowing us to measure the reliability with which UD was communicated at change of shift. We chose to survey only the primary clinician (attending or APP) and the charge nurse for ease of data collection and because these team members were thought to be critical stakeholders in the dissemination of patient information. Lastly, because diagnosis is traditionally felt to be in the purview of the clinician, it is possible that a charge nurse acquiesced to the opinion of the clinician about whether a patient’s diagnosis is uncertain without full agreement, especially during in-person huddles. We attempted to mitigate the effects of this pressure by making the responses to the text poll private and visible only to the third party who documented the results.
The patient census at the time of polling was used to calculate the percentage of patient days during which patients had UDs. This helped our team understand the frequency of UD among inpatients while accounting for seasonal variation in patient volumes.
Annotated run charts were used to track the effect of our interventions over time, with one point on the run chart representing 2 weeks of data. Although established rules for determining special cause variation require ≥6 points above or below the centerline, we used a more conservative threshold of 8 points in accordance with QI practices at our institution.22
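A shift on a run chart of the kind described above is typically declared when a run of consecutive points falls on one side of the centerline, with points on the centerline breaking the run. The sketch below illustrates that rule under our more conservative threshold of 8 points; it is an illustrative implementation of the general run chart rule, not the charting software used in this project.

```python
def detect_shift(values, centerline, run_length=8):
    """Flag special cause variation when >= run_length consecutive
    points fall on the same side of the centerline. Points exactly
    on the centerline reset the run, per common run chart rules.
    run_length=8 reflects the threshold used in this project."""
    run, side = 0, 0
    for v in values:
        s = (v > centerline) - (v < centerline)  # +1 above, -1 below, 0 on the line
        if s == 0:
            run, side = 0, 0        # point on the centerline breaks the run
        elif s == side:
            run += 1                # run continues on the same side
            if run >= run_length:
                return True
        else:
            side, run = s, 1        # run restarts on the other side
    return False
```

In practice, once such a shift is detected, the centerline is recalculated from the post-shift points, which is how the centerline changes described in our results would be established.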
On the basis of our institution’s institutional review board standards, this QI project was determined not to constitute human subjects research.
Preintervention data, collected for 4 weeks, indicated that baseline agreement between the charge nurse and primary clinician about patients with UDs was 19%. After ∼13 months of interventions, median agreement had risen to 84%. Our run chart reveals the percentage agreement about patients with UDs and the percentage of patient days during which patients had UDs (Fig 3). The centerline changed after PDSA cycles aimed at interdisciplinary huddles, inclusion of UD in the charge nurse documentation, and introduction of the UD label in the EMR. On average, patients had UDs during 11% of patient days.
Polling reliability decreased with time, and data were incomplete or not obtained for 38% (448 of 1164) of polling opportunities (Supplemental Fig 5). Among the data failures, 97% (433 of 448) were missed because the text was not initiated by the manager of patient services, 2% (11 of 448) were incomplete because a required respondent (charge RN, surgical APP, or HM attending) did not reply, and 1% (4 of 448) were incomplete because the census was not documented.
Using QI methods, we created a new process for the health care team to proactively identify and discuss hospitalized children with diagnostic uncertainty. Before this project, there was no shared definition for diagnostic uncertainty on this unit. This is reflected in the baseline rate of agreement of 19% between charge nurses and clinicians about patients with UDs. We acknowledge that our definition is colloquial, and the inclusion of “wouldn’t be surprised” seems to leave room for cognitive bias. We arrived at this definition through discussions with stakeholders about cases in which they had a “gut feeling” that something was askew but lacked a shared language and framework to state their concerns openly. We believe that, although imperfect, our definition enabled team members to overcome these barriers and express diagnostic concerns.
Our measure of agreement between providers was selected as a proxy for several concurrent critical processes: identification of patients with UDs, communication between clinicians and nurses about identified patients, and handoff of the diagnostic model at change of shift for both clinicians and nurses. This measure would not capture patients whose diagnoses should have been uncertain but were not recognized by either the clinician or charge nurse because of cognitive biases, such as anchoring. Because there is not a validated method to detect diagnostic uncertainty in patients, we decided the best way to identify these patients was to ask the clinicians and nurses directly caring for them. The relatively steady percentage of patient days during which patients had UDs supports the validity of our measure.
Interventions that built on existing workflow and promoted interdisciplinary communication were most successful in driving sustainable changes. Specifically, we restructured our existing charge nurse documentation template to include the patient’s diagnosis or UD. Because there are multiple charge nurse and clinician shifts every day, ensuring that the diagnosis and diagnostic uncertainty are consistently in the handoff is important to maintaining continuity of the diagnostic process. The HM physician handoff template was also modified to include UD, although there was less reliability in whether physicians completed this portion of the document. The physician handoff template intervention also occurred during a time of lower data collection and UD label use. We identified the overnight HM physician handoff as the optimal time for a huddle with the charge nurse because HM attending physicians were already discussing the patients and it was a convenient time for the charge nurse to participate. Whereas day shift nurses use daily rounds for patient updates, night shift nurses previously had no such standardized forum. Our nighttime huddle became a means for the charge RN to discuss not only the patients with UDs but also high-acuity patients and discharge planning.
Our key intervention leveraged existing safety processes for situation awareness to create a label for patients with diagnostic uncertainty in our EMR. The UD label is readily viewable by all health care staff and prompts continued discussion of these patients. We believe the EMR label was pivotal in making our work sustainable, although it built on the huddles, which provided a forum to gain consensus on the meaning of UD and the value in identifying patients with UDs.
The National Academies of Sciences, Engineering, and Medicine report highlights the uncertainty present throughout the diagnostic process and the importance of clinicians acknowledging and managing this uncertainty.2 However, diagnostic uncertainty has proven difficult to define and operationalize in clinical care. To our knowledge, this is the first initiative that systematically incorporates the recognition of diagnostic uncertainty into the inpatient clinical workflow. Additionally, although physicians and APPs are traditionally viewed as leaders in the diagnostic process, our team intentionally included nurses in this process. Nurses spend more time at the bedside, so it is vital that they are aware of diagnostic uncertainty because it may change how they approach clinical changes in their patients.
This project has provided a culturally acceptable means to directly express diagnostic uncertainty, which is a key aspect of high-reliability organizations (HROs) as described by Weick and Sutcliffe.23 Before our work, we suspect that reluctance to share diagnostic uncertainty was due to multiple factors, including perceived cultural intolerance of uncertainty and the demand for simple, fast transfers of patient information. During one physician handoff we observed before our work, the off-going physician apologized to the on-coming physician for not knowing the patient’s diagnosis yet. In our focus groups, clinicians shared concern that expression of diagnostic uncertainty may be attributed to deficits in clinical experience or acumen. Our project created a structure that not only permits but also expects expressions of diagnostic uncertainty, an embodiment of the mindset of doubt seen in HROs that are preoccupied with failure. Furthermore, the UD EMR label is a more accessible way for nurses to express doubts about the patient’s diagnosis with clinicians and it has become common for nurses to propose that a patient be labeled UD. This process of “sensemaking…substituting discontinuous concepts for continuous perceptions” is crucial for an HRO to manage unexpected and complex situations.23
Even after development of an EMR label for diagnostic uncertainty, we only reached a median 84% agreement on which patients had UDs. There can be a lag between clinician recognition of diagnostic uncertainty and application of the UD label, and our polling sometimes occurred in that window. Application of the UD label is reliant on individual providers, who sometimes forget to use the label when diagnostic uncertainty is present. Finally, there can be disagreement about diagnostic uncertainty across providers over multiple shifts, and we observed a hesitancy to apply or remove a UD label by evening and overnight providers, who rotate more frequently than dayshift clinicians. This can result in the evening or overnight providers feeling uncertainty that is incongruent with what is documented in the EMR.
The generalizability of our findings may be limited by the setting of our project. The preexisting organizational culture, senior leadership support, triggering events, and QI team microsystem factors all likely contributed to our success.24 We had the advantage of developing this process within a small closed unit where clinicians and nurses worked in close proximity to each other. This facilitated successful in-person interdisciplinary huddles, which were critical in establishing a shared cognitive framework for patients with diagnostic uncertainty.
We only collected 4 weeks of baseline data before the start of interventions because we did not want to delay this improvement work. Therefore, there may be some additional variability in the degree of communication about diagnosis that our baseline did not capture.
Toward the end of data collection, there were dips in percentage agreement about UD and the absolute number of patients with UDs. This may have occurred because of the higher census and relative diagnostic homogeneity of respiratory illnesses seen in the winter months. In addition, the QI team inadvertently decreased its weekly e-mail reminders during this period.
Because of competing priorities on the unit, a significant number of polls were not completed. Data collection relied on polling that was initiated by the manager of patient services; staffing issues and a busy winter census likely contributed to decreased reliability of this person-dependent process. The vast majority of missed polls (97%) were due to the manager of patient services not initiating the poll; charge nurses and clinicians almost always responded because it was quick and easy to do. Although decreased polling may have hampered full insight into our process, sampling is a common QI strategy for evaluating system sustainability in the absence of continual monitoring.
Our measure only reveals increasing agreement between charge nurses and clinicians about patients with UDs. Other health care team members with important roles, including the bedside nurse, respiratory therapist, medical students, and residents, were not surveyed in our measure, although they had access to the EMR label for UD and were included in discussions about patients with UDs.
Now that we have a reliable process for identifying and discussing diagnostic uncertainty, we are pursuing standardized contingency planning for patients with UDs, improved management of diagnostic uncertainty through a clinician toolkit, and better communication of uncertainty with patients and families. In addition, we have begun to implement this process at our main hospital campus.
In this QI project, we developed a novel process for prospectively identifying diagnostic uncertainty in hospitalized children and increasing shared recognition of this uncertainty across the health care team. This is an important first step in improving care for these patients and learning about the impact of diagnostic uncertainty on hospital admissions.
We thank Rich Ruddy, Julie Zix, and Jillian Burkhardt for their support of this project.
Dr Ipsaro conceptualized, designed, and executed the study, coordinated the quality improvement team, designed the data collection instruments, gathered data, conducted the data analysis, and reviewed and revised the manuscript; Dr Patel, Mrs Rohrmeier, Ms Luksic, and Ms Bell participated in design and execution of the study, collected data, conducted the data analysis, and reviewed and revised the manuscript; Dr Warner participated in execution of the study, collected data, conducted the data analysis, and reviewed and revised the manuscript; Dr Marshall conducted the data analysis and reviewed and revised the manuscript; Mrs Richardson coordinated the quality improvement team, designed the data collection instruments, gathered data, conducted the data analysis, and reviewed and revised the manuscript; Mrs Kammer supervised the quality improvement team, participated in design and execution of the study, collected data, conducted the data analysis, and reviewed and revised the manuscript; Dr Hagedorn conceptualized and designed the study, supervised the quality improvement team, conducted the data analysis, drafted the initial manuscript, and reviewed and revised the manuscript; Dr Chan participated in design and execution of the study, collected data, conducted the data analysis, and reviewed and revised the initial submission of this manuscript before his passing; and all authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work.
FUNDING: No external funding.
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.