Successful publication of quality improvement (QI) work is predicated on the use of established QI frameworks and rigorous analytical methods that allow teams to understand the impact of interventions over time. This article is meant to help QI teams disseminate their work more broadly through publication by providing the tangible methods that many journals expect in QI articles, with specific examples of published work referenced throughout. We introduce improvement frameworks that teams should identify early and use as a foundation throughout their projects. We review vital aspects of QI projects, such as team formation, creation of a succinct and clear aim statement, and definition of primary, process, and balancing measures, as well as QI tools like key driver diagrams, Ishikawa (fishbone) diagrams, and Pareto charts. Finally, we highlight the importance of analyzing data over time to understand the impact of plan-do-study-act cycles. Annotated run charts or, preferably, annotated statistical process control (Shewhart) charts are statistically sound methods for identifying significant changes over time. Deliberate planning and execution of QI projects using these concepts will improve QI teams’ chances of project success and eventual article acceptance.
Quality improvement (QI), or improvement science, has been defined as the “systematic, data-guided activities designed to bring about immediate, positive changes in the delivery of health care.”1 QI methods for designing, testing, and implementing changes in complex health care settings, guided by real-time measurement, allow providers to improve patient care quickly and equitably. The purpose of this article is to review key concepts in QI and provide foundational tools for health care providers to develop and lead QI projects at their own institutions and to increase their success in publishing that work in peer-reviewed journals.
Plan to Publish From the Beginning
Scholarly descriptions of medical QI work are published less frequently than “traditional” science. Failure to publish QI experiences can have several consequences, including (1) the inability to reproduce results, (2) lack of rigorous peer review, which may reduce the accountability of researchers, and (3) slower dissemination of known effective innovations, which may waste time, effort, and money.2 Writing and sharing QI results are vital for the advancement of QI as a field and to ensure that patients and health care systems benefit from these efforts. The Standards for Quality Improvement Reporting Excellence (SQUIRE) 2.0 guidelines were developed for reporting QI initiatives and can serve as a blueprint for project planning.3 The sections of this article touch on key aspects of proper SQUIRE reporting. Determine your institution’s institutional review board policy regarding QI work and state in all published articles whether a review was required and, if so, the results of that review.
Choosing a QI Framework
Before starting a QI project, a framework should be selected. A widely used framework is the Model for Improvement (MFI), which centers on 3 main questions: (1) “What are we trying to accomplish?” (2) “How will we know that a change is an improvement?” and (3) “What changes can we make that will result in an improvement?”4 The first question provides an opportunity for reflection with stakeholders to establish a shared goal and to determine a target population and timeframe. This step should result in a project’s specific, measurable, achievable, relevant, time-bound (SMART)4,5 aim statement, or overall project objective. An example of a SMART aim can be found in Lin et al’s article: “decrease the percentage of patient nights with a vital sign check between 12 AM and 6 AM in a low-risk population of patients discharged from hospital pediatrics from 98% to 70% by December 2020.”6 The second question in the MFI focuses on the importance of defining measures (primary outcome, process, and balancing measures) so that data can be followed over time and teams are confident they are measuring what they intend to measure. The third question focuses on identifying potential changes based on the team’s hypotheses about what will result in improvement. After answering these questions, the MFI encourages the use of plan-do-study-act (PDSA) cycles as the means of testing the team’s hypotheses about changes that will lead to improvement.
Other QI Tools
There are many QI tools that can complement improvement work as teams work through the MFI. Six Sigma is a method developed by Motorola that heavily incorporates statistical process control (SPC) in an effort to reduce variation.5 SPC has been defined by Langley as a “philosophy, a strategy, and a set of methods for ongoing improvement of processes and systems to yield better outcomes.”4 Teams using Six Sigma work through an improvement process referred to as DMAIC: define, measure, analyze, improve, control. In health care, we often see the term “Lean Six Sigma,” which incorporates Lean methodology. Lean is a manufacturing principle that strives to eliminate waste from a system to improve efficiency and reduce cost.5 These principles can inform health care improvement projects involving hospital or clinic flow, operating room turnover, supply chain, medication administration, and much more. Some health systems, such as Virginia Mason Medical Center, have incorporated Lean principles with great success.7
There is no “one-size-fits-all” framework or tool, but improvement projects will be much more successful if teams are deliberate about identifying a framework and QI tools that suit their project during the initial planning stages. The iterative testing encouraged within the MFI is why our authorship team uses the MFI as the overall framework for most QI projects and incorporates other tools like Six Sigma or Lean when appropriate. When sharing your work via publication in medical journals, articulating the framework that you chose will be important to help frame your theories and methods.
Forming Your Team
Quality initiatives in health care involve complex system interactions, and forming the team is often the first discrete step in a QI project. Team members should, ideally, include key stakeholders (patients, patient family members, nurses, physicians, pharmacists, respiratory therapists, bioinformaticians, etc) from all parts of the system being addressed by the project. Engaging stakeholders early can identify and align priorities for patients, families, and the health care system. Articles that seek to describe their project reproducibly should include a brief description of team composition. Thoughtful examples of team descriptions that provide context are found in many Hospital Pediatrics publications.6,8,9
Setting Aims and Identifying Targets for Interventions
The next step is rigorously examining the process involved to identify targets for future PDSA cycles. Key driver diagrams are often generated at this step and can be reported in QI articles.6,8 A key driver diagram is a visual representation of the SMART aim, the theories behind a quality initiative, and potential interventions, with the relationships between components expressed with arrows (Fig 1). Although the key driver diagram is usually created early in a project, it is a living document that should be updated as the team learns more about its specific process.
Cause and effect diagrams (also known as Ishikawa or “fishbone” diagrams), Pareto charts, and failure modes and effects analysis can all inform a team’s plans for interventions. These tools can also be incorporated into an article to help readers understand the QI journey. An excellent example of a fishbone diagram can be found in Studenmund et al’s article.9 For more detail on these and other tools, refer to Langley’s The Improvement Guide and Tague’s The Quality Toolbox.4,5
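For teams that already tally contributing causes electronically, a Pareto chart can be generated with a few lines of code. The sketch below is illustrative only and is not drawn from any of the cited projects; the cause categories and counts are hypothetical, and it assumes Python with matplotlib is available.

```python
import matplotlib.pyplot as plt

# Hypothetical counts of contributing causes identified by a QI team
causes = {
    "Order not placed": 42,
    "Delayed documentation": 23,
    "Staffing gap": 14,
    "Equipment unavailable": 9,
    "Other": 6,
}

# Sort causes by frequency and compute the cumulative percentage
labels, counts = zip(*sorted(causes.items(), key=lambda kv: kv[1], reverse=True))
total = sum(counts)
cumulative = [100 * sum(counts[: i + 1]) / total for i in range(len(counts))]

fig, ax1 = plt.subplots()
ax1.bar(labels, counts)                    # bars: causes in descending order
ax1.set_ylabel("Count")
ax1.tick_params(axis="x", labelrotation=30)

ax2 = ax1.twinx()                          # second y-axis: cumulative percent
ax2.plot(labels, cumulative, marker="o", color="black")
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 105)

fig.tight_layout()
fig.savefig("pareto_chart.png")
```

Sorting the bars and overlaying the cumulative percentage line makes it easy to see the “vital few” causes that account for most events, which helps a team prioritize its first interventions.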
Defining Measures
After a SMART aim has been developed, careful attention to defining measures is key to ensuring data integrity and alignment with the SMART aim. A detailed, operational definition of the project’s primary outcome is essential because poor operational definitions can inhibit the ability to learn from your data or lead to incorrect conclusions. An operational definition should include the method of measurement or test and the inclusion and/or exclusion criteria. For example, if the goal of a QI project is to reduce the number of catheter-associated urinary tract infections, precise definitions of both a urinary tract infection (ie, which aspects of a urinalysis the team will consider to indicate an infection) and an indwelling catheter (ie, how long a catheter needs to be in place to count as “indwelling”) need to be established at the outset. A process measure captures changes to a specific component of the system; often, these relate directly to one of the key drivers, as in Lin et al’s article.6 Balancing measures help quantify whether changes made to 1 part of a system are creating new problems in other parts, such as an increase in readmissions during a project to decrease length of stay. It is helpful, for both the QI team and for publication, to clearly define each of these measures and its data source. Christianson et al define their primary, process, and balancing measures by definition and data source and present them in a table.8
Tracking the Data
Prospective, continuous monitoring of data is the cornerstone of rigorous QI and is a requirement for publication. Data describe how the current system is working, allow teams to assess outcomes when changes are applied, and document successful performance. Baseline data are critical to determining whether the changes you are making to a system are, in fact, resulting in improvement. The data source should be accessible and should be updated and reviewed frequently to identify improvement; in other words, when possible, daily data are better than weekly data, and weekly data are better than monthly data.
The key methodology underlying QI data is SPC, which includes measurement, data collection methods, and planned experimentation.4 Graphical tools that display data over time (ie, Shewhart control charts and run charts) are the backbone of QI methodology because they allow team members to understand processes, test hypotheses, and learn about intervention effectiveness (ie, learn from PDSA cycles). Run charts allow teams to identify “signals of change,” whereas Shewhart SPC charts include upper and lower control limits, which allow improvement teams to distinguish common cause from special cause variation.10 Common cause variation arises from causes inherent in the system or process, whereas special cause variation arises from causes that are not inherent to the system.
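To make the distinction concrete, the following sketch shows one common way to calculate limits for an individuals (XmR) Shewhart chart from baseline data and then flag post-intervention points that signal special cause variation. The data are hypothetical, Python is assumed to be available, and the choice of chart is only an example; teams should select the chart type appropriate to their measure (eg, a p-chart for proportions).

```python
# Minimal sketch: XmR (individuals) control limits and one special cause rule.
# The 2.66 constant is the standard XmR factor (3 / d2, where d2 = 1.128 for n = 2).

def xmr_limits(values):
    """Return the center line, upper, and lower control limits for an XmR chart."""
    center = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return center, center + 2.66 * avg_mr, center - 2.66 * avg_mr

# Hypothetical monthly compliance (%) before and after an intervention
baseline = [82, 85, 80, 84, 83, 86, 81, 84]
post_intervention = [90, 93, 95, 94, 96]

center, ucl, lcl = xmr_limits(baseline)

# Flag post-intervention months that fall outside the baseline control limits
special_cause = [
    month
    for month, value in enumerate(post_intervention, start=len(baseline) + 1)
    if value > ucl or value < lcl
]

print(f"Center line {center:.1f}, UCL {ucl:.1f}, LCL {lcl:.1f}")
print(f"Months with special cause variation: {special_cause}")
```

Points beyond the control limits are only one special cause rule; in practice, teams also apply run-based rules (eg, 8 consecutive points on one side of the center line) as described in the cited SPC references.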
Improvement teams use the concepts of special and common cause variation to determine whether there is a high degree of belief that PDSA cycles are resulting in meaningful change. Carroll et al provided examples in a recent commentary of how an improvement team identified special cause variation to drive improvement.11 In the original study, Liao et al annotated Shewhart SPC charts, allowing readers to understand how interventions were related to changes in process measures and how process measures were related to the outcome measures.12 A strong results section will depend on figures, either run charts or Shewhart SPC charts, with clear intervention annotations (Fig 2). The written portion of the results should interpret the data in relation to the tested interventions displayed in the charts.
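As a companion to the limit calculation above, the sketch below illustrates how a team might annotate a control chart so readers can connect an intervention to the subsequent change in the data, in the spirit of Fig 2. The values, limits, and PDSA label are hypothetical, and matplotlib is assumed to be available.

```python
import matplotlib.pyplot as plt

# Hypothetical monthly data; baseline limits taken from the XmR sketch above
months = list(range(1, 14))
values = [82, 85, 80, 84, 83, 86, 81, 84, 90, 93, 95, 94, 96]
center, ucl, lcl = 83.1, 92.2, 74.0

fig, ax = plt.subplots()
ax.plot(months, values, marker="o", label="Process measure (%)")
ax.axhline(center, linestyle="--", label="Center line")
ax.axhline(ucl, linestyle=":", label="Control limits")
ax.axhline(lcl, linestyle=":")

# Mark the month a hypothetical intervention began and label the PDSA cycle
ax.axvline(9, color="gray", linewidth=0.8)
ax.annotate("PDSA 1: nurse-driven protocol", xy=(9, min(values)),
            rotation=90, va="bottom", fontsize=8)

ax.set_xlabel("Month")
ax.set_ylabel("Percent compliance")
ax.legend(loc="lower right", fontsize=8)
fig.tight_layout()
fig.savefig("annotated_spc_chart.png")
```

Whatever software is used, the goal is the same: each annotation should let the reader pair a specific intervention with the portion of the chart it is hypothesized to have changed.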
Testing Changes With PDSA Cycles
PDSA cycles are a key mechanism for iterative application of the scientific method to improvement in a health care setting and help answer the third question of the MFI. PDSA cycles are small-scale and iterative and mirror the 4 stages of the scientific method: formulating a hypothesis, collecting data to test the hypothesis, analyzing and interpreting the results, and making inferences on the basis of the hypothesis.13 Although the term “PDSA” is often found in the improvement literature, a systematic review by Taylor et al revealed that only ∼20% of the articles that met inclusion criteria fully documented the correct application of PDSA cycles.14 When reporting PDSA cycles, share some or all of them in the results section to highlight the improvement team’s failures and successes. Authors may describe PDSA cycles in the text of their article or in tables, as demonstrated by Vater et al.15
When your team’s PDSA cycles and real-time data monitoring with SPC charts are shared, readers will be better positioned to incorporate and adapt interventions for their own populations, units, teams, or clinics. Cross-institutional adoption will be more successful when teams openly share their PDSA cycle results so readers can consider the individual factors affecting their specific setting.
Conclusions
QI methodology, and the subsequent sharing of the results achieved, can help transform health care. By working within a framework such as the MFI, forming a multidisciplinary team, identifying a SMART aim specific to the process or project, and then testing interventions while temporally relating the tests to the analysis of the data, QI work can implement change that will benefit patients, providers, and systems (Table 1). The subsequent publication of the efforts and results achieved can shorten the time to improvement because other institutions or programs can learn from and adapt successful strategies already tested by the project team. The publication of QI efforts is vital to the advancement of the field and, ultimately, to improving patient care.
TABLE 1
Pick a framework early and walk the reader through the components of the framework.
Clearly state the project aim.
Clearly define primary, process, and balancing measures with operational definitions such that anybody could replicate them in their own system.
Follow data prospectively through time with run charts at a minimum, or control charts when possible.
Annotate control charts to help tell the story of how interventions influenced changes in the data over time.
If your team did PDSA cycles (and we highly recommend they do), share what you learned and how those lessons built toward your team's eventual interventions.
FUNDING: Dr Carroll was supported by grant number T32HS026122 from the Agency for Healthcare Research and Quality. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality.
CONFLICT OF INTEREST DISCLOSURES: The authors have indicated they have no potential conflicts of interest relevant to this article to disclose.
Drs Carroll and Johnson initially conceived of the project, wrote sections of the manuscript, and drafted the manuscript; Drs Smith, Frazier, and Weiner helped with conception of the paper and wrote sections of the manuscript; and all authors revised the manuscript, approve the final manuscript as submitted and agree to be accountable for all aspects of the work.