Across the world, in medical journals, news media, social media, and personal conversations, one of the hottest topics is the coronavirus disease 2019 (COVID-19) vaccine. The rapid development of effective vaccines for COVID-19 is a remarkable success story of modern science, and it reminds all of us of the significant health benefits of vaccines in general, benefits we often take for granted. Although the COVID-19 vaccine is the media star of the day, children receive numerous other routine vaccinations, including the hepatitis B vaccine, which significantly reduces the risk of hepatitis B virus infection. However, with all vaccines, including the hepatitis B vaccine, there are significant gaps in coverage, with high percentages of eligible children either not receiving the vaccine at all or receiving it late, thus exposing them to the risk of infectious diseases. A quotation from Orenstein1 has recently surfaced in discussions about the COVID-19 vaccine: “vaccines don’t save lives, vaccinations save lives.” Gaps in vaccine coverage generate the need for quality improvement (QI) projects to close these gaps.
In this issue of the journal, Sarathy et al2 describe the results of a QI project in which they aimed to increase the monthly percentage of eligible newborn infants receiving the first dose of the hepatitis B vaccine within 24 hours of life to ≥80% within 9 months of project initiation. This article contains some important lessons that deserve to be highlighted about the conduct, description, and publication of QI projects. In 2008, the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines were first published. They were updated in 2015 as the SQUIRE 2.0 guidelines. Although many journals that publish QI work require authors to follow these guidelines, many published articles still do not include important SQUIRE 2.0 elements. The article by Sarathy et al2 is a good example of an article written in the SQUIRE 2.0 format, as such an article should be.
In this project, the aim was to close a gap between optimal and actual practice. The gap existed because there was a clear evidence-based expert recommendation about when to administer the hepatitis B vaccine, and yet, in that institution, only 40% of eligible infants were receiving the vaccine as recommended. Such “gap-closure” QI projects are generally more straightforward than QI projects that start by targeting clinical outcomes, in which the aim is to change the frequency of a given outcome, such as bronchopulmonary dysplasia, intraventricular hemorrhage, or length of stay. With QI projects targeting clinical outcomes, the causal and contributory factors are often multiple and complex, the proportional contribution of each factor to the outcome is hard to determine accurately, and the evidence is often sparse and of low quality. In contrast, with gap-closure projects, the evidence is of high quality, and the clinical actions to be taken are clear. Other examples of gap-closure projects are those in which researchers attempt to improve hand hygiene, compliance with screening recommendations, use of prophylactic interventions, or nutritional intake. In such projects, the QI team has to understand why the gap between evidence-based recommendations and practice exists, determine the reasons for noncompliance with guidelines or recommendations, change the knowledge and attitudes of health professionals, remove barriers to implementing the practice, enhance facilitators of preferred practices (or use forcing functions, often within the electronic medical record [EMR]), and provide periodic feedback about guideline compliance to the health professionals. Sarathy et al,2 in their QI project, used a multifaceted set of such interventions that included education, reminders, policy changes, communication, barrier removal, audit and feedback, and public (extrainstitutional) recognition of good performance as a motivator.
In QI projects, interventions are tested serially, often in combination and over the life of the project; the dose and nature of the intervention may change and evolve over time. Unlike a clinical trial with a rigid, inflexible protocol, QI interventions involve reflexivity and adaptiveness as interventions are tested and changes in the system of care are measured. Therefore, over the life of a QI project, many changes, both simple and complex, may be made to processes, tools, workflows, resources, and the EMR. In describing these interventions in a QI article or conference presentation, authors often tend to be either too broad in their description or too detailed, including numerous minor details. It can be challenging to summarize in writing the actions taken and changes made during a QI project succinctly yet with sufficient detail for the readers to understand what was done. In this respect, Sarathy et al2 achieved a good balance of succinctness and detail in describing their project methodology.
In modern QI projects, it is common for teams to use interventions related to the EMR, such as order sets, reflex orders, decision support, or forcing functions. With their report, Sarathy et al2 remind us that, by themselves, these EMR-based interventions might not be effective. In their hospital, a standing order for hepatitis B vaccine administration was included in the preapproved newborn admission order set. Nurses were supposed to activate this order and administer the vaccine, and the order would be cosigned by a provider later. This was not happening as intended. Nurses did not start activating the order regularly until formal hospital approval was granted for them to do so. This phenomenon illustrates the importance of not relying on seemingly simple technological solutions to ensure good clinical practice: they might not be effective until combined with education, process redesign, incentives, reassurance, and facilitators.
An aspect of QI articles that is often suboptimal is the description of the quantitative results. Although this has improved in the 10 years since the SQUIRE guidelines were first published, in too many published QI articles today, researchers use traditional statistical methods and before-after comparisons to claim success. When they do use the preferred statistical process control (SPC) methodology for data analysis and depiction, they often make mistakes. A common mistake is to change the control limits of the SPC chart on the basis of when an intervention was applied rather than when a statistical signal of special cause variation was noted. In contrast, Sarathy et al2 used rigorous SPC methods and graphs to depict their results and appropriately readjusted the control limits on the SPC charts on the basis of when the special causes appeared. They did not provide much detail about their data collection methods, and their use of G charts (which are designed for rare events) to analyze, as a secondary outcome, the same data captured by the p charts, albeit starting at a different point, seems like a redundant analysis. The data revealed a significant and sustained improvement in hepatitis B vaccine administration rates in the first 24 hours of life. However, this improvement commenced even before the first intervention was applied in the project (a special cause variation was observed in the data 3 months before the first test of change). The authors do not dwell on this finding, but it highlights the need for researchers, in all QI projects, to analyze data ordered over time (by day, week, or month) in SPC charts to identify signals that might be lost when data are combined into 2 before-and-after buckets and displayed as bar graphs.
It is important for QI teams not only to design and conduct their projects in a rigorous and scientific manner but also to publish their results. The SQUIRE 2.0 guidelines provide QI team members with a framework to design a project, document its details as it evolves, and draft an article for publication. The data in such projects should be analyzed and displayed over time in the form of SPC charts, with selection of the correct type of chart and appropriate readjustment of the control limits when warranted. Compared with the publishing of traditional research, publishing about QI is still in its infancy. In their article, Sarathy et al2 provide a good example of how it should be done.
Acknowledgement
We thank Patrick Brady, MD, for his advice on editing the article.
Opinions expressed in these commentaries are those of the author and not necessarily those of the American Academy of Pediatrics or its Committees.
Dr Gautham conceptualized the editorial, drafted the initial manuscript, revised it, and approved the final manuscript as submitted.
FUNDING: No external funding.
COMPANION PAPER: A companion to this article can be found online at www.pediatrics/cgi/doi/10.1542/hpeds.2020.002766.
References
Competing Interests
POTENTIAL CONFLICT OF INTEREST: The author has indicated he has no potential conflicts of interest to disclose.
FINANCIAL DISCLOSURE: The author has indicated he has no financial relationships relevant to this article to disclose.