Adverse drug events (ADEs) are a significant safety risk for hospitalized patients.1 Medical centers devote considerable resources to quality and safety initiatives aimed at mitigating these events, particularly those that are preventable and/or lead to patient harm. However, the precise nature of these events, their frequency, their severity, and the degree of harm they cause are not well understood, especially in pediatric patients. Although it is well documented that voluntary incident-reporting systems inadequately capture ADEs, consistent measurement approaches and reporting have remained elusive.2
In this issue of Pediatrics, Gates et al3 review the literature on pediatric preventable adverse drug events (pADEs) published between 2000 and 2017. Their well-designed and thorough article reveals many of the serious challenges facing this area of harm detection. Despite overall harm rates being low, the literature reviewed by the authors is heterogeneous in terms of the frequency of pADEs, and they make clear the difficulty in accurately describing the frequency and severity of harm experienced by pediatric patients. For example, Gates et al3 note that the reported incidence of pADEs ranges from 0 to 43 per 100 admissions. Thus, depending on the study, either no pADEs occurred during the study period or more than 40% of admissions involved a pADE. This difference is unlikely to be rooted in patient severity of illness, complexity, or hospital safety standards. More likely, it reveals the variability of measurement methods, definition application, and reporting standards within pediatrics.
Given the variability of the reported harm rates, a question arises from the authors’ review of the literature: which rates of pADEs and subsequent harm are accurate? Certainly, this calls into question the methods being used to detect the harm. The inconsistency in research approaches to measuring pADEs suggests that similarly inconsistent approaches, shaped by each institution’s operational understanding of pADEs, are likely in use across pediatric hospitals. This variability affects safety measurement, management, and interventions because inconsistent approaches may lead to an inaccurate understanding of the safety of a hospital’s medication delivery system.
Another example of the impact these inconsistencies have on our understanding of pADEs is the fact that the authors are forced to collapse the numerous severity scales used in the original articles into 3 categories. Although this methodology is a reasonable and useful approach, the mere presence of so many different severity scales points to a lack of reporting standards in the field. Additionally, although it may seem reassuring that most of the harms described were categorized as minor, researchers in only 7 of the 22 articles reviewed used any method to rank the severity of harm resulting from these pADEs. With inconsistent use of severity scales, understanding the results broadly is challenging.
Recently, a group of authors noted poor agreement among novice medical record reviewers in determining the presence or absence of an adverse event.4 These authors note that the reviewers’ consistency improved with experience. Although not addressed in that work, we wonder whether inconsistent application of adverse event definitions, as well as preventability definitions, contributes heavily to the variability of pADE findings.
Only 12 of the 22 articles reviewed described the methods used to determine harm related to ADEs, which makes understanding the results of these studies and their lessons for future harm detection difficult. Among the studies that listed their harm detection methods, there were no indications that 1 methodology was plainly superior. Although voluntary reporting is clearly inadequate, there is no clear consensus regarding preferred methods, such as chart review (prospective or retrospective), direct observation, and trigger tools. Trigger tool methods have been shown to detect significantly higher levels of harm in the pediatric population than other methods, yet not all researchers agree.5,6 Recently, Maaskant et al7 compared a multifaceted review approach (chart review, reporting, pharmacy logs, and direct observation) to the use of a pediatric medication-focused trigger tool.8 The multifaceted approach detected more errors and associated harms than the trigger-tool methodology in a small sample. The inconsistencies in these findings highlight the importance of the methods used, the application of each method, the triggers used, as well as the training of the reviewers. In 2010, Long et al9 demonstrated that the rate of ADEs detected could be increased by specifically tailoring the trigger tool to the pediatric population.
Moving forward, the field needs precise, consistent methods for detecting pADEs in children, preferably those that are designed for the pediatric population and the unique harms they may experience. Furthermore, once pADEs are detected, researchers must reliably describe the severity of harm resulting from the pADEs. Data published with these attributes could help to reveal the impact pADEs have on morbidity and mortality, pediatric length of stay, and cost. Ultimately, more complete and consistent descriptions of pADEs in the pediatric population could also help drive more targeted and effective quality and safety initiatives, ideally resulting in safer hospital stays for children.
Opinions expressed in these commentaries are those of the authors and not necessarily those of the American Academy of Pediatrics or its Committees.
FUNDING: No external funding.
COMPANION PAPER: A companion to this article can be found online at www.pediatrics.org/cgi/doi/10.1542/peds.2018-0805.
Competing Interests
POTENTIAL CONFLICT OF INTEREST: Dr Stockwell is a part-time employee of Pascal Metrics, a Patient Safety Organization; and Dr Schroeder has indicated he has no potential conflicts of interest to disclose.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.