Video Abstract

OBJECTIVES

To describe the quality of pediatric resuscitative care in general emergency departments (GEDs) and to determine hospital-level factors associated with higher quality.

METHODS

Prospective observational study of resuscitative care provided to 3 in situ simulated patients (infant seizure, infant sepsis, and child cardiac arrest) by interprofessional GED teams. A composite quality score (CQS) was measured and the association of this score with modifiable and nonmodifiable hospital-level factors was explored.

RESULTS

A median CQS of 62.8 of 100 (interquartile range 50.5–71.1) was noted for 287 resuscitation teams from 175 emergency departments. In the unadjusted analyses, a higher score was associated with the modifiable factor of an affiliation with a pediatric academic medical center (PAMC) and the nonmodifiable factors of higher pediatric volume and location in the Northeast and Midwest. In the adjusted analyses, a higher CQS was associated with modifiable factors of an affiliation with a PAMC and the designation of both a nurse and physician pediatric emergency care coordinator, and nonmodifiable factors of higher pediatric volume and location in the Northeast and Midwest. A weak correlation was noted between quality and pediatric readiness scores.

CONCLUSIONS

A low quality of pediatric resuscitative care, measured using simulation, was noted across a cohort of GEDs. Hospital factors associated with higher quality included: an affiliation with a PAMC, designation of a pediatric emergency care coordinator, higher pediatric volume, and geographic location. A weak correlation was noted between quality and pediatric readiness scores.

What’s Known on This Subject:

The majority of pediatric patients requiring resuscitation initially present to general emergency departments. Little is known about the quality of pediatric resuscitative care or what hospital-level factors are associated with higher quality of care in these departments.

What This Study Adds:

Gaps in the quality of pediatric resuscitative care were noted across 175 general emergency departments. Specific factors associated with higher quality included affiliation with a pediatric academic medical center, designation of pediatric emergency care coordinators, higher pediatric volume, and geography.

Each year, 30 million acutely ill and injured children receive care in >5000 US emergency departments (EDs).1  The majority (>90%) of these children initially present to general emergency departments (GEDs) that concurrently care for children and adults.2,3  There is a paucity of research describing the quality of pediatric resuscitative care in GEDs because of the low frequency of pediatric resuscitation in each ED and limited research infrastructure.4  Understanding the factors associated with higher-quality pediatric resuscitative care could help target the implementation of ongoing pediatric improvement initiatives such as the National Pediatric Readiness Project, trauma center verification, and/or designation of pediatric emergency care coordinators (PECCs).5–7  Previous studies have noted a higher quality of pediatric resuscitative care in pediatric EDs compared with GEDs, measured by adherence to performance checklists during the resuscitation of simulated pediatric patients.8–13  In that work, the analysis of hospital-level factors was limited because of the small sample size and limited geography.

This study aimed to use simulation as an investigative methodology to describe the quality of pediatric resuscitative care, using previously described standardized in situ simulations and performance checklists across a large sample of GEDs.11  It also aimed to explore GED modifiable and nonmodifiable factors associated with higher-quality pediatric resuscitative care. We hypothesized that the following factors would be associated with higher quality: affiliation with a pediatric academic medical center (PAMC), adult trauma center designation, higher pediatric readiness, the presence of PECCs, higher pediatric patient volume, and closer proximity to a PAMC.

Between 2013 and 2019, Improving Pediatric Acute Care Through Simulation, a collaborative pediatric simulation research group within the International Network for Simulation-based Pediatric Innovation, Research, and Education, completed this prospective, multicenter, observational study in GEDs in the United States and Canada.14  Each participating PAMC recruited 2 teams from a convenience sample of at least 1 GED in their respective geographic region. The PAMCs selected these hospitals on the basis of transfer patterns to the PAMC and existing relationships with staff at these hospitals. GEDs were defined as EDs staffed by physicians, not fellowship trained in pediatric emergency medicine, that cared for both children and adults. GEDs could not have clinical staffing from a PAMC, though hospitals could have staffing relationships outside of the ED environment (inpatient or newborn).

A team from each PAMC conducted a daylong, on-site visit to each participating GED, during which GED teams participated in simulations and a pediatric readiness survey was completed at each site. The PAMC team included a physician, a nurse, research associates, and/or simulation technicians who set up the simulation equipment, conducted the simulations, collected data, and facilitated postsimulation debriefings. All teams followed a standard operating procedure that detailed simulator setup and began each session with a scripted introduction, followed by a brief “warm-up” simulation of an infant foreign body aspiration. Next, these teams facilitated 3 standardized study scenarios, each 15 to 20 minutes in duration, in the following order: (1) infant sepsis, (2) infant seizure, and (3) child cardiac arrest. The simulated patients presented in series with a parent actor and were cared for by interprofessional GED teams in the resuscitation bays of each participating ED. Site champions recruited the GED teams to volunteer for these sessions using e-mails and sign-up sheets. GED teams were structured to mirror the composition of the team that typically responds when a critically ill pediatric patient presents to the ED: 1 to 2 emergency physicians or advanced practice providers, 2 to 5 nurses, and additional staff (nursing assistants, respiratory therapists, emergency medical technicians). A scripted 20-minute debriefing was conducted by the PAMC team after each case. All participants were protected from other clinical responsibilities during these simulations. Simulation-based performance data specific to each case were collected by trained raters on paper data collection forms via direct observation during the simulations. After the simulations, data were entered into a centralized data platform. Before enrolling any teams, all site investigators participated in a standardized train-the-trainer session to calibrate scoring and ensure the accuracy of data collection and simulation logistics across all sites. Details on the simulation logistics, scenarios, checklists, and the train-the-trainer process are available at www.impactscollaborative.com and have been reported in a previous publication.11  The data from the 58 teams included in the previous publication are included in this study. Institutional review board approval was obtained from Yale University (Protocol ID 2000031706) and at each collaborating PAMC.

Definitions of Modifiable Hospital-Level Factors

  1. Affiliation with a PAMC: A contractual relationship between the GED and the PAMC. A PAMC was defined as a children’s hospital with a pediatric residency program that served as the pediatric site for medical education.

  2. Adult trauma center designation: Verification by the American College of Surgeons Verification, Review, and Consultation Program on the basis of the American College of Surgeons Directory.6 

  3. Weighted pediatric readiness score (WPRS): A summary score of adherence to the 2009 National Pediatric Readiness Project Joint Policy Statement for the Care of Children in the ED.15,16  The score was normalized to a 100-point scale using the published rubric to weight questions from the Pediatric Readiness Survey across 6 domains: (1) administration and coordination of care, (2) physician/nurse staffing, (3) quality improvement, (4) patient safety, (5) policies, procedures, and protocols, and (6) equipment, supplies, and medications.5  At each site, a GED manager, director, or educator not participating in the simulations completed the survey. Findings were verified in person by a trained member of the PAMC team on the day of the simulation for all scored survey items (eg, locating each piece of equipment, reviewing policies and guidelines in paper or electronic form). If a scored item could not be located during the in-person assessment, it was considered nonexistent. A schematic scoring sketch with placeholder domain weights follows this list.

  4. Designation of a physician and/or nurse PECC: A nurse and/or physician who is assigned the role of coordinating various administrative aspects of pediatric emergency care, measured by a response to the 2 questions related to a PECC (#22, #25) on the Pediatric Readiness Survey.5 
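The sketch below illustrates the structure of a weighted, 100-point readiness score as described in item 3. The domain names follow the definition above, but the weights are placeholders chosen only for illustration; they are not the published rubric's weights, which are assigned at the level of individual survey questions.

```python
# Hypothetical sketch of a weighted, 100-point readiness score.
# The weights below are placeholders, NOT the published rubric's values.
DOMAIN_WEIGHTS = {
    "administration_coordination": 10.0,
    "physician_nurse_staffing": 10.0,
    "quality_improvement": 10.0,
    "patient_safety": 15.0,
    "policies_procedures_protocols": 20.0,
    "equipment_supplies_medications": 35.0,
}  # placeholder weights summing to 100

def weighted_readiness_score(domain_adherence: dict) -> float:
    """domain_adherence maps each domain to fractional adherence in [0, 1]."""
    return sum(weight * domain_adherence.get(domain, 0.0)
               for domain, weight in DOMAIN_WEIGHTS.items())

# Example: full adherence for equipment, half adherence everywhere else.
example = {domain: 0.5 for domain in DOMAIN_WEIGHTS}
example["equipment_supplies_medications"] = 1.0
print(round(weighted_readiness_score(example), 1))  # 67.5 of 100
```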

Definitions of Nonmodifiable Hospital-Level Factors

  1. Annual pediatric patient volume: Based on the site’s response to the Pediatric Readiness Survey classification system: (1) low volume: <1800 pediatric patients per year, (2) medium volume: 1800 to 4999 pediatric patients per year, (3) medium–high volume: 5000 to 9999 pediatric patients per year, and (4) high volume: >10 000 pediatric patients per year.17 

  2. Distance to a PAMC: The driving distance from each GED to the regional PAMC that conducted the simulation, determined by using Google Maps (Google LLC, Mountain View, CA).

  3. Geographic region: Defined by the US Census Bureau as the following 4 regions: (1) Northeast, (2) South, (3) Midwest, and (4) West.18  Canadian sites were excluded from the geographic region analysis.

  4. Urban/rural classification: Based on the 2010 Rural–Urban Commuting Area Codes system from the US Department of Agriculture Economic Research Service, which classifies levels 1 to 3 as urban, levels 4 to 6 as suburban, and levels 7 to 9 as rural.19  Both this classification and the pediatric volume categories in item 1 are sketched in code after this list.
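Both categorical mappings above reduce to simple threshold rules; a minimal sketch follows. The function names are illustrative, and the handling of exactly 10 000 annual pediatric visits (which falls between the published bands) is an assumption.

```python
def volume_category(annual_pediatric_visits: int) -> str:
    """Pediatric Readiness Survey volume bands, as described in item 1."""
    if annual_pediatric_visits < 1800:
        return "low"
    if annual_pediatric_visits < 5000:
        return "medium"
    if annual_pediatric_visits < 10_000:
        return "medium-high"
    return "high"  # assumption: exactly 10 000 grouped with >10 000

def urbanicity(ruca_code: int) -> str:
    """2010 Rural-Urban Commuting Area classification, as described in item 4."""
    if 1 <= ruca_code <= 3:
        return "urban"
    if 4 <= ruca_code <= 6:
        return "suburban"
    if 7 <= ruca_code <= 9:
        return "rural"
    raise ValueError(f"unexpected RUCA code: {ruca_code}")

print(volume_category(5550), urbanicity(2))  # medium-high urban
```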

Composite Quality Score: Simulation-Based Performance

The composite quality score (CQS) was calculated as the mean of the 3 case-specific adherence percentages, measured with checklists created in a previous study11  on the basis of established guidelines: (1) pediatric sepsis guidelines,20  (2) pediatric status epilepticus guidelines,21  and (3) pediatric cardiac arrest guidelines.22  The maximum CQS of 100 reflected complete adherence to all elements of the guidelines, and the minimum CQS of 0 reflected nonadherence to all elements of the guidelines.
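As a concrete illustration of this definition, the sketch below computes a CQS as the unweighted mean of the 3 case-level adherence percentages. The checklist item counts in the example are hypothetical, not the study's actual checklist lengths.

```python
# Minimal sketch of the CQS: the mean of the three per-case checklist
# adherence percentages, bounded between 0 and 100.

def case_adherence(items_completed: int, items_total: int) -> float:
    """Percentage of checklist items performed correctly for one case."""
    return 100.0 * items_completed / items_total

def composite_quality_score(sepsis: float, seizure: float, arrest: float) -> float:
    """Unweighted mean of the three case adherence percentages."""
    return (sepsis + seizure + arrest) / 3.0

# Example team: 6/12 sepsis items, 5/7 seizure items, 8/13 arrest items.
cqs = composite_quality_score(
    case_adherence(6, 12),   # 50.0
    case_adherence(5, 7),    # ~71.4
    case_adherence(8, 13),   # ~61.5
)
print(round(cqs, 1))  # ~61.0
```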

Data Analysis

All data were manually entered into Qualtrics (Qualtrics, LLC, Provo, UT) and transferred into SPSS (v. 27.0; IBM SPSS Statistics, IBM Corporation), with which all statistical analyses were performed. Data were first assessed for completeness. Annual pediatric volume data were missing for 26 sites (15%) because the survey logic did not mark this question as required. Missing pediatric volume for these sites was imputed by taking each ED’s total patient volume, available through the Healthcare Cost and Utilization Project, and multiplying it by 20% to derive pediatric patient volume.1  This approach was based on Healthcare Cost and Utilization Project data reporting that pediatric patients constitute 20% of all ED visits in the United States.
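A minimal sketch of this imputation rule is shown below, assuming a pandas DataFrame with illustrative column names (not the study's actual variable names).

```python
import pandas as pd

# Where annual pediatric volume is missing, estimate it as 20% of the ED's
# total annual volume (the HCUP estimate of the pediatric share of US ED visits).
PEDIATRIC_SHARE = 0.20

df = pd.DataFrame({
    "site": ["A", "B", "C"],
    "total_ed_volume": [25000, 40000, 18000],
    "pediatric_volume": [6000, None, None],   # two sites missing
})

missing = df["pediatric_volume"].isna()
df.loc[missing, "pediatric_volume"] = df.loc[missing, "total_ed_volume"] * PEDIATRIC_SHARE
print(df)
```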

Data were examined for normality and homogeneity in each analysis using Shapiro-Wilk tests. Differences in WPRS and CQS were examined by pediatric patient volume using bivariate analyses. Descriptive statistics were calculated for demographic variables: frequencies and percentages for categorical variables, and means and SDs for continuous variables. Variables were stratified by the hospital factors described above, and Pearson χ2 tests were conducted for categorical data and independent t tests for normally distributed continuous data. Kruskal-Wallis tests and analysis of variance were used for nonparametric and normally distributed data, respectively, when comparing >2 groups. Multiple comparisons were adjusted using a Bonferroni correction. Distance to the PAMC was analyzed as a categorical variable consisting of quartiles. Generalized linear mixed models with a robust variance estimator were used to model simulation performance as the dependent variable. Potential covariates included PAMC affiliation, trauma center status, WPRS quartiles, designation of PECCs, annual pediatric patient volume, distance to PAMC, geographic region, and rurality; these predictors were chosen on the basis of statistical and clinical significance. The model accounted for multiple teams nested within GEDs.
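One way to approximate this modeling approach outside SPSS is a linear mixed model with a random intercept for each GED, so that the 1 to 2 teams from the same department are not treated as independent observations. The sketch below uses synthetic data, a reduced predictor set, and the statsmodels MixedLM implementation; it does not reproduce the study's robust variance estimator or full covariate list.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic team-level data: 1-2 teams per GED, a GED-level random intercept,
# and two illustrative predictors (PAMC affiliation and a simplified volume band).
rng = np.random.default_rng(0)
rows = []
for ged in range(80):
    pamc = rng.integers(0, 2)                        # 1 = affiliated with a PAMC
    volume = rng.choice(["low", "medium", "high"])   # simplified volume bands
    site_effect = rng.normal(0, 5)                   # GED-level random intercept
    for _ in range(rng.integers(1, 3)):              # 1 or 2 teams per GED
        cqs = (55 + 8 * pamc + {"low": 0, "medium": 5, "high": 7}[volume]
               + site_effect + rng.normal(0, 8))
        rows.append({"ged_id": ged, "pamc": pamc, "volume": volume, "cqs": cqs})
teams = pd.DataFrame(rows)

# Random-intercept mixed model: teams clustered within GEDs.
model = smf.mixedlm("cqs ~ pamc + C(volume)", data=teams, groups=teams["ged_id"])
print(model.fit().summary())
```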

A total of 287 interprofessional teams from 175 GEDs were included in the analysis. Hospital, GED, and team characteristics are included in Table 1. Although all GEDs committed to enrolling 2 teams on recruitment, many were unable to schedule 2 full teams and enrolled only 1 team. No pediatric specialty-trained staff participated in any GED team. Pediatric patient volume ranged from 100 to 45 000 pediatric patients per year, with a median of 5550 pediatric patients per year (interquartile range [IQR] 3753–9875). Forty-six GEDs (26.3%) were classified as low pediatric patient volume, 58 (33.1%) as medium volume, 40 (22.9%) as medium–high volume, and 31 (17.7%) as high volume. The median distance from GEDs to the regional PAMC was 27.5 miles (IQR 12.8–60.0). The majority of GEDs (77.7%) were located in urban areas, and the median WPRS was 60.1 (IQR 48.9–72.3). The map provided in Fig 1 demonstrates the distribution of GEDs in each geographic region: 51 (29.1%) in the Northeast, 44 (25.1%) in the South, 42 (24.0%) in the Midwest, and 33 (18.9%) in the West.

TABLE 1

Baseline Variables Across Spectrum of GEDs

n (%) or Median (IQR)
Institutional characteristics N = 175 GEDs 
ED volume (%)  
 Low <1800 per y 46 (26.3) 
 Medium 1800–4999 per y 58 (33.1) 
 Medium–high 5000–9999 per y 40 (22.9) 
 High >10 000 per y 31 (17.7) 
Hospital location (%)  
 Urban 136 (77.7) 
 Suburban 25 (14.3) 
 Rural/remote 8 (4.6) 
Hospital region (%)  
 Northeast 51 (29.1) 
 South 44 (25.1) 
 Midwest 42 (24.0) 
 West 33 (18.9) 
 Canada 5 (2.9) 
Trauma center status (%)  
 None 122 (69.7) 
 Adult verified trauma center 53 (30.3) 
PAMC affiliation status (%)  
 Yes 55 (31.5) 
 No 120 (68.6) 
Distance to PAMC, miles, median (IQR) 27.5 (12.8–60.0) 
Team characteristics N = 287 teams 
Median team size 7 (5–9) 
Median team experience in y (IQR) 7.8 (5.1–12.0) 
ED pediatric readiness N = 175 GEDs 
WPRS 60.1 (48.9–72.3) 
 Coordination of care 50.0 (0–100) 
 Physician/nurse staffing 50.0 (0–100) 
 Quality improvement 0 (0–85.7) 
 Patient safety 75.0 (65.0–100.0) 
 Policies/procedures 47.3 (25.0–73.6) 
 Equipment and supplies 83.2 (75.9–90.2) 
FIGURE 1

PAMC and GED hospital locations. Map showing the location of PAMCs and GEDs that participated in this study.


The median CQS for all teams across all cases was 62.8 (IQR 50.5–71.1). Table 3 includes the individual median scores for the 3 cases: sepsis 50 (IQR 33.3–66.7), cardiac arrest 61.5 (IQR 46.2–84.6), and seizure 57.1 (IQR 57.1–71.4). Factors associated with simulation performance in the specific cases are reported in Supplemental Table 4.

The unadjusted analyses of GED characteristics and CQS are reported in Supplemental Table 5. In this analysis, an affiliation with a PAMC was the only modifiable factor associated with higher CQS (P < .001). Affiliation with a PAMC was not collinear with annual pediatric volume. There were no associations of trauma center verification, the designation of a nurse and/or physician PECC, or WPRS with CQS (P = .858, P = .172, and P = .378, respectively). Higher pediatric volume was significantly associated with higher CQS (P = .047). There were no associations between CQS and geographic region, urban/rural designation, or distance from a PAMC.

The adjusted analyses of GED characteristics and CQS are reported in Table 2. In this analysis, 3 of the 4 modifiable and 2 of the 3 nonmodifiable factors demonstrated significance. Two modifiable factors were associated with higher CQS: affiliation with a PAMC (b = 8.6, P = .020) and designation of both a physician and nurse PECC (b = 12.1, P = .025). Nonmodifiable factors associated with a higher CQS included medium (b = 18.3, P < .001) and high (b = 9.7, P = .013) pediatric patient volume and location outside of the South (South vs Northeast: b = −13.4, P = .013).

TABLE 2

Adjusted Analysis of CQS

Estimate (95% CI) SE P
Nonmodifiable factors    
ED volume    
 Low (ref) — — — 
 Medium 18.3 (9.7–26.9) 4.3 <.001 
 Medium–high 2.8 (−4.3 to 9.7) 3.5 .428 
 High 9.7 (2.1–17.3) 3.9 .013 
Hospital location    
 Urban (ref) — — — 
 Suburban −3.3 (−10.0 to 3.4) 3.4 .335 
 Rural/remote 0.17 (−9.7 to 10.0) 5.0 .973 
Geographic region    
 Northeast (ref) — — — 
 South −13.4 (−23.9 to −2.8) 5.3 .013 
 Midwest 0.35 (−10.1 to 10.8) 5.3 .948 
 West −6.6 (−17.5 to 4.3) 5.5 .232 
Modifiable factors    
Trauma center verification    
 None (ref) — — — 
 Verified adult trauma center −2.59 (−7.8 to 2.6) 2.6 .328 
Affiliation with a PAMC    
 No affiliation with a PAMC (ref) — — — 
 Affiliation with a PAMC 8.57 (1.38 to 15.8) 3.5 .020 
Presence of PECC    
 None (ref) — — — 
 MD or nurse −2.0 (−10.0 to 6.0) 3.0 .619 
 MD and nurse 12.1 (1.6 to 22.6) 5.3 .025 
WPRS quartile    
 1 (ref) — — — 
 2 −2.5 (−10.3 to 5.3) 3.9 .524 
 3 −4.9 (−12.4 to 2.6) 3.8 .196 
 4 −13.8 (−26.3 to −1.2) 6.3 .032 

CI, confidence interval; MD, doctor of medicine; ref, reference. —, not applicable.

TABLE 3

CQS Measured by Simulation

Median (IQR) %
Team-level, simulation-based performance, percentage N = 287 teams 
CQS, median (IQR) 62.8 (50.5–71.1) 
 Sepsis adherence 50.0 (33.3–66.7) 
 Cardiac arrest adherence 61.5 (46.2–84.6) 
 Seizure adherence 57.1 (57.1–71.4) 

With regard to the WPRS, EDs affiliated with PAMCs had higher WPRS (b = 0.087, P < .001), whereas GEDs located in the South and West regions had lower WPRS (b = −10.10, P = .07 and b = −12.32, P = .035, respectively). The presence of 1 or 2 PECCs was also associated with higher WPRS (P < .001). Other variables were not significantly associated with WPRS, as summarized in Supplemental Tables 6 and 7. Surprisingly, GEDs with WPRS in the highest quartile (quartile 1, ≤48.9; quartile 2, 49.0–60.1; quartile 3, 60.2–72.3; quartile 4, ≥72.4) had significantly lower CQS (b = −13.8, P = .032). Plotting WPRS versus CQS showed a weak linear correlation (r2 = 0.016, P = .028) (Supplemental Fig 2).
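The correlation reported here corresponds to a standard Pearson correlation between site-level WPRS and CQS, with r squared reported as the summary statistic. The sketch below uses placeholder arrays rather than study data.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative placeholder values for site-level WPRS and CQS (not study data).
wprs = np.array([45.0, 52.3, 60.1, 68.9, 72.3, 81.0])
cqs = np.array([58.2, 66.0, 60.5, 55.1, 70.3, 61.8])

r, p_value = pearsonr(wprs, cqs)          # Pearson correlation and P value
print(f"r^2 = {r**2:.3f}, P = {p_value:.3f}")
```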

This is one of the largest studies describing the quality of pediatric resuscitative care, measured using simulation in GEDs. Overall, a low quality of care was noted, measured as overall adherence to guidelines for the 3 studied clinical conditions during in situ simulations, with a CQS of 62.8 of 100 (IQR 50.5–71.1) across 287 resuscitation teams from 175 GEDs. This score contrasts with a CQS of 82 of 100 for 15 PAMCs that was reported in a previous study.11  The adjusted analysis revealed that higher-quality resuscitative care was associated with modifiable factors of affiliation with a PAMC and designation of both a nurse and physician PECC. These data support the development of PAMC affiliations and the designation of both physician and nurse PECCs in GEDs as components of improvement programs. In addition, higher-quality pediatric resuscitative care was associated with the nonmodifiable factors of higher pediatric patient volumes and location in the Northeast, Midwest, and West. These data suggest prioritizing the implementation of improvement interventions in low pediatric-volume GEDs, especially among those in the Southeastern region.

The finding of an association of higher-quality pediatric resuscitative care, measured using simulation, in GEDs with an affiliation with a regional PAMC is novel. These affiliations are becoming more common with the consolidation of health care into regional health care systems, including mergers and acquisitions. This work supports the potential positive impact of these affiliations on the quality of pediatric care.23  Heterogeneous models of affiliation were noted in this cohort, with some involving only cobranding with the logo of the PAMC and others involving the PAMC implementing clinical pathways, conducting education, and/or designating PECCs.7  Although telemedicine consultation was available during all simulations, it was not used by any of the teams. During the debriefings, the participating sites with telemedicine capabilities stated that they do not regularly use this technology for pediatric cases. Expanded use of telemedicine during clinical care could be one method to mitigate these quality gaps, because there is evidence that telemedicine can enhance the quality of pediatric resuscitative care.24  Details on the specific elements of the PAMC affiliation were not collected as part of the study, limiting our ability to analyze specific attributes of the affiliation that are associated with quality. Previous work in small cohorts of PAMCs affiliated with GEDs has described longitudinal collaborative educational programs as a component of an affiliation that results in improvements in the quality of resuscitative care delivered to real and simulated pediatric patients with diverse conditions (sepsis, status epilepticus, supraventricular tachycardia, cardiac arrest, diabetic ketoacidosis, congenital heart disease, child abuse).7,8,10,13,25–28 

The finding of an association of higher quality of care with the designation of both a nurse and physician PECC compared with 1 or no PECC is consistent with previous work.17  We did not collect information on the details of these and other pediatric improvement programs from participating GEDs. However, during the study, it came to our attention that some PECCs in our cohort were participating in additional improvement activities, such as statewide pediatric medical facility recognition programs and regional quality improvement collaboratives.29 

On the basis of these findings, we suggest that PECCs should be designated at the initiation of this type of work. Without a PECC, the PAMC may struggle to sustain interventions because its team cannot feasibly visit all sites on a regular basis. The designation of a PECC and affiliation with a PAMC together can be powerful tools to improve pediatric readiness. We recommend collaboration between PAMCs and PECCs to create sustainable local interventions.

Surprisingly, our study did not find an association between the quality of resuscitative care and WPRS. The adjusted analysis found that GEDs in the highest quartile for WPRS had lower CQS. This contrasts with previous studies in which lower WPRS was associated with higher mortality in critically ill pediatric patients presenting to a cohort of EDs, and critically injured pediatric patients presenting to trauma centers.30,31  The associations of WPRS and survival in both of these studies were only significant when comparing the highest WPRS sites with the lowest WPRS sites. The association was not strong when comparing GEDs at the lower range of WPRS. None of the GEDs in our study would fall into the highest quartiles in either study.26,27  The lower WPRS noted in this study could be attributable to our exclusion of pediatric centers. In addition, it is important to note that both previous studies used an independent self-reported assessment methodology for the WPRS, whereas our study required an independent in-person verification by the PAMC team. The lowest performance was noted for the sepsis case, followed by seizure and cardiac arrest. The gaps in performance most often related to medication dosing and administration for these 2 conditions.

Looking at nonmodifiable factors, the association of higher annual pediatric volume with a higher quality of pediatric care is consistent with previous reports describing disparities in pediatric emergency care and superior patient outcomes in EDs with higher pediatric patient volumes.3,31  Our study is different from previous work in that it was conducted among a cohort of GEDs exclusively and did not include pediatric EDs. Additionally, the large geographic distribution of this project identified a disparity in quality, with lower-quality pediatric resuscitative care associated with a geographic location in the Southeastern region compared with other parts of the country. This finding is particularly important because previous work using census data identified that only 14.9% of children living in the Southeastern region have timely access to high-scoring pediatric-ready EDs.32  The lack of an association between quality of care and the distance from the PAMC or urbanicity in this study contrasts with current efforts by our team to target improvement interventions in GEDs in rural regions and/or those at a long distance from the PAMC. This may be attributable to the limited enrollment of rural EDs in this cohort.

This study has several limitations. First, our recruitment of GEDs required a commitment to engaging 2 teams in a full-day in situ simulation during which staff were protected from clinical duties. Although GEDs committed to recruiting 2 teams, some could only schedule 1 full team for enrollment in the study. This could have biased our findings toward lower-performing GEDs that are interested in additional support from PAMCs and/or higher-performing GEDs that are already engaged in pediatric improvement efforts. These selection biases may limit the generalizability of our findings and could be a threat to the validity of our results. Second, we did not collect data on the specific elements of each GED’s affiliation with the PAMC, which limited our ability to examine specific aspects of affiliation that improve quality, such as telemedicine. Third, there was an unequal geographic distribution of GEDs and an underrepresentation of rural EDs in this cohort. Finally, we used in situ simulation as a tool to measure the quality of care, which may not accurately translate to the quality of clinical care provided to actual patients. There is a growing body of evidence supporting the validity of using simulation to measure the quality of care: properly developed and validated simulation-based assessments provide a robust measurement of the processes of patient care and adherence to guidelines.33,34  The assessment instruments used in this study have limited validity evidence demonstrating a direct association of their performance measures with clinical quality.

In a large cohort of 175 GEDs, a low quality of pediatric resuscitative care, measured using simulation, was noted. Modifiable factors associated with a higher quality of pediatric resuscitative care included affiliation with a PAMC and the designation of PECCs. Nonmodifiable factors associated with higher quality were higher pediatric volume and geographic location of the GED. A weak correlation was noted between quality and pediatric readiness scores. Future work is needed to examine the specific elements of affiliation that lead to improved quality and to develop and test pediatric emergency care improvement initiatives targeting lower pediatric volume EDs.

The Improving Pediatric Acute Care Through Simulation collaborative thanks the GED frontline providers and their interprofessional teams who worked collaboratively with their regional pediatric academic medical center teams. We also thank the members of the International Network for Simulation-based Pediatric Innovation, Research, and Education (inspiresim.org) who have helped to shape this project, and the Society for Simulation in Healthcare and the International Pediatric Simulation Society for providing the International Network for Simulation-based Pediatric Innovation, Research, and Education with space at their annual meetings for our research group. Additionally, we thank Snimarjot Kaur and Ambika Bhatnagar for their support as research associates on this project.

Drs Auerbach and Abulebda, Mr Whitfill, and Ms Montgomery conceptualized and designed the study, designed the data collection instruments, collected data, coordinated and supervised data collection, conducted the initial analysis, and drafted the initial manuscript; Drs Leung, Kessler, Gross, Walsh, Hamilton, Kant, Janofsky, Brown, Alletag, Walls, Artega, Keilman, Van Ittersum, Rutman, Zaveri, Schoen, Lavoie, Mannenbach, Bigham, Dudas, Rutledge, Okada, Anderson, Tay, Scherzer, Vora, Fenster, Jones, Aebersold, Berg, Makharashvili, Katznelson, Mathias, Lutfi, Abu-Sultaneh, Jafri, Crellin, Burns, Padlipsky, Butler, Alander, and Thomas, and Ms Gawel, Ms Sessa, Ms Good, Ms Moegling, Ms Gaither, Ms Chatfield, Ms Knight, and Ms Lee critically reviewed and revised the manuscript, conceptualized and designed the study, collected data, and contributed to the drafting of the initial manuscript for important intellectual content; and all authors critically reviewed and revised the manuscript, approved the final manuscript as submitted, and agree to be accountable for all aspects of the work.

FUNDING: Supported by a grant from RBaby Foundation (Rbabyfoundation.org) to Yale University to support project administration, coordination, and statistical analysis (Dr Auerbach). Other sources of funding included the Indiana University Health Values Grant, Riley Children Foundation (Dr Abulebda), the University of California San Francisco Medical Centre (Dr Kant), and the University of Alabama at Birmingham.

CONFLICT OF INTEREST DISCLOSURES: Dr Whitfill is a board director of a medical device company, 410 Medical Inc. (Durham, NC), which commercializes a device for fluid resuscitation; the other authors have indicated they have no potential conflicts of interest to disclose.

Abbreviations: CQS, composite quality score; ED, emergency department; GED, general emergency department; IQR, interquartile range; PAMC, pediatric academic medical center; PECC, pediatric emergency care coordinator; WPRS, weighted pediatric readiness score.

1. McDermott KW, Stocks C, Freeman WJ. HCUP Statistical Brief #242: overview of pediatric emergency department visits, 2015.
2. Whitfill T, Auerbach M, Scherzer DJ, Shi J, Xiang H, Stanley RM. Emergency care for children in the United States: epidemiology and trends over time. J Emerg Med. 2018;55(3):423–434.
3. Hudgins JD, Monuteaux MC, Bourgeois FT, et al. Complexity and severity of pediatric patients treated at United States emergency departments. J Pediatr. 2017;186:145–149.e1.
4. Barata I, Auerbach M, Badaki-Makun O, et al. A research agenda to advance pediatric emergency care through enhanced collaboration across emergency departments. Acad Emerg Med. 2018;25(12):1415–1426.
5. National Pediatric Readiness Assessment. Available at: https://www.pedsready.org/. Accessed May 11, 2021.
6. American College of Surgeons. Verification, Review, and Consultation Program for excellence in trauma centers.
7. Barata IA, Stadnyck JM, Akerman M, et al. Novel approach to emergency departments’ pediatric readiness across a health system. Pediatr Emerg Care. 2020;36(6):274–276.
8. Abu-Sultaneh S, Whitfill T, Rowan CM, et al. Improving simulated pediatric airway management in community emergency departments using a collaborative program with a pediatric academic medical center. Respir Care. 2019;64(9):1073–1081.
9. Abulebda K, Abu-Sultaneh S, White EE, et al. Disparities in adherence to pediatric diabetic ketoacidosis management guidelines across a spectrum of emergency departments in the state of Indiana: an observational in situ simulation-based study [published online ahead of print April 24, 2018]. Pediatr Emerg Care. doi:10.1097/PEC.0000000000001494.
10. Alsaedi H, Lutfi R, Abu-Sultaneh S, et al. Improving the quality of clinical care of children with diabetic ketoacidosis in general emergency departments following a collaborative improvement program with an academic medical center. J Pediatr. 2022;240:235–240.e1.
11. Auerbach M, Whitfill T, Gawel M, et al. Differences in the quality of pediatric resuscitative care across a spectrum of emergency departments. JAMA Pediatr. 2016;170(10):987–994.
12. Kessler DO, Walsh B, Whitfill T, et al. Disparities in adherence to pediatric sepsis guidelines across a spectrum of emergency departments: a multicenter, cross-sectional observational in situ simulation study. J Emerg Med. 2016;50(3):403–415.e1–3.
13. Lutfi R, Montgomery EE, Berrens ZJ, et al. Improving adherence to a pediatric advanced life support supraventricular tachycardia algorithm in community emergency departments following in situ simulation. J Contin Educ Nurs. 2019;50(9):404–410.
14. Improving Pediatric Acute Care Through Simulation. Available at: https://www.impactscollaborative.com. Accessed May 1, 2021.
15. Remick K, Gausche-Hill M, Joseph MM, Brown K, Snow SK, Wright JL; American Academy of Pediatrics, Committee on Pediatric Emergency Medicine, Section on Surgery; American College of Emergency Physicians, Pediatric Emergency Medicine Committee; Emergency Nurses Association, Pediatric Committee. Pediatric readiness in the emergency department. Ann Emerg Med. 2018;72(6):e123–e136.
16. American Academy of Pediatrics, Committee on Pediatric Emergency Medicine; American College of Emergency Physicians, Pediatric Committee; Emergency Nurses Association Pediatric Committee. Joint policy statement–guidelines for care of children in the emergency department. Pediatrics. 2009;124(4):1233–1243.
17. Gausche-Hill M, Ely M, Schmuhl P, et al. A national assessment of pediatric readiness of emergency departments. JAMA Pediatr. 2015;169(6):527–534.
18. United States Census Bureau.
19. US Department of Agriculture. USDA Economic Research Service Rural–Urban Commuting Area Codes.
20. Dellinger RP, Levy MM, Rhodes A, et al; Surviving Sepsis Campaign Guidelines Committee including the Pediatric Subgroup. Surviving Sepsis Campaign: international guidelines for management of severe sepsis and septic shock: 2012. Crit Care Med. 2013;41(2):580–637.
21. Glauser T, Shinnar S, Gloss D, et al. Evidence-based guideline: treatment of convulsive status epilepticus in children and adults: report of the Guideline Committee of the American Epilepsy Society. Epilepsy Curr. 2016;16(1):48–61.
22. Kleinman ME, Chameides L, Schexnayder SM, et al. Part 14: pediatric advanced life support: 2010 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation. 2010;122(18 Suppl 3):S876–S908.
23. Fleishon HB, Itri JN, Boland GW, Duszak R Jr. Academic medical centers and community hospitals integration: trends and strategies. J Am Coll Radiol. 2017;14(1):45–51.
24. Marcin JP, Nesbitt TS, Kallas HJ, Struve SN, Traugott CA, Dimand RJ. Use of telemedicine to provide pediatric critical care inpatient consultations to underserved rural Northern California. J Pediatr. 2004;144(3):375–380.
25. Abulebda K, Lutfi R, Whitfill T, et al. A collaborative in situ simulation-based pediatric readiness improvement program for community emergency departments. Acad Emerg Med. 2018;25(2):177–185.
26. Abulebda K, Thomas A, Whitfill T, Montgomery EE, Auerbach MA. Simulation training for community emergency preparedness. Pediatr Ann. 2021;50(1):e19–e24.
27. Abulebda K, Whitfill T, Montgomery EE, et al. Improving pediatric readiness in general emergency departments: a prospective interventional study. J Pediatr. 2021;230:230–237.e1.
28. Abulebda K, Whitfill T, Mustafa M, et al; Improving Pediatric Acute Care Through Simulation (ImPACTS). Improving pediatric readiness and clinical care in general emergency departments: a multicenter retrospective cohort study. J Pediatr. 2022;240:241–248.e1.
29. Whitfill TM, Remick KE, Olson LM, et al. Statewide pediatric facility recognition programs and their association with pediatric readiness in emergency departments in the United States. J Pediatr. 2020;218:210–216.e2.
30. Newgard CD, Lin A, Olson LM, et al; Pediatric Readiness Study Group. Evaluation of emergency department pediatric readiness and outcomes among US trauma centers. JAMA Pediatr. 2021;175(9):947–956.
31. Ames SG, Davis BS, Marin JR, et al. Emergency department pediatric readiness and mortality in critically ill children. Pediatrics. 2019;144(3):e20190568.
32. Ray KN, Olson LM, Edgerton EA, et al. Access to high pediatric-readiness emergency care in the United States. J Pediatr. 2018;194:225–232.e1.
33. Bond WF, Lammers RL, Spillane LL, et al; Society for Academic Emergency Medicine Simulation Task Force. The use of simulation in emergency medicine: a research agenda. Acad Emerg Med. 2007;14(4):353–363.
34. Brydges R, Hatala R, Zendejas B, Erwin PJ, Cook DA. Linking simulation-based educational assessments and patient-related outcomes: a systematic review and meta-analysis. Acad Med. 2015;90(2):246–256.
