Pediatric emergencies can occur in pediatric primary care offices. However, few studies have measured emergency preparedness or the processes of emergency care in the pediatric office setting. In this study, we aimed to measure emergency preparedness and care in a national cohort of pediatric offices.
This was a multicenter study conducted over 15 months. Emergency preparedness scores were calculated as percentage adherence to 2 checklists based on American Academy of Pediatrics guidelines (an essential equipment and supplies checklist and a policies and protocols checklist). To measure the quality of emergency care, we recruited office teams for simulation sessions consisting of 2 cases: a child with respiratory distress and a child with a seizure. An unweighted percentage of adherence to a checklist was calculated for each case.
Forty-eight teams from 42 offices across 9 states participated. The mean emergency preparedness score was 74.7% (SD: 12.9). The mean essential equipment and supplies subscore was 82.2% (SD: 15.1), and the mean policies and protocols subscore was 57.1% (SD: 25.6). Multivariable analyses revealed that independent practices and smaller total staff size were associated with lower preparedness. The median asthma case performance score was 63.6% (interquartile range: 43.2–81.2), whereas the median seizure case score was 69.2% (interquartile range: 46.2–80.8). Offices that had a standardized process of contacting emergency medical services (EMS) had a higher rate of activating EMS during the simulations.
Pediatric office preparedness remains suboptimal in a multicenter cohort, especially in smaller, independent practices. Academic and community partnerships using simulation can help address gaps and implement important processes like contacting EMS.
The American Academy of Pediatrics has published a policy statement on preparedness for emergencies in the pediatric primary care office. Little is known about adherence to emergency preparedness in pediatric primary care offices and its correlation with the quality of care.
In a national cohort of pediatric primary care offices, adherence to the American Academy of Pediatrics policy statement was suboptimal, especially in smaller, independent practices. Academic and community partnerships using simulation can serve as an effective strategy to improve pediatric offices’ preparedness.
Children with emergent medical needs can first present to pediatric primary care offices,1,2 which are a common entry point into the emergency care continuum. Office-based emergencies are not rare: the incidence of a child requiring emergent stabilization in an individual office ranges from weekly to monthly,1,3,4 and seizures and respiratory distress are the most common office-based emergencies.1
Pediatric office emergency preparedness is defined as the ability to provide high-quality care to children who have life-threatening illnesses or injuries before being transferred to an emergency department (ED).5 A patient presenting to an unprepared office may experience harm because of errors during acute stabilization or delays in the activation of emergency medical services (EMS). An American Academy of Pediatrics (AAP) Policy Statement on preparation for emergencies in pediatric offices was first issued in 20072 and provided recommendations on personnel, equipment, medications, education, policies, and protocols to optimize emergency preparedness.
In previously published research, authors have reported that many pediatric offices are not adequately prepared for emergencies.6,7 Specific identified gaps included providers’ resuscitation skills, availability of equipment and medications, and written plans for pediatric emergencies.6,8 In the existing research measuring pediatric office emergency preparedness, researchers used self-reported surveys to assess adherence to the AAP guidelines or to assess providers’ comfort.6,9
A more robust assessment of pediatric office emergency care is needed. Simulation (especially in situ) is a useful tool to measure clinical care processes and identify safety threats to serve as targets for future interventions.10,11 In situ simulation contributes to realism and accuracy of measurement by bringing the simulator into the clinical environment to measure clinical processes of care by using real-world teams, equipment, and supplies.12,13 It also serves as a tool to identify deficiencies in clinical systems and provider teams’ knowledge and skills.14
Driven by the AAP Policy Statement and the pediatric office’s vital role in emergency care, our network, Improving Pediatric Acute Care Through Simulation (ImPACTS), launched a multiphase improvement initiative to measure and improve pediatric office emergency preparedness nationwide. In a pilot study in which a regional academic medical center (AMC) collaborated with 12 pediatric offices in the District of Columbia metropolitan area, researchers demonstrated wide variability in adherence to the AAP Policy Statement and identified latent safety threats and gaps in clinical care processes during in situ simulations. The pilot study highlighted the need for a national assessment and improvement effort to optimize office emergency preparedness.15
In this article, we report on the implementation of this initiative across a cohort of pediatric offices partnering with regional AMCs. The primary aim of this study was to describe pediatric office emergency preparedness, as measured by adherence to the AAP Policy Statement. Our secondary aim was to measure the quality of pediatric emergency care in participating offices during in situ simulations. An exploratory aim was to describe the correlation between simulated quality of care and office preparedness measures.
Methods
We conducted a multicenter, observational study over a 15-month period (December 2018 to March 2020), which included the following components:
Measurement of adherence to the AAP Policy Statement for pediatric office emergency preparedness by using an in-person survey.
Measurement of the quality of care for 2 simulated pediatric patients with emergencies.
Institutional review board approval was obtained from each collaborating site on the basis of each participating AMC’s requirements; the majority of reviews were deemed exempt.
Study Setting and Population
Investigators from 9 pediatric AMCs each recruited a minimum of 2 pediatric primary care offices in their respective geographic regions. Offices were excluded if they provided subspecialty care or were physically connected to an ED or urgent care center. Urban and/or suburban setting was defined by an estimated EMS response time of <15 minutes on the basis of recorded EMS response times in previously categorized pediatric emergencies.1
Study Protocol
All lead investigators and research coordinators from the participating AMCs completed online train-the-trainer sessions to ensure standardized execution of the study protocol. These sessions were conducted via Zoom (Zoom Video Communications, Inc, San Jose, CA) by the study principal investigators with each participating AMC. Each 90-minute session covered the study protocol, each simulation scenario, the performance and preparedness measurement checklists, and standardized data entry into a centralized database via Qualtrics (Qualtrics Inc, Provo, UT). The AMC team members included pediatric emergency physicians, pediatric critical care physicians, registered nurses, respiratory therapists, medics, and nurse practitioners. The script of these sessions is provided in Appendix 1 in the Supplemental Information.
The recruitment and selection of pediatric offices occurred through multiple methods, including AMC physician liaisons, personal connections, and phone calls and/or e-mails distributed to selected sites. Each pediatric office identified a champion to serve as the site contact who worked with the AMC team to coordinate all study phases.
Measurement of Adherence to AAP Policy Statement
Each AMC conducted an in-person site visit to each participating office and completed a checklist-based pediatric emergency preparedness tool. During this measurement, a trained member of the AMC study team completed the checklist together with the pediatric office champion; the 2 individuals directly verified each item (eg, locating each piece of equipment and reviewing policies and protocols). If the champion and study team were unsure of or unable to locate a scored item during the measurement, no credit was given for that item.
Measurement of the Quality of Simulated Emergency Care
The in situ simulation-based session was conducted to measure the quality of emergency care provided in these offices and to help identify target areas for improvement. Teams from each office were recruited for the simulations to mirror their typical team composition and were composed of general pediatricians (1–2 physicians), advanced practice providers, registered nurses, medical assistants, and administrative staff. Participants were protected from clinical responsibilities during these simulations. Champions recruited providers at each site via an e-mail sent 1 month before the simulation.
All sessions were conducted in the actual office space to promote realism. Teams were required to find the appropriate resources, equipment, and medications within their office. However, these items were replaced by equipment and medications provided by the simulation team to prevent the participating office from incurring costs or using their limited supplies.
Details of the simulation cases are summarized in our previously published work.15 Briefly, each simulation session consisted of 2 scenarios: a child aged 7 years presenting with asthma and a child aged 5 years presenting with a seizure. A standardized, scripted orientation introduced the project and the AMC team and described the format and expectations for the day. At each office, 1 or 2 teams participated. In offices with small staffs, the same team of providers participated in both simulations; in larger offices, the staff were separated into 2 teams, with one team caring for the patient and the other observing. Both teams participated in the debriefings for each case. No incentives were given for participation in the simulated sessions. Other details of the simulation setup, the cases, and checklists are presented in Appendixes 2A and 2B in the Supplemental Information.
Within 48 hours of completing the preparedness checklist and simulation-based measurements, each AMC team entered the collected data into the centralized Qualtrics data collection form, and these data were compiled into a single database across all participating offices.
Measures
Office Emergency Preparedness Scores
We recorded measures of office preparedness for pediatric emergencies at each participating office by using a checklist derived from the AAP Policy Statement, covering equipment, supplies, medications, policies, and protocols. The AAP guideline designates items as either essential for all offices or strongly suggested for offices with EMS response times of >10 minutes, yielding 3 checklists:
Essential equipment and supplies checklist (20 items)
Policies and protocols checklist (9 items)
Strongly suggested equipment and medications checklist (32 items)
Items on all 3 checklists were unweighted, and a dichotomous yes or no response was recorded on the basis of the availability of each item. Each checklist score was normalized to a 100-point scale, and a total emergency preparedness score was calculated on the basis of the essential equipment and supplies checklist and the policies and protocols checklist. Site characteristics were also collected, including EMS response time, distance to the nearest ED, number of staff in the office (staff size), affiliation with an AMC, and annual patient volume; annual patient volume was divided into 4 quartiles.
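To make the scoring concrete, the following minimal Python sketch computes unweighted, normalized checklist scores. This is an illustration only, not the study instrument (data were collected via the Qualtrics checklist): the item names are hypothetical, and pooling the items of both checklists for the total score is our assumption based on the unweighted design.

```python
# Minimal sketch of the unweighted checklist scoring described above.
# Illustrative only; item names are hypothetical placeholders.

def checklist_score(items):
    """Percentage of 'yes' (True) responses, normalized to a 100-point scale."""
    return 100.0 * sum(items.values()) / len(items)

# Dichotomous yes (True) / no (False) availability responses for one office.
essential = {"oxygen_source": True, "pulse_oximeter": True, "infant_bvm": False}
policies = {"written_protocols": False, "regular_drills": True, "ems_process": True}

equipment_subscore = checklist_score(essential)  # essential equipment and supplies
policies_subscore = checklist_score(policies)    # policies and protocols

# Total preparedness score: here we pool the items of the 2 checklists
# (an assumption consistent with the unweighted, item-level design).
total_score = checklist_score({**essential, **policies})
print(f"equipment {equipment_subscore:.1f}%, policies {policies_subscore:.1f}%, "
      f"total {total_score:.1f}%")
```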
Simulation Performance
The simulation scenarios and performance checklists were created by content experts in pediatric emergency medicine and critical care by using evidence-based guidelines and best practices. Content validity was established through a consensus-based approach among experts. The scenarios and checklists were piloted and iteratively adapted in simulations at independent sites that did not participate in this study.
All data were manually entered into Qualtrics and transferred into SPSS v27.0 (IBM SPSS Statistics, IBM Corporation), with which all statistical analyses were performed. For categorical variables, frequencies and percentages were calculated; for continuous variables, medians and interquartile ranges (IQRs) were calculated. Bivariate analyses were used to explore associations between practice characteristics and pediatric preparedness scores, including independent t tests or 1-way analysis of variance tests for normally distributed continuous data. Bivariate analyses using χ2 tests were also used to describe associations between pediatric preparedness checklist items (eg, regular emergency drills and/or practice and a standardized EMS-contact process) and simulation checklist items (eg, EMS activation). We used additional bivariate analyses to explore associations between practice characteristics and simulation scores by using Mann–Whitney U tests.
Finally, to examine which variables explain higher emergency preparedness, we used a generalized linear mixed model with the emergency preparedness score as the dependent variable and a robust variance estimator to account for within-practice correlation. Potential covariates (eg, patient volume, staff size, AMC affiliation, and type of practice) were introduced into the model if bivariate analyses were significant at P < .10. This model accounts for the nesting of teams within each site. Unstandardized β coefficients were reported.
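Although the analyses were performed in SPSS, a rough open-source analogue of this model can be sketched with Python’s statsmodels. Generalized estimating equations (GEE) with an exchangeable within-practice correlation structure pair a robust (sandwich) variance estimator with clustering by practice; this approximates, but is not identical to, the generalized linear mixed model the authors describe. All data and variable names below are synthetic.

```python
# Hedged sketch: a GEE analogue of the clustered regression described above.
# Synthetic data only; the study's analysis was run in SPSS v27.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 48  # teams, nested within offices

df = pd.DataFrame({
    "office_id": rng.integers(0, 42, n),      # nesting: teams within 42 offices
    "independent": rng.integers(0, 2, n),     # 1 = independent practice
    "total_staff": rng.integers(5, 40, n),
    "patient_volume": rng.integers(2000, 12000, n),
    "learners": rng.integers(0, 2, n),
})
# Synthetic outcome loosely shaped like the reported effects.
df["preparedness"] = (
    75 - 10 * df["independent"] + 0.4 * df["total_staff"] + rng.normal(0, 8, n)
)

model = smf.gee(
    "preparedness ~ independent + total_staff + patient_volume + learners",
    groups="office_id",                        # within-practice correlation
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
print(model.fit().summary())  # unstandardized coefficients with robust SEs
```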
Results
Office Characteristics
Forty-two offices from 9 states participated in the study; 16 (38%) were recruited from Indiana and 10 (24%) from Maryland (Table 1). The median annual patient volume was 6000 patients, the median staff size was 17, and the median EMS response time was 5 minutes. The quartiles for annual patient volume were quartile 1: ≤3919 patients; quartile 2: 3920 to 6000 patients; quartile 3: 6001 to 8819 patients; and quartile 4: ≥8820 patients. Fifteen (36%) of the offices were independent practices (ie, not part of a larger group).
Site and Team Demographics
| Variable | N = 42 |
|---|---|
| Site characteristics |  |
| Location, n (%) |  |
| Connecticut | 4 (9.5) |
| District of Columbia | 2 (4.8) |
| Delaware | 4 (9.5) |
| Indiana | 16 (38.1) |
| Massachusetts | 1 (2.4) |
| Maryland | 10 (23.8) |
| Pennsylvania | 1 (2.4) |
| Virginia | 2 (4.8) |
| Washington | 2 (4.8) |
| EMS response time, min, median (IQR) | 5 (5–10) |
| Distance to the ED, miles, median (IQR) | 3.3 (1.8–7.0) |
| Staff size, median (IQR) | 17 (13–24) |
| Learners present, n (%) | 22 (52.4) |
| AMC affiliation, n (%) | 22 (52.4) |
| Independent practice, n (%) | 15 (35.7) |
| Urban and/or suburban setting, n (%) | 42 (100) |
| Shared waiting room, n (%) | 3 (7.1) |
| Annual patient volume, median (IQR) | 6000 (3919–8819) |
| Annual patient volume quartile, n (%) |  |
| 1 (≤3919 patients) | 10 (23.8) |
| 2 (3920–6000 patients) | 15 (35.7) |
| 3 (6001–8819 patients) | 6 (14.3) |
| 4 (≥8820 patients) | 10 (23.8) |
| Team characteristics (N = 48) |  |
| No. team members, median (IQR) | 6 (4–7) |
| Proportion of team composed of physicians, median (IQR) | 0.2 (0.14–0.33) |
Provider and Team Characteristics
A total of 48 teams participated in the simulations across the 42 offices. Teams had a median of 6 members, and the median proportion of physicians per team was 0.2 (IQR: 0.14–0.33) (Table 1).
Emergency Preparedness Scores
The mean emergency preparedness score across the 42 offices was 74.7% (SD: 12.9), and the mean essential equipment and supplies subscore was 82.2% (SD: 15.1). All participating offices had an oxygen source, pediatric oxygen masks, pediatric bag valve masks, nebulizers, albuterol, a pulse oximeter, and blood pressure cuffs. The least commonly available items were infant bag valve masks, cardiac arrest boards, and oral airways, present in 18%, 43%, and 47% of offices, respectively. The mean policies and protocols subscore was 57.1% (SD: 25.6): only 33% of offices had policies for regular self-assessment, and only 43% conducted regular emergency drills (Table 2). The mean preparedness score for the additional, strongly suggested equipment was 38% (SD: 28.3) (Supplemental Table 1).
Essential Equipment and Supplies and Policies and Protocols
| Item | Availability |
|---|---|
| Essential equipment and supplies |  |
| Oxygen source, n (%) | 42 (100) |
| Oxygen flowmeter, n (%) | 41 (97.6) |
| Nasal cannula, pediatric, n (%) | 31 (73.8) |
| Nasal cannula, adult, n (%) | 30 (71.4) |
| Oxygen masks, infant, n (%) | 29 (69.0) |
| Oxygen masks, pediatric, n (%) | 42 (100) |
| Oxygen masks, adult, n (%) | 40 (95.2) |
| Oral airways (sizes 00–5), n (%) | 20 (47.6) |
| BVM, infant (250 mL), n (%) | 8 (18.3) |
| BVM, child (450 mL), n (%) | 42 (100) |
| BVM, adult (1000 mL), n (%) | 40 (95.2) |
| Suction device, n (%) | 27 (64.3) |
| Nebulizer (or MDI), n (%) | 42 (100) |
| Pulse oximeter, n (%) | 42 (100) |
| BP cuffs, n (%) | 42 (100) |
| Cardiac arrest board, n (%) | 18 (42.9) |
| Splints, n (%) | 30 (71.4) |
| Sterile dressings, n (%) | 36 (85.7) |
| Albuterol, n (%) | 42 (100) |
| Epinephrine (1:1000), n (%) | 31 (73.8) |
| Color-coded tape or preprinted drug doses, n (%) | 23 (54.8) |
| Equipment subtotal percentage, mean (SD) | 82.2 (15.1) |
| Policies and protocols |  |
| Regular self-assessment of the office (at least yearly), n (%) | 14 (33.3) |
| Presence of plans for emergency response, n (%) | 19 (45.2) |
| Maintain emergency equipment, including process and checklist for checking that items are working, not expired, and available, n (%) | 26 (61.9) |
| Maintain emergency medications, including process and checklist for checking that items are working, not expired, and available, n (%) | 31 (73.8) |
| Identified individual (or individuals) who maintains equipment and medications, n (%) | 31 (73.8) |
| Conduct regular emergency drills and/or practice (at least yearly), n (%) | 18 (42.9) |
| Standardized process of contacting EMS and providing essential information about patient and location, n (%) | 26 (61.9) |
| Standardized process of contacting local ED and providing essential information about transferred patient, n (%) | 33 (78.6) |
| Policies subtotal percentage, mean (SD) | 57.1 (25.6) |
| Written protocols for emergency response, n (%) | 18 (42.9) |
| Total percentage, mean (SD) | 74.7 (12.9) |
BP, blood pressure; BVM, bag valve mask; MDI, metered-dose inhaler.
Bivariate analyses revealed that several variables were associated with pediatric preparedness scores (Fig 1). Independent practices had lower pediatric preparedness scores compared with practices that were part of a larger group (β = −11.89; 95% confidence interval [CI]: −19.33 to −4.45). Higher annual patient volume and larger total staff size were associated with higher scores (β = .001; 95% CI: 0.00 to 0.001; P = .017 and β = .51; 95% CI: 0.19 to 0.83; P = .002, respectively). AMC affiliation and the presence of learners were not associated with higher scores. In the multivariable regression model, higher annual patient volume was no longer significantly associated with higher preparedness; independent practice remained associated with lower preparedness scores, whereas larger total staff size remained associated with higher scores (β = −10.52; 95% CI: −17.74 to −3.29; P = .005 and β = .41; 95% CI: 0.09 to 0.73; P = .014, respectively). The results of these analyses are in Table 3.
Office preparedness score by practice characteristics. A, Office preparedness score by quartile of annual patient volume: quartile 1 (≤3919 patients), quartile 2 (3920–6000 patients), quartile 3 (6001–8819 patients), and quartile 4 (≥8820 patients). B, Office preparedness score by independence or part of a larger group. C, Office preparedness score by learners present. D, Office preparedness score by AMC affiliation. E, Office preparedness score by total staff size. Gray lines indicate 95% CI. Gray boxes indicate mean and range; individual data points indicate individual pediatric offices. NS, not significant.
Multivariable Analysis of Pediatric Preparedness
| Variable | Unadjusted β (95% CI) | P | Adjusted^a β (95% CI) | SE | P |
|---|---|---|---|---|---|
| Learners present |  |  |  |  |  |
| No | Reference | — | Reference | — | — |
| Yes | 6.86 (−1.12 to 14.83) | .09 | 1.69 (−5.61 to 8.99) | 3.62 | .64 |
| Independent practice |  |  |  |  |  |
| Part of larger group | Reference | — | Reference | — | — |
| Yes, independent | −11.89 (−19.33 to −4.45) | .002 | −10.52 (−17.74 to −3.29) | 3.58 | .005 |
| Patient volume | 0.001 (0.00 to 0.001) | .01 | 0.00 (0.000 to 0.001) | 0.00 | .22 |
| Total staff | 0.51 (0.19 to 0.83) | .002 | 0.41 (0.09 to 0.73) | 0.16 | .01 |
| AMC affiliation |  |  |  |  |  |
| No | Reference | — | — | — | — |
| Yes | −2.44 (−10.77 to 5.89) | .55 | — | — | — |
—, not applicable.
^a Generalized linear mixed model adjusting for within-practice effects, presence of learners, independent practice, patient volume, and total staff.
Simulation-Based Performance
The median performance score for the asthma case was 63.6% (IQR: 43.2–81.2), whereas the median score for the seizure case was 69.2% (IQR: 46.2–80.8). Details of performance on the subcomponents of each case-based checklist are reported in Table 4. Simulation performance stratified by practice characteristics is presented in Supplemental Table 2.
Simulation-Based Measures
| Item | Teams (N = 48) |
|---|---|
| Asthma case |  |
| Patient assessed immediately on request for help, staff member verbalizes concern for rapid assessment, n (%) | 27 (41.7) |
| Staff member asks for help and/or activates code/emergency response, n (%) | 33 (68.8) |
| Airway and breathing assessed, n (%) | 31 (64.6) |
| Documentation of event initiated, n (%) | 13 (27.1) |
| Appropriate equipment brought into room, n (%) | 32 (66.7) |
| Circulation assessed, n (%) | 31 (64.6) |
| Oxygen started, n (%) | 33 (68.8) |
| Inhaled albuterol started, n (%) | 32 (66.7) |
| Second assessment of airway and/or breathing, n (%) | 32 (66.7) |
| Staff member designated to activate EMS, n (%) | 28 (58.3) |
| EMS activation, n (%) | 32 (66.7) |
| Asthma score, median % (IQR) | 63.6 (43.2–81.2) |
| Seizure case |  |
| Patient assessed immediately and staff member verbalizes concern for seizure, n (%) | 34 (70.8) |
| Staff member asks for help and/or activates code and/or emergency response, n (%) | 36 (75.0) |
| Patient moved to a safe position, n (%) | 31 (64.6) |
| Time of seizure and events documented, n (%) | 16 (33.3) |
| Airway and breathing assessed, n (%) | 34 (70.8) |
| Patient positioned appropriately to help open airway, n (%) | 25 (52.1) |
| Appropriate equipment brought into room, n (%) | 35 (72.9) |
| Pulse oximeter applied and reading obtained and/or verbalized, n (%) | 31 (64.6) |
| Oxygen started if hypoxemia (NC and facemask), n (%) | 34 (70.8) |
| Medications administered if indicated (Diastat), n (%) | 14 (29.2) |
| Second assessment of airway and/or breathing, n (%) | 36 (75.0) |
| Staff member designated to activate EMS, n (%) | 31 (64.6) |
| EMS activation, n (%) | 27 (56.3) |
| Seizure score, median % (IQR) | 69.2 (46.2–80.8) |
NC, nasal cannula.
Relationships Between Preparedness Scores, Office Characteristics, and Simulation Scores
We examined simulation scores stratified by 2 of the preparedness checklist items: regular emergency drills and/or practice (policies and protocols item 6) and a standardized process of contacting EMS (policies and protocols item 7). The asthma simulation score was lower at sites that had policies for regular drills: 82% (IQR: 64–91) for those without a policy for regular drills versus 50% (IQR: 36–64) for those with one (P = .002). The difference was nonsignificant for the seizure scores: 69% (IQR: 62–85) vs 54% (IQR: 17–77) (P = .302). Additionally, offices that had a standardized process of contacting EMS had a higher rate of activating EMS during the simulation cases (72% vs 47%; P = .014).
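For readers who want to reproduce this style of bivariate comparison, a small SciPy sketch follows. The numbers are made up for illustration, not the study data, and the actual analyses were run in SPSS.

```python
# Hedged sketch of the bivariate comparisons reported above (synthetic data).
from scipy.stats import mannwhitneyu, chi2_contingency

# Asthma simulation scores (%) split by whether the office had a policy for
# regular drills -- hypothetical values for illustration only.
scores_no_drill_policy = [82, 91, 64, 77, 88]
scores_drill_policy = [50, 36, 64, 41, 58]
u_stat, p_mwu = mannwhitneyu(scores_no_drill_policy, scores_drill_policy,
                             alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, P = {p_mwu:.3f}")

# EMS activation (yes/no) by presence of a standardized EMS-contact process,
# as a 2x2 contingency table (hypothetical counts):
# rows = process present/absent, columns = EMS activated yes/no.
table = [[23, 9],
         [10, 11]]
chi2, p_chi2, dof, _expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, P = {p_chi2:.3f}")
```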
Discussion
This study revealed variability in both pediatric emergency preparedness (adherence to the AAP Policy Statement) and the quality of emergency care measured by in situ simulations in a national sample of pediatric primary care offices. This is the first multicenter study to directly measure pediatric office emergency preparedness and the quality of emergency care. These measurements provide the first step in improvement efforts aimed at ensuring optimal care for children presenting to offices with emergencies, and these data can be used to guide the development of interventions to improve emergency preparedness and care delivery in pediatric offices.
We found that nonindependent offices, offices with larger staff sizes, and offices with higher annual patient volumes had higher preparedness scores; however, in the multivariable analysis, only larger staff size and nonindependent practice remained significantly associated with higher preparedness. This higher preparedness could be secondary to additional staff available to focus on preparedness and to the additional resources available as part of a larger system of practices. Larger staff size may also correlate with higher patient volume and, subsequently, more exposure to pediatric patients, which could contribute to higher preparedness scores.
Despite the AAP Policy Statement being reaffirmed multiple times since its initial publication, pediatric office emergency preparedness remains highly variable. This study adds to previous reports of poor pediatric office preparedness, which relied on self-reported surveys prone to bias; the in-person, direct-observation methods used in this study are less prone to such biases.7,16,17
The mean preparedness score for essential equipment and supplies was 82%, higher than the 64% reported in our pilot study.15 Although some equipment items are rarely used in everyday office-based clinical care, it is concerning that 82% of offices did not have an infant bag valve mask and would therefore need to wait for EMS arrival to administer life-saving ventilation to an infant. This finding highlights the need to have this equipment available and to maintain the skills necessary to care for patients in respiratory distress, the most common emergency encountered in the office setting. A cardiac arrest board is another example of potentially life-saving equipment that was unavailable in the majority of offices, likely because cardiac arrest occurs extremely rarely in the office setting; the lack of a board may lead to poor cardiopulmonary resuscitation quality before the arrival of EMS. The mean preparedness score for the additional equipment, which is strongly suggested only when the EMS response time is >10 minutes, was much lower (38%); this may again be attributable to its rare use in the office setting. Future work should explore the benefit of these items to potentially guide changes to the existing guidelines’ designation of essential equipment.
The mean preparedness score for policies and protocols was low at 57%, with common deficiencies in conducting regular self-assessments and regular emergency drills and/or practice and in having written protocols for emergency response. Despite the AAP recommendation to perform regular mock codes in the office, our findings align with previously published surveys in which regular mock codes were reported in only 20% to 40% of offices. This highlights major opportunities for future improvement through providing templates for standardized policies.6,8,16
Surprisingly, we did not find a correlation between office preparedness scores and simulation performance scores. This could be attributed to the small sample size or to the fact that the presence of certain equipment and supplies does not necessarily translate to high-quality care. We noted that offices with policies for regular drills had lower asthma performance scores, which could be secondary to the poor quality of the simulation drills conducted by pediatric offices or to the limited validity evidence for the simulation checklists used. We also noted that offices with a standardized process of contacting EMS had a higher rate of activating EMS during the simulations. This is an important finding because a streamlined process for contacting EMS helps ensure timely transfer and definitive resuscitative care.
All participating sites received a customized report of office-based emergency preparedness and the quality of simulated care (Appendix 3 in the Supplemental Information). Additionally, all offices received clinical and educational resources and continued to collaborate with the AMCs to support improvement efforts. This collaborative model mirrors the components of our published ED readiness improvement collaboratives that used in situ simulation.10,11,15,18,19 In future work, we will focus on developing, implementing, and evaluating improvement interventions involving AMCs collaborating with regional offices.
Our study has a few limitations. First, our recruitment method may have led to selection bias, with recruited office sites being more engaged in emergency preparedness, which may limit the generalizability of our findings; we did not recruit rural offices, offices with low patient volume, or offices that provided care to both children and adults. To mitigate this limitation, we recruited a spectrum of sites to represent the range of offices in the nation. Second, the emergency preparedness checklists we used have limited validity evidence, and their items are not weighted; however, these checklists are derived from an AAP Policy Statement and represent the best checklists available in the literature. Similarly, the simulated case checklists have limited validity evidence regarding internal structure and consequences. Lastly, we did not assess interrater reliability of the checklist scoring because only 1 member of the study team performed each on-site measurement; however, all lead investigators and research coordinators underwent a train-the-trainer session to ensure consistency and standardization.
Conclusions
This study revealed variability in the emergency preparedness and the quality of simulated emergency care provided in pediatric primary care offices. Essential life-saving equipment, such as an infant bag valve mask, was missing in most offices, and many offices did not have emergency policies and procedures, highlighting the need for efforts to assess and improve pediatric office emergency preparedness. Academic and community partnerships are a promising strategy to address these gaps because they have already been found effective in the ED setting. This study informs future collaborative efforts to update the current policy statement on pediatric office preparedness for emergencies and serves as a baseline for developing interventions to improve emergency preparedness and emergency care in the pediatric office.
Acknowledgments
We thank the members of the ImPACTS group, including Ellie Hamburger, MD, Kathleen Kadow, MD, and Kathy Prestidge from Children’s National Pediatricians and Associates; Kimberly Dawson, MSN, RN, CPEN, TCRN, EMT, Heather Sobolewski, MSN, RN-BC, CHSE, from Nemours/Alfred I. duPont Hospital for Children; and Theresa A. Walls, MD, MPH, from Center for Simulation, Advanced Education, and Innovation, Children’s Hospital of Philadelphia, Philadelphia, PA. We also thank the pediatric offices and their champions who facilitated the study and worked collaboratively with their pediatric AMC team; the members of the International Network for Simulation-based Pediatric Innovation, Research and Education who have helped to shape this project with their contributions; and the International Pediatric Simulation Society for providing the International Network for Simulation-based Pediatric Innovation, Research and Education with space at their annual meetings for our research group.
Drs Abulebda, Auerbach, and Yuknis conceptualized and designed the study, drafted the initial manuscript, and reviewed and revised the manuscript; Dr Whitfill executed the data analyses, drafted the initial manuscript, and reviewed and revised the manuscript; Mrs Montgomery, Mrs Pearson, Mrs Rousseau, Drs Diaz, Brown, Wing, Tay, and Lavoie, and Mrs Good, Drs Malik, Makharashvili, Garrow, and Zaveri, Mrs Thomas, and Dr Burns made substantial contributions to the study’s conception and design and critically revised the manuscript; and all authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work.
FUNDING: Drs Yuknis and Abulebda received support from the Indiana University Health Values Grant VFE-342 to MY for study design and periodic meetings with the collaborators; Dr Auerbach received support from the RBaby Foundation (https://www.rbabyfoundation.org/) for data management and analysis; and the other authors have indicated they have received no external funding.
References
Competing Interests
POTENTIAL CONFLICT OF INTEREST: Dr Whitfill is a board observer of a medical device company, 410 Medical, Inc (Durham, NC), which commercializes a device for fluid resuscitation; and the other authors have indicated they have no potential conflicts of interest to disclose.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.