There is an urgent need to prepare pediatricians to care for children with behavioral and mental health (B/MH) conditions. In this study, we evaluate the perceived competence of pediatric residents and recent graduates in the assessment and treatment of B/MH conditions, characterize variation in competence across residency programs, and identify program characteristics associated with high competence.
Cross-sectional survey of applicants for the initial certifying examination in pediatrics. Questions focused on (1) who should be competent in B/MH skills, (2) institutional support for B/MH training, and (3) perceived competence in 7 B/MH assessment skills and 9 treatment skills. Competence was rated on a 5-point scale, and high levels of assessment and treatment competence were defined as scores of ≥4. Composite measures for B/MH assessment and treatment were calculated as mean scores for each domain. We examined variation in residents’ self-reported competence across programs and used linear regression to identify factors associated with high levels of competence at the program level.
Of applicants, 62.3% responded to the survey (n = 2086). Of these, 32.8% (n = 595) reported high competence in assessment skills and 18.9% (n = 337) in treatment skills. There were large variations in reported competence across programs. Respondents from small programs (≤30 trainees) reported higher competence in assessment and treatment than those from large programs (P < .001).
Current and recent pediatric trainees do not report high levels of perceived competence in the assessment and treatment of children with B/MH conditions. The substantial variation across programs indicates that the pediatric community should create standards for B/MH training.
Pediatric residents have historically reported low competence in assessing and treating behavioral and mental health conditions. In 2009, the American Academy of Pediatrics published mental health competencies for pediatric primary care, incentivizing improvements in pediatric training.
As of 2018, pediatric trainees still report low competence in the assessment and treatment of children with behavioral and mental health conditions. Trainees at smaller programs report higher competence, although substantial variation exists across all residency program sizes.
There is an urgent need to advance pediatric training and prepare pediatricians to more effectively care for children with behavioral and mental health (B/MH) concerns. The prevalence of pediatric B/MH conditions increased by 20.9% over the last decade, whereas chronic physical illnesses decreased by 11.8%.1 Although 1 in 5 children has a B/MH condition, only half receive appropriate treatment, partly because of a shortage of B/MH specialists.2,3 Additionally, children with chronic medical conditions may have comorbid B/MH conditions, which negatively impact disease control, quality of life, and costs.4 It is critical that pediatricians improve B/MH care to address this crisis.5,6
Despite decades of acknowledging the gap in pediatric B/MH training, little is known about how to prepare pediatricians to competently address these problems. In 1997, the Accreditation Council for Graduate Medical Education mandated a 4-week developmental and behavior rotation with the expectation that a formal rotation would increase future pediatricians’ competence in B/MH skills.7 However, a national survey of graduating residents conducted in 2007, 10 years after the Accreditation Council for Graduate Medical Education mandate, revealed that few residents felt competent in the assessment and treatment of B/MH conditions.8 Specifically, 17% of respondents had very good or excellent perceived competence in diagnosing anxiety and 8% in dosing with antidepressant or anxiety medications.
In 2009, the American Academy of Pediatrics (AAP) articulated B/MH competencies for primary care pediatricians, indicating that training to achieve these skills needed to extend beyond a single 4-week subspecialty rotation.9 Unfortunately, in 2011, less than half of pediatric residency program directors (PDs) knew about these competencies, and most reported that the majority of B/MH skills were taught during a single rotation.10 PDs rarely rated their residents’ knowledge in the assessment and treatment of B/MH conditions as very good or excellent, with the exception of attention-deficit/hyperactivity disorder (ADHD).
Recently, the American Board of Pediatrics (ABP) declared gaps in B/MH training a crisis and called for improved pediatric B/MH training.11 Given the urgency, we aimed to (1) assess the perceived competence of pediatric residents and recent graduates in the assessment and treatment of B/MH conditions, (2) characterize variation in reported competence across residency programs, and (3) identify program characteristics associated with high levels of perceived competence.
Methods
Sample and Data Source
Residents and recent graduates applying for the first time for the initial certifying examination in general pediatrics from January 2018 to May 2018 were offered a B/MH-focused survey at the end of the application. Applicants were informed that participation had no bearing on their application and that results would be shared only in the aggregate. Excluded from this analysis were trainees from residency programs in Canada and trainees in combined pediatrics-psychiatry residency programs. The ABP’s Institutional Review Board of record deemed the survey exempt.
The B/MH training questions asked respondents about their attitudes regarding how competent primary care and subspecialty trainees should be with B/MH concerns and about their perceived competence in the assessment and treatment of the B/MH issues listed below. Specifically, the survey was used to assess the extent to which respondents agreed that trainees going into primary care and/or subspecialty care should be competent in the assessment, treatment, and comanagement or referral of patients with common B/MH problems and to assess perceptions of their program and faculty’s commitment to B/MH training (strongly agree, agree, disagree, strongly disagree). Respondents were asked to rate their competence in 7 assessment skills: eliciting parental B/MH concerns; using screening tools to identify B/MH concerns; using disorder-specific rating scales to help with diagnosis; diagnosing ADHD, anxiety, or depression by using criteria in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition12; and assessing suicidality. Respondents were also asked to rate their competence in 9 treatment skills: using evidence-based communication and engagement strategies; behavioral management counseling; nonmedication strategies for ADHD, depression, or anxiety; dosing ADHD or depression medications; titrating medications; safety counseling; and comanagement with mental health specialists. Competence was measured on a 5-point scale (poor [1], fair [2], good [3], very good [4], and excellent [5]), consistent with previous studies.8 Survey development was informed by previous studies.8,10
Descriptor Variables
Individual characteristics examined included age, sex, race and/or ethnicity, years since training, planned future clinical role (general pediatrics, subspecialist, hospitalist, or other/unsure), and medical school location (American versus international medical school). Program characteristics were also available from the ABP’s resident tracking database and included the size of the program (small: 0–30 trainees; medium: 31–60 trainees; large: >60 trainees), program type (categorical or combined programs such as medicine-pediatrics), and US Census Bureau regions.
Outcomes
The primary outcome was perceived competence in assessment and treatment of common B/MH conditions. For this analysis, we created binary variables, with high self-reported competence defined as a score of ≥4 (very good and excellent) compared with lower self-reported competence (poor, fair, and good). We chose this method of dichotomization for consistency with previous studies, allowing us to assess whether competence ratings were improved.8 We calculated B/MH assessment and treatment composite scores for each respondent with complete data by calculating the mean score for the 7 B/MH assessment submeasures and the 9 treatment submeasures, respectively.
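The composite and dichotomization logic described above can be sketched in a few lines. This is an illustrative example only (the analysis itself was conducted in Stata); the item ratings shown are hypothetical.

```python
# Illustrative sketch of the composite-score and high-competence definitions
# described above. Data are hypothetical, not from the study.

def composite(scores):
    """Mean of item ratings on the 1-5 scale (requires complete data)."""
    return sum(scores) / len(scores)

def high_competence(scores, threshold=4.0):
    """High perceived competence: composite score >= 4 (very good/excellent)."""
    return composite(scores) >= threshold

# Hypothetical respondent: 7 assessment item ratings and 9 treatment item ratings
assessment = [5, 4, 4, 4, 5, 4, 4]
treatment = [3, 3, 2, 4, 3, 3, 2, 4, 3]

print(round(composite(assessment), 2))  # 4.29 -> high competence
print(high_competence(assessment))      # True
print(high_competence(treatment))       # False (composite of 3.0)
```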
Statistical Analysis
To identify individual and program-level factors associated with high self-reported competence, we calculated differences in baseline characteristics between respondents with high competence in B/MH assessment and treatment and those reporting less competence using χ2 tests and t tests, as appropriate. We then examined variation in B/MH assessment and treatment competence across residency programs by calculating the percentage of respondents at each residency program who had composite assessment and treatment scores ≥4. This resulted in program-level assessment and treatment measures on a 0 to 100 scale. To provide stable estimates, we limited this analysis to residency programs with at least 5 respondents. We examined ranges and coefficients of variation (CVs) ([SD/mean] × 100) in small, medium, and large programs to identify the degree of variation (small: CV < 10; moderate: CV 10–<20; high: CV ≥ 20).13 We also examined differences in these measures across and within small, medium, and large residency programs using analysis of variance.
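The program-level measures and CVs defined above can be illustrated as follows. This sketch uses hypothetical respondent composites; the actual analysis was performed in Stata.

```python
# Illustrative sketch (hypothetical data): the program-level measure is the
# percentage of a program's respondents with composite scores >= 4, and the
# coefficient of variation across programs is (SD / mean) * 100.
from statistics import mean, stdev

def program_measure(composites, threshold=4.0):
    """Percent of a program's respondents with composite >= threshold (0-100 scale)."""
    return 100 * sum(c >= threshold for c in composites) / len(composites)

def coefficient_of_variation(values):
    """CV = (SD / mean) * 100; CV >= 20 was classified as high variation."""
    return stdev(values) / mean(values) * 100

# Hypothetical composites for respondents at three small programs
programs = [
    [4.1, 3.6, 4.4, 2.9, 4.0],
    [3.2, 3.0, 2.8, 3.9, 4.2],
    [4.5, 4.1, 3.8, 4.3, 4.0],
]
scores = [program_measure(p) for p in programs]
print(scores)  # [60.0, 20.0, 80.0]
print(round(coefficient_of_variation(scores), 1))  # high variation (CV >= 20)
```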
To identify factors associated with these program-level assessment and treatment measures, we first developed unadjusted linear regression models with residency program as the unit of analysis. We then developed multivariable models, adjusting for program size, geographic region, and program type (categorical pediatrics or medicine-pediatrics) as well as program-level mean values of individual factors (age, sex, race and/or ethnicity, and medical school location). As a sensitivity analysis, we repeated these regression analyses using a more liberal definition of perceived competence, calculating measures on the basis of the percentage of respondents at each residency program who had composite assessment and treatment scores ≥3. All analyses were conducted by using Stata 15 (StataCorp, College Station, TX),14 with 2-sided tests and P < .05 considered statistically significant.
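With a single categorical predictor, the unadjusted regression coefficient for each program-size group equals that group's mean program-level score minus the reference group's mean, which is why the coefficients in Table 2 can be read directly as mean differences on the 0 to 100 scale. A minimal sketch with hypothetical program-level scores (the actual models were fit in Stata):

```python
# Illustrative sketch (hypothetical data): for a one-way categorical predictor,
# unadjusted OLS coefficients reduce to differences in group means relative to
# the reference category.
from statistics import mean

def unadjusted_coefficients(scores_by_group, reference):
    """Coefficients (0-100 score scale) relative to the reference category."""
    ref_mean = mean(scores_by_group[reference])
    return {group: mean(vals) - ref_mean
            for group, vals in scores_by_group.items() if group != reference}

# Hypothetical program-level assessment scores grouped by program size
scores = {
    "small (<=30)": [50.0, 40.0, 45.0],
    "medium (31-60)": [35.0, 30.0, 25.0],
    "large (>60)": [30.0, 25.0, 20.0],
}
print(unadjusted_coefficients(scores, reference="small (<=30)"))
# {'medium (31-60)': -15.0, 'large (>60)': -20.0}
```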
Results
A total of 2086 individuals participated in this survey (response rate: 62.3%). There were no significant differences between respondents and nonrespondents with respect to age, sex, medical school location, training program size, and geographic region (Supplemental Table 3).
Almost all respondents believed that residents entering primary care as well as subspecialties should be competent in the assessment, treatment, referral, and comanagement of common childhood B/MH concerns. Specifically, there was high agreement (ie, ≥98.0% agreeing or strongly agreeing with each skill) for residents entering primary care. The majority also agreed that residents going into pediatric subspecialty fellowships required competence in the assessment (96.7%) and referral or comanagement (95.8%) of B/MH conditions, but fewer (86.3%) agreed that they required competence in the treatment of B/MH conditions. Of respondents, 92.6% (n = 1536) agreed or strongly agreed that their training program was committed to ensuring that graduating residents can address B/MH problems, and 92.6% (n = 1537) agreed or strongly agreed that faculty and staff promoted the importance of B/MH by showing their concern and incorporating conversations about B/MH into each interaction.
A smaller proportion of respondents self-reported very good or excellent (high) competence in their own B/MH assessment and treatment skills. The distribution of perceived competence across the B/MH assessment and treatment submeasures is shown in Fig 1. Within the assessment domain, eliciting parent and patient concerns about B/MH problems was the most highly rated competency, with 65.6% of respondents (n = 1196) reporting high competence. However, less than half of respondents reported high competence in diagnosing anxiety (43.4%, n = 773). In the treatment domain, the relative distribution of responses was shifted toward lower ratings, with a low of 26.6% (n = 474) of respondents reporting high competence in dosing medications to treat anxiety or depression and a high of 51.7% (n = 943) reporting this degree of competence in using evidence-based tools, such as motivational interviewing, to encourage engagement in treatment.
Respondent-reported competence in behavioral and mental health assessment (A) and treatment (B) domains.
Overall, one-third (32.8%; n = 595) of respondents had mean B/MH assessment composite scores ≥4. The characteristics of these respondents, relative to those with mean B/MH composite scores <4, are shown in Table 1. Respondents who reported competence with B/MH assessment were older, more frequently men, and graduates of international medical schools. A smaller proportion (18.9%; n = 337) of respondents had mean B/MH treatment composite scores of ≥4. As with assessment competence, respondents reporting treatment competence were older, more frequently men, and graduates of international medical schools; in addition, treatment competence was reported more frequently by respondents who anticipated subspecialist careers. With respect to program-level characteristics, both assessment and treatment competence were reported significantly more often by graduates of smaller residency programs, graduates of programs in the Northeast, and graduates from combined residency programs (eg, medicine-pediatrics).
Individual and Residency Program Characteristics of Respondents Overall and According to Mental Health Assessment and Treatment Composite Scores
Characteristic | Full Sample (N = 2086) | Assessment: Less Competent (n = 1188) | Assessment: Competent (n = 595) | P | Treatment: Less Competent (n = 1446) | Treatment: Competent (n = 337) | P |
---|---|---|---|---|---|---|---|
Respondent characteristics | |||||||
Age, y, mean (SD) | 31.4 (2.8) | 31.3 (2.7) | 31.7 (3.3) | <.01 | 31.3 (2.8) | 32.0 (3.5) | <.01 |
Female sex, n (%) | 1499 (71.9) | 876 (73.7) | 408 (68.6) | .02 | 1070 (74.0) | 214 (63.5) | <.01 |
Race and/or ethnicity, n (%) | .55 | .06 | |||||
White | 1119 (60.1) | 718 (60.4) | 353 (59.3) | — | 889 (61.5) | 182 (54.0) | — |
Hispanic, Latino, or Spanish origin | 163 (8.8) | 102 (8.6) | 59 (9.9) | — | 125 (8.6) | 36 (10.7) | — |
African American | 97 (5.2) | 65 (5.5) | 23 (3.9) | — | 75 (5.2) | 13 (3.9) | — |
Asian American | 354 (19.0) | 218 (18.4) | 121 (20.3) | — | 260 (18.0) | 79 (23.4) | — |
>1 race | 41 (2.2) | 27 (2.3) | 13 (2.2) | — | 33 (2.3) | 7 (2.1) | — |
Other or missing | 87 (4.7) | 58 (4.9) | 26 (4.4) | — | 64 (4.4) | 20 (5.9) | — |
Medical school location, n (%) | .01 | <.01 | |||||
American medical school | 1687 (80.9) | 982 (82.7) | 459 (77.2) | — | 1202 (83.1) | 239 (70.9) | — |
International medical school | 399 (19.1) | 206 (17.3) | 136 (22.8) | — | 244 (16.9) | 98 (29.1) | — |
Years since training completion, n (%) | .03 | .09 | |||||
Resident or chief resident | 1480 (79.4) | 929 (78.2) | 485 (81.5) | — | 1144 (79.1) | 270 (80.1) | — |
<1 y | 208 (11.2) | 149 (12.5) | 50 (8.4) | — | 171 (11.8) | 28 (8.3) | — |
≥1 y | 175 (9.4) | 110 (9.3) | 60 (10.1) | — | 131 (9.0) | 39 (11.6) | — |
Planned clinical role, n (%) | .31 | .03 | |||||
General pediatrics | 669 (35.9) | 441 (37.1) | 199 (33.4) | — | 531 (36.7) | 109 (32.3) | — |
Subspecialist | 671 (36.0) | 415 (34.9) | 229 (38.5) | — | 506 (35.0) | 138 (40.9) | — |
Hospitalist | 222 (11.9) | 148 (12.5) | 68 (11.4) | — | 186 (12.9) | 30 (8.9) | — |
Other/unsure | 301 (16.4) | 184 (15.5) | 99 (16.6) | — | 223 (15.5) | 60 (17.8) | — |
Residency program characteristics, n (%) | |||||||
Residency program size | <.01 | <.01 | |||||
≤30 residents | 512 (24.7) | 267 (22.6) | 175 (29.7) | — | 322 (22.4) | 120 (35.8) | — |
31–60 residents | 729 (35.2) | 424 (35.9) | 208 (35.3) | — | 511 (35.6) | 121 (36.1) | — |
>60 residents | 829 (40.0) | 490 (41.5) | 206 (35.0) | — | 602 (42.0) | 94 (28.1) | — |
Region | .09 | <.01 | |||||
Northeast | 561 (26.9) | 311 (26.2) | 169 (27.7) | — | 374 (25.9) | 102 (30.3) | — |
Midwest | 494 (23.7) | 278 (23.4) | 139 (23.4) | — | 327 (22.6) | 90 (26.7) | — |
South | 686 (32.9) | 388 (32.7) | 209 (35.1) | — | 492 (34.0) | 105 (31.2) | — |
West | 322 (15.4) | 200 (16.8) | 73 (12.3) | — | 240 (16.6) | 33 (9.8) | — |
Training program type | .02 | .01 | |||||
Categorical pediatrics | 1866 (89.5) | 1077 (90.7) | 519 (87.2) | — | 1308 (90.5) | 288 (85.5) | — |
Other | 219 (10.5) | 110 (9.3) | 76 (12.8) | — | 137 (9.5) | 49 (14.5) | — |
—, not applicable.
Of the 271 programs included in our analysis, 158 programs had at least 5 respondents to the survey and were included in our program-level analysis, comprising 153 categorical programs and 5 medicine-pediatrics programs (representing 1467 respondents). Figure 2 reveals the variation in assessment and treatment competence, as well as program-level mean scores, across the 33 small residency programs, 76 medium programs, and 49 large programs. For both of these outcomes, mean program-level scores were highest for small programs and lowest for large programs. Specifically, the program-level assessment mean scores were 43.4% for small programs, 33.6% for medium programs, and 28.7% for large programs; treatment mean scores were 28.7% for small programs, 19.6% for medium programs, and 13.0% for large programs (P < .01 for the difference in means for both measures). However, within each program-size group, we observed substantial variation, with a range from 0% of residents in the program reporting competence to a high of 100% in small and medium programs and a high of 80% in large programs for B/MH assessment skills. For treatment skills, the ranges were similar: 0% to 100% for small programs, 0% to 80% for medium programs, and 0% to 43% for large programs. CVs for assessment skills were 52.5, 59.7, and 46.9 across small, medium, and large residency programs, respectively, indicating high variation across all program-size strata.13 CVs for treatment skills were similarly high, with values of 77.5, 78.1, and 77.4 across small, medium, and large residency programs, respectively.
Variation in program-level mental health assessment (A) and treatment (B) skills measure across small, medium, and large residency programs; each line represents 1 residency program.
Regression results revealing pairwise comparisons are presented in Table 2. In unadjusted analyses, both medium and large residency programs had significantly lower assessment and treatment program scores than small programs. When adjusting for other program and individual characteristics, these differences remained for B/MH assessment. Specifically, when adjusting for other covariates, medium-sized programs had, on average, program-level assessment scores 8.9 points lower than those of small programs (−8.9; 95% confidence interval [CI] −17.1 to −0.8), whereas large programs had program-level assessment scores 10.3 points lower than those of small programs (−10.3; 95% CI −19.5 to −1.0). For B/MH treatment, there was no significant difference in adjusted scores between small- and medium-sized programs; however, the significant difference between small and large residency programs remained. As shown in Table 2, large programs had mean program-level treatment scores 8.1 points lower than those of small programs (−8.1; 95% CI −15.3 to −0.9). With the exception of lower treatment composite scores in the West relative to the Midwest, we did not identify any other program factors associated with program scores in our multivariable models.
Results of Unadjusted and Adjusted Linear Regression Models Revealing Associations Between Residency Program Characteristics and Program-Level Mental Health Assessment and Treatment Composite Measures
Residency Program Characteristics (n = 158 Programs) | Assessment: Unadjusted Regression Coefficient (95% CI) | Assessment: Adjusted Regression Coefficient (95% CI)a | Treatment: Unadjusted Regression Coefficient (95% CI) | Treatment: Adjusted Regression Coefficient (95% CI)a |
---|---|---|---|---|
Residency program size | ||||
≤30 | Reference | Reference | Reference | Reference |
31–60 | −9.7 (−17.6 to −2.0)* | −8.9 (−17.1 to −0.8)* | −9.1 (−15.6 to −2.7)* | −5.3 (−11.7 to 1.0) |
>60 | −14.7 (−23.1 to −6.2)* | −10.3 (−19.5 to −1.0)* | −15.7 (−22.7 to −8.7)* | −8.1 (−15.3 to −0.9)* |
Region | ||||
Midwest | Reference | Reference | Reference | Reference |
Northeast | 0.5 (−7.9 to 8.8) | −2.8 (−11.2 to 5.6) | 0.3 (−6.7 to 7.2) | −4.2 (−10.7 to 2.4) |
South | −1.1 (−9.4 to 7.1) | 1.8 (−6.3 to 10.0) | −6.5 (−13.5 to 0.3) | −3.1 (−9.4 to 3.3) |
West | −8.0 (−18.1 to 2.2) | −7.8 (−18.3 to 2.6) | −10.4 (−18.8 to −2.0)* | −10.4 (−18.6 to −2.2)* |
Training program type | ||||
Categorical pediatrics | Reference | Reference | Reference | Reference |
Other | −6.1 (−23.6 to 11.4) | −2.4 (−21.1 to 16.3) | 1.6 (−13.3 to 16.5) | 6.3 (−8.3 to 20.9) |
Regression coefficients indicate the magnitude of difference, on a 0–100 scale, between the reference group and the comparison group in program-level composite scores.
a Adjusted for program size, region, and program type and program-level mean values for age, sex, race and/or ethnicity, medical school location, and years since training completion.
* P < .05.
In our sensitivity analysis using composite assessment and treatment scores ≥3, differences in program-level treatment measure scores by program size remained. When adjusting for other individual- and program-level characteristics, medium-sized residency programs had mean program-level treatment scores 8.6 points lower than those of small programs (−8.6; 95% CI −16.9 to 0.3), whereas large programs had mean program-level treatment scores 17.9 points lower than those of small programs (−17.9; 95% CI −27.3 to −8.4). In our sensitivity analysis of program-level assessment measures, differences according to residency program characteristics were not statistically significant (Supplemental Table 4).
Discussion
In this large national survey, respondents strongly endorsed pediatric trainee competence in B/MH management for residents entering primary care as well as subspecialty careers. Despite this, a small fraction of respondents reported high levels of competence in B/MH assessment, and an even smaller fraction perceived themselves to be competent in treatment. Furthermore, we observed substantial variation in trainee competence across residency training programs. Differences between assessment and treatment remained on sensitivity analyses. These findings have important implications for the ongoing development of pediatric B/MH training initiatives.
Respondent beliefs about which trainees should be competent in addressing mental health problems are encouraging. Of respondents, 86% agreed or strongly agreed that those going into subspecialty care should be competent in the treatment of common childhood B/MH concerns. In a previous survey of PDs, only half agreed or strongly agreed that their graduating residents pursuing subspecialty careers should be competent in B/MH treatment. These data further illustrate the importance of educating all residents and fellows about B/MH.15
Although a relatively small proportion of respondents reported competence in B/MH assessment and treatment, these proportions have improved over time relative to previous studies.8 For instance, 52% of our sample reported very good or excellent competence in diagnosing depression, which is higher than the 24% reported in the previously mentioned national survey of graduating residents conducted in 2007. Similarly, only 17% of respondents previously reported high competence in diagnosing anxiety, as opposed to 44% in this sample.8 Perceived competence in providing behavioral counseling also increased, from 16% to 40%. Almost 60% of respondents reported high competence in assessing suicidality, and half reported high competence in safety counseling. However, because suicide is the second leading cause of death among children aged 10 and older, ongoing efforts to educate pediatric residents in this domain are necessary to make an impact on this common and potentially life-threatening condition.16
National initiatives, such as the publication of primary care guidelines for depression17,18 (2007) and maladaptive aggression19,20 (2012), may explain why perceived competence has improved over time. ADHD guidelines have existed since the year 2000, and ADHD-related practices and skills have consistently been rated higher than other B/MH conditions in surveys of residents and practicing pediatricians.8,10,21 That said, less than one-third of respondents reported high levels of competence for B/MH assessment, and less than one-fifth reported high levels of competence for B/MH treatment. Recognizing that >7.7 million children have B/MH conditions, these numbers are disconcertingly low.2 Rather than relying on passive diffusion of guidelines, concerted efforts across the pediatric community to integrate guideline recommendations for B/MH care into pediatric training and practice may further improve B/MH competence.
Respondents’ beliefs about their competence in B/MH assessment as compared with B/MH treatment are noteworthy. Whereas more than half reported having high competence in diagnosing depression, only 1 in 4 reported competence in dosing medications to treat depression. Given the shortage of child and adolescent psychiatrists in the country, pediatricians need to be competent in prescribing evidence-based psychopharmacologic medications. Resources such as the following can be used to guide curriculum development for common evidence-based B/MH treatments: the newly updated AAP policy statement,22 published guidelines for the treatment of adolescent depression in primary care,19 and sample motivational interviewing videos on the AAP Web site.18,23
We observed substantial differences in mean program-level B/MH assessment and treatment scores across small, medium, and large residency programs, with smaller programs consistently having higher mean scores. These differences add to previous literature about how training experiences differ by program size. In previous research, PDs from smaller programs were more likely to implement learning activities around B/MH.10 Also, residents who trained at smaller programs more frequently go into primary care and have reported feeling more prepared to do so than those trained in larger programs.24 However, the variation in program-level assessment and treatment competence measures within each size stratum is notable. Although, overall, smaller programs had higher composite assessment scores, the range of scores was wide, and CVs were high in all 3 program-size strata. These results reveal opportunities for improvement across residency programs of all sizes; the development of learning collaboratives grouped by residency program size may enable high-performing programs to share best practices with lower-performing programs with similar resources and interests in primary versus subspecialty care.
Limitations of this study include our evaluation of perceived competence, not measured competence, as our outcome variable.25 However, directly observed competence would require time, money, and validated assessment tools that do not currently exist26; a direct-observation study of this magnitude and sample size would not be feasible. We dichotomized our 5-point Likert scale to exclude the midpoint, “good,” from our definition of high perceived competence. This approach is consistent with a previous study and allowed us to compare rates of competence over time.8 Horwitz et al8 similarly justified dichotomizing competence in this manner to account for social response bias because previous research suggests that practitioners are inclined to overreport their performance.27 Additionally, the Dunning-Kruger effect is a cognitive bias in which individuals are unaware of their lack of competence and rate their abilities higher than they are.28 Correspondingly, many trainees may be unaware of all of the knowledge and skills needed to appropriately address pediatric B/MH problems and therefore are likely to overrate their competence.
Because this study was focused on B/MH skills, we do not know how respondents would rate their competence in other pediatric skills. However, previous studies have revealed that a majority of pediatric residents report high perceived competence in dietary assessment and counseling around dietary changes for obesity and that more than two-thirds report very good to excellent perceived preparedness for medical home activities.29,30 Furthermore, 62% of pediatricians have reported that they would have benefited from additional B/MH training, compared with less than one-quarter reporting this need for other subspecialty conditions.31 Finally, given our cross-sectional design, we cannot infer causation for any of our observed associations. These limitations should be interpreted in the context of our study’s strengths, including a large sample of trainees representing all regions of the United States and all residency program sizes.
Conclusions
This large national study of future pediatricians reveals ongoing training needs to improve trainees’ perceived competence in the assessment and treatment of B/MH problems. It is promising that rates of perceived competence have increased over time, suggesting that national or institutional interventions may be having positive impacts. However, the large variation across small, medium, and large programs in trainee B/MH competence also suggests that national standards are needed for all programs to consistently provide effective B/MH training. Interventions may need to be tailored to program size and be designed to reach trainees regardless of future career aspirations.
Dr Green conceptualized and designed the study, helped to create the data collection instrument, interpreted the analyses, created a draft of the manuscript, and critically reviewed the manuscript for important intellectual content; Dr Leyenaar conceptualized the study and aims, analyzed the data, and critically reviewed and revised the manuscript; Mr Turner conceptualized and designed the study, coordinated and supervised data collection, created the data collection instrument, interpreted the analyses, and critically reviewed the manuscript for important intellectual content; Dr Leslie conceptualized and designed the study, created the data collection instrument, interpreted the analysis, and critically reviewed the manuscript for important intellectual content; and all authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work.
FUNDING: Supported in full by the American Board of Pediatrics (ABP) Foundation. The content is solely the responsibility of the authors and does not necessarily represent the official view of the ABP or the ABP Foundation.
Competing Interests
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
FINANCIAL DISCLOSURE: Drs Green and Leyenaar were contracted by the American Board of Pediatrics (ABP) Foundation to conduct this research; Dr Leslie and Mr Turner are staff at the ABP.