In several states, payers penalize hospitals when an inpatient readmission follows an inpatient stay. Observation stays are typically excluded from readmission calculations. Previous studies suggest inconsistent use of observation designations across hospitals. We sought to describe variation in the use of observation status across hospitals and to examine the impact of including observation stays on readmission metrics.
We conducted a retrospective cohort study of hospitalizations at 50 hospitals contributing to the Pediatric Health Information System database from January 1, 2018, to December 31, 2018. We examined the prevalence of observation use across hospitals and described changes in inpatient readmission rates associated with higher observation use. We described 30-day inpatient-only readmission rates and ranked hospitals against peer institutions. Finally, we included observation encounters in the calculation of readmission rates and evaluated each hospital’s change in readmission ranking.
Most hospitals (n = 44; 88%) used observation status, with high variation in use across hospitals (0%–53%). The readmission rate after an index inpatient stay (6.8%) was higher than that after an index observation stay (4.4%), and higher observation use by a hospital was associated with a higher inpatient-only readmission rate. When compared with peers, hospital readmission rank changed with observation inclusion (60% moving at least 1 quintile).
The use of observation status is variable among children’s hospitals. Hospitals that more liberally apply observation status perform worse on the current inpatient-to-inpatient readmission metric, and inclusion of observation stays in the calculation of readmission rates significantly affected hospital performance compared with peer institutions. Consideration should be given to including all admission types in readmission rate calculations.
Readmission rates are used as a quality marker for children’s hospitals and are used to inform pay-for-performance programs and penalties. Observation stays are generally excluded from these calculations despite representing a growing number of encounters.
Use of observation status for admissions varies across US children’s hospitals. Inclusion of these encounters in the calculation of readmission rates substantially changes this metric and hospital rank.
Over the past 2 decades, health care systems have been driven by metrics set by payers to promote more efficient and cost-effective care. Hospital readmission rates, controversially considered a quality-of-care indicator, represent one such metric, and reducing readmissions has gained an important role in the effort to reduce excessive health care costs.1 More recently, the Centers for Medicare & Medicaid Services and some state Medicaid programs have begun to incentivize readmission reduction efforts by imposing financial penalties on hospitals with readmission rates higher than those of their peer institutions for selected conditions.2–5
Patients may be admitted to hospital units under inpatient or observation status. Observation status has evolved into a billing designation for patients who require additional monitoring but do not meet inpatient admission criteria for a diagnosis-related group.6 Published guidelines, such as Milliman and InterQual,7–10 include criteria for determining observation versus inpatient status; despite this, the designation varies greatly across hospitals, without a direct relationship to the true level of care provided.11–14 This variation may be driven by payers, state policies, and local hospital environments, and it affects how hospitalizations are categorized in the databases used for benchmarking and comparison of patient cohorts.
Hospitals are strongly incentivized to perform well on these metrics, yet little is known about the impact that observation status has on measures of readmission. Patients hospitalized under observation status, whether for an index admission or a subsequent readmission, are typically excluded from readmission rate calculations.15 Additionally, the variation in use of observation status suggests considerable overlap between observation- and inpatient-status patients in the actual care provided and the resources used across hospitals. Given this variability across pediatric institutions, we sought to determine the impact of observation status on readmission rates across children’s hospitals. Because observation status is generally used for lower-acuity patients, we hypothesized that including observation-status hospital stays would lower calculated readmission rates and alter hospital performance on readmission metrics relative to peer children’s hospitals. Although improving care is paramount, our research will inform how billing practices can affect quality-of-care metrics (eg, readmission rate) and have a direct effect on hospital reimbursement penalties unrelated to the clinical care delivered.
Methods
Study Design and Database
We conducted a retrospective cohort study of hospitalizations from 50 tertiary children’s hospitals (representing 28 US states and all census divisions) participating in the Pediatric Health Information System (PHIS) (Children’s Hospital Association, Lenexa, KS) from January 1, 2018, to December 31, 2018. PHIS is an administrative and billing database containing information on all inpatient, observation, ambulatory surgery, and emergency department encounters at participating hospitals. Importantly, an observation-status encounter in PHIS indicates that the encounter was considered an outpatient encounter by the payer and paid at an observation rate, but it is not necessarily reflective of a separate model of observation care. A unique, consistently encrypted medical record number allows patients to be tracked across encounters within the same hospital. We considered all hospitalizations as index events for a potential readmission. For each encounter in PHIS, we collected demographic characteristics; International Classification of Diseases, 10th Revision, Clinical Modification diagnosis and procedure codes; and daily detailed billing information. This study was reviewed by the Institutional Review Board at Children’s Mercy Hospital and considered to be non–human subject research.
Outcome Variable
We used the pediatric all-cause readmission (PACR) metric endorsed by the National Quality Forum at 30 days postindex encounter.16 In short, the algorithm excludes some index encounters (eg, mental health), removes readmissions considered to be likely planned, and risk adjusts hospital readmission rates by using age, sex, and the existence of some chronic conditions.
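To make the outcome concrete, the following pandas sketch shows a simplified 30-day all-cause readmission flag in the spirit of the PACR logic described above. It is only an illustration, not the PACR specification: the DataFrame and column names (patient_id, hospital_id, admit_date, discharge_date, planned_readmit, excluded_index) are assumptions, and the real metric applies additional exclusions and risk adjustment.

```python
# Simplified sketch of a 30-day all-cause readmission flag, loosely modeled on the PACR
# logic described above. The DataFrame and column names are hypothetical, and the real
# metric applies additional exclusions and risk adjustment.
import pandas as pd

def flag_30day_readmissions(encounters: pd.DataFrame) -> pd.DataFrame:
    """Expects columns: patient_id, hospital_id, admit_date, discharge_date,
    planned_readmit (bool), excluded_index (bool, eg, mental health)."""
    df = encounters.sort_values(["patient_id", "hospital_id", "admit_date"]).copy()
    grp = df.groupby(["patient_id", "hospital_id"])

    # Next admission by the same patient at the same hospital after this discharge
    df["next_admit"] = grp["admit_date"].shift(-1)
    df["next_planned"] = grp["planned_readmit"].shift(-1).fillna(False).astype(bool)

    days_to_next = (df["next_admit"] - df["discharge_date"]).dt.days
    df["readmit_30d"] = (
        ~df["excluded_index"]            # excluded index encounters are never counted
        & days_to_next.between(0, 30)    # return within 30 d of discharge
        & ~df["next_planned"]            # drop likely planned readmissions
    )
    return df
```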
Covariates
Statistical Analyses
We compared demographic and clinical characteristics of inpatient and observation encounters by using χ2 or Wilcoxon rank tests, as appropriate. The PACR algorithm was run twice: first using inpatient encounters only and then using both inpatient and observation encounters. We calculated each hospital’s unadjusted and adjusted 30-day readmission rates, ranked hospitals, and grouped them into quintiles. We then determined each hospital’s change in adjusted readmission rate and quintile when considering inpatient-only encounters versus when including observation encounters.
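As a rough illustration of this two-way calculation (not the authors’ SAS implementation), the sketch below ranks hospitals into readmission-rate quintiles with and without observation-status index encounters and tabulates quintile movement. The input DataFrame and its columns (hospital_id, index_status, readmit_30d) are assumptions, and the rates here are unadjusted for simplicity.

```python
# Illustrative sketch only: quintile ranking of hospital readmission rates computed
# two ways (inpatient-only index encounters vs inpatient + observation).
# Column names are hypothetical; real rates would be risk adjusted per the PACR metric.
import pandas as pd

def hospital_quintiles(encounters: pd.DataFrame, include_observation: bool) -> pd.Series:
    cohort = encounters if include_observation else encounters[encounters["index_status"] == "inpatient"]
    rates = cohort.groupby("hospital_id")["readmit_30d"].mean()  # unadjusted readmission rate
    # Quintile 1 = lowest readmission rate, quintile 5 = highest
    return pd.qcut(rates.rank(method="first"), q=5, labels=[1, 2, 3, 4, 5])

def quintile_movement(encounters: pd.DataFrame) -> pd.Series:
    ip_only = hospital_quintiles(encounters, include_observation=False)
    ip_plus_obs = hospital_quintiles(encounters, include_observation=True)
    # Absolute number of quintiles each hospital moves when observation stays are included
    return (ip_plus_obs.astype(int) - ip_only.astype(int)).abs()
```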
To understand how observation status relates to readmission metric performance, we first described the percentage of observation encounters (of the total numbers of inpatient and observation encounters) for each hospital. We then examined the association between this percentage of observation encounters and the inpatient-only case-mix index, which serves as a marker of illness severity, using linear regression. Similarly, we used linear regression to determine the relationship between the percentage of observation encounters and the change in the hospital’s adjusted readmission rates with and without inclusion of observation encounters. We then ranked hospitals by adjusted 30-day all-condition readmission rate and grouped hospitals by quintiles, first including only inpatient-status hospitalizations and second including both inpatient and observation-status hospitalizations.
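The hospital-level regressions can be sketched as follows with statsmodels ordinary least squares; again, this is an assumed, simplified version (the authors report using SAS), and the hospital-level summary columns pct_observation, case_mix_index, and rate_change are hypothetical names, not PHIS variables.

```python
# Hedged sketch of the hospital-level linear regressions described above (one row per
# hospital). The actual analysis was performed in SAS; variable names are assumptions.
import pandas as pd
import statsmodels.api as sm

def fit_observation_regression(hospitals: pd.DataFrame, outcome: str):
    """Regress a hospital-level outcome (eg, 'case_mix_index' or 'rate_change')
    on the percentage of observation-status index hospitalizations."""
    X = sm.add_constant(hospitals["pct_observation"])
    return sm.OLS(hospitals[outcome], X).fit()

# Usage sketch: the slope times 10 approximates the change in the outcome per
# 10-percentage-point increase in observation use (cf. the 0.3% figure in the Results).
# model = fit_observation_regression(hospital_summary, "rate_change")
# print(model.params["pct_observation"] * 10, model.pvalues["pct_observation"])
```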
All statistical analyses were performed by using SAS version 9.4 (SAS Institute, Inc, Cary, NC), and P values <.05 were considered statistically significant.
Results
There were 728 104 index hospitalizations, of which 514 226 (70.6%) hospitalizations were designated inpatient status and 213 878 (29.4%) were observation status. There were 44 433 observation and inpatient all-cause 30-day readmissions. A total of 44 of the 50 hospitals (88%) in the sample included at least 1 index hospitalization using observation status. The index hospitalization’s median duration was 25 hours (interquartile range [IQR]: 19–34) for observation status and 63 hours (IQR: 39–119) for inpatient status (Table 1). There was significant variation in the proportion of index hospitalizations designated as observation status across hospitals (Fig 1).
TABLE 1. Demographic and Clinical Characteristics of Included Cohort
Characteristic | Overall | Inpatient Index | Observation Index
---|---|---|---
Total, No. (%) | 728 104 (100) | 514 226 (70.6) | 213 878 (29.4) |
Readmission location, n (%) | |||
Inpatient | 32 478 (4.5) | 28 405 (5.5) | 4073 (1.9) |
Observation | 11 955 (1.6) | 6553 (1.3) | 5402 (2.5) |
Not readmitted | 683 671 (93.9) | 479 268 (93.2) | 204 403 (95.6) |
Age, y, n (%) | |||
<1 | 230 607 (31.7) | 191 303 (37.2) | 39 304 (18.4) |
1–4 | 179 355 (24.6) | 112 200 (21.8) | 67 155 (31.4) |
5–9 | 121 583 (16.7) | 77 289 (15.0) | 44 294 (20.7) |
10–18 | 196 559 (27.0) | 133 434 (25.9) | 63 125 (29.5) |
Sex, n (%) | |||
Male | 394 721 (54.3) | 277 311 (54.0) | 117 410 (54.9) |
Female | 332 345 (45.7) | 236 020 (46.0) | 96 325 (45.1) |
Race, n (%) | |||
Non-Hispanic white | 352 876 (48.5) | 242 240 (47.1) | 110 636 (51.7) |
Non-Hispanic Black | 130 304 (17.9) | 89 170 (17.3) | 41 134 (19.2) |
Hispanic | 151 294 (20.8) | 109 810 (21.4) | 41 484 (19.4) |
Asian American | 24 390 (3.3) | 19 100 (3.7) | 5290 (2.5) |
Other | 69 240 (9.5) | 53 906 (10.5) | 15 334 (7.2) |
Payer, n (%) | |||
Government | 404 224 (55.5) | 286 037 (55.6) | 118 187 (55.3) |
Private | 298 700 (41.0) | 211 764 (41.2) | 86 936 (40.6) |
Other | 25 180 (3.5) | 16 425 (3.2) | 8755 (4.1) |
Complex chronic conditions classification system version 2.0, No. per patient (%) | |||
0 | 475 708 (65.3) | 306 823 (59.7) | 168 885 (79.0) |
1 | 153 767 (21.1) | 120 494 (23.4) | 33 273 (15.6) |
2+ | 98 629 (13.5) | 86 909 (16.9) | 11 720 (5.5) |
Length of stay, h, median (IQR) | 46 (26–89) | 63 (39–119) | 25 (19–34) |
HRISK, mean (95% CI) | 2.3 (2.3–2.3) | 2.9 (2.9–2.9) | 1 (1–1) |
All comparisons significant at P < .001. CI, confidence interval; HRISK, Hospitalization Resource Intensity Scores for Kids.
Illness Severity and Observation-Status Designation
Compared with inpatient-status index hospitalizations, observation-status index hospitalizations less often involved infants (18.4% vs 37.2%; P < .001) or children with complex chronic conditions (21.1% vs 40.3%; P < .001) (Table 1). In bivariate analyses, non-Hispanic white and male patients accounted for a larger proportion of observation-status index hospitalizations than of inpatient-status index hospitalizations. At the hospital level, a higher percentage of observation stays correlated directly with greater severity of inpatient hospitalizations, as reflected in a higher inpatient-only case-mix index (P < .001) (Fig 2).
FIGURE 2. Relationship between inpatient-only case-mix index at each PHIS hospital and associated percentage of observation-status hospitalizations.
Readmission Rate
The 30-day all-condition readmission rate, when including all hospitalizations, was 6.1% (Table 1). The readmission rate was higher after an index hospitalization designated as inpatient status (6.8% overall: 5.5% readmitted to inpatient status and 1.3% to observation status) than after an index hospitalization designated as observation status (4.4% overall: 1.9% readmitted to inpatient status and 2.5% to observation status) (P < .001). Readmissions often retained the same inpatient or observation designation as the index hospitalization, although this was more pronounced for index inpatient-status hospitalizations (80.9% of readmissions maintained inpatient status) than for index observation-status hospitalizations (56.8% of readmissions maintained observation status).
Changes to Readmission Rates When Including Index Observation-Status Hospitalizations
Compared with hospital rankings calculated by using only inpatient-status index hospitalizations, 60% (n = 30) of hospitals moved at least 1 quintile and 18% (n = 9) moved ≥2 quintiles when both inpatient- and observation-status index hospitalizations were included (Fig 3). There was a statistically significant positive relationship between the percentage of observation-status index hospitalizations and the change in the hospital’s adjusted inpatient-only readmission rate (P < .001) (Fig 4). For example, for every 10% increase in the percentage of observation-status index hospitalizations at a hospital, the hospital’s adjusted readmission rate for inpatient-only index encounters increased by an average of 0.3% (range: 0.1–0.5).
FIGURE 3. Quintile ranks for readmission rate for inpatient index hospitalizations only versus inpatient + observation index cases.
FIGURE 4. Relationship between the percentage of observation index hospitalizations versus the change in adjusted readmission rates by hospital (inpatient and observation index case readmission rate minus the inpatient-only index case readmission rate).
Discussion
In our study, we provide new information about the impact of excluding observation-status stays from traditional pediatric readmission rate calculations. Including observation stays in readmission rate calculations significantly changed hospital readmission rankings, with 60% of hospitals moving at least 1 quintile. The impact of observation status on these metrics and on the composition of patients included in hospital data is profound, yet these encounters are currently excluded from most readmission rate analyses. Policy makers and payers setting readmission rate penalties that are based on national data are encouraged to include observation-status stays to ensure more standardized, equitable, and meaningful hospital comparisons.
Our study agrees with previous literature suggesting that observation status is applied variably across children’s hospitals.12 This variation is likely driven by regional payer contracts, local practice patterns, and state laws. Because of the differential application of observation status across hospitals, it is difficult to define the designation in a clinically meaningful way. The PHIS hospitals represented in the current study are tertiary children’s medical centers treating a wide but similar range of pediatric health conditions. Hospitals billing a lower percentage of observation cases likely designate more short-stay, lower-acuity encounters as inpatient status, whereas similar cases might be designated as observation status in greater numbers at other institutions. This variability could significantly affect comparisons of hospitals on a wide array of metrics beyond readmission rates. However, we are not aware of other large, multihospital analyses published to date in which authors assess the impact of observation status on other metrics, making this a fruitful topic for future study.
This issue is particularly pressing when considering large administrative data sets, such as the Nationwide Readmission Database, which does not include observation stays.19 Variable use of observation status results in arbitrary inclusion or exclusion of hospital encounters from these databases based on an administrative label, which is applied to encounters on the basis of inconsistent interpretation of third-party criteria (Milliman and InterQual). Including all hospital encounters (inpatient and observation) would allow for consistent comparison across hospitals for benchmarking and population analyses.
Our findings are also potentially problematic for the financial health of children’s hospitals because inpatient-to-inpatient readmission rates may be used by payer and state Medicaid pay-for-performance and penalty programs. Hospitals with high observation use effectively exclude lower-acuity patients from their overall inpatient population. Siphoning lower-acuity patients into observation status explains our finding that hospitals with high observation use have a higher inpatient case-mix index and higher inpatient-to-inpatient readmission rates. Conversely, hospitals with low observation use may have comparatively lower inpatient-to-inpatient readmission rates because lower-acuity patients are included in the inpatient population and subsequent calculations rather than being labeled as observation.
Readmission metrics as indicators of quality may possess some value; however, the differential use of observation status across hospitals alters inpatient populations, making comparisons challenging and casting doubt on the validity of readmission rate comparisons as the basis for financial penalties. Most readmission metrics use some form of risk adjustment to help account for these differences; however, claims-based risk-adjustment models cannot adjust for the exclusion of observation encounters. In addition, we found that the risk adjustment for a particular hospital is sensitive to whether observation-status patients from other hospitals are included in the model. This may change a hospital’s relative performance depending on whether, or how much, observation status is used by its peers. Given the high financial stakes for many children’s hospitals, more universal standardization of readmission metrics would provide hospitals with opportunities for targeted improvement and more accurate comparisons with peer institutions.
Targeted interventions to lower readmission rates may not only improve hospital finances but also reduce the harm that readmissions have on vulnerable patient populations. Although all-cause readmissions are relatively rare after an index hospitalization (4%–6%),20,21 the impact on individual patients and their families can be profound. The emotional, social, and financial consequences of an unexpected readmission have been shown to be severe and potentially long lasting.22–24 In recent years, there has been national focus on identifying and reducing unnecessary readmissions in an effort to reduce the harm of these encounters.25–27 Including observation-status patients in readmission metrics will optimize our understanding of hospital readmissions and promote health equity by avoiding exclusion of certain patient populations on the basis of billing designation alone.
Our study has several limitations. First, the PHIS database includes only tertiary children’s hospitals. Although our results are not generalizable to all hospitals that care for children, they do allow for a large-scale relative comparison of the impact of admission status designation. Second, although our results indicate that observation-status stays affect readmission metrics, variables not captured in this study may also affect these metrics. We accounted for some of these effects by using the PACR algorithm, which excludes planned returns and adjusts for patient-level factors, including age, sex, and the existence of chronic conditions. Third, inpatient versus observation designations are generally finalized at discharge; however, after discharge, payers may dispute the assigned designation, and, as a result, inpatient stays may be downgraded to observation status. PHIS captures the hospitalization designation assigned at discharge and does not account for changes that occur after payer utilization review. Finally, we did not evaluate other clinical quality indicators (eg, central line–associated bloodstream infections) to determine the effects of administrative designations on the rates of these adverse events. Further work is needed to determine whether these metrics are also affected by billing status.
Conclusions
Including observation-status encounters in readmission metrics changes readmission rates and subsequent hospital rankings. There is great variability in the use of observation status among children’s hospitals, and observation encounters make up a substantial proportion of patient encounters at children’s hospitals contributing to the PHIS. Their exclusion alters the composition of the population of hospitalized patients under evaluation and makes national studies and comparison of hospitals difficult. We recommend including observation stays when calculating readmission metrics to ensure valid hospital-to-hospital comparisons.
Dr Synhorst participated in the study design and the analysis and interpretation of the data, was the primary author of the manuscript, and provided critical intellectual content in the revision of the manuscript; Drs Hall, Harris, Gay, Peltz, Auger, Teufel, Macy, Neuman, Simon, Shah, Lutmer, Eghtesady, and Pavuluri participated in the study design and the analysis and interpretation of the data, were coauthors of the manuscript, and provided critical intellectual content in the revision of the manuscript; Dr Morse participated in the study design and the analysis and interpretation of the data, was the mentor author of the manuscript, and provided critical intellectual content in the revision of the manuscript; and all authors approved the final manuscript as submitted.
FUNDING: Dr Auger received funding for this work provided by the Agency for Healthcare Research and Quality (K08HS024735).
References
Competing Interests
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.