Hospitals are being required to report their adjusted mortality rates publicly, and these rates are then used to rank hospitals. Our objectives were to assess the statistical reliability of the determination of a hospital's adjusted mortality rate, of comparisons of that rate with the rates of other hospitals, and of the use of those rates to rank the hospitals.


A cross-sectional study of 473 383 patients discharged from 42 US children's hospitals in 2008 was performed. Hospital-specific observed/expected (O/E) mortality rate ratios and corresponding hospital rankings, with 95% confidence intervals (CIs), were examined.


Hospitals' O/E mortality rate ratios exhibited wide 95% CIs, and no hospital was clearly distinguishable from the aggregated mean mortality performance of the other hospitals. Only 2 hospitals' mortality performance fell outside the comparator hospitals' 95% CI, and those hospitals' 95% CIs overlapped the overall comparator set's 95% CI, which suggests that there were no statistically significant hospital outliers. Fourteen (33.3%) of the 42 hospitals had O/E ratios that were not statistically distinguishable from the 95% CI of the top 10% of hospitals. Hospital-specific mortality rate rankings displayed even broader 95% CIs; the typical hospital's 95% CI spanned 22 rank-order positions.


Children's hospital-specific measures of adjusted mortality rate ratios and rankings carry substantial statistical imprecision, which limits the usefulness of such measures for comparisons of quality of care.