Although cost-effectiveness analyses could inform recommendations regarding preventive services in primary care, valid assessments are rarely conducted for policy makers in the United States, other than for immunizations. Assuming policy makers were interested, how could researchers provide them with useful information? Cost-effectiveness analyses compare the expected improvement in health outcomes (eg, quality-adjusted life-years) with the change in total costs relative to current practice. The cost-effectiveness of screening depends on the treatments that screening enables, the likelihood of uptake of screening, and the additional costs of screening itself that accrue to the whole targeted population, along with potential harms (eg, false-positive screening results). Several data gaps complicate the conduct of economic analyses for childhood screening in the primary care setting. Although clinical trials can provide strong evidence of efficacy, because of the time lag between when a preventive service is provided and when health outcomes occur, such trials often use proxy measures in place of health outcomes. Few trials of pediatric preventive services have been conducted, and even when available, clinical trials may have limited generalizability because of strict inclusion and exclusion criteria for study participants. Furthermore, the interventions delivered in trials may differ from what is practical to provide in primary care. Given these limitations, careful use of observational data and modeling techniques is generally required to estimate long-term effectiveness and associated costs. In analyses of observational data, particular attention needs to be paid to threats to validity, including sources of potential bias.
How should considerations of cost and value inform or influence recommendations for preventive services in primary care practices? Neither the US Preventive Services Task Force nor Bright Futures explicitly considers costs (ie, resources required) when developing recommendations for preventive services (eg, screening, counseling, immunizations, and preventive medications).1,2 Of course, resources are not unlimited. Policy makers interested in the budget implications of screening recommendations could benefit from the availability of high-quality economic evaluations (eg, cost-effectiveness and benefit-cost analyses) of effective primary prevention to help ensure that recommended screenings are not so expensive as to crowd out other interventions. The availability of economic analyses could also improve transparency regarding the trade-offs between economic and health outcomes.
Despite their potential benefit, economic evaluations have rarely been used to inform recommendations for primary clinical preventive services in the United States, other than for immunizations.3 Economic evaluations of newborn screening tests are commonly conducted and have sometimes informed decisions by state governments to add conditions to state-mandated screening panels.4 In contrast, we are unaware of any pediatric primary care preventive service recommendation in the United States, other than immunizations, that was directly informed by a formal economic evaluation. More broadly, economic evaluations historically have not been used to inform recommendations of clinical services for primary prevention.5
Cost-effectiveness and Prevention
Cost-effectiveness analysis (CEA) is one type of economic evaluation. CEAs assess both costs and health outcomes for multiple strategies.6 If an intervention improves health and reduces total direct costs, it is said to be dominant, or cost-saving, relative to the comparison. Most preventive strategies, regardless of the targeted age, do not save costs.7 Instead, most preventive interventions require an investment to achieve gains in health. The policy decision is therefore whether the additional gains in health are worth the additional costs.8 A handful of childhood preventive services have evidence of cost-effectiveness reported after they were implemented but not before implementation (eg, some newborn screening tests9 and possibly a comprehensive preventive oral health program10). Another type of economic evaluation, cost-benefit analysis, is commonly used to assess nonhealth interventions such as early childhood education.11,12
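When a new strategy improves health at added cost, the comparison is typically summarized as an incremental cost-effectiveness ratio (ICER): the difference in costs divided by the difference in health outcomes. The following is a minimal sketch of that calculation; the function and all per-child numbers are hypothetical and are not drawn from any study cited here.

```python
# Minimal ICER sketch: hypothetical numbers, for illustration only.

def icer(cost_new, cost_current, qalys_new, qalys_current):
    """Incremental cost per QALY gained; None signals a dominant
    (cost-saving) strategy: better health at lower total cost."""
    delta_cost = cost_new - cost_current
    delta_qalys = qalys_new - qalys_current
    if delta_qalys <= 0:
        raise ValueError("new strategy does not improve health outcomes")
    if delta_cost <= 0:
        return None  # dominant: no ratio needed
    return delta_cost / delta_qalys

# Hypothetical per-child lifetime values for a screening strategy
ratio = icer(cost_new=1200.0, cost_current=900.0,
             qalys_new=25.02, qalys_current=25.00)
print(f"ICER: ${ratio:,.0f} per QALY gained")  # ICER: $15,000 per QALY gained
```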
Challenges and Solutions
Data gaps and methodologic challenges can contribute to the absence of CEAs at the preimplementation stage. Methodologic challenges include the difficulty of determining the value of health outcomes.13 Data gaps include the lack of data on costs and resource use for screening interventions (including associated follow-up costs) and the lack of evidence on long-term outcomes.14,15
To allow comparability of cost-effectiveness across disease areas and interventions, health economists generally recommend that outcome data be combined with measures of health state preferences to generate outcomes in terms of quality-adjusted life-years (QALYs).16 Although QALYs provide a standardized approach to incorporating health outcomes and preferences in economic analyses, there are significant challenges related to obtaining valid and reliable scores throughout childhood and into adulthood.17 Advances in the field have led to the availability of improved tools for the assessment of QALYs in children, but challenges remain.18,19
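In practice, QALYs are calculated by weighting the time spent in each health state by a preference score between 0 (dead) and 1 (full health) and summing. A minimal sketch, assuming hypothetical weights and durations:

```python
# QALY sketch: utility weights and durations are invented for illustration.
health_states = [
    # (years in state, preference weight)
    (2.0, 0.75),  # eg, years lived with untreated symptoms
    (8.0, 0.95),  # eg, years lived after successful treatment
]

qalys = sum(years * weight for years, weight in health_states)
print(f"{qalys:.1f} QALYs")  # 9.1 QALYs accrued over 10 life-years
```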
Economic evaluations of interventions generally discount future health outcomes and costs to reflect time preference, the concept that, all else being equal, people typically prefer a reward now rather than in the future. Calculating the present value of future costs and outcomes entails applying a discount rate to values that accrue in future years before summing over the analytic time horizon of the study. The standard discount rate used in the United States is 3%20; in other countries, it has ranged from 1.5% to 5%.21 Questions have been raised as to whether the standard approach to discounting is appropriate for interventions with long-term outcomes, which is relevant to pediatrics, but no alternatives are recommended at this time.22
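Concretely, a cost or QALY accruing t years in the future is divided by (1 + r)^t, where r is the discount rate, before summing. A short worked example at the 3% US rate noted above; the 40-year stream of QALYs is hypothetical:

```python
# Discounting sketch using the standard 3% US rate cited in the text.

def present_value(amounts_by_year, rate=0.03):
    """Discount a stream of future values; index 0 is the current year."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(amounts_by_year))

# A benefit of 1.0 QALY per year for 40 years is worth far less than
# 40 QALYs in present-value terms:
pv = present_value([1.0] * 40)
print(f"{pv:.1f} discounted QALYs")  # ~23.8, vs 40.0 undiscounted
```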
Economic evaluations can be challenging to conduct, especially for pediatric interventions for which the evidence base is often incomplete and for which it may take decades to observe impacts on meaningful health outcomes. Policy makers may be justifiably skeptical of economic analyses because of the risk of generating biased results based on unvalidated assumptions or poor underlying data.23 Ideally, economic evaluations should incorporate data on long-term outcomes and costs, neither of which is commonly available for pediatric interventions. For interventions that have not yet been implemented or have only recently been introduced, long-term outcomes data are typically lacking.
A study of adolescent depression treatment provides some insight into the challenges of evaluating the cost-effectiveness of screening.24 In this study, adolescents who had screened positive for depression and were diagnosed with either prevalent or newly identified depression were recruited from primary care practices and randomly assigned either to a collaborative care model with depression care managers or to usual care enhanced by a letter describing test results and recommending follow-up depression care. The main outcome was the prevalence of depressive symptoms at 12 months; the authors reported that, compared with usual care with the letter, 3.4 adolescents would need to be managed for depressive symptoms by using the collaborative care model for 1 to experience clinical improvement at 12 months. Although the report estimated the cost of collaborative care from the perspective of the health care system ($1403 per participant; ∼$4770 per participant with clinical improvement), it did not assess cost-effectiveness. To assess the cost-effectiveness of screening and management of depression, one would need to know the costs of screening, without which the intervention could not be provided; the follow-up costs associated with screening; the costs of any treatment averted because of improved clinical outcomes; and the improvements in quality of life associated with a reduction in depressive symptoms.
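The two cost figures reported in that study are linked by simple arithmetic: the cost per participant with clinical improvement is approximately the per-participant cost multiplied by the number needed to treat. A quick check of that calculation:

```python
# Arithmetic check of the figures cited in the text.
cost_per_participant = 1403  # health care system cost per participant managed
nnt = 3.4                    # adolescents managed per 1 with improvement

cost_per_improvement = cost_per_participant * nnt
print(f"${cost_per_improvement:,.0f}")  # $4,770, matching the ~$4770 cited
```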
Using simulation modeling can help to address some of the challenges related to an incomplete evidence base. Researchers can model variation in costs and outcomes of proposed interventions before implementation to predict potential cost-effectiveness under different conditions, covering a range of possible scenarios when data are scarce. Although researchers conduct sensitivity analyses of uncertainty in parameter estimates, they generally do not address model specification as a source of uncertainty. For example, a model of mandated folic acid fortification of cereal grain products in the United States conducted before the 1996 policy decision conservatively estimated that fortification would lead to direct cost savings of $5 million per year (1996 US dollars).25 In a subsequent study using postfortification birth defects surveillance data, researchers found a much larger reduction in the number of births with neural tube defects (NTDs) than predicted in the prefortification analysis and estimated direct cost savings of $143 million per year (2002 US dollars).26 The main reason for the difference was that the prefortification analysis assumed a threshold effect rather than a dose-response association between periconception folic acid intake and risk of NTDs, a modeling choice that was not considered in sensitivity analyses. A 2016 economic evaluation with more complete epidemiologic and economic data estimated direct cost savings of $300 to $600 million per year (2014 US dollars).27 In contrast, predictions made during the 1980s and early 1990s that expanding access to prenatal care would save Medicaid programs money were not realized in practice because the predicted reductions in rates of low birth weight did not materialize.28 Conducting sensitivity analyses to understand the impact of uncertainty is a best practice and is especially important in evaluating the cost-effectiveness of pediatric interventions.
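One standard way to characterize parameter uncertainty is probabilistic sensitivity analysis: draw each uncertain input from an assumed distribution, recompute the result many times, and summarize the spread. The sketch below is purely hypothetical; every distribution and value is invented for illustration.

```python
# Probabilistic sensitivity analysis sketch; all parameters are invented.
import random

random.seed(0)
results = []
for _ in range(10_000):
    # Uncertain inputs drawn from assumed (illustrative) distributions
    risk_reduction = random.betavariate(8, 12)        # relative risk reduction
    cost_per_case_averted = random.gauss(40_000, 8_000)
    program_cost = random.gauss(5_000_000, 500_000)
    baseline_cases = 1_000

    cases_prevented = baseline_cases * risk_reduction
    net_cost = program_cost - cases_prevented * cost_per_case_averted
    results.append(net_cost)

results.sort()
print(f"Median net cost: ${results[5_000]:,.0f}")
print(f"95% interval: ${results[250]:,.0f} to ${results[9_750]:,.0f}")
print(f"Probability of cost savings: {sum(r < 0 for r in results) / len(results):.0%}")
```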
Good-quality, unbiased evidence of the net health benefits and costs of interventions can be hard to acquire. Although well-conducted randomized trials are a valuable source of unbiased estimates, generalizability may be limited if the trials are not conducted in typical practice settings or if they include patients not representative of those who would receive the preventive service. For example, a trial of obesity prevention might overestimate benefit if participants were selected on the basis of their willingness to change behaviors. Even if the study population is generalizable, trials often lack the sample size and heterogeneity needed to understand how impact varies across important subgroups (eg, sex, race and/or ethnicity, and rural or urban status). Again, simulation modeling can address this limitation by evaluating a range of inputs.
CEAs can be conducted from either a societal or a health care sector perspective. Although a societal perspective is generally preferred for both childhood and adult interventions, that perspective is demanding in terms of cost inputs. Analyses from a societal perspective may consider impacts on patients and families, such as unpaid caregiving time costs and income losses due to premature death or disability, as well as costs incurred outside the health care sector (eg, special education services and the criminal justice system).6 Because data on such costs are generally not readily available, primary data collection might be required to conduct an analysis from a societal perspective. If the health care perspective is appropriate for a particular policy question, that narrower set of costs is typically more readily available.
Study Designs to Inform CEAs
Trials often use proxy outcomes (eg, biomarkers) instead of meaningful patient-centered outcomes (eg, length of life, quality of life) because outcomes usually can be feasibly assessed only within a short period (typically <5 years) of receipt of an intervention. If there is a long lag between screening, intervention, and improved health, a trial cannot directly assess health outcomes. For example, studies of lipid screening, other than in high-risk adults, typically use differences in blood cholesterol levels29 because it is not practical to follow participants to the point when adverse cardiovascular outcomes might be expected.30 Because meaningful economic analyses should consider long-term impacts, modeling with assumptions about the relationship between proxy intermediate end points and the long-term outcomes of interest is necessary. For example, in one study, 6 different interventions to promote childhood physical activity were compared.31 On the basis of the available evidence, including trial data, the study authors evaluated the 10-year impact assuming that effects on BMI persisted over this period, an assumption that might not be valid. Alternate assumptions about the duration of benefits could have been incorporated into the analysis by using modeling, as sketched below.
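A minimal sketch of how alternate persistence assumptions can be modeled, comparing full persistence of an effect with linear decay over the 10-year horizon; the effect size is invented and is not taken from the cited study:

```python
# Benefit-persistence sketch: effect size and decay pattern are hypothetical.
annual_effect = 0.5  # eg, BMI-unit reduction attributable to the intervention
horizon = 10         # years

persistent = sum(annual_effect for _ in range(horizon))
decaying = sum(annual_effect * (1 - year / horizon) for year in range(horizon))

print(f"Full persistence: {persistent:.1f} unit-years")  # 5.0
print(f"Linear decay:     {decaying:.1f} unit-years")    # 2.8
```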
Observational data can be used to inform model inputs when complete data from randomized trials are not available, and even when both are available, observational data may be superior to trial data in some situations. Most critically, results of randomized trials can seriously mislead analysts if the form in which a service is delivered differs appreciably between a research trial and the recommended policy or practice. The folic acid fortification case is a telling example. Trial data revealed that a daily dose of 400 µg of folic acid greatly reduced the risk of NTDs. Analysts assumed that only women who consumed that amount of folic acid would be protected, and with an average expected intake of 100 µg/day through fortification, few women were modeled as being protected by fortification of cereal grain products. These analysts did not incorporate evidence published in 1995 of a continuous dose-response association between blood folate levels and NTD risk in observational data.32 Cost-effectiveness analysts who continued to rely on the original trial data thus greatly underestimated, for many years, the effectiveness and cost-effectiveness of folic acid fortification for the prevention of NTDs.33,34
For many interventions, no trial data are available, and researchers are obliged either to use observational data or to ignore the interventions. A well-conducted analysis of observational data that takes into account potential sources of bias can provide useful information for decision-makers. For example, in a recent CEA in Australia,35 researchers used Markov modeling and observational data to assess the long-term health care costs and health outcomes (QALYs) of lipid screening in 10-year-old children to detect familial hypercholesterolemia and treat it with statins to prevent heart disease. The study authors concluded that such a strategy would almost certainly be considered cost-effective and might even be cost saving, that is, resulting in better health and lower total health care costs.
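A Markov cohort model of the general kind used in that analysis moves a cohort among health states year by year, with each state accruing costs and QALYs that are then discounted and summed. The sketch below uses invented transition probabilities, costs, and utilities, not the parameters of the Australian study:

```python
# Markov cohort sketch; all transition probabilities, costs, and
# utilities are hypothetical. Half-cycle corrections are omitted.
STATES = ["well", "heart_disease", "dead"]
P = {  # annual transition probabilities; each row sums to 1
    "well":          {"well": 0.990, "heart_disease": 0.008, "dead": 0.002},
    "heart_disease": {"well": 0.000, "heart_disease": 0.950, "dead": 0.050},
    "dead":          {"well": 0.000, "heart_disease": 0.000, "dead": 1.000},
}
ANNUAL_COST = {"well": 100, "heart_disease": 5_000, "dead": 0}
UTILITY = {"well": 0.95, "heart_disease": 0.70, "dead": 0.0}

cohort = {"well": 1.0, "heart_disease": 0.0, "dead": 0.0}
total_cost = total_qalys = 0.0
rate = 0.03  # annual discount rate

for year in range(50):  # 50-year horizon
    disc = 1 / (1 + rate) ** year
    total_cost += disc * sum(cohort[s] * ANNUAL_COST[s] for s in STATES)
    total_qalys += disc * sum(cohort[s] * UTILITY[s] for s in STATES)
    # Advance the cohort one year
    cohort = {s: sum(cohort[f] * P[f][s] for f in STATES) for s in STATES}

print(f"Discounted per-person cost: ${total_cost:,.0f}; QALYs: {total_qalys:.2f}")
```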
A challenge of using observational data is to minimize differences between intervention and comparison groups that could cause confounding. For example, historical controls or comparison groups from different jurisdictions may have inferior access to services other than the intervention, leading to an overstatement of improved outcomes among those receiving the intervention. Unscreened cohorts are subject to ascertainment and referral biases, which can result in more severe cases with higher costs and worse outcomes relative to screened cohorts, independent of the effectiveness of screening and treatments.
CEAs are often supplemented by expert evidence or modified by expert opinion,36 which can lead to a high risk of bias. For example, the assumption that late-treated cases of phenylketonuria have the same outcomes as untreated cases could have overstated the cost-effectiveness of phenylketonuria newborn screening.37 Although almost every CEA will require some assumptions to be made, best practices exist to guide the process of developing assumptions by using expert elicitation and explicit modeling of uncertainty.38
Policy Decisions and CEAs
Before considering the specific types of data sources and study designs that could yield estimates to inform economic evaluations, those responsible for funding research might weigh several factors in deciding whether an economic evaluation is warranted. One factor is the importance of the policy decision in terms of potential cost or impact. For example, the US government requires regulatory impact analyses for proposed federal regulations anticipated to cost >$100 million.39 Another factor is the likelihood that the estimates provided in economic evaluations will inform or influence policy decisions. If that likelihood is low, research to inform economic evaluations in that policy area might also be of low priority. Little is known about the demand for evidence of cost-effectiveness at the time a decision is being made about preventive services. Although payers and families who might have to bear the cost of a preventive service are likely interested in information about costs and expected outcomes, it is unclear how much this information would directly affect policy-level decisions in the United States.
Although economic analyses of preventive services can provide useful information, clinical investigators typically assess the effectiveness of screening and associated interventions without considering costs. There can be great enthusiasm to implement screening tests that are found to be effective, and adoption can move ahead faster than the collection of economic data needed to assess cost-effectiveness. Although cost data collected while conducting a randomized trial might not be generalizable to nonresearch settings, and costs might change during the course of the study, trial-based cost data can serve as an initial benchmark of resource requirements for implementation. We suggest that researchers collect cost data while interventions are being assessed in either trials or observational studies.
Even with the collection of high-quality data, the degree to which policy makers would consider the findings to be helpful is unclear. Policy makers in the United States consider many factors when evaluating prevention strategies, which generally do not include evidence of cost or cost-effectiveness. In any case, it is uncommon to have valid economic data available at the time that pediatric preventive services are considered for recommendation. This suggests that future research could help improve the quality of CEAs and other economic analyses and find ways to ensure that relevant audiences understand their strengths and limitations. However, it is up to guideline-setting groups to decide whether to use such information in developing future recommendations.
Future Opportunities
It is important to recognize that the US Preventive Services Task Force does not consider costs in making recommendations, “in part to avoid any misperception that the Task Force’s purpose is to limit health care based on cost.”40 However, understanding the balance of costs and impacts could be helpful in targeting preventive services. For example, the US Advisory Committee on Immunization Practices considers economic analyses when recommending vaccines for specific populations.3,41 Carefully conducted economic evaluations could help to identify target populations for specific screening or counseling interventions. Prioritization and targeting of preventive services are important because there is not enough time to complete all recommended preventive services in the usual primary care setting.42 Evaluating how to efficiently collect the information necessary for rigorous CEAs of preventive services delivered in primary pediatric care is an important topic for future research.
Dr Grosse conceptualized and drafted the initial manuscript. Drs Prosser and Kemper reviewed and revised the manuscript for important intellectual content; and all authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work.
The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.
FUNDING: No external funding.
Competing Interests
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.