National Information Center on Health Services Research and Health Care Technology (NICHSR)


Health Economics Information Resources: A Self-Study Course

Module 4: An Introduction to the Principles of Critical Appraisal of Health Economic Evaluation Studies

Drummond's check-list for assessing economic evaluations

(Drummond M et al. Methods for the economic evaluation of health care programmes. 2nd ed. Oxford: Oxford University Press; 1997)

1.    Was a well-defined question posed in answerable form?
1.1.    Did the study examine both costs and effects of the service(s) or programme(s)?
1.2.    Did the study involve a comparison of alternatives?
1.3.    Was a viewpoint for the analysis stated and was the study placed in any particular decision-making context?
2.    Was a comprehensive description of the competing alternatives given (i.e. can you tell who did what to whom, where, and how often)?
2.1.    Were there any important alternatives omitted?
2.2.    Was a do-nothing alternative considered (or should it have been)?
3.    Was the effectiveness of the programme or services established?
3.1.    Was this done through a randomised, controlled clinical trial? If so, did the trial protocol reflect what would happen in regular practice?
3.2.    Was effectiveness established through an overview of clinical studies?
3.3.    Were observational data or assumptions used to establish effectiveness? If so, what are the potential biases in results?
4.    Were all the important and relevant costs and consequences for each alternative identified?
4.1.    Was the range wide enough for the research question at hand?
4.2.    Did it cover all relevant viewpoints? (Possible viewpoints include the community or social viewpoint, and those of patients and third-party payers. Other viewpoints may also be relevant depending upon the particular analysis.)
4.3.    Were the capital costs, as well as operating costs, included?
5.    Were costs and consequences measured accurately in appropriate physical units (e.g. hours of nursing time, number of physician visits, lost work-days, gained life years)?
5.1.    Were any of the identified items omitted from measurement? If so, does this mean that they carried no weight in the subsequent analysis?
5.2.    Were there any special circumstances (e.g., joint use of resources) that made measurement difficult? Were these circumstances handled appropriately?
6.    Were the cost and consequences valued credibly?
6.1.    Were the sources of all values clearly identified? (Possible sources include market values, patient or client preferences and views, policy-makers’ views and health professionals’ judgements)
6.2.    Were market values employed for changes involving resources gained or depleted?
6.3.    Where market values were absent (e.g. volunteer labour), or market values did not reflect actual values (such as clinic space donated at a reduced rate), were adjustments made to approximate market values?
6.4.    Was the valuation of consequences appropriate for the question posed (i.e. has the appropriate type or types of analysis – cost-effectiveness, cost-benefit, cost-utility – been selected)?
7.    Were costs and consequences adjusted for differential timing?
7.1.    Were costs and consequences that occur in the future ‘discounted’ to their present values? (An illustrative discounting sketch follows the checklist.)
7.2.    Was there any justification given for the discount rate used?
8.    Was an incremental analysis of costs and consequences of alternatives performed?
8.1.    Were the additional (incremental) costs generated by one alternative over another compared to the additional effects, benefits, or utilities generated? (An illustrative incremental-ratio sketch follows the checklist.)
9.    Was allowance made for uncertainty in the estimates of costs and consequences?
9.1.    If data on costs and consequences were stochastic (i.e. subject to random variation), were appropriate statistical analyses performed?
9.2.    If a sensitivity analysis was employed, was justification provided for the range of values (for key study parameters)? (An illustrative one-way sensitivity analysis sketch follows the checklist.)
9.3.    Were the study results sensitive to changes in the values (within the assumed range for sensitivity analysis, or within the confidence interval around the ratio of costs to consequences)?
10.    Did the presentation and discussion of study results include all issues of concern to users?
10.1.    Were the conclusions of the analysis based on some overall index or ratio of costs to consequences (e.g. cost-effectiveness ratio)? If so, was the index interpreted intelligently or in a mechanistic fashion?
10.2.    Were the results compared with those of others who have investigated the same question? If so, were allowances made for potential differences in study methodology?
10.3.    Did the study discuss the generalisability of the results to other settings and patient/client groups?
10.4.    Did the study allude to, or take account of, other important factors in the choice or decision under consideration (e.g. distribution of costs and consequences, or relevant ethical issues)?
10.5.    Did the study discuss issues of implementation, such as the feasibility of adopting the ‘preferred’ programme given existing financial or other constraints, and whether any freed resources could be redeployed to other worthwhile programmes?
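
Illustrative sketches for selected checklist items

As an illustration of item 7 (adjusting for differential timing), the following minimal Python sketch discounts a hypothetical stream of future costs to present value. The 3% rate and the cost figures are illustrative assumptions, not values taken from the checklist or the textbook.

    # Present value of costs incurred at the start of years 0, 1, 2, ...
    # PV = sum over t of cost_t / (1 + r)**t (year-0 costs are not discounted).
    def present_value(costs_by_year, rate):
        return sum(cost / (1.0 + rate) ** t for t, cost in enumerate(costs_by_year))

    annual_costs = [10000, 10000, 10000]               # hypothetical programme costs, years 0-2
    print(round(present_value(annual_costs, 0.03), 2))  # -> 29134.7 at an illustrative 3% rate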
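
For item 8 (incremental analysis), a minimal sketch comparing one alternative against another: the incremental ratio is the extra cost of the new programme divided by its extra effect relative to the comparator. All cost and effect figures are hypothetical.

    # Incremental ratio: additional cost of the new programme over the comparator,
    # divided by its additional effect (e.g. life-years gained).
    def incremental_ratio(cost_new, effect_new, cost_old, effect_old):
        return (cost_new - cost_old) / (effect_new - effect_old)

    # Hypothetical figures: costs in dollars, effects in life-years.
    print(incremental_ratio(52000, 4.5, 40000, 4.0))   # -> 24000.0 dollars per life-year gained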
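
For item 9 (allowance for uncertainty), a minimal sketch of a one-way sensitivity analysis: one key parameter (here, the hypothetical cost of the new programme) is varied across a stated range while everything else is held fixed, and the incremental ratio is recomputed at each value.

    # One-way sensitivity analysis over an assumed range for a single parameter.
    def incremental_ratio(cost_new, effect_new, cost_old, effect_old):
        return (cost_new - cost_old) / (effect_new - effect_old)

    for cost_new in (46000, 52000, 58000):             # hypothetical low / base / high values
        ratio = incremental_ratio(cost_new, 4.5, 40000, 4.0)
        print(f"cost_new = {cost_new}: {ratio:.0f} per life-year gained")
    # The spread of results (12000 to 36000) shows how sensitive the conclusion
    # is to this one input within the assumed range.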

Last Reviewed: February 23, 2016