
National Information Center on Health Services Research and Health Care Technology (NICHSR)



The impact of HTA is variable and inconsistently understood.  Among the most important factors influencing the impact of HTA reports is the directness of the relationship between an HTA program and policymaking bodies and health care decisions.  Whereas some HTA reports are translated directly into policies with clear and quantifiable impacts, the findings of others, even authoritative, well-documented assessments based on “definitive” RCTs and other rigorous studies, often go unheeded or are not readily adopted into general practice (Banta 1993; Ferguson, Dubinsky 1993; Henshall 2002; Institute of Medicine 1985).  Indeed, even when the reporting of HTA findings is followed by changes in policies, use of a technology, or other potential indicators of impact, it may be difficult to demonstrate the causal effect of the HTA on those changes. 

HTA reports can make an impact by changing one or more of:

  • Regulatory policy (e.g., market access of a technology)
  • Third-party payment policy (e.g., coverage, pricing, reimbursement of a technology)
  • Rate of use of a technology
  • Clinical practice guidelines
  • Clinician awareness and behavior
  • Patient awareness and behavior
  • Acquisition, adoption, or diffusion of a technology
  • Organization or delivery of care
  • R&D priorities and associated spending levels
  • Data collection (e.g., to fill evidence gaps identified by HTA reports)
  • Marketing of a technology
  • Allocation of local, regional, national, or global health care resources
  • Investment decisions (e.g., by industry, investors)
  • Incentives to innovate

The impacts of HTA can occur in an interrelated series (although not necessarily in strict sequence), such as that described by EUnetHTA (Garrido 2008): 

           Awareness → Acceptance → Policy process → Policy decision → Practice → Outcome

Historically, systematic attempts to document the dissemination processes and impacts of HTA programs have been infrequent (Banta 1993; Goodman 1988; Institute of Medicine 1985; Jacob 1997), though a few, notably the NIH Consensus Development Program (Ferguson 1993), have been studied in detail.  More recently, there is growing recognition that monitoring the impact of individual HTAs and HTA programs is a “good practice” or “key principle” of HTA (see, e.g., Drummond 2012; Drummond 2008; Goodman 2012; Velasco 2002).  A small but steadily growing literature has reported a range of impacts of HTA in specific countries and other jurisdictions on technology adoption, disinvestment, reimbursement, and other policies and practices (Hailey 2000; Hanney 2007; Zechmeister 2012). 

Although precise estimates of the impact of individual HTAs and HTA programs will seldom be possible, ongoing efforts to systematically document the changes that are known to result from HTA, or that are associated with HTA, are feasible (Hanney 2007; Jacob 1997). 

An area of increasing interest is the impact of HTA on disinvestment in technologies that do not offer value for money or have been superseded by others that are safer, more effective, and/or more cost-effective.  The use of such technologies often persists due to financial incentives, professional interests, and resistance to change among clinicians, patients, and health care delivery and payment systems (Garner 2011).  For example, in the US, despite the findings from two rigorous, double-blind RCTs demonstrating that percutaneous vertebroplasty for painful vertebral fractures provided no better pain relief than a sham procedure, third-party payers continued to cover the procedure more than two years after publication of the trial results (Wulff 2011).  The ability of HTA to inform evidence-based disinvestment is of particular importance in health care systems with fixed budgets, where spending on low-value technologies limits expenditures on more cost-effective ones (Kennedy 2009). 


A. Attributing Impact to HTA Reports

The impact of an HTA depends on diverse factors.  Among these are target audiences’ legal, contractual, or administrative obligations, if any, to comply with the HTA findings or recommendations (Anderson 1993; Ferguson, Dubinsky 1993; Gold 1993).  Approvals or clearances by regulatory agencies (e.g., the FDA in the US) for marketing new drugs and devices are translated directly into binding policy.  In the US, HTAs conducted by AHRQ at the request of CMS are used to inform technology coverage policies for the Medicare program, although CMS is not obligated to comply with the findings of the AHRQ HTA.  The impacts of NIH consensus development conference statements, which were not statements of government policy, were inconsistent and difficult to measure.  Their impact appeared to depend on a variety of factors intrinsic to particular topics, the consensus development process itself, and a multitude of contextual factors (Ferguson 1993; Ferguson 2001).

The task of measuring the impact of HTA can range from elementary to infeasible.  As noted above, even if an intended change does occur, it may be difficult or impossible to attribute this change to the HTA.  A national-level assessment that leads to recommendations to increase use of a particular intervention for a given clinical problem may be followed by a documented change in behavior consistent with that recommendation.  However, the recommendation may be made at a time when the desired behavior change is already underway, third-party payment policy is already shifting in favor of the technology, a strong marketing effort is being made by industry, or results of a definitive RCT are being made public. 

As is the case for attributing changes in patient outcomes to a technological intervention, the ability to demonstrate that the results of an HTA have an impact depends on the conditions under which the findings were made known and the methodological approach used to determine the impact.  Evaluations of the impact of an HTA often are unavoidably observational in nature; however, under some circumstances, quasi-experimental or experimental evaluations have been used (Goldberg 1994).  To the extent that impact evaluations are prospective, involve pre- and post-report dissemination data collection, and involve directed dissemination to clearly identified groups with well-matched controls (or at least retrospective adjustment for reported exposure to dissemination), they are more likely to detect any true causal connection between an HTA report and change in policy or behavior.  Even so, generalizing from one experience to others may be impractical, as it is difficult to describe and replicate the conditions of a particular HTA report dissemination.
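The logic of such a pre/post evaluation with a well-matched control group can be made concrete with a difference-in-differences calculation. The sketch below is purely illustrative: the function name and all utilization figures are hypothetical and are not drawn from any actual HTA evaluation.

```python
# Hypothetical sketch: difference-in-differences estimate of an HTA
# report's effect on a technology's utilization rate. By subtracting the
# change observed in an unexposed control group, the estimate nets out
# secular trends (e.g., payment shifts, industry marketing) that would
# otherwise be misattributed to the HTA report.

def did_estimate(exposed_pre, exposed_post, control_pre, control_post):
    """Change in the exposed group minus change in the control group."""
    return (exposed_post - exposed_pre) - (control_post - control_pre)

# Illustrative utilization rates (procedures per 1,000 patients) before
# and after directed dissemination of an HTA report recommending reduced
# use of the technology. All numbers are invented for this example.
impact = did_estimate(
    exposed_pre=12.0, exposed_post=8.0,    # clinicians who received the report
    control_pre=12.5, control_post=11.5,   # matched clinicians who did not
)
print(impact)  # -3.0: utilization fell by 3 more per 1,000 among exposed clinicians
```

The control group's decline of 1 per 1,000 represents the background trend; only the additional 3 per 1,000 decline among exposed clinicians is plausibly attributable to the report, and even then only under the assumption that the two groups would otherwise have followed parallel trends.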


B. Factors Influencing Impact

Many factors can affect the impact of HTA reports.  Beyond the particular dissemination techniques used, characteristics of the target groups, the environment, and the HTAs themselves can influence their impact (Goldberg 1994; Mittman and Siu 1992; Mittman and Tonesk 1992).  Examples are shown in Box IX-1.  Knowledge about these factors can be used prospectively to improve the impact of HTA. 

As described in another chapter of this document, in seeking to maximize the impact of their reports, HTA programs can involve target audiences early, such as in priority setting of assessment topics and determination of assessment questions.  Further, they can consider how to properly present their reports and plan their dissemination strategies to reach and influence those various target audiences. 

The impact of HTA findings may be increased to the extent that the HTA process is local, i.e., conducted by or involving people in the target decision-making organization, such as a hospital network or major payer agency.  Such “local” HTA can increase the utility of HTA findings due to the relevance of the HTA topic (e.g., by having input on topic selection and use of local data), the timeliness of findings, and the formulation of policy that reflects local values and context (Bodeau-Livinec 2006; McGregor 2005).  Findings from HTA that is conducted with rigorous, well-documented methodology on topics that are priorities or otherwise of interest to sponsors with policymaking authority (“policy customers”) are more likely to be adopted and have an impact (Hanney 2007; Raftery 2009). 

In summary, the following are ways in which HTA programs can increase the likelihood of their reports having the intended impacts (see, e.g., Hailey 2000; McGregor 2005; Sorensen 2008):

  • Conduct a transparent, credible, unbiased, rigorous, and well-documented HTA process
  • Gain prior commitment, where feasible, from decision makers to use HTA findings
  • Ensure that assessments are designed to address decision makers’ questions
  • Seek to establish formal links between producers and users of HTA
  • Involve key stakeholders throughout the HTA process (e.g., in priority setting, determination of assessment questions) in a transparent, well-managed manner
  • Gain input of representatives of anticipated target audiences and communication experts in planning knowledge transfer strategies, including different formats, languages, media, and related messaging of HTA findings to different target audiences, as appropriate
  • Anticipate the resource requirements, incentives, delivery system characteristics, and other diverse factors that will influence the feasibility of implementing HTA findings
  • Ensure that HTA findings are delivered on a timely basis to inform decision making
  • Promote collaboration and transfer of knowledge and skills across jurisdictions (e.g., across nations, regions, localities)

Box IX-1. Examples of Factors That Can Affect Impact of HTA Reports

Target clinician characteristics

  • Type of clinician: physician, mid-level practitioner, nurse, dentist, etc.
  • Specialty; training
  • Professional activities/affiliations
  • Institutional affiliations (e.g., community hospital, university hospital)
  • Financial, professional, and quality incentives to implement findings/recommendations
  • Awareness of performance relative to peers
  • Access to and familiarity with current evidence, practice guidelines
  • Malpractice concerns/exposure

Target provider organization characteristics

  • Hospitals: general versus specialized, size, teaching status, patient mix, for-profit vs. non-profit, distribution of payment sources (e.g., fee-for-service vs. capitation), ownership status, financial status, accreditation, market competition
  • Physicians' offices: group practice vs. solo practice, hospital affiliation, teaching affiliation, board certification, distribution of payment sources, market competition
  • Financial, organizational, or quality incentives to implement findings/recommendations

Target patient characteristics

  • Insurance (type) and cost sharing status (deductible, copayment, etc.)
  • Access to regular primary care provider, other care
  • Health status
  • Health awareness, use of health information media, health literacy
  • Socioeconomic/demographic/cultural factors
  • Home, workplace, other environmental factors
  • Social interaction (family, friends, peers, etc.)

Environmental characteristics

  • Urban, suburban, rural
  • Competition
  • Economic status
  • Third-party payment (e.g., market distribution of fee-for-service vs. bundled payment)
  • State and local laws, regulations
  • Activities of pressure groups/lobbyists, other interest groups
  • Malpractice potential/activity
  • Political factors

Characteristics of HTA findings/recommendations

  • Type/extent of engagement of target audiences/stakeholders in process
  • Timeliness/responsiveness relative to needs of target audiences
  • Reputation/credibility of HTA organization, analysts, expert panels
  • Transparency/rigor of assessment process
  • Quality and strength of evidence base
  • Application of findings: evidence review only; policy implications/recommendations; input to practice guidelines, coverage/reimbursement, technology acquisition, quality standards, etc.
  • Perceived appropriateness of rigidity or flexibility of findings/recommendations
  • Dissemination media, format, content/frequency
  • Proximity to decision makers or policymakers and extent of their obligation (e.g., legal mandate or optional) to implement findings/recommendations
  • Resources required to implement findings/recommendations

References for Chapter IX

Anderson GF, Hall MA, Steinberg EP. Medical technology assessment and practice guidelines: their day in court. Am J Public Health. 1993;83(11):1635-9.

Banta HD, Luce BR. Health Care Technology and Its Assessment: An International Perspective. New York, NY: Oxford University Press; 1993.

Bodeau-Livinec F, Simon E, Montagnier-Petrissans C, Joël ME, Féry-Lemonnier E. Impact of CEDIT recommendations: An example of health technology assessment in a hospital network. Int J Technol Assess Health Care. 2006;22(2):161-8.

Drummond M, Neumann P, Jönsson B, Luce B, et al. Can we reliably benchmark health technology assessment organizations? Int J Technol Assess Health Care. 2012 Apr;28(2):159-65.

Drummond MF, Schwartz JS, Jönsson B, Luce BR, et al. Key principles for the improved conduct of health technology assessments for resource allocation decisions. Int J Technol Assess Health Care. 2008;24(3):244-58.

Ferguson JH. NIH consensus conferences: dissemination and impact. Ann N Y Acad Sci. 1993;703:180-98.

Ferguson JH, Dubinsky M, Kirsch PJ. Court-ordered reimbursement for unproven medical technology. JAMA. 1993;269(16):2116-21.

Ferguson JH, Sherman CR. Panelists' views of 68 NIH consensus conferences. Int J Technol Assess Health Care. 2001;17(4):542-58.

Garner S, Littlejohns P. Disinvestment from low value clinical interventions: NICEly done? BMJ 2011;343:d4519.

Garrido MV, Kristensen FB, Nielsen CP, Busse R. Health Technology Assessment and Health Policy-Making in Europe: Current Status, Challenges, and Potential. European Observatory for Health Systems and Policies. Copenhagen: WHO Regional Office for Europe, 2008.

Gold JA, Zaremski MJ, Lev ER, Shefrin DH. Daubert v. Merrell Dow. The Supreme Court tackles scientific evidence in the courtroom. JAMA. 1993;270(24):2964-7.

Goldberg HI, Cummings MA, Steinberg EP, et al. Deliberations on the dissemination of PORT products: translating research findings into improved patient outcomes. Med Care. 1994;32(suppl. 7):JS90-110.

Goodman C. Toward international good practices in health technology assessment. Int J Technol Assess Health Care. 2012;28(2):169-70.

Goodman C, ed. Medical Technology Assessment Directory: A Pilot Reference to Organizations, Assessments, and Information Resources. Washington, DC: Institute of Medicine; 1988.

Hailey D, Corabian P, Harstall C, Schneider W. The use and impact of rapid health technology assessments. Int J Technol Assess Health Care. 2000;16(2):651-6.

Hanney S, Buxton M, Green C, Coulson D, Raftery J. An assessment of the impact of the NHS Health Technology Assessment Programme. Health Technol Assess. 2007;11(53):iii-iv, ix-xi, 1-180.

Henshall C, Koch P, von Below GC, Boer A, et al. Health technology assessment in policy and practice. Int J Technol Assess Health Care. 2002;18(2):447-55.

Institute of Medicine. Assessing Medical Technologies. Washington, DC: National Academy Press; 1985.

Jacob R, McGregor M. Assessing the impact of health technology assessment. Int J Technol Assess Health Care. 1997;13(1):68-80.

Kennedy I. Appraising the Value of Innovation and Other Benefits. A Short Study for NICE. July 2009. Accessed June 27, 2014.

McGregor M, Brophy JM. End-user involvement in health technology assessment (HTA) development: a way to increase impact. Int J Technol Assess Health Care. 2005;21(2):263-7.

Mittman BS, Siu AL. Changing provider behavior: applying research on outcomes and effectiveness in health care. In Improving Health Policy and Management: Nine Critical Research Issues for the 1990s. Shortell SM, Reinhardt UE, eds. 195-226. Ann Arbor, Mich: Health Administration Press; 1992.

Mittman BS, Tonesk X, Jacobson PD. Implementing clinical practice guidelines: social influence strategies and practitioner behavior change. QRB Qual Rev Bull. 1992;18(12):413-22.

Raftery J, Hanney S, Green C, Buxton M. Assessing the impact of England's National Health Service R&D Health Technology Assessment program using the "payback" approach. Int J Technol Assess Health Care. 2009;25(1):1-5.

Sorensen C, Drummond M, Kristensen FB, Busse R. How can the impact of health technology assessments be enhanced? European Observatory for Health Systems and Policies. Copenhagen: WHO Regional Office for Europe, 2008.

Velasco M, Perleth M, Drummond M, Gürtner F, et al. Best practice in undertaking and reporting health technology assessments. Working group 4 report. Int J Technol Assess Health Care. 2002;18(2):361-422.

Wulff KC, Miller FG, Pearson SD. Can coverage be rescinded when negative trial results threaten a popular procedure? The ongoing saga of vertebroplasty. Health Aff (Millwood). 2011;30(12):2269-76.

Zechmeister I, Schumacher I. The impact of health technology assessment reports on decision making in Austria. Int J Technol Assess Health Care. 2012;28(1):77-84.

