

HTA 101: VI. DETERMINE TOPICS

Organizations that conduct or sponsor HTAs have limited resources.  Given the great supply of potential assessment topics, HTA organizations need practical and accountable means of determining what to assess.  This chapter considers how assessment programs identify candidate assessment topics and set priorities among them.

A. Identify Candidate Topics

To a large extent, assessment topics are determined, or bounded, by the mission or purpose of an organization.  For example, national and regional health plans and other third-party payers generally assess technologies on a reactive basis; a new medical or surgical procedure that is not recognized by payers as being standard or established may become a candidate for assessment.  For the US Centers for Medicare and Medicaid Services (CMS), some assessment topics arise in the form of requests for national coverage policy determinations that cannot be resolved at the local level or that are recognized to be of national interest.  These requests typically originate with Medicare contractors that administer the program in their respective regions, Medicare beneficiaries (people who are eligible for Medicare), physicians, health product companies, health professional associations, or government entities.  CMS may request assistance in the form of “evidence reports” or HTAs from a sister agency, AHRQ, which typically commissions these from one of its Evidence-based Practice Centers (part of the AHRQ Effective Healthcare Program). 

Apart from requests from CMS, the AHRQ Effective Healthcare Program solicits topic nominations from the public.  Its online topic nomination form requests information about:  the health care intervention of interest; any specific comparator(s); patient groups and subgroups affected; health benefits or outcomes; risks/harms/side effects;  which (if any) of 14 priority health conditions/diseases are involved; which (if any) of six priority populations is involved; which (if any) federal health program (e.g., Medicare, Medicaid) is involved; why the topic is important; whether the question represents uncertainty for clinicians or policymakers; stakeholders in this topic; how the findings will be used; technical experts relevant to the topic; and any supporting documentation.

For the UK National Institute for Health and Care Excellence (NICE), topics are not determined internally, but are referred from the UK Department of Health.  Topics are selected based on such factors as burden of disease, impact on resources, and whether there is inappropriate variation in practice across the UK (NICE 2013).

For the Cochrane Collaboration, potential topics generally arise from members of the more than 50 review groups, who are encouraged to investigate topics of interest to them, subject to the agreement of their review groups.  However, there is as yet no standard or common priority-setting process used across the Cochrane Collaboration (Nasser 2013).

Horizon Scanning

The demand for early information about new, emerging, and existing health care interventions and related trends has prompted the development and evolution of “horizon scanning” functions (Carlsson 1998; Douw 2003; Harper 1998; Packer 2012). Horizon scanning is intended to serve multiple purposes, including the following:

  • Identify potential topics for HTA and information for setting priorities among them
  • Identify areas of technological change
  • Anticipate and identify new indications or uses of technologies
  • Identify variations in use of technologies
  • Identify inappropriate use of technologies, including over-use, under-use, and improper use
  • Forecast the health and economic impacts of technologies
  • Identify levels of improvement in effectiveness, relative to additional costs, that would demonstrate the cost-effectiveness of a new technology
  • Anticipate potential social, ethical, or legal implications of technologies
  • Plan data collection to monitor adoption, diffusion, use, and impacts of technologies
  • Enable health care providers, payers, and patients to plan for, adapt to, and manage technological change, including “rising”/emerging technologies and “setting” (becoming obsolescent) technologies (for potential disinvestment)

Most horizon scanning programs generate rapidly completed, brief descriptions of new or emerging technologies and their potential impacts. There are inherent tradeoffs between using early information that may be incomplete or unreliable and waiting for more definitive information, by which time the opportunity to benefit from it may have passed.  HTA programs have made use of horizon scanning in important ways.  While the major thrust of horizon scanning has been to identify “rising” (new and emerging) technologies that eventually may merit assessment, it can also identify “setting” technologies that are outmoded, superseded by newer ones, and candidates for disinvestment (Henshall 2012). In either case, horizon scanning provides an important input into setting assessment priorities.

Examples of national and international horizon scanning programs include the following.

The purposes of EuroScan, a collaborative network of about 20 HTA agencies, are to collect and share information on innovative health care technologies in support of decision making and the adoption and use of effective, useful, and safe technologies, and to provide a forum for sharing and developing methods for early identification and assessment of new and emerging technologies and prediction of their potential impacts.

The Canadian Network for Environmental Scanning in Health (CNESH) identifies information on new, emerging, or new applications of health technologies and shares this information across Canada.  It also develops and promotes methods for identifying, filtering, and setting priorities among new or emerging health technologies.  CNESH produces a “top 10” list of new and emerging health technologies in Canada.

The Health Policy Advisory Committee on Technology (HealthPACT) provides evidence-based advice about potentially significant new and emerging technologies to health departments in Australia and New Zealand.  This supports information exchange and evaluation of the potential impact of these technologies on those national health systems, including informing financing decisions and the managed introduction of new technologies.  HealthPACT produces New and Emerging Health Technology Reports and Technology Briefs.

The AHRQ Healthcare Horizon Scanning System provides AHRQ with a systematic process to identify and monitor target technologies and create an inventory of those that have the highest potential for impact on clinical care, the health care system, patient outcomes, and costs.  This system is also intended to serve as a tool for the public to identify and find information on new health care technologies (ECRI Institute 2013).

EUnetHTA developed a web-based Planned and Ongoing Projects (POP) database to enable HTA agencies to share information about planned and ongoing projects at each agency, with the aim of avoiding duplication and encouraging collaborative efforts (EUnetHTA 2013).

A 2013 systematic review of international health technology horizon scanning activity identified 23 formal programs, most of which are members of EuroScan, along with a variety of other less structured horizon scanning functions of government and private sector organizations.  Although the formal programs had somewhat varying emphases on target technologies, time horizons of interest, and methods of scanning and assessment, they generally shared the main functions of identification and monitoring of technologies of interest and evaluation of potential impacts of technologies (Sun 2013).

As shown in Box VI-1, a considerable variety of electronic bibliographic databases, newsletters, regulatory documents, and other sources provide streams of information pertaining to new and emerging health care interventions. The AHRQ Horizon Scanning Protocol and Operations Manual provides a detailed list of databases, news sources, and other information sources for horizon scanning, as well as search filters for horizon scanning of PubMed and Embase (ECRI Institute 2013).

Box VI-1. Information Sources for New and Emerging Health Care Interventions

  • Large bibliographic databases (e.g., PubMed, Embase, SciSearch)
  • Specialized bibliographic databases (e.g., CINAHL, PEDro, PsycINFO)
  • Databases of ongoing research and results (e.g., ClinicalTrials.gov, HSRProj)
  • Priority lists and forthcoming assessments from HTA agencies and vendors
  • Cochrane Collaboration protocols (plans for forthcoming/ongoing systematic reviews)
  • Trade publications (e.g., The Pink Sheet, The Gray Sheet, In Vivo, Medtech Insight, Pharmaceutical Approvals Monthly, Medical Device Daily, GenomeWeb Daily News, Telemedicine and e-Health)
  • General news (e.g., PR Newswire, New York Times, Wall Street Journal)
  • General health care/medical journals and specialty health care/medical journals
  • Health professions and industry news (e.g., Medscape, Reuters Health Industry Briefing, Reuters Health Medical News)
  • Conference abstracts and proceedings of health professions organizations, health industry groups
  • Technology company web sites
  • Industry association (e.g., AdvaMed, BIO, PhRMA) sites (e.g., AdvaMed SmartBrief, PhRMA New Medicines Database)
  • Market research reports (e.g., Frost & Sullivan; GlobalData; IHS Global Insight; Thomson Reuters)
  • Regulatory agency announcements of market approvals and other developments for new pharmaceuticals, biologics, and devices (e.g., FDA Advisory Committee Alerts, FDA Approval Alerts, FDA Drug Daily Bulletin, FDA Device Daily Bulletin)
  • Adverse event/alert announcements (e.g., from FDA MedWatch, United States Pharmacopeia)
  • Payer policies, notifications (e.g., CMS Updates to Coverage Pages, Aetna Clinical Policy Bulletins)
  • Reports and other sources of information on significant variations in practice, utilization, or payment policies (e.g., The Dartmouth Atlas)
  • Special reports on health care trends and futures (e.g., from Institute for the Future Health Horizons Program; Institute for Healthcare Improvement)

B. Setting Assessment Priorities

Some assessment programs have explicit procedures for setting priorities; others set priorities only in an informal or ad hoc manner.  Given very limited resources for assessment and increasing accountability of assessment programs to their parent organizations and others who use or are affected by their assessments, it is important to articulate how assessment topics are chosen.

Most assessment programs have criteria for topic selection, although these criteria are not always explicit.  For example, is it most important to focus on costly health problems and technologies?  What about health problems that affect large numbers of people, or health problems that are life-threatening?  What about technologies that cause great public controversy?  Should an assessment be undertaken if it is unlikely that its findings will change current practice?  Examples of selection criteria that are used in setting assessment priorities are shown in Box VI-2.

Box VI-2. Examples of HTA Selection Criteria Used in Setting Assessment Priorities

  • High individual burden of morbidity, mortality, or disability
  • High population burden of morbidity, mortality, or disability
  • High unit/individual cost of a technology or health problem
  • High aggregate/population cost of a technology or health problem
  • Substantial variations in practice
  • Unexpected adverse event reports
  • Potential for HTA findings to have impact on practice
  • Potential for HTA findings to have impact on patient outcomes or costs
  • Available findings not well disseminated or adopted by practitioners
  • Need to make regulatory decision
  • Need to make payment decision (e.g., provide coverage or include in health benefits)
  • Need to make a health program acquisition or implementation decision
  • Recent or anticipated “breakthrough” scientific findings
  • Sufficient research findings available upon which to base HTA
  • Feasibility given resource constraints (funding, time, etc.) of the assessment program
  • Public or political demand
  • Scientific controversy or great interest among health professionals

The timing for undertaking an assessment may depend on the availability of evidence.  For example, the results of a recently completed major RCT or meta-analysis may challenge current practice, and prompt an HTA to consolidate these results with other available evidence for informing clinical or payment decisions.  Or, an assessment may be delayed pending the results of an ongoing study that has the potential to shift the weight of the body of evidence on that topic.

As noted in section II. Fundamental Concepts, the demand for HTA by health care decision makers has increasingly involved requests for faster responses to help inform emergent regulatory, payment, or acquisition decisions.  The urgency of such a request may raise the priority of an assessment topic and prompt an HTA organization to designate it for a more focused, less-comprehensive “rapid HTA.”  See discussion of rapid HTA in chapter X.

Systematic priority-setting processes typically include such steps as the following (Donaldson and Sox 1992; Lara and Goodman 1990).

  1. Select criteria to be used in priority setting.
  2. Assign relative weights to the criteria.
  3. Identify candidate topics for assessment (e.g., as described above).
  4. If the list of candidate topics is large, reduce it by eliminating those topics that would clearly not rank highly according to the priority setting criteria.
  5. Obtain data for rating the topics according to the criteria.
  6. For each topic, assign a score for each criterion.
  7. Calculate a priority score for each topic.
  8. Rank the topics according to their priority scores.
  9. Review the priority topics to ensure that assessment of these would be consistent with the organizational purpose.

Processes for ranking assessment priorities range from being highly subjective (e.g., informal opinion of a small group of experts) to quantitative (e.g., using a mathematical formula) (Donaldson 1992; Eddy 1989; Phelps 1992).  Box VI-3 shows a quantitative model for priority setting.  The Cochrane Collaboration has used a more decentralized approach in which review groups use a range of different priority-setting systems (Clarke 2003; Nasser 2013).  Starting with topics suggested by their members, many Cochrane Collaboration review groups have set priorities by considering burden of disease and other criteria, as well as input from discussions with key stakeholders and suggestions from consumers.  These priorities have been offered to potential reviewers who might be interested in preparing and maintaining relevant reviews in these areas.

Box VI-3. A Quantitative Model for Priority Setting

A 1992 report by the Institute of Medicine provided recommendations for priority setting to the Agency for Health Care Policy and Research (now AHRQ). Seven criteria were identified:
  • Prevalence of a health condition
  • Burden of illness
  • Cost
  • Variation in rates of use
  • Potential of results to change health outcomes
  • Potential of results to change costs
  • Potential of results to inform ethical, legal, or social issues

The report offered the following formula for calculating a priority score for each candidate topic.

   Priority Score = W1 ln S1 + W2 ln S2 + ... + W7 ln S7

   where:

      W is the relative weight of each of seven priority-setting criteria

      S is the score of a given candidate topic for a criterion

      ln is the natural logarithm of the criterion scores.

Candidate topics would then be ranked according to their priority score.

Source: Donaldson MS, Sox HC, Jr, eds. Setting Priorities for Health Technology Assessment: A Model Process. Washington, DC: National Academy Press; 1992. Reprinted with permission from the National Academy of Sciences, courtesy of the National Academies Press, Washington, DC.
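
To make the arithmetic concrete (corresponding to steps 6 through 8 of the priority-setting process listed above), the following is a minimal Python sketch of this scoring model.  Only the formula comes from the report; the weights and criterion scores are invented for illustration.

    import math

    # Illustrative relative weights for the seven IOM criteria, in order:
    # prevalence, burden of illness, cost, variation in use,
    # potential to change outcomes, potential to change costs,
    # potential to inform ethical/legal/social issues.
    W = [1.0, 1.5, 1.2, 0.8, 1.5, 1.0, 0.5]

    # Illustrative criterion scores S (each > 0) for three candidate topics.
    topics = {
        "topic A": [4, 5, 3, 2, 4, 3, 1],
        "topic B": [2, 3, 5, 4, 2, 5, 2],
        "topic C": [3, 3, 3, 3, 3, 3, 3],
    }

    def priority_score(S):
        # Priority score = sum over the seven criteria of W * ln(S).
        return sum(w * math.log(s) for w, s in zip(W, S))

    # Rank candidate topics from highest to lowest priority score.
    for name in sorted(topics, key=lambda t: priority_score(topics[t]), reverse=True):
        print(f"{name}: {priority_score(topics[name]):.2f}")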

There is no single correct way to set priorities.  The great diversity of potential assessment topics, the urgency of some policymaking needs, and other factors may diminish the practical benefits of using highly systematic and quantitative approaches.  On the other hand, ad hoc, inconsistent, or non-transparent processes are subject to challenge and skepticism from policymakers and other observers who are affected by HTA findings.  Certainly, there is a gap between the theory and application of priority setting.  Many priority-setting models are designed to support resource allocation that maximizes health gains, i.e., to identify health interventions which, if properly assessed and appropriately used, could result in substantial health improvements at reasonable costs.  However, some potential weaknesses of these approaches are that they tend to set priorities among interventions rather than among the assessments that should be conducted, do not address priority setting in the context of a research portfolio, and do not adopt an incremental perspective (i.e., consideration of the net difference that conducting an assessment might make) (Sassi 2003).

Reviewing the process by which an assessment program sets its priorities, including the implicit and explicit criteria it uses in determining whether or not to undertake an assessment, can help to ensure that the HTA program is fulfilling its purposes effectively and efficiently.

 

C. Specify the Assessment Problem

One of the most important aspects of an HTA is to specify clearly the problem(s) or question(s) to be addressed; this will affect all subsequent aspects of the assessment.  An assessment group should have an explicit understanding of the purpose of the assessment and who the intended users of the assessment are.  This understanding might not be established at the outset of the assessment; it may take more probing, discussion and clarification.

The intended users or target audiences of an assessment should affect the content, presentation, and dissemination of results of the HTA.  Clinicians, patients, politicians, researchers, hospital managers, company executives, and others have different interests and levels of expertise.  They tend to have varying concerns about the effects or impacts of health technologies (health outcomes, costs, social and political effects, etc.).  They also have different needs regarding the scientific or technical level of reports, the presentation of evidence and findings, and the format (e.g., length and appearance) of reports.

When the assessment problem and intended users have been specified, they should be reviewed by the requesting agency or sponsors of the HTA.  The review of the problem by the assessment program may have clarified or focused the problem in a way that differs from the original request.  This clarification may prompt a reconsideration or restatement of the problem before the assessment proceeds.

1. Problem Elements

There is no single correct way to state an assessment problem.  The elements typically include specifying most or all of the following:

  • Health problem of interest
  • Patient population (including subgroups as appropriate)
  • Technology of interest
  • Comparator(s)
  • Setting of care
  • Provider/clinician delivering the intervention(s)
  • Properties, impacts, or outcomes
  • Timeframe, duration, or follow-up period
  • Study design or type of evidence/data to be included in the HTA
  • Target audiences for the HTA findings

One commonly used framework is known as PICOTS (sometimes only PICO or PICOT):  Population, Intervention(s), Comparator(s), Outcome(s), Timing, and Study design (Counsell 1997).  This framework can be used for describing individual studies or HTAs that might examine evidence from multiple studies.  For example, a basic specification of one assessment problem would be the following.  (This example uses some characteristics of a particular RCT [Stewart 2005].)

  • Population: males and females age 55-75 years with mild hypertension, i.e., diastolic blood pressure 85-99 mm Hg, systolic blood pressure 130-159 mm Hg; no other serious health problems
  • Intervention: standardized, moderate exercise program (aerobic and resistance training)
  • Comparator: usual physical routine and diet
  • Outcomes: changes in: general and abdominal obesity, systolic blood pressure, diastolic blood pressure, aerobic fitness, aortic stiffness (measured as aortofemoral pulse-wave velocity)
  • Timing: 6-24 months
  • Study design: randomized controlled trials
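
Because the PICOTS elements are a fixed set of named fields, a specification like the one above can also be captured as structured data for storage and comparison across assessments.  Below is a minimal sketch, assuming a simple Python dataclass; the field names and encoding are illustrative, not a standard schema.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PICOTS:
        population: str
        intervention: str
        comparator: str
        outcomes: List[str]
        timing: str
        study_design: str

    # The exercise-and-hypertension example above, encoded as a PICOTS record.
    spec = PICOTS(
        population="males and females age 55-75 with mild hypertension; "
                   "no other serious health problems",
        intervention="standardized, moderate exercise program "
                     "(aerobic and resistance training)",
        comparator="usual physical routine and diet",
        outcomes=["general and abdominal obesity",
                  "systolic and diastolic blood pressure",
                  "aerobic fitness",
                  "aortic stiffness (aortofemoral pulse-wave velocity)"],
        timing="6-24 months",
        study_design="randomized controlled trials",
    )
    print(spec.study_design)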

2. Analytic Frameworks for Presenting HTA Problems

A useful graphical means of presenting an assessment problem is an “analytic framework,” sometimes known as a “causal pathway.”  Analytic frameworks depict direct and indirect relationships between interventions and outcomes.  Although often used to present clinical interventions for health problems, they can be used as well for other types of interventions in health care.

Analytic frameworks provide clarity and explicitness in defining the key questions to be addressed in an HTA, and draw attention to important relationships for which evidence may be lacking.  They can be useful tools for formulating or narrowing the focus of an assessment problem.  For a clinical problem, an analytic framework typically includes a patient population, one or more alternative interventions, intermediate outcomes (e.g., biological markers), health outcomes, and other elements as appropriate.  In instances where a topic involves a single intervention for narrowly defined indications and outcomes, these frameworks can be relatively straightforward.  However, given the considerable breadth and complexity of some HTA topics, which may cover multiple interventions for a broadly defined health problem (e.g., screening, diagnosis, and treatment of osteoporosis in various population subgroups), analytic frameworks can be quite detailed.

An example of an analytic framework of the impact of a diagnostic test on health outcomes is shown in Box VI-4.  In particular, this framework presents a series of key questions intended to determine whether testing for a particular genotype in adults with depression entering treatment with selective serotonin reuptake inhibitors (SSRIs) will have an impact on health outcomes.  The framework includes an overarching key question about the impact of the test on outcomes, as well as a series of linked key questions about the accuracy of the test; its ability to predict metabolism of SSRIs, efficacy of SSRIs, and risk of adverse drug reactions; the test’s impact on treatment decisions; and the ultimate impact on health outcomes.

Box VI-4. Analytic Framework: CYP450 Genotype Testing for Selective Serotonin Reuptake Inhibitors

(Figure: analytic framework diagram tracing the pathway from CYP450 genotype testing in adults entering SSRI treatment for nonpsychotic depression, through treatment decisions, to health outcomes.  The numbered key questions listed below annotate links in this pathway.)

The numbers above correspond to the following key questions:

  1. Overarching question: Does testing for cytochrome P450 (CYP450) polymorphisms in adults entering selective serotonin reuptake inhibitor (SSRI) treatment for nonpsychotic depression lead to improvement in outcomes, or are testing results useful in medical, personal, or public health decision-making?
  2. What is the analytic validity of tests that identify key CYP450 polymorphisms?
  3. Clinical validity: a: How well do particular CYP450 genotypes predict metabolism of particular SSRIs? b: How well does CYP450 testing predict drug efficacy? c: Do factors such as race/ethnicity, diet, or other medications affect these associations?
  4. Clinical utility: a: Does CYP450 testing influence depression management decisions by patients and providers in ways that could improve or worsen outcomes? b: Does the identification of the CYP450 genotypes in adults entering SSRI treatment for nonpsychotic depression lead to improved clinical outcomes compared to not testing? c: Are the testing results useful in medical, personal, or public health decision-making?
  5. What are the harms associated with testing for CYP450 polymorphisms and subsequent management options?

Source: Teutsch SM, Bradley LA, Palomaki GE, et al. The Evaluation of Genomic Applications in Practice and Prevention (EGAPP) initiative: methods of the EGAPP Working Group. Genet Med. 2009;11(1):3-14.

D. Reassessment and the Moving Target Problem

Health technologies are “moving targets” for assessment (Goodman 1996).  As a technology matures, changes occur in the technology itself or other factors that can diminish the currency of HTA findings and their utility for health care policies.  As such, HTA can be more of an iterative process than a one-time analysis.  Some of the factors that would trigger a reassessment might include changes in the:

  • Evidence pertaining to the safety, effectiveness, and other outcomes or impacts of using the technology (e.g., publication of significant new results of a major clinical trial or a new meta-analysis)
  • Technology (modified techniques, models, formulations, delivery modes, etc.)
  • Indications for use (different health problems, degree of severity, etc.)
  • Populations in which it is used (different age groups, comorbidities, primary vs. secondary prevention, etc.)
  • Protocols or care pathways in which the technology is used that may alter the role or utility of the technology
  • Care setting in which the technology is applied (inpatient, outpatient, physician office, home, long-term care)
  • Provider of the technology (type of clinician, other caregiver, patient, etc.)
  • Practice patterns (e.g., large practice variations)
  • Alternative technology or standard of care to which the technology is compared
  • Outcomes or impacts considered to be important (e.g., quality of life, types of costs)
  • Resources available for health care or the use of a particular technology (i.e., raising or lowering the threshold for decisions to use the technology)
  • Cost (or price) of a technology or its comparators or of the associated episode or course of care
  • Adoption or use of guidelines, payment policies, or other decisions based on the HTA report
  • Interpretation of existing research findings (e.g., based on corrections or re-analyses)

There are numerous instances of moving targets that have prompted reassessments.  For example, since the inception in the late 1970s of percutaneous transluminal coronary angioplasty (PTCA, approved by the US FDA in 1980), its clinical role in relation to coronary artery bypass graft surgery (CABG) has changed as the techniques and instrumentation for both technologies have evolved, their indications have expanded, and as competing, complementary, and derivative technologies have emerged (e.g., laser angioplasty, bare metal and drug-eluting coronary artery stents, minimally-invasive and “beating-heart” CABG).  The emergence of viable pharmacological therapy for osteoporosis (e.g., with bisphosphonates and selective estrogen receptor modulators) has increased the clinical utility of bone densitometry.  Long rejected for its devastating teratogenic effects, thalidomide reemerged for carefully managed use in a variety of approved and investigational uses in leprosy and other skin diseases, certain cancers, chronic graft-vs.-host disease, and other conditions (Richardson 2002; Zhou 2013).

While HTA programs cannot avoid the moving target problem, they can manage and be responsive to it.  Box VI-5 lists approaches for managing the moving target problem. 

Box VI-5. Managing the Moving Target Problem

  • Recognize that HTA must have the capacity to revisit topics as needed, whether periodically (e.g., every two or five years) or as prompted by important changes since preparation of the original HTA report.
  • Document in HTA reports the information sources, assumptions, and processes used. This “baseline” information will better enable HTA programs and other interested groups to recognize when it is time for reassessment.
  • In the manner of a sensitivity analysis, indicate in HTA reports what magnitudes of change in key variables (e.g., accuracy of a diagnostic test, effectiveness of a type of drug, patient adherence rates, costs) would result in a significant change in the report findings (see the sketch following this box).
  • Note in HTA reports any known ongoing research, work on next-generation technologies, population trends, or other developments that might prompt the need for reassessment.
  • Have or subscribe to a horizon scanning or monitoring function to help detect significant changes in technologies, how they are used, or other developments that might trigger a reassessment.
  • Recognize that, as the number of technology decision makers increases and evidence-based methods diffuse, multiple assessments are generated at different times from different perspectives. This may diminish the need for clinicians, payers, and other decision makers to rely on a single, definitive assessment on a particular topic.
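
To illustrate the sensitivity-analysis point in the box above, the following minimal Python sketch assumes a hypothetical cost-effectiveness comparison and computes how far a key variable (here, the QALY gain) could shift before the original conclusion would flip; such a threshold could be stated in an HTA report as an explicit reassessment trigger.  All values are invented for illustration.

    # Hypothetical decision rule: a technology is judged cost-effective if its
    # incremental cost-effectiveness ratio (ICER) falls below a willingness-to-pay
    # (WTP) threshold. All numbers below are illustrative.
    WTP = 50_000           # $ per QALY gained
    added_cost = 12_000    # incremental cost per patient ($)
    base_qaly_gain = 0.40  # QALYs gained per patient in the original HTA

    def icer(qaly_gain):
        return added_cost / qaly_gain

    # The QALY gain at which the ICER equals the WTP threshold; below this
    # point, the original "cost-effective" conclusion would flip.
    flip_point = added_cost / WTP

    print(f"Original ICER: ${icer(base_qaly_gain):,.0f} per QALY")
    print(f"Conclusion flips if the QALY gain falls below {flip_point:.2f} "
          f"({flip_point / base_qaly_gain:.0%} of the original estimate)")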

Aside from changes in technologies and their applications, even new interpretations of, or corrections to, existing evidence can prompt a new assessment.  This was highlighted by a 2001 report of a Cochrane Center that prompted the widespread re-examination of screening mammography guidelines by government and clinical groups.  The report challenged the validity of evidence indicating that screening for breast cancer reduces mortality, and suggested that breast cancer mortality is a misleading outcome measure (Olsen 2001).  More recently, an assessment by the US Preventive Services Task Force of the same issue prompted re-examination of available evidence, the process used by this task force to arrive at its findings, how the findings were transmitted to the public, and how the findings were interpreted by patients and clinicians (Thrall 2010; US Preventive Services Task Force 2009).

Changes in the volume or nature of publications may trigger the need for an initial assessment or reassessment.  A “spike” (sharp increase) in publications on a topic, such as in the number of research reports or commentaries, may signal trends that merit attention for assessment.  However, further bibliometric research is needed to determine whether such publication events are reliable indicators of technology emergence or of moving targets requiring reassessment, i.e., whether actual emergence of new technologies or substantial changes in them or their use has been correlated with such publication events or trends (Mowatt 1997).
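
One simple way such a “spike” rule might be operationalized is to flag a year whose publication count exceeds the mean of prior years by some multiple of their standard deviation.  The following Python sketch illustrates this with invented counts and an arbitrary two-standard-deviation rule; it is not a validated bibliometric method.

    from statistics import mean, stdev

    # Illustrative annual publication counts for one topic.
    counts = {2016: 12, 2017: 15, 2018: 14, 2019: 13, 2020: 16, 2021: 41}

    years = sorted(counts)
    baseline = [counts[y] for y in years[:-1]]  # all years except the latest
    latest_year, latest = years[-1], counts[years[-1]]

    # Flag a spike if the latest count exceeds the baseline mean by more
    # than two standard deviations (an arbitrary illustrative rule).
    cutoff = mean(baseline) + 2 * stdev(baseline)
    if latest > cutoff:
        print(f"{latest_year}: {latest} publications (cutoff {cutoff:.1f}); "
              f"flag topic for possible assessment or reassessment")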

Not all changes require a reassessment, nor must a reassessment entail a full HTA.  A reassessment may require updating only certain aspects of an original report.  In some instances, current clinical practices or policies may be recognized as optimal relative to the available evidence, so that a new assessment would have little potential for impact; in others, the set of clinical alternatives and questions may have evolved so much since the original assessment that it would be more useful to conduct an entirely new assessment than to update the original.

In some instances, an HTA program may recognize that it should withdraw an existing assessment because to maintain it could be misleading to users and perhaps even have adverse health consequences.  This may arise, for example, when an important flaw is identified in a pivotal study in the evidence base underlying the assessment, when new research findings appear to refute or contradict the original research base, or when the assumptions used in the assessment are determined to be flawed.  The determination to maintain or withdraw the existing assessment while a reassessment is conducted, to withdraw the existing assessment and not conduct a reassessment, or to take other actions, depends on the risks and benefits of these alternative actions for patient health, and any relevant legal implications for the assessment program or users of its assessment reports.

Once an HTA program determines that a report topic is a candidate for being updated, the program should determine the need to undertake a reassessment in light of its other priorities.  Assessment programs may consider that candidates for reassessment should be entered into the topic priority-setting process, subject to the same or similar criteria for selecting HTA topics. 

A method for detecting signals of the need to update systematic reviews was validated on a set of reports produced by the AHRQ Comparative Effectiveness Review program.  The method involved applying the literature search strategy of the original systematic review to five leading general interest medical journals plus four to six specialty journals most relevant to the topic.  It also involved a questionnaire asking experts in the field to indicate whether the conclusions of the original review were still valid and, if not, to identify any relevant new evidence and citations.  This information was used to identify reports to be updated.  After the new (i.e., updated) reports were completed, the researchers systematically compared the conclusions of the original and new reports, and found that the determination of priority for updating was a good predictor of actual changes to the conclusions in the updated reports (Shekelle 2014).

Some research has been conducted on the need to reassess a particular application of HTA findings, i.e., clinical practice guidelines.  For example, for a study of the validity of 17 guidelines developed in the 1990s by AHCPR (now AHRQ), investigators developed criteria defining when a guideline needs to be updated, surveyed members of the panels that prepared the respective guidelines, and searched the literature for relevant new evidence published since the appearance of the guidelines.  Using a “survival analysis,” the investigators determined that about half of the guidelines were outdated in 5.8 years, and that at least 10% of the guidelines were no longer valid by 3.6 years.  They recommended that, as a general rule, guidelines should be reexamined for validity every three years (Shekelle, Ortiz 2001).  Others contend that the factors that might prompt a reassessment do not arise predictably or at regular intervals (Browman 2001).  Some investigators have proposed models for determining whether a guideline or other evidence-based report should be reassessed (Shekelle, Eccles 2001).

 

References for Chapter VI

Browman GP. Development and aftercare of clinical guidelines: the balance between rigor and pragmatism. JAMA. 2001;286(12):1509-11. PubMed

Carlsson P, Jørgensen T. Scanning the horizon for emerging health technologies. Int J Technol Assess Health Care. 1998;14(4):695-704. PubMed

Counsell C. Formulating questions and locating primary studies for inclusion in systematic reviews. Ann Intern Med. 1997;127(5):380-7. PubMed

Donaldson MS, Sox HC, Jr, eds. Setting Priorities for Health Technology Assessment: A Model Process. Washington, DC: National Academy Press; 1992. PubMed

Douw K, Vondeling H, Eskildsen D, Simpson S. Use of the Internet in scanning the horizon for new and emerging health technologies: a survey of agencies involved in horizon scanning. J Med Internet Res. 2003;5(1):e6. PubMed

ECRI Institute. AHRQ Healthcare Horizon Scanning System Protocol and Operations Manual: January 2013 Revision. (Prepared by ECRI Institute under Contract No. 290-2010-00006-C.) Rockville, MD: Agency for Healthcare Research and Quality. August 2013. Accessed November 1, 2013 at: https://effectivehealthcare.ahrq.gov/ehc/products/393/886/AHRQ-Healthcare-Horizon-Scan-Protocol-Operations-Manual_130826.pdf.

Eddy DM. Selecting technologies for assessment. Int J Technol Assess Health Care.1989;5(4):485-501. PubMed

EUnetHTA (European network for Health Technology Assessment). EUnetHTA POP Database. Accessed Sept. 1, 2013 at: https://www.eunethta.eu/pop-database/.

Goodman C. The moving target problem and other lessons from percutaneous transluminal coronary angioplasty. In: A Szczepura, Kankaanpää J. Assessment of Health Care Technologies: Case Studies, Key Concepts and Strategic Issues. New York, NY: John Wiley & Sons; 1996:29-65.

Harper G, Townsend J, Buxton M. The preliminary economic evaluation of health technologies for the prioritization of health technology assessments. Int J Technol Assess Health Care. 1998;14(4):652-62. PubMed

Henshall C, Schuller T, Mardhani-Bayne L. Using health technology assessment to support optimal use of technologies in current practice: the challenge of "disinvestment". Int J Technol Assess Health Care. 2012;28(3):203-10. PubMed

Lara ME, Goodman C, eds. National Priorities for the Assessment of Clinical Conditions and Medical Technologies. Washington, DC: National Academy Press; 1990. Publisher free book.

Mowatt G, Bower DJ, Brebner JA, Cairns JA, Grant AM, McKee L. When and how to assess fast-changing technologies: a comparative study of medical applications of four generic technologies. Health Technology Assessment. 1997;1(14):i-vi, 1-149. PubMed

Nasser M, Welch V, Tugwell P, Ueffing E, et al. Ensuring relevance for Cochrane reviews: evaluating processes and methods for prioritizing topics for Cochrane reviews. J Clin Epidemiol. 2013;66(5):474-82. PubMed

National Institute for Health and Care Excellence. Medical Technologies Evaluation Programme. Accessed Dec. 1, 2013 at: http://www.nice.org.uk/aboutnice/whatwedo/aboutmedicaltechnologies/medicaltechnologiesprogramme.jsp.

Olsen O, Gøtzsche PC. Cochrane review on screening for breast cancer with mammography. Lancet. 2001;358(9290):1340-2. PubMed

Packer C, Gutierrez-Ibarluzea I, Simpson S. The evolution of early awareness and alert methods and systems. Int J Technol Assess Health Care. 2012;28(3):199-200. PubMed

Phelps CE, Mooney C. Correction and update on 'priority setting in medical technology assessment.' Medical Care. 1992;30(8):744-51. PubMed

Richardson P, Hideshima T, Anderson K. Thalidomide: emerging role in cancer medicine. Annu Rev Med. 2002;53:629-57. PubMed

Sassi F. Setting priorities for the evaluation of health interventions: when theory does not meet practice. Health Policy. 2003;63(2):141-54. PubMed

Shekelle P, Eccles MP, Grimshaw JM, Woolf SH. When should clinical guidelines be updated? BMJ. 2001;323(7305):155-7. PubMed. Free full text: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1120790/.

Shekelle PG, Motala A, Johnsen B, Newberry SJ. Assessment of a method to detect signals for updating systematic reviews. Syst Rev. 2014;3:13. PubMed. Free full text: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3937021/pdf/2046-4053-3-13.pdf.

Shekelle PG, Ortiz E, Rhodes S, Morton SC, et al. Validity of the Agency for Healthcare Research and Quality clinical practice guidelines: how quickly do guidelines become outdated? JAMA. 2001;286(12):1461-7. PubMed

Stewart KJ, Bacher AC, Turner KL, Fleg JL, Hees PS, et al. Effect of exercise on blood pressure in older persons: a randomized controlled trial. Arch Intern Med. 2005;165(7):756-62. PubMed

Sun F, Schoelles K. A systematic review of methods for health care technology horizon scanning. (Prepared by ECRI Institute under Contract No. 290-2010-00006-C.) AHRQ Publication No. 13-EHC104-EF. Rockville, MD: Agency for Healthcare Research and Quality; August 2013. Publisher free article.

Teutsch SM, Bradley LA, Palomaki GE, et al. The Evaluation of Genomic Applications in Practice and Prevention (EGAPP) initiative: methods of the EGAPP Working Group. Genet Med. 2009;11(1):3-14. PubMed | PMC free article.

Thrall JH. US Preventive Services Task Force recommendations for screening mammography: evidence-based medicine or the death of science? J Am Coll Radiol. 2010;7(1):2-4. PubMed

US Preventive Services Task Force. Screening for breast cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2009;151(10):716-26, W-236. PubMed

Zhou S, Wang F, Hsieh TC, Wu JM, Wu E. Thalidomide-a notorious sedative to a wonder anticancer drug. Curr Med Chem. 2013;20(33):4102-8. PubMed

