The U.S. Preventive Services Task Force (USPSTF) recently reported that there is no convincing evidence to support routine screening for ovarian cancer (NY Times article). That announcement followed similar findings that failed to support both routine prostate-specific antigen (PSA) tests and routine mammograms. Until these reports were published, routine screening for ovarian, prostate, and breast cancers was well accepted. Each test was considered a safe and effective way of detecting cancer early and saving lives. How did we get from “routine” to “not advised” so quickly? Was there something wrong with the science? Can we trust medical research if advice changes so dramatically? As clinicians (and patients), how are we to make decisions for our patients, ourselves, and our loved ones? Who can you trust?
What we’re observing is evidence-based practice (EBP) playing out on the national stage in real time. As most of you know by now, EBP is the “…conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.” “It’s about integrating individual clinical expertise and the best external evidence” (Sackett, 1996). The quality of clinical evidence is graded on a hierarchy from the lowest level of evidence (expert opinion) to the highest (systematic reviews of randomized controlled trials). Adoption of the PSA test, for example, may have been influenced by the results of early studies suggesting that cancer cells can be detected early and, if left untreated, will spread throughout the organ and metastasize beyond it, eventually leading to death. The results of those early studies and the recommendations that followed may have been compromised by limitations in study design, such as the nature of the sample (too small or too homogeneous) and the failure to follow the treated and untreated men long enough to determine the long-term costs and benefits of treatment vs. ‘watchful waiting’. As we gathered more information about the course of prostate cancer through subsequent studies, the evidence suggested that the overall survival rate for most men with positive PSA tests was no different whether they were treated or not (USA Today article).
The issues associated with screening for disease are an excellent example of the nature of medical research. Each experiment builds on the findings of those before it. No single study or finding should be accepted until it has been replicated by other investigators in other laboratories. Remember, the results of any clinical study are based on a sample; the results will generalize to the broader population only to the extent that the sample represents the population of individuals with the disorder under investigation. Often, promising results reported in an early investigation of a drug or intervention are not supported by subsequent research. Jonah Lehrer describes this phenomenon as the Decline Effect.
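To see why the decline effect is almost built into the way findings get noticed, consider a small, purely illustrative simulation in Python (the effect size, sample sizes, and significance threshold below are hypothetical choices, not values drawn from any study mentioned here). When a true effect is modest and early studies are small, the only early studies that reach statistical significance are the ones that, by chance, overestimate the effect; larger replications then report something closer to the truth, and the effect appears to “decline.”

```python
# Illustrative simulation (hypothetical numbers, not from any study cited here):
# when a true effect is modest and early studies are small, only the studies that
# overestimate the effect by chance reach significance, so the "published" effect
# later appears to decline in larger replications.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2            # modest true benefit, in standard-deviation units
n_small, n_large = 20, 200   # subjects per group: early study vs. large replication

published_effects = []
for _ in range(5000):
    treated = rng.normal(true_effect, 1.0, n_small)
    control = rng.normal(0.0, 1.0, n_small)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05 and t > 0:   # only "positive" early studies tend to get noticed
        published_effects.append(treated.mean() - control.mean())

replication = (rng.normal(true_effect, 1.0, n_large).mean()
               - rng.normal(0.0, 1.0, n_large).mean())

print(f"True effect:                                  {true_effect:.2f}")
print(f"Average effect in 'published' small studies:  {np.mean(published_effects):.2f}")
print(f"Effect seen in one large replication:         {replication:.2f}")
```

Run a few times, the “published” small-study estimates cluster well above the true effect, while the large replication does not; no misconduct is required for early findings to shrink on follow-up.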
But what does this have to do with audiology? Plenty! Our knowledge of auditory disorders and interventions is constantly growing. Just one of our journals, published this past month, reports the results of studies on tinnitus treatment, unilateral vs. bilateral cochlear implants, dead regions, bone-anchored hearing aids, evoked potentials, and directional microphones. Add to those the results of research reported in other audiology and otology peer-reviewed and trade journals, e-publications, and conference proceedings, and we can begin to appreciate the burden placed on the clinician to keep up with the literature and make the best decisions for her or his patient. The audiologist needs to be a knowledgeable consumer of the research. Such knowledge does not require a Ph.D. – just a healthy dose of skepticism and a good eye for quality research. The skepticism keeps us from enthusiastically embracing a finding before it falls victim to the ‘decline effect’. When reading an article, ask yourself a few questions: How did the investigators allocate subjects to the treatment and non-treatment groups (randomization)? Were there enough subjects to detect a treatment effect if one truly existed (sample size; see the sketch after this paragraph)? Did the subjects and investigators know which treatment the subjects were receiving (blinding)? Did the investigators follow the subjects long enough to see an effect (length of treatment)? And did the investigators account for the subjects who dropped out (intention to treat)? Evidence-based practice is a three-legged stool: the needs and values of the patient, the knowledge and expertise of the clinician, and the application of quality research to clinical decisions. Without any one of these legs, the effectiveness of our treatment will be compromised.
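For the sample-size question in particular, a rough power check can tell you whether a null result is informative or simply reflects too few subjects. Here is a minimal sketch, again with a hypothetical standardized effect size and group sizes chosen only for illustration:

```python
# Back-of-the-envelope power check (hypothetical effect size and group sizes,
# chosen only for illustration): would a study this size have had a reasonable
# chance of detecting a real treatment effect?
import numpy as np
from scipy import stats

def estimated_power(effect_size, n_per_group, alpha=0.05, n_sims=5000, seed=1):
    """Simulated power of a two-sample t-test for a standardized effect size."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        treated = rng.normal(effect_size, 1.0, n_per_group)
        control = rng.normal(0.0, 1.0, n_per_group)
        if stats.ttest_ind(treated, control).pvalue < alpha:
            hits += 1
    return hits / n_sims

# A moderate benefit (d = 0.5) studied with 15 vs. 64 subjects per group:
print(f"15 per group: power is about {estimated_power(0.5, 15):.2f}")   # roughly 0.25
print(f"64 per group: power is about {estimated_power(0.5, 64):.2f}")   # roughly 0.80
```

A study that enrolled 15 subjects per group and found “no significant benefit” has told us very little; the same result from 64 subjects per group is far more persuasive.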
Harvey Abrams is the Director of Audiology Research at Starkey Laboratories. Prior to joining Starkey, Harvey served in a number of clinical, research, and administrative capacities with the Department of Veterans Affairs and the Department of Defense. He received his undergraduate degree from the George Washington University and his master’s and doctoral degrees from the University of Florida. Harvey instructs distance learning courses for the University of Florida. His research has focused on treatment efficacy and improved quality of life associated with audiologic intervention. He has authored and co-authored several recent papers and book chapters and is a frequent lecturer on the topics of outcome measures, health-related quality of life, and evidence-based audiologic practice.
References:
Sackett, D. (1996). Evidence based medicine: what it is and what it isn’t. BMJ, 312(7023), 71–72.