From our archives - March 2012
Ken Rothman Delivers 12th Annual Saward-Berg Lecture
at the University of Rochester
on the Public Perception of Epidemiology


True or false? If there is a discrepancy between the results of a randomized clinical trial and those of observational studies, you should automatically consider the trial results correct and the observational results suspect.

According to Ken Rothman, Distinguished Fellow at the Research Triangle Institute and this year's Saward-Berg lecturer at the University of Rochester, the answer is false. If you answered true, you are falling prey to a common misperception about the value of different study designs, Rothman told the Rochester audience. In fact, randomized trials fall within the broader rubric of epidemiologic studies and share a broad array of concerns with other epidemiologic studies, he added.

Origins of Skepticism

Rothman traced some of the public's reservations about epidemiologic findings to earlier critics of epidemiologic work such as Ralph Horwitz and Alvan Feinstein, who published papers in the New England Journal of Medicine in the late 1970s questioning the link between estrogen and endometrial cancer. At the time, they blamed detection bias for the association. In later work, they criticized epidemiologic research more broadly, asserting that principles they believed were used to evaluate scientific results in general should also be applied to epidemiology.

Coffee and Pancreatic Cancer

Rothman mentioned another controversial finding, the report of a link between coffee and pancreatic cancer in the early 1980s, which fueled subsequent public skepticism about the value of epidemiology. This skepticism came to be embodied in cartoons such as the one in the Cincinnati Enquirer showing a newscaster selecting the day's Random Medical News from the New England Journal of Panic-Inducing Gobbledygook. The newscaster spins wheels, letting chance alone determine which risk factor, which disease, and which affected population will constitute the day's news.

Cartoons

In a talk that he himself peppered with such cartoons and illustrations to make his point about public perceptions, Rothman showed his audience the now often-used drawing of a tanker truck carrying potentially hazardous liquids, with the following words inscribed on the rear of the tanker: "The scientific community is divided. Some say this stuff is dangerous, some say it isn't."

Rothman also criticized the much-cited 1995 Science article by Gary Taubes, "Epidemiology Faces Its Limits," for feeding public misperceptions about epidemiology by relying on out-of-context quotes and other journalistic devices.

Discrepant Findings

Of course, the main driver of public skepticism about epidemiology is not so much journalists' errors or practices as apparently discrepant findings between epidemiologic studies, or between observational studies and randomized trials. Perhaps the best-known modern example of this type of discrepancy is the much-publicized conflicting set of results on hormone replacement therapy: cohort studies indicated that hormonal therapy could reduce the risk of coronary heart disease, whereas trials showed either no effect or an adverse effect.

Other Explanations

According to Rothman, it was facile to ascribe differences in study results, such as those relating to hormone replacement therapy, to a hierarchy of supposed reliability in study designs, casting doubt on the validity of any findings that did not emanate from randomized trials. He praised the work of Miguel Hernan, who showed that the discrepancies between the Nurses' Health Study and the Women's Health Initiative results could be largely explained by differences in the distribution of time since menopause and length of follow-up. In the case of hormone replacement therapy and CHD, the differences between experimental and non-experimental studies reflected differences across study populations rather than confounding or other internal biases.

Reasons For Discrepancies

In Rothman's view, it is much more productive to ask why discrepant results have been produced than to ascribe them automatically to study design. Reasons that may explain discrepancies between study results include differences in exposures or treatments, misinterpretation based on statistical significance testing, uncontrolled confounding, effect-measure modification, random error, bias from intent-to-treat analysis, and other biases.
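As a rough, hypothetical illustration of one of these mechanisms (not an example from the lecture), the short Python sketch below simulates an unmeasured confounder that makes a treatment with no real effect look protective in a crude observational comparison, while a randomized comparison recovers the null. The variable names and the specific probabilities are assumptions chosen purely for illustration.

    import numpy as np

    # Hypothetical sketch: one unmeasured confounder (a "healthy lifestyle"
    # indicator) raises both treatment uptake and baseline health, so a
    # null treatment appears protective in an observational comparison.

    rng = np.random.default_rng(0)
    n = 200_000

    # Confounder: half the population has the healthier profile.
    healthy = rng.random(n) < 0.5

    # True treatment effect is null; risk depends only on the confounder.
    baseline_risk = np.where(healthy, 0.02, 0.06)

    # Observational study: healthier people are more likely to be treated.
    obs_treated = rng.random(n) < np.where(healthy, 0.7, 0.3)
    obs_outcome = rng.random(n) < baseline_risk

    # Randomized trial: treatment assigned by coin flip, independent of health.
    rct_treated = rng.random(n) < 0.5
    rct_outcome = rng.random(n) < baseline_risk

    def risk_ratio(treated, outcome):
        """Crude risk ratio comparing treated with untreated."""
        return outcome[treated].mean() / outcome[~treated].mean()

    print(f"Observational crude RR: {risk_ratio(obs_treated, obs_outcome):.2f}")
    print(f"Randomized trial RR:    {risk_ratio(rct_treated, rct_outcome):.2f}")

Under these assumed numbers the crude observational risk ratio comes out near 0.7 while the trial risk ratio stays near 1.0, showing how uncontrolled confounding alone can generate the kind of apparent discrepancy described above.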

In fact, observational studies have positive features not found in trials, such as lower cost, larger sample sizes, the ability to examine relatively rare endpoints, fewer ethical barriers, inclusion of a wider range of patients, and evaluation of treatments under real-world rather than artificial trial conditions.

Conclusion

In concluding, Rothman stated that differences across epidemiologic studies have often been ascribed to the study design itself rather than to flaws in the studies. If studies are well conducted and truly address the same question, he said, different approaches should give similar results.
