Epi Wit & Wisdom Articles

Epidemiology Department Tackles Science Article (3 of 6)

Special report by Jonathan Samet

The Department of Epidemiology of the Johns Hopkins University School of Hygiene and Public Health discussed the July 1995 Science article by Gary Taubes in one of its weekly departmental seminars. There was great interest in the article and a very lively discussion, which focused on 10 points of criticism of epidemiology that could be identified in the article. They are:

1) Epidemiologic studies are conflicting.

2) Epidemiologic studies exaggerate risks.

3) Epidemiologic studies cannot address “weak risks.”

4) Epidemiology is an observational science.

5) The randomized trial is the “best” design.

6) Bias is a plague in epidemiologic studies.

7) Exposure assessment is an obstacle.

8) Case-control studies are inherently flawed by the difficulty of selecting controls.

9) Epidemiologic studies can only address relative risks above 3 - 4.

10) Even consistency does not necessarily help in interpreting evidence.

The seminar participants noted that these were not newly identified limitations of epidemiology; most of these concerns have long been recognized and debated among epidemiologists. Still, most attendees were surprised by the article’s pervasively critical tone, and some by its publication.

Much epidemiologic research is, by its nature, observational, but that does not necessarily mean that the randomized trial is the design of choice.

The topics of bias and control selection have received substantial attention from epidemiologists, as have methods for exposure assessment. In fact, the new field of environmental exposure assessment now complements environmental epidemiology.

The article and a number of the quoted epidemiologists cited arbitrary lower bounds of relative risk, below which the findings of epidemiologic studies lose credibility. While uncontrolled confounding or other forms of bias have potentially more serious effects at lower levels of relative risk, some exposures to risk factors for disease would be expected to have effects that might be judged as “weak.” In assessing the credibility of lower levels of relative risk, consideration should be given to the level of risk anticipated on a biological basis rather than to arbitrary boundaries.
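The magnitude argument above can be made concrete with a back-of-the-envelope calculation. The sketch below is not from the article and all parameter values in it are hypothetical; it applies the standard bias-factor formula for a single binary unmeasured confounder to show that a plausible confounder can, on its own, mimic a “weak” association but cannot account for a relative risk of 3 or 4.

    # Hypothetical illustration (not from the article): how large an apparent
    # relative risk an unmeasured confounder alone could generate, using the
    # standard bias-factor formula for a binary confounder.
    # All parameter values below are assumptions chosen for illustration.

    def confounded_rr(p_exposed, p_unexposed, rr_confounder_disease):
        """Apparent exposure-disease RR produced solely by confounding,
        assuming the true RR is 1.0.

        p_exposed             -- prevalence of the confounder among the exposed
        p_unexposed           -- prevalence of the confounder among the unexposed
        rr_confounder_disease -- risk ratio of the confounder on the disease
        """
        numerator = p_exposed * (rr_confounder_disease - 1) + 1
        denominator = p_unexposed * (rr_confounder_disease - 1) + 1
        return numerator / denominator

    # A moderately strong, unevenly distributed confounder...
    apparent = confounded_rr(p_exposed=0.5, p_unexposed=0.2,
                             rr_confounder_disease=3.0)
    print(f"Apparent RR from confounding alone: {apparent:.2f}")  # ~1.43

    # ...can mimic a weak association (RR around 1.4) but falls far short of
    # explaining an observed RR of 3-4, which is the intuition behind the
    # magnitude thresholds discussed in the article.

Under these assumed numbers, confounding alone yields an apparent relative risk of about 1.4, which illustrates why weak associations demand closer scrutiny of bias and of the risk anticipated on a biological basis, rather than reliance on a fixed cutoff.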

The participants acknowledged that the article had elements of “truth” about epidemiology and epidemiologists. Interpretation of epidemiologic data can be challenging, particularly when investigating poorly measured risk factors for diseases with multifactorial etiology, e.g., diet and cancer. More effective communication with the media and the public is needed as the findings of studies are reported. These skills, and an understanding of the policy implications of epidemiologic research, might become part of training in epidemiology.

The seminar ended with a discussion of next steps: how should the field respond to the article? Participants suggested that little would be gained by responding with letters to Science. There was consensus that advocacy for the field by professional organizations was needed.

Published February 1996
