Epi Wit & Wisdom Articles

Daubert Decision Seen as a Step Forward in Reducing Junk Science

By C.R. (Reg) Allen Jr.

[Ed. The January issue of the Epi Monitor carried a news item about the Supreme Court decision reinforcing the role of judges as screeners of scientific evidence for reliability and admissibility in court. The original decision establishing that judges have this responsibility was the 1993 Daubert decision, in which epidemiologic evidence was at the heart of the controversy. Epidemiologists and a lawyer interviewed at that time (see the Aug/Sept 1993 issue of the Epi Monitor for comments by Kenneth Rothman, Susan Rose, Robert Morgan, and Shanna Swan) were unsure about the long-term impact of what many then hailed as a landmark decision. In the following article, Reg Allen, a former public health physician turned practicing attorney, gives his overall assessment of the impact the decision has had.]

Junk science has been a growing concern in the courts. The tremendous increase in use of expert witnesses has unfortunately included some individuals willing to offer opinions of dubious value for a fee. Such testimony can have a misleading and prejudicial impact on the jury. Lacking a base in traditional scientific methodology, junk science not only clutters the courts with unreliable information and wastes time and other resources, but it can also undermine the credibility and integrity of the justice system.

Against this backdrop, Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 113 S.Ct. 2786, 125 L.Ed.2d 469 (1993), has proved to be a landmark US Supreme Court case on scientific evidence. The case arose from the consolidation of two suits in which plaintiffs claimed that the mothers’ ingestion of Bendectin, a prescription drug for nausea, caused birth defects. Both the plaintiffs and the defendant proffered experts whose opinions rested on evidence that included epidemiologic studies. The procedural history of the case is complex, but ultimately the exclusion of the plaintiffs’ expert testimony was affirmed.

The court interpreted the Federal Rules of Evidence to require the trial judge to serve as a gatekeeper, screening scientific, technical, or other specialized knowledge for relevance and reliability. The court held that the adjective “scientific” connotes more than subjective belief or ungrounded speculation. The judge is to focus on the basis for an expert’s opinions, not on the expert’s conclusions per se. To assist trial judges in determining the admissibility of proffered scientific evidence, the court listed several “observations,” or reliability factors, that can be considered:

1) whether the scientific knowledge can be tested

2) whether the theory or technique has been subjected to peer review and publication

3) the known or potential rate of error

4) whether the theory or technique is generally accepted within the relevant scientific community

The impact of Daubert has been large, although difficult to quantify. In the more than four years since the decision was handed down, it has been cited in hundreds of subsequent reported cases, and these numbers do not count unreported trial court cases. It has been the subject of or referred to in dozens of law review articles, including both praise and criticism.

Perhaps its greatest impact is the one most difficult to quantify: introducing caution into the selection and use of experts in discovery and at trial. Hearings on motions to exclude an expert may address the reliability factors set forth in Daubert.

As a US Supreme Court case, Daubert is controlling authority in federal cases, but much of its impact results from its being widely followed at the state level. Daubert and its state law progeny have had a major influence on many areas of law, including negligence law. This is of particular importance to epidemiologists, since negligence law includes, inter alia, health provider malpractice, product liability and toxic torts.

Some states have not only embraced Daubert but have extended its reach. In Texas, for example, Robinson v. E.I. du Pont de Nemours, 923 S.W.2d 549 (Tex. 1995), followed Daubert and added two additional reliability factors:

1) the extent to which the technique relies upon subjective interpretation of the expert

2) the non-judicial uses which have been made of the theory.

Further, in Merrell Dow Pharmaceuticals, Inc. v. Havner, 953 S.W.2d 706 (Tex. 1997), the Texas Supreme Court explicitly discussed the role of epidemiologic information in supporting a finding of causation (a topic for another day).

Daubert and its progeny are not complete or necessarily smooth solutions to the problem of junk science. Their requirements may seem burdensome, the judge’s gatekeeping discretion may be an imperfect screen for “good” science, and excluded experts may feel wronged, to name but a few of the critiques. Further law and application will no doubt refine the Daubert approach. Nonetheless, the overall impact has been to foster closer scrutiny of experts and scientific evidence and to move a step forward in reducing junk science.

This line of cases has strong implications for epidemiologists. The growing importance and recognition of epidemiologic evidence increase the possibility that epidemiologists may become involved in litigation, most likely as experts and hopefully not as defendants. As an expert, an epidemiologist might prepare a report; give deposition testimony; and/or give live testimony at trial. An important part of an epidemiologist’s preparation for reports and testimony is understanding and satisfying the Daubert reliability requirements for admissible evidence.

Published April 1998

Epidemiologists and Journalists Gain Insight at Boston University Symposium

A symposium was held at Boston University in early October to provide an opportunity for journalists, particularly those covering nutrition topics, to meet with epidemiologists and obtain a better grasp of what epidemiology is all about. As a result of the meeting, both journalists and epidemiologists gained new insights about one another’s fields, according to Ellen Ruppel Shell, co-director of the BU program on science and journalism which hosted the meeting. An account of the meeting appeared in the New York Times on October 11.

According to Ruppel Shell, the scientists in attendance came to better understand that journalists do not write their stories primarily to educate their readers, but to report the news. This helps explain why journalists do not present all of the caveats that scientists may attach to their findings. This practice has appalled some scientists, according to Ruppel Shell.

The journalists in attendance came to understand that scientists can have agendas beyond financial gain, said Ruppel Shell. Most often, reporters look for ways a report may be tainted by financial considerations. In fact, dogma and strongly held beliefs may pose more of a potential “conflict of interest” for scientists than financial matters do.

What Journalists Like About Epidemiology

Ruppel Shell told the Epi Monitor that epidemiology is fascinating to journalists because:

• It deals with populations

• Its pronouncements appear to have immediate consequences for people as a whole (in contrast to clinical studies which involve only a few subjects)

• Its findings relate to humans (in contrast to basic science studies which tend to involve only animals)

• Its findings frequently appear in the New England Journal of Medicine

• It appears easy to understand (at the surface at least)

• The jargon is limited compared to molecular biology and other sciences.

Published November 1995

Epi Critic Speaks at ISEE Meeting

Gary Taubes, the freelance journalist whose Science article on the “Limits of Epidemiology” irked so many epidemiologists back in 1995 because of his negative portrayal of epidemiology, was an invited speaker at the recent International Society for Environmental Epidemiology meeting in Boston. Speaking in a session on “The Media As A Link To The Community,” Taubes’ presentation was entitled “Epidemiology in the Press: Telling Time by the Second Hand.”

Taubes prefaced his talk by informing the audience that much of his career has been spent investigating bad science; for example, he recently completed a book on cold fusion. He referred to his earlier Science article on the “Limits of Epidemiology” and summarized it by saying that he had criticized epidemiology for looking for subtle effects without the right tools. He likened the efforts of epidemiologists to find health effects to those of investigators who looked for canals on Mars in the days before the right telescopes were invented. In a more recent Science article, “The (Political) Science of Salt” (August 14), Taubes again criticized a large body of epidemiologic work.

Taubes began his talk by revealing his main message, namely that the reason for the persistent conflict between the media and science is that the demands of the scientific process and the demands of the media are incompatible. Put another way, Taubes emphasized that the requirements for a good story and the requirements for good science are at odds.

One criterion for making the news is to claim something at variance with the conventional wisdom. This is the classic “man bites dog” story. In science, consistency is more prized. Bad science is more sensational than good science, and the press favors it because it makes a better story, despite the likelihood of its being proven wrong over the long run. Taubes quoted the British physicist and historian of science John Ziman to the effect that 90% of what is in physics textbooks is true, whereas 90% of what is in physics journals is false. Extending this to epidemiology, Taubes said that even if only 50-75% of the reports in epidemiology journals later prove to be false, the problem is that these reports are likely to be picked up by the press. In physics, such reports are ignored because the public is less interested. Not so in epidemiology.

In an interesting coincidence, Taubes referred to a New York Times story by Jane Brody, published on August 18, 1998, the same day as his talk at ISEE, detailing “Health Scares That Weren’t So Scary.” In this article, Brody describes an American Council on Science and Health report, “Facts Versus Fears,” which reviews the greatest unfounded health scares of the last five decades (www.acsh.org/Publications/reports/factsfears.html). Among the unfounded scares she describes are those about Alar, electric blankets, cellular phones, asbestos in schools, and coffee and pancreatic cancer. This last scare was caused by a 1981 epidemiology report that the same investigators could not subsequently confirm.

Taubes offered no solutions for the limitations he perceives in epidemiology and in the press, nor for reducing their tendencies to conflict. Thus, he closed his talk by saying that trying to arrive at the truth about epidemiology and health from newspapers is like trying to establish the correct time by looking at the second hand on a clock.

Taubes and other speakers in the session took questions at the end. One participant took advantage of the opportunity to chastise Taubes for his 1995 article critical of epidemiology. Taubes responded by saying that numerous epidemiologists had read his article in draft before it was published. He claimed, “I did not write anything in the article that you epidemiologists do not know in your hearts to be true.”

Published August/September 1998

 

 