Pediatrician, Writer, And Trained Epidemiologist Provides Tips For Parents
On Gauging The Quality Of Research Studies
Colloquial Language Makes For A User-Friendly Guide
In a brief set of guidelines written in a colloquial style, Amitha Kalaichandran, a pediatrician with training in global epidemiology at Johns Hopkins, has consulted with colleagues and experts to provide a highly readable account for parents of what to watch out for when reading and digesting new research reports.
Kalaichandran’s main
objective is to inject caution into any parental decisions about
changing the behavior or lifestyle of their children or family based
on misleading or overblown research findings.
Among the cautionary
guidelines she advocates are:
1. Correlation does not equal causation.
She calls this a common trap and makes the important point that several questions must be answered, often over several years, before causation can be established.
2. Mice aren’t men.
Some reported results are intriguing, but if the studies are done in animals, it can take years to determine whether the initial findings are relevant for humans.
3. Study quality matters.
According to Kalaichandran, “When it comes to study design, not all
are created equal.” She describes a hierarchy of studies between case
reports and clinical trials and urges parents to be mindful of how the
data were obtained.
4. Statistics can be misinterpreted.
Kalaichandran explains statistical significance as a result unlikely
to have occurred by chance, but she cautions that statistical
significance does not equate to clinical significance and provides an
example for readers.
“Imagine a randomized
controlled trial that split 200 women with migraines into two groups
of 100. One was given a pill to prevent migraines and another was
given a placebo. After six months, 11 women from the pill group and 12
from the placebo group had at least one migraine per week, but the 11
women in the pill group experienced arm tingling as a potential side
effect. If women in the pill group were found to be statistically less
likely to have migraines than those in the placebo group, the
difference may still be too small to recommend the pill for migraines,
since just one woman out of 100 had fewer migraines. Also, researchers
would have to take potential side effects into account.”
She adds, “The
opposite is also true. If a study reports that regular exercise helped
relieve chronic pain symptoms in 30 percent of its participants, that
might sound like a lot. But if the study included just 10 people,
that’s only three people helped. This finding may not be statistically
significant, but could be clinically important, since there are
limited treatment options for people with chronic pain, and might
warrant a larger trial.”
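To make the arithmetic behind these examples concrete, here is a minimal sketch in Python (assuming SciPy is available) that checks the quoted migraine-trial counts with Fisher's exact test and computes the absolute risk reduction. The choice of test is an illustrative assumption, not a method described in the article.

```python
# Minimal sketch, assuming Python with SciPy installed.
# Counts are taken from the article's hypothetical migraine trial:
# 11 of 100 women in the pill group and 12 of 100 in the placebo group
# had at least one migraine per week after six months.
from scipy.stats import fisher_exact

pill_migraine, pill_total = 11, 100
placebo_migraine, placebo_total = 12, 100

# 2x2 contingency table: rows = groups, columns = [migraine, no migraine]
table = [
    [pill_migraine, pill_total - pill_migraine],
    [placebo_migraine, placebo_total - placebo_migraine],
]
odds_ratio, p_value = fisher_exact(table)

# Absolute risk reduction: the clinically meaningful quantity the article points to.
arr = placebo_migraine / placebo_total - pill_migraine / pill_total

print(f"p-value: {p_value:.2f}")               # chance that a gap this size is noise
print(f"absolute risk reduction: {arr:.2%}")   # about one woman in 100 benefits
```

With these particular counts the test returns a large p-value and the absolute benefit works out to about one woman in 100, which illustrates the article's caution: statistical and clinical significance are separate questions answered with different arithmetic.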
5. Bigger is often better.
Kalaichandran quotes John Ioannidis to explain the statistical power of
studies. “Power is telling us what the chances are that a study will
detect a signal, if that signal does exist,” and notes that the
easiest way for researchers to increase a study’s power is to increase
its size. “Simply put, larger studies are more likely to help us get
closer to the truth than smaller ones,” she says.
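As a rough illustration of that relationship, here is a minimal sketch in Python (assuming NumPy and SciPy) that simulates many hypothetical two-arm trials and estimates how often a simple two-proportion test detects a real effect at different sample sizes. The event rates and group sizes are illustrative assumptions, not figures from the article.

```python
# Minimal sketch, assuming Python with NumPy and SciPy installed.
# A treatment truly lowers an event rate from 20% to 12%; we count how often
# a two-proportion z-test detects that effect at p < 0.05 ("power").
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def detection_rate(n_per_group, p_control=0.20, p_treatment=0.12, trials=5000):
    """Fraction of simulated trials in which the true effect reaches p < 0.05."""
    detected = 0
    for _ in range(trials):
        control = rng.binomial(n_per_group, p_control)
        treatment = rng.binomial(n_per_group, p_treatment)
        p_pool = (control + treatment) / (2 * n_per_group)
        se = np.sqrt(2 * p_pool * (1 - p_pool) / n_per_group)
        if se == 0:
            continue
        z = (control / n_per_group - treatment / n_per_group) / se
        p_value = 2 * (1 - norm.cdf(abs(z)))
        detected += p_value < 0.05
    return detected / trials

for n in (50, 200, 800):
    print(f"n = {n:4d} per group -> power ~ {detection_rate(n):.0%}")
```

Under these assumed rates, the detection rate climbs from roughly one trial in five at 50 participants per group to nearly all trials at 800 per group, which is the sense in which larger studies are more likely to get closer to the truth.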
6. Not all findings apply to you.
In this section, parents are cautioned to examine any selection
factors used in recruiting study subjects such as age, gender, or
ethnicity. These subjects may be different from the average person
reading about the results. She notes how early studies on heart
disease, for instance, were performed primarily on white men.
7. One study is just one study.
No single study is likely to impact medical practice. It takes time to
accumulate a robust body of evidence that leads to solid
recommendations.
8. Not all journals are created equal.
A good way to spot a high-quality journal is to look for one with a high impact factor. Parents are warned against giving weight to findings published in “predatory journals.”
To read the original
article in full, click here:
https://nyti.ms/2STvL0m
■