Report Finds Systematic Reviews Increasing Dramatically In Quantity But Decreasing In Quality
Not All Studies Are Created Equal
A new report has found an astounding 2600% increase in
the publication rate of systematic reviews and meta-analyses on
PubMed over nearly three decades, from 1986 through 2014. The analysis,
by Stanford’s John Ioannidis and published in the September 2016
Milbank Quarterly, found that much of the more recent growth
comes from overseas: Chinese author affiliations accounted for more
than a third of the meta-analyses published in 2014, outpacing the
United States four-fold. Yet despite this surge, Ioannidis
estimates that “only a small fraction of data from empirical
biomedical studies are included in such efforts”, leaving out a vast
portion of potentially relevant information on any given topic. Furthermore,
Ioannidis found that many of the articles retread the same ground,
addressing nearly identical questions with sometimes little
acknowledgement of one another.
Redundancy Example
One example Ioannidis cites is a cluster of
meta-analyses, all published within a four-year span (2008-2012), that examined
the prevention of atrial fibrillation after cardiac surgery. The
first of these reported a non-significant summary effect of the
drugs; the second found a highly significant benefit of statins.
That latter result was then essentially reproduced, again and again,
in the nine subsequent publications.
This repetition is not uncommon. In fact, a survey of
all topics in the Cochrane Database of Systematic Reviews found that
most were covered by more than one published meta-analysis, and
some by as many as 13. And while there is potential value in
replicating or updating the results of these studies, the practice
can confuse even the most well-trained investigators, particularly
when the conclusions differ.
To wit, Ioannidis compared the results of several
meta-analyses ranking the effectiveness and/or tolerability of various
antidepressants and found that a given drug’s rank out of 12 could
vary considerably among them, with some studies even reaching opposite
conclusions.
Questionable Motivations?
The case of antidepressant meta-analyses is
particularly enlightening with respect to some of the problems
facing this type of reporting. The massive amount of money in the
pharmaceutical industry, coupled with the influence many systematic
reviews and meta-analyses have on patients and doctors, can make
these publications effective marketing tools.
For example, Ioannidis identified 185 eligible meta-analyses of
antidepressants published from 2007 through March 2014, 29% of which were
authored by employees of the assessed drug’s manufacturer and nearly
80% of which had some tie to the industry (via sponsorship or conflicts
of interest). Unsurprisingly, nearly all of the
industry-authored articles reviewed the assessed drug favorably and
were more than 20 times less likely than other meta-analyses to
include negative statements about such drugs, despite drawing on the
same primary data.
This brings to light other issues facing the field,
including variations in a study’s selection criteria, statistics, and
synthesis methods, any of which can dramatically influence the final
conclusions, as the sketch below illustrates.
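
To make the point concrete, here is a minimal sketch, using entirely hypothetical numbers, of the common inverse-variance fixed-effect pooling step behind many meta-analyses. It shows how a change in inclusion criteria alone, with no change to any trial's data, can move a pooled result from "no effect" to "significant benefit". This illustrates the general technique only; it is not the method of any study discussed above.

    import math

    def pooled_effect(studies):
        # Fixed-effect inverse-variance pooling: each study is an
        # (effect, standard_error) pair, e.g. a log odds ratio and its SE.
        weights = [1.0 / se ** 2 for _, se in studies]
        pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
        se_pooled = math.sqrt(1.0 / sum(weights))
        return pooled, pooled / se_pooled   # pooled estimate and its z-score

    # Hypothetical log-odds-ratio effects: three small neutral trials
    # plus two larger trials favoring treatment (negative = benefit).
    trials = [(-0.05, 0.30), (0.10, 0.35), (0.02, 0.28),
              (-0.45, 0.15), (-0.50, 0.18)]

    # Strict inclusion criteria admit only the first three trials:
    # pooled effect ~0.02, |z| ~0.1 -- "no significant effect".
    print(pooled_effect(trials[:3]))

    # Broader criteria admit all five trials:
    # pooled effect ~-0.33, |z| ~3.4 -- "highly significant benefit".
    print(pooled_effect(trials))

In this toy example, simply broadening which trials qualify carries the pooled estimate across the conventional |z| > 1.96 significance threshold, which is one way two meta-analyses of overlapping literature can reach opposite conclusions.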
Make Meta-Analysis Great Again
Ioannidis is careful to caution that the criticisms he
raises should not be read as an endorsement of reverting to
nonsystematic reviews. Done properly, he believes, systematic
reviews and meta-analyses can be quite valuable; they should be
conducted by those who have few stakes in the results and no
financial (or other) conflicts of interest. Because methodology is
of utmost importance, transparency can improve matters, and
registering a study protocol in advance can help.
Matthew Page and David Moher, who published a commentary on the Ioannidis
article in the same issue of The Milbank Quarterly, agree.
They state that policies that “enhance transparency and
reproducibility regarding the availability of data and methods for all
research articles... are also likely to improve the credibility of
research articles in the future”, but emphasize that biomedical
researchers, and indeed the entire public health field, need more
training in research methodology.
To address this, Page and Moher suggest formal training
in reporting guidelines such as PRISMA (Preferred Reporting Items for
Systematic Reviews and Meta-Analyses) to improve reporting quality
overall, prevent bias, and reduce “research waste”. They also
propose a model called the “living systematic review” as an
alternative to one-off publications: an initial systematic review
that is updated over time by a community of collaborating
scientists.
In the end, they emphasize that there is no single
solution to this problem, and that fixing the issues outlined above
will take the work of all parties involved, from methodologists and
researchers to journals and publishers. They are hopeful that a
focus on sound science and methodological rigor will improve the
quality of systematic reviews and meta-analyses.
Primary Sources:
Original article: https://tinyurl.com/zhcuo3j
Commentary: https://tinyurl.com/jeynvv4
Cochrane: http://www.cochrane.org
PRISMA: http://www.prisma-statement.org/