That’s the conclusion of a new analysis led by a researcher at the Stanford University School of Medicine.
John Ioannidis, MD, DSc, chief of the Stanford Prevention Research Center, and his colleagues parsed data from more than 85,000 previous meta-analyses (pooled analyses that combine the results of multiple medical trials of a particular intervention and outcome of interest) and determined that most medical interventions have only small or modest incremental effects, but that those effects are frequently overestimated by small studies. The results of the analysis were published Oct. 24 in the Journal of the American Medical Association.
“We’ve seen lots of anecdotal evidence for this,” said Ioannidis. “With almost any new study that claims a very large effect, there’s a follow-up study later down the road that finds otherwise.”
When scientists develop a new way to treat a disease or symptom, they must perform clinical trials to ensure that the new intervention is effective and doesn’t harm patients. In most cases, this leads to a randomized, controlled trial in which some patients receive the new treatment and others fall into a control group, receiving either a placebo or the current best treatment for the condition. Researchers can then compare the outcomes of the two groups. But Ioannidis has been involved in long-standing efforts to improve this process and the ways in which medical research is carried out. And that means understanding the weaknesses of the current system.
Ioannidis wanted to put numbers to the anecdotal evidence suggesting that clinical trials reporting major treatment effects don’t hold up over time. To do this, he and his collaborators — Ralph Horwitz, MD, of Yale University School of Medicine, and Tiago Pereira, PhD, of Oswaldo Cruz Hospital in São Paulo, Brazil — turned to the Cochrane Database of Systematic Reviews, a collection of review articles on health-care studies from around the world. The researchers looked at more than 3,000 reviews that covered 85,000 meta-analyses on medical topics. They searched for trials concluding that a medical intervention had a very large treatment effect, defined as a beneficial or harmful effect at least five times greater than the effect experienced by the control group.
The team discovered that in about 10 percent of the medical topics examined, a very large treatment effect was found in a first study, and in another 6 percent, a very large treatment effect was found only in a later trial. But in more than 90 percent of all those cases, the very large effect disappeared when additional studies or meta-analyses were performed.
“Most of them did not survive to have a truly very large effect,” said Ioannidis, who is also the C.F. Rehnborg Professor in Disease Prevention in the School of Medicine. And the very large effects usually didn’t involve life-threatening aspects of the disease; rather, they were related to minor symptoms, or involved harms from the treatment being tested.
In fact, among all the interventions that the scientists studied involving randomized trials, only one intervention — a treatment known as extracorporeal membrane oxygenation for respiratory failure in newborns — had a very large effect on mortality with high levels of statistical significance and no concerns about its validity. Overall, only 9 percent of the very large effects had high levels of statistical significance without any concerns about their validity.
Ioannidis admits that some interventions likely to have very large effects are never tested in randomized studies — for instance, if a person drops dead in the street, someone will always try to resuscitate them. But for most medical interventions, randomized trials are performed to determine whether these treatments are effective and, if so, how effective they are. When Ioannidis looked into what kinds of trials most often concluded very large treatment effects, one thing stood out.
“Almost always, these very large effects are seen in very small trials — usually with fewer than 20 people in the study having the outcome claimed to be the very large effect,” he said.
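Why do small trials so often produce these inflated results? A quick simulation (not drawn from the study itself; the event rates, sample sizes and fivefold threshold below are illustrative assumptions) shows how often a trial with only a handful of patients per arm can report a "very large" effect purely by chance, even when the true effect is modest:

```python
import random

def estimated_odds_ratio(n_per_arm, p_treatment, p_control, rng):
    """Simulate one two-arm trial and return the estimated odds ratio
    (treatment vs. control). A 0.5 continuity correction keeps the
    estimate finite when a cell has zero events."""
    events_t = sum(rng.random() < p_treatment for _ in range(n_per_arm))
    events_c = sum(rng.random() < p_control for _ in range(n_per_arm))
    no_t, no_c = n_per_arm - events_t, n_per_arm - events_c
    return ((events_t + 0.5) * (no_c + 0.5)) / ((no_t + 0.5) * (events_c + 0.5))

rng = random.Random(0)
# Assume a genuinely modest benefit: event rates of 20% on treatment
# vs. 30% on control, a true odds ratio of about 0.58 -- nowhere near
# a fivefold effect.
fraction_very_large = {}
for n in (10, 100, 1000):
    ors = [estimated_odds_ratio(n, 0.20, 0.30, rng) for _ in range(2000)]
    # Count trials whose *estimated* effect looks "very large"
    # (at least fivefold in either direction).
    fraction_very_large[n] = sum(o >= 5 or o <= 0.2 for o in ors) / len(ors)
    print(f"{n:4d} patients per arm: {fraction_very_large[n]:.1%} "
          f"of simulated trials report a fivefold effect")
```

Under these assumptions, a meaningful share of the 10-patient-per-arm trials report a fivefold effect that simply isn't there, while the 1,000-patient trials almost never do — the same pattern Ioannidis describes.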
This underscores the need for larger studies, Ioannidis said, and means that small studies claiming a large treatment effect should be viewed with skepticism.
“The take-home message is that one should be very careful when a very large treatment effect is seen,” said Ioannidis. The data serve as a reminder to scientists, clinicians, the media and the public that all new studies must be taken with a grain of salt and that results should not be overhyped, he added. Incorrectly publicizing a very large treatment effect can be as much the fault of a scientist who picks and chooses data or ignores the biases inherent to a small study as it is of the media.
“Yes, a large study can cost more than a small study,” Ioannidis said. “But performing a well-done, larger, long-term, randomized study is better than wasting money left and right on very small, mediocre studies.”
A small amount of external funding for the study came from the São Paulo Research Foundation. Information about Stanford’s Department of Medicine, which also supported the work, is available at http://medicine.stanford.edu.
Sarah C. P. Williams is a freelance writer.