New Data Reveal the Full Extent of STAR*D Failure
Psychiatrists tout the STAR*D trial as strong evidence for the use of antidepressant drugs. It was a real-world study of over 4,000 people with depression who were able to receive up to four different trials of antidepressant drugs. The STAR*D researchers reported that over the course of the study, more than two-thirds of the patients had remitted (no longer had depression).
Since the 2006 publication of the STAR*D results in the American Journal of Psychiatry (AJP), however, those researchers have been criticized for misleading the public about the true remission rates in the study.
Now, in a new study, researchers were able to obtain the patient-level data used in the STAR*D. Based on their analysis, the true remission rate—over the year-long trial—was a little more than one-third.
“In contrast to the 67% cumulative remission rate reported in AJP, the actual rate was 35.0% when using the protocol-specified HRSD,” the researchers write.
STAR*D was an open-label, real-world trial of antidepressants conducted at 41 different treatment centers. The study was designed to assess outcomes over the course of one year. There was no placebo control group with which to compare the results.
The study included 4,041 patients with major depressive disorder. They were started on the SSRI citalopram (Celexa), but if they did not respond to that drug, there were three more treatment levels, individualized for each patient. There were 11 drug combinations offered in the study. This was meant to reflect real-world practice, in which patients who don’t respond to a drug are (theoretically) given a different drug until they find something that works.
The 2010 Reanalysis
In a 2010 study, researchers H. Edmund Pigott, Allan M. Leventhal, Gregory S. Alter, and John J. Boren reanalyzed the published results of STAR*D—combining data from various tables and other data reporting—to discover that the original publication had misled the public.
The STAR*D researchers submitted a protocol before conducting the study, which outlined exactly which measures would be used and how they would be reported. In that protocol, remission on the Hamilton Rating Scale for Depression (HRSD) was listed as the primary outcome measure—the main way to tell whether the treatment was successful or not.
However, in their AJP publication of the STAR*D results, the researchers did not include the primary outcome of remission on the HRSD. They simply left this out of the publication entirely. Instead, they reported on a different measure, one that they themselves created: the Quick Inventory of Depressive Symptomatology—Self Report (QIDS-SR).
Crucially, the HRSD was delivered by a third party to ensure that the researchers were blinded to the outcomes, which guards against their biases and the placebo effect. Unlike the HRSD, though, the QIDS-SR was unblinded, meaning that researcher biases and the placebo effect likely inflated the scores.
However, without access to the original patient-level data, it was not possible to see exactly how much this outcome switching affected the results. That’s why the new study—with its finding that only 35% counted as “remitted” on the HRSD—is so important.
In 2010, Pigott, Leventhal, Alter, and Boren documented that 607 of the STAR*D participants had an HRSD score of less than 14 and thus were ineligible to be in the trial because they weren’t very depressed to begin with. Yet, many in this group subsequently scored as remitted during one of the four stages of active treatment, inflating the remission rates.
Moreover, those who remitted and entered the year-long follow-up would not be scored as having “relapsed” unless their scores rose back up to 14 or higher on the HRSD scale. Thus, patients in this group of 607 who weren’t eligible for the trial in the first place could be counted as remitted and non-relapsed at the end of one year, even though, at that point, they were worse than when they entered the study.
And finally, Pigott, Leventhal, Alter, and Boren found that the actual number of people who stayed remitted and continued to the end of the trial was dismal—108 of the 4,041 in the trial, or about 2.7%.
A huge percentage of the STAR*D participants dropped out of the trial. Almost 10% dropped out within two weeks, and over a thousand participants dropped out during their first trial of antidepressants—many of them counted as having “remitted,” even though it is usually people who do poorly or experience adverse effects who drop out of studies.
The 2018 Reanalysis
In 2018, Pigott and other researchers—led by renowned Harvard placebo researcher Irving Kirsch, along with Tania B. Huedo-Medina and Blair T. Johnson—were able to access the patient-level data. They analyzed this data, focusing on just the first antidepressant trial in STAR*D.
Kirsch, Huedo-Medina, Pigott, and Johnson compared these outcomes with those of comparator trials of antidepressants (studies that compare antidepressant drugs against one another, rather than against a placebo, since STAR*D did not have a placebo group).
In comparator trials, the average improvement in HRSD score is 14.8 points. In STAR*D, it was 6.6 points.
In comparator trials, the average remission rate is 48.4%. In STAR*D, it was 25.6%.
In comparator trials, the average response rate is 65.2%. In STAR*D, it was 32.5%.
They add that the antidepressants in STAR*D performed worse than what is typically seen from a placebo group in clinical trials.
The New Reanalysis
In this context, the new reanalysis of patient-level data, which shows that the original STAR*D publication used outcome switching to nearly double the reported remission rate (from 35% to 67%), is a confirmation of the way the original study results misled the public.
The reanalysis was conducted by Pigott and Kirsch, along with Thomas Kim, Colin Xu, and Jay Amsterdam.
According to Pigott, Kim, Xu, Kirsch, and Amsterdam, the highly publicized, inflated outcomes presented in the original STAR*D publication have, for over 15 years, left the public with the incorrect assumption that antidepressant drugs are effective. They argue that this misleading data has led to a failure to search for better interventions that could be more effective.
“Bias in the clinical literature is commonly associated with industry-funded RCTs, not publicly funded ones. Our RIAT reanalysis though documents scientific errors in this NIMH-funded study. These errors inflated STAR*D investigators’ report of positive outcomes,” they write.
“The STAR*D summary article’s claim of a 67% cumulative remission rate was published in 2006. If STAR*D’s outcomes had been reported as prespecified, its model of care would likely have faced much stronger criticism 16 years ago and fuelled a more vigorous search for evidence-based treatment alternatives,” they add.
****
Pigott, H. E., Kim, T., Xu, C., Kirsch, I., & Amsterdam, J. (2023). What are the treatment remission, response and extent of improvement rates after up to four trials of antidepressant therapies in real-world depressed patients? A reanalysis of the STAR*D study’s patient-level data with fidelity to the original research protocol. BMJ Open, 13, e063095. doi:10.1136/bmjopen-2022-063095
The post New Data Reveal the Full Extent of STAR*D Failure appeared first on Mad In America.