Falling STAR*D?
January 14, 2012 12:16 PM
Falling STAR*D?: It is common practice for psychiatrists to switch depressed patients between different antidepressants if the current drug does not produce a symptomatic response. Despite the clinical wisdom supporting this, little controlled empirical evidence exists to guide “switching” protocols in the psychopharmacological treatment of depression (e.g. if a patient with Z characteristics is on drug X, is it usually better to switch to drug A, B, or C? Will switching help at all?). The NIMH-funded STAR*D (Sequenced Treatment Alternatives to Relieve Depression) study aimed to address these questions of treatment sequencing in a very large (n > 4000), “real-world” sample, using a multi-step treatment plan with different drugs (and cognitive therapy) at each step to maximize the chances of eventual remission. Overall, the NIMH reported that about 67% of patients eventually achieved remission, with few differences in effectiveness between the different treatments at each step. However, researchers and commentators have raised concerns about inconsistent reporting of outcomes, after-the-fact changes to the study design and analysis, and other issues that may have inflated, partially invalidated, or misrepresented the widely reported treatment outcomes. These irregularities may also have implications for the secondary moderator analyses (i.e. does trait A predict whether switching to X or Y is better?) that were a major rationale for the study.
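(A quick aside on where that headline figure comes from: as I understand the summary reports, the ~67% is a theoretical cumulative remission rate, obtained by chaining the per-step remission rates together on the assumption that every non-remitter moves on to the next step and nobody drops out. A minimal back-of-the-envelope sketch in Python, using the roughly 37%, 31%, 14%, and 13% per-step rates given in the summary papers:

```python
# Theoretical cumulative remission across the four sequential treatment steps.
# The per-step rates below are approximate figures from the summary reports;
# the calculation assumes every non-remitter proceeds to the next step and
# nobody drops out along the way.
step_rates = [0.37, 0.31, 0.14, 0.13]

still_depressed = 1.0
for r in step_rates:
    still_depressed *= 1 - r  # fraction not yet remitted after this step

print(f"theoretical cumulative remission: {1 - still_depressed:.0%}")  # ~67%
```

How well that no-dropout assumption holds is exactly what several of the criticisms below are about.)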
Criticisms of STAR*D include (but are not limited to):
* Switching the primary outcome measure at the study's conclusion from the widely used (for better or for worse) Hamilton Rating Scale for Depression (HRSD) to the Quick Inventory of Depressive Symptomatology (QIDS-SR), ostensibly because high patient dropout prevented collection of HRSD assessments (though robust statistical methods for imputing/estimating missing data in this kind of study exist; see the imputation sketch after this list). While the HRSD was administered by blinded assessors independent of the immediate treatment setting, the QIDS-SR was collected as part of the treatment-guiding process itself, leading to conjecture that there was more pressure on patients to report improvement on the QIDS-SR than on the HRSD. (There is some odd confusion over whether the QIDS-SR was administered in person, as the study protocol would indicate, or over a computerized telephone check-in system.) Notably, the QIDS-SR yielded higher remission rates than the HRSD. Some articles on individual treatment steps did, however, report HRSD outcomes.
* A large number of patients (607) entered treatment with milder depression than the minimum set by the study protocol (i.e. with HRSD scores below the required 14; the study defined remission as an HRSD score of 7 or less), but were treated and included in the study's summary findings. Previous research has suggested that milder depression may respond about as well to placebo as to antidepressant treatment.
* Lack of a placebo control, which limits the ability to determine to what degree apparent remission at each step is spontaneous rather than treatment-driven (i.e. if you remit at step 2, is it because of the new drug, or because depression can remit on its own, especially in more moderate cases?).
* Counting success-trending dropouts in many cases as treatment successes, despite pre-specified guidelines to the contrary for some types of dropout, with the effect of raising reported success rates (see the dropout arithmetic sketch after this list). (Pigott and other critics often take the reverse interpretation to the full extent, equating dropout with lack of clinical efficacy in every case, which may be overly biased in the other direction.) Dropout over the course of the study and its follow-up, though always expected to some degree, was astonishingly frequent in STAR*D (perhaps as high as around 90%).
* Confusion over different STAR*D papers reporting different rates of suicidality for the Celexa (citalopram) step of the study (each based on slightly different samples drawn from the STAR*D cohort), some an order of magnitude higher than others. The higher rates were reported in a paper examining a particular gene variant’s ability to predict suicidality on the drug, and that gene-screening approach was associated with a patent.
* Failure to publish several pre-specified secondary outcome measures (e.g. general assessment of functioning, work productivity) years after the study concluded, raising suspicions that the results on those measures were unfavorable.
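On the missing-assessment point in the first criticism above: here is a rough sketch of the kind of model-based imputation a trial statistician might reach for instead of switching scales. This is a toy, single-pass imputation on simulated scores (a real analysis would more likely use multiple imputation or a mixed-effects repeated-measures model), shown only to illustrate the idea:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 200  # simulated cohort -- not STAR*D data

# Simulated HRSD trajectories: baseline, mid-treatment, and exit scores.
baseline = rng.normal(20, 4, n)
mid = baseline - rng.normal(5, 3, n)
exit_true = mid - rng.normal(4, 3, n)

# Pretend ~30% of exit assessments were never collected (dropout).
exit_obs = exit_true.copy()
exit_obs[rng.random(n) < 0.3] = np.nan

# Impute the missing exit scores from the earlier visits.
scores = np.column_stack([baseline, mid, exit_obs])
imputed = IterativeImputer(random_state=0).fit_transform(scores)

remission = np.mean(imputed[:, 2] <= 7)  # using HRSD <= 7 as the remission cutoff
print(f"estimated remission rate with imputed exit scores: {remission:.1%}")
```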
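And on the dropout-handling point: how much the headline number moves depends largely on who ends up in the denominator and what you assume about the people who left. The counts below are invented purely for illustration (they are not the actual STAR*D step data), but they show the gap between a strict intent-to-treat reading and the kind of “theoretical” cumulative calculation described earlier:

```python
# Invented step-level counts, for illustration only -- NOT the real STAR*D data.
steps = [
    dict(entered=4000, remitted=1400, dropped=800),
    dict(entered=1800, remitted=550, dropped=500),
    dict(entered=750, remitted=100, dropped=250),
    dict(entered=400, remitted=50, dropped=150),
]

initial_cohort = steps[0]["entered"]
total_remitted = sum(s["remitted"] for s in steps)

# (a) Strict intent-to-treat: everyone who entered step 1 stays in the
#     denominator, and dropouts count as non-remitters.
itt_rate = total_remitted / initial_cohort

# (b) "Theoretical" cumulative rate: assume dropouts, had they stayed, would
#     have remitted at the same per-step rate as those who continued.
not_remitted = 1.0
for s in steps:
    step_rate = s["remitted"] / (s["entered"] - s["dropped"])
    not_remitted *= 1 - step_rate
theoretical_rate = 1 - not_remitted

print(f"intent-to-treat remission:        {itt_rate:.1%}")
print(f"theoretical cumulative remission: {theoretical_rate:.1%}")
```

Whether (a) or (b) is the fairer summary is, in essence, what Pigott's group and the STAR*D authors disagree about.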
At least one STAR*D investigator, Dr. Maurizio Fava, may agree with some of these criticisms (see remarks at end). Dr. Fava, while not at all dismissive of the helpful role of psychopharmacology in psychiatry (rather the contrary), has previously raised concerns about the lack of research on the efficacy and side effects of long-term antidepressant use [paywalled, partially summarized here].
Medical journalist and psychotropic drug critic Robert Whitaker (previously) has also blogged his criticisms regarding STAR*D, largely based on the findings of Dr. Pigott’s group above.
A large number of papers have been published using data from the STAR*D study (bibliography incompletely updated), many exploring the contribution of genetic profiles to differences in antidepressant response, with varying degrees of success. And it would be wrong to say STAR*D got everything wrong: for example, one unique and praised feature of the STAR*D trial over other clinical trials, as noted by Neuroskeptic, is the inclusion of patients with co-morbidities and depressive features that would exclude them from more traditional clinical trials. These more complexly symptomatic patients may be more representative of depression in real-world populations than the patients typically enrolled in antidepressant trials. Analyses of the STAR*D data suggest that “clinical trial” patients (who made up a minority of the STAR*D cohort) responded significantly better to treatment than the others. The STAR*D investigators themselves conjectured from this result that standard clinical trials may overestimate the effects of antidepressants.
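For that last comparison, the underlying analysis is essentially a comparison of remission proportions between the minority of patients who would have qualified for a typical efficacy trial and everyone else. A minimal sketch with hypothetical counts (not the published STAR*D subgroup numbers):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table, for illustration only: rows are (would have met
# typical efficacy-trial entry criteria, would not have); columns are
# (remitted, did not remit).
table = np.array([
    [380, 620],    # "clinical trial"-style patients
    [900, 2100],   # patients with comorbidities/other exclusions
])

chi2, p, dof, expected = chi2_contingency(table)
rates = table[:, 0] / table.sum(axis=1)
print(f"remission: trial-eligible {rates[0]:.1%} vs. others {rates[1]:.1%} (p = {p:.2g})")
```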
posted by Keter at 12:17 PM on January 14, 2012 [3 favorites]
Wow, this is a great meaty post!
posted by OmieWise at 1:15 PM on January 14, 2012 [1 favorite]
I don't know where to dig in first! Excellent post!
posted by kamikazegopher at 2:11 PM on January 14, 2012
What a great post. Thanks so much for this; my early evening is officially shot.
posted by downing street memo at 2:16 PM on January 14, 2012
Great post. And as someone who suffers with depression, great news. A 67% remission rate, if real, is encouraging.
posted by Maisie at 3:12 PM on January 14, 2012
Strong work! Thanks for this!
Not 30 minutes ago, I just happened to be reading (latent variable modeling demi-god) Bengt Muthén, et al's article entitled "Growth Modeling With Nonignorable Dropout: Alternative Analyses of the STAR*D Antidepressant Trial". My intent is to finish reading it and fully understand it by the end of the three-day weekend. Fingers crossed.
posted by mean square error at 3:20 PM on January 14, 2012 [1 favorite]
Wonderful post, but as someone who is affected, I have to add this:
"Just so there’s no confusion: Depression is very real and not always treatable. Seriously."
posted by vers at 3:35 PM on January 14, 2012 [1 favorite]
"Just so there’s no confusion: Depression is very real and not always treatable. Seriously."
posted by vers at 3:35 PM on January 14, 2012 [1 favorite]
My biggest concerns about this study are the lack of differentiation between those suffering mild to moderate depression and those with more severe depression, and the short timescale between the steps, with limited long-term follow-up of remission cases from what I can see.
Previous meta-studies have shown that mild to moderate depression responds roughly equally to placebo and medication; however, the more severe the symptoms, the more effective antidepressants are over placebo. More useful information might have been obtained if mild sufferers were separated out from severe or chronic depression - given the existing evidence, different treatment options are arguably going to give different results.
In regards to treatment length: in all the people I know with depression, including myself, there was an initial burst of relief at having a doctor actually confirm the diagnosis; that it's not all just in your head, that what you're suffering is real, not just weakness of character, and is treatable. Merely accepting that you need help and seeking it is a big step towards tackling depression - the very nature of the disease is that it hampers the rational mindset you need to recognise and tackle the underlying causes.
Given my own circumstances (at least two years of depression getting steadily worse beforehand, followed by two years of treatment so far), if I'd been on that trial I would have been a successful remission at the first stage. I responded well to a low dose of fluoxetine at first (probably placebo would have worked too) from the sheer relief at receiving treatment.
However, after several months my symptoms were back near their original level. Neither increasing my dose nor therapy helped much, though given each day is so varied, it takes time to determine what is actually working.
Switching to venlafaxine helped very quickly, though. We have had to increase my dose twice since then, and even with treatment my symptoms are still fairly severe; but then my underlying issues (workload, work stress and related issues in how I cope) haven't significantly changed either.
The drugs help me remain functional, but they don't fix the problem - they just give you time and clarity to understand and plan a bit more, something the dark fog takes away from you: the numbness and inability to make meaningful decisions, the way everything is a massive struggle just to keep going at all.
I suspect that if more talk therapy and CBT don't help, I will probably need to switch to something else, possibly in a cocktail. That certainly seems to be a common story; finding which treatment works is such an individual thing that trying to draw meaningful conclusions from a standard sequence (first drug X, then drug Y) is a bit doomed to failure, I suspect - which the study appears to validate.
Still, 'tis a good post, and encouraging that some combination of therapy and meds did help most people, at least in the short term.
posted by ArkhanJG at 3:46 PM on January 14, 2012 [5 favorites]
Being familiar with large-scale clinical studies where the readout was something simple like "Dead a year later? Yes[ ] No[ ]", and seeing the errors that pros make in study designs (hint: hindsight is 20/20), none of this really surprises me, particularly for something with a purely subjective readout.
What surprised me was how good the big picture analysis at Neuroskeptic was, particularly the "why not include the suicidal in your clinical study" commentary.
posted by Kid Charlemagne at 4:55 PM on January 14, 2012
How about having these brilliant researchers stop wasting time twiddling with meta-drugs, and help invent some real, creative solutions to this phenomenon that is killing lives in today's society.
posted by polymodus at 8:48 PM on January 14, 2012 [1 favorite]
polymodus: Too much effort. Pharmaceuticals are an easy "fix". Especially when the pharmaceutical companies are "helping" so much.
posted by Soupisgoodfood at 8:06 PM on January 15, 2012