I read an article recently about Cognitive Behavioural Therapy and its “evidence base”. The term “Evidence Based” is thrown around these days as a kind of label of approval. You might think it means “proven”, or that scientists have examined the therapy and found that it works – well, when they say it works they mean they found it to be statistically superior to the control group. What they don’t say is whether or not the patients actually get well. And here’s the problem with CBT – a recent review found that 75% of people with depression treated with “CBT” did not become well, even though the statistical analyses applied by the researchers led them to conclude it was “effective”. Can it be called effective if it doesn’t make people well?

CBT researcher Alan Kazdin put it bluntly in the flagship journal of the American Psychological Association:

“Researchers often do not know if clients receiving an evidence-based treatment have improved in everyday life or changed in a way that makes a difference. It is possible that evidence-based treatments with effects demonstrated on arbitrary metrics do not actually help people, that is, reduce their symptoms and improve their functioning.”

It’s strange really. The second half of my career was spent working at Glasgow Homeopathic Hospital, which developed into the NHS Centre for Integrative Care. We worked exclusively with patients with long-term conditions and, for the most part, with those who had failed to find relief through the orthodox approaches of drugs and surgery… or, at least, who had failed to become well again.

We used an in-house assessment tool to measure the patients’ progress. It was a simple scale, 0 to 4, where 0 represented no change, 1 a change which had not made an impact on daily living, 2 a change which had made an impact on daily living, 3 a change which had made a major impact on daily living, and 4 completely well (there was a corresponding scale of 0 to −4 for people who got worse). The person who assessed the change was the patient. The important point about this simple measure was that it was focused on one question: has this therapy been of value to the patient in their daily living? That’s quite a different question from asking what percentage of the patients had a change in their blood lipid levels, their blood pressure, or whatever.
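For readers who like to see the arithmetic, here is a minimal sketch of how patient-rated scores on a scale like this could be tallied to give the kind of proportion quoted below. The ratings in it are invented purely for illustration – they are not our clinic data.

```python
from collections import Counter

# Patient-rated change on the in-house scale described above:
#   -4 to -1 : degrees of worsening
#    0       : no change
#    1       : a change with no impact on daily living
#    2       : a change with an impact on daily living
#    3       : a change with a major impact on daily living
#    4       : completely well
# The ratings below are made up for this example only.
ratings = [2, 0, 3, 4, 1, 2, 0, 3, 2, -1, 4, 2]

counts = Counter(ratings)
meaningful = sum(1 for r in ratings if r >= 2)  # rated 2, 3 or 4
proportion = meaningful / len(ratings)

print(f"Distribution of ratings: {dict(sorted(counts.items()))}")
print(f"Rated 2 or above: {meaningful} of {len(ratings)} ({proportion:.0%})")
```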

Time and time again, our reviews showed that around two thirds of the patients rated a 2, 3 or 4 – in other words, two thirds of the patients experienced a change which had impacted on their daily living.

Yet our approach, our tools and our therapies were rated as “not evidence based”, and year after year the Service was cut back and cut back, whilst at the same time online cognitive therapy programmes expanded on the back of being “evidence based” (even though most patients didn’t become well again).

It’s a great idea to look at evidence – relevant evidence – but the pioneers of Evidence Based Medicine (EBM) said the clinician should take into account the research evidence, their own clinical expertise, and the preferences and values of their patients. How often does that happen?

It’s long past time we stopped rubber-stamping approval on treatments which haven’t been shown to make a difference in most patients’ lives.
