The latest issue of The British Gestalt Journal features an article reporting the findings of the gestalt CORE project (hereafter Stevens et al). In their own words:
This is the account of a three-year research project within the Gestalt therapy community in the UK. It is an example of clinically-based, mostly quantitative research carried out in a methodical and rigorous way, using voluntary effort and minimum funding. The results can be compared with national databases of similar UK studies and show that Gestalt psychotherapists are as effective as therapists trained in other modalities working in the NHS and in primary care (p22).
The issue of evidence-based therapy is a thorny one in the UK, and one which places Stevens et al’s research into an interesting political category. I have a mixed reaction to these research findings, so I offer that reaction here. I’ll conclude with congratulations to the people who put this research together because (and I apologise for this in advance) it was clearly a hardCORE effort.
NICE
Let’s start with NICE guidance, because it’s not always all that nice. NICE (or The National Institute for Health and Clinical Excellence to its friends) is a laudable organisation with laudable aims.
From its what we do page: “we develop evidence-based guidelines on the most effective ways to diagnose, treat and prevent disease and ill health”.
From its who we are page: “the National Institute for Health and Clinical Excellence (NICE) was set up in 1999 to reduce variation in the availability and quality of NHS treatments and care – the so called ‘postcode lottery’”.
So, NICE exists in order to ensure that someone in BS6 gets the same quality of treatment as someone in S13. This is basically a Good Thing; no one really wants to die or experience extended pain and suffering because their doctor didn’t get the memo. Which of course leads us to how we decide what is effective in any given case, which in turn leads us to evidence-based care. If I’m going to be prescribed medication, I want it to be the most demonstrably effective medication for my condition, not the only one my doctor knows about (or worse, the one whose manufacturer was most effective at marketing to my doctor).
This is all classic medical model stuff. The human body is this really cool machine, and when it breaks or malfunctions, you just identify the faulty part(s) and fix it/them. Since the introduction of that model, infant mortality rates have dropped, diseases that used to be deadly are now fairly routine, and heart transplants are possible. I don’t think anyone really wants to throw out this particular baby.
The bath water, however, is another story. While the medical model allows for miracles when applied to physical conditions, it frequently stalls when applied to issues generally related to the experience of being human. The very idea of mental illness is itself a logical fallacy that presupposes a genuine separation between mind and body. And that leads us into problems because, having presupposed the existence of a distinct ‘part’ called the mind, the medical model then attempts to identify the part of the mind that is malfunctioning in order to fix it. Hence the rise of psychopharmacology and the fallacy of the chemical imbalance model of mental illness.
The conclusion of all this is that, when presented with people whose suffering has more to do with the experience of being human than with their human machinery, NICE continues to assume it is essentially dealing with a machine and seeks standardisation. And that standardisation requires an evidence-base that strips away the humanity of the individual in order to discover the essential underlying problem. Which simply isn’t possible when there is no essential underlying problem; the so-called malfunction is itself an expression of the humanity of the individual.
And this leaves a major problem for gestalt, because the NHS is the main provider of ‘mental health’ services, and its IAPT (Improving Access to Psychological Therapies) programme follows NICE guidance. And NICE guidance strongly favours CBT for treating mental health problems because, as Stevens et al acknowledge, it has a bigger evidence base:
CBT has had the most exposure to clinical trials since it uses set protocols which enable standardised collection of quantifiable data (p22).
Given why NICE was established, this is understandable. From the perspective of the medical model, clinical trials establish efficacy of method, and set protocols allow for standardisation. Try establishing a protocol for ‘sitting with the uncertainty’ or ‘intuiting my client’s desensitisation on the basis of changes in my own bodily experience’. Quite.
So that’s my meander into NICE and evidence-based therapy. What I’m establishing here is the political ground against which Stevens et al’s research needs to be taken: the therapeutic hegemony of CBT.
Research findings, self-validation, and the equivalence paradox
The key finding of this research is that gestalt psychotherapists are shown to be as effective as other types of psychotherapist according to CORE data. Stevens et al used three other studies with which to compare results. I’m not entirely clear why these three particular studies were used or which other studies were passed over for inclusion, so I think the article would have benefited from a literature review. Of the three comparison studies, two with large datasets provide benchmarking material, whilst the third focused on minimally trained mental health counsellors.
My response to these results has been interesting. I was angry at first, in the ‘well tell me something I don’t know and this is just political manoeuvring not real research’ vein that is the clearest sign one of my buttons has been pressed; the animal anger of being prodded somewhere sore. A bit of licking later and I realise I am relieved. As a therapist, I sit with a great deal of doubt. I doubt that I’m effective. I doubt that gestalt is effective. I doubt that psychotherapy is effective. After all, if it’s all essentially placebo, then I’ve spent a lot of money, time and energy training in the interpersonal equivalent of prescribing sugar pills. My anger was masking a more fundamental feeling of ‘thank fuck for that!’. There is something vindicating about seeing quantitative research with headline numbers that appear to prove what I do is effective.
Now, pursuing that need for external validation appears highly antithetical to the gestalt therapy that Perls et al originally set down, and I think that will make this research controversial within the gestalt community. On the one hand, I practise a therapy that emphasises organismic self-regulation and the importance of self-validation over living up to externally imposed standards. On the other hand, the rise of CBT as the NICE-approved therapy of choice is a genuine social challenge that I want to make contact with, not avoid. As Stevens et al point out:
If as Gestalt therapists we do not take seriously the challenge to articulate and evaluate our therapeutic claims we may be left talking only amongst ourselves and limited to working only with those clients who can afford to pay privately (p26).
I have introjected gestalt therapy if I use the ‘but Perls said I should discern things for myself and not be concerned with empirical validation’ line of defence to block contact with a genuine social challenge. If need organises the organism/environment field, then the fact that standardisation and evidence-based practice are organising the field of psychotherapy provision demonstrates the operation of powerful needs. Furthermore, if many gestalt therapists are, like me, not trained in quantitative research methods and don’t have backgrounds in academic psychology, then we are in danger of not contacting the woods for our own projected trees.
All of which serves to focus me on the CORE methodology and what Stevens et al identify as the equivalence paradox: “treatments that have different and incompatible theoretical backgrounds, philosophies and techniques tend to have the same degree of success as measured by CORE” (p24). In addition, the study focusing on minimally trained practitioners showed what I would have hypothesised: that minimally trained practitioners were less effective than professional practitioners.
This equivalence paradox speaks to an intuition in me that for all our different approaches to doing therapy, as practitioners we are all essentially undertaking the same journey only with different preferred routes and ways of travel. Psychotherapy theory seems to have more to do with the preferences of the therapist than it does the effectiveness of the therapy itself. Which is incredibly ironic because it means that, in the arena of working with the human experience, diversity of method actually leads to standardisation of efficacy!
On the other hand, maybe the equivalence paradox is actually an inherent flaw in CORE methodology itself. Perhaps the statistics are simply recording client expectation of what should be the case rather than effectively measuring what actually is the case. A key question there would be whether the clients in the minimally trained practitioner study knew that their therapists were minimally trained and adapted their expectations accordingly.
Politics and dialogue
Where I seem to end up with all this is a sense that Stevens et al have provided gestalt practitioners in the UK with a valuable opportunity to take part in a national conversation. That makes this research political, as it has more to do with positioning gestalt in the professional field than it does with developing gestalt practice. Both are valid reasons for research.
Ultimately, politics is an opportunity for dialogue about how key collective issues are to be addressed. And dialogue, as I am keenly aware as a gestalt practitioner, necessitates a willingness to open to contact with an other in such a way that risks being forever changed by the process.
And by definition that works both ways. Stevens et al used the CORE methodology not because it is finely attuned to the needs of gestalt practitioners; the writers acknowledge that there is no gestalt therapy box on the forms that need to be filled in (p23) and that filling out forms every session is quite alien to gestalt’s relational approach (p26).
Rather, the CORE methodology was used because it is a well-established outcome measure that allows for comparison with many other studies, including CBT (p22). In therapeutic terms, Stevens et al decided to learn and use the language of the people they are trying to reach rather than impose their own. Once gestalt therapy as a profession is part of that ongoing conversation, we can then also deconstruct introjections about what should count as an evidence-base and better dispel projections about gestalt therapy as an approach.
Finally, what this also highlights for me is the politics of research as an activity in itself. Having conducted my own qualitative research, I’ve needed to address the issue of researcher bias and the various political and philosophical assumptions that underlie research as an activity. Basically, who asks what questions and why?
With respect to the gestalt CORE project, the GPTI (Gestalt Psychotherapy and Training Institute) discussion list was the birthplace of the CORE project, growing out of “gestalt therapists’ concern to find a way to research the effectiveness of their work” (p22). GPTI also funded costs associated with CORE software and training for the first year to get the project started. Gestalt therapists volunteered to take part in data collection, and The British Gestalt Journal hosted a recruitment page with information about the project and downloadable forms.
One way of looking at this research, then, is this: a group of gestalt therapists, with funding from a gestalt therapy organisation and assistance from a gestalt therapy journal, have conducted research whose findings claim gestalt therapy is as effective as other therapies. This appears to justify one of Babette Rothschild’s favourite phrases: outcome-focused research is some of the most biased research there is.
I think this bias is real and would hopefully be recognised by Stevens et al. However, I don’t think that recognition of vested interests damages the research. Rather, I think it opens up nicely the same question of vested interests for outcome-focused research in general, and CORE studies in particular. After all, “the CORE measurement is primarily designed to provide managers and practitioners with evidence of service quality and effectiveness” (p23). The whole point is to demonstrate the level of effectiveness, not to falsify it.
Consequently, the inevitable criticism that can be levelled at Stevens et al (i.e. your research is biased by the vested interests of those concerned) applies equally to other studies and only serves to demonstrate the impossibility of the neutral researcher; the motivation to research has to come from somewhere.
Congratulatory note
What I hope this post portrays is my attempt to reconcile my own thoughts and feelings about the possibility of evidence-based therapeutic practice with my recognition that this is an important piece of research for gestalt therapy.
My congratulations go out to Christine Stevens, Jane Stringfellow, Katy Wakelin, and Judith Waring for putting together this research. This was research conducted by volunteers, outside of academia, and with minimal funding. That is in contrast to CBT as an approach, which has an easier time gathering an evidence-base precisely because there is a greater abundance of resource aimed at producing that evidence-base.
In conclusion, I think those involved can consider this to be three years well spent, and I’ve enjoyed the challenge of chewing over this research.
Article reference:
Stevens, C., Stringfellow, J., Wakelin, K. and Waring, J. (2011). The UK Gestalt psychotherapy CORE research project: the findings. The British Gestalt Journal, 20(2), pp. 22-27 (online ref).