The use of routine outcome monitoring (ROM) is on the rise. In the United States and abroad, regulatory bodies are now mandating the gathering of outcome data as the new “standard of care.”
As agencies rush to implement (often at great cost in time and money), the question remains: just how much does ROM contribute to improved retention and effectiveness?
Over 20 years ago, I began using outcome and alliance scales in my work as a therapist, asking clients at each visit to give me feedback about the quality of our relationship and their experience of progress. Eventually, together with colleagues, I developed two brief measures: the Outcome and Session Rating Scales.
When studies using the scales began to appear in the literature, I was immediately concerned. In my opinion, the results were just “too good to be true.” First, the results were confounded by allegiance effects, having been done exclusively by people with a significant investment in the results. More to the point, however, I was worried that the studies focused on the measures rather than on therapists.
Soon, as I predicted, other studies appeared with far more modest results. And now, a meta-analysis of all studies using the ORS and SRS has been published, confirming that routinely measuring performance improves outcomes, but not as much as reported in the original studies (viz., .27 versus .50).
For those involved in and advocating FIT (Feedback-Informed Treatment), this is an IMPORTANT study. It makes clear that when working in a feedback-informed way, improving effectiveness requires more than administering two measures. Indeed, it’s not really about the measures at all. Rather, it’s about therapists using feedback to identify opportunities for their own professional development.
As my colleague and fellow psychologist, Birgit Valla, is fond of saying, “A stopwatch will not make you a better runner. It’s not about the clock. It’s how you use the information to identify small, specific aspects of your performance that could be improved and then practicing.”
That’s what the team at ICCE and I have been exploring these past 7 years. The latest article summarizing that research was published just this week.
All the best,
Scott
Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence
P.S.: Registration for the Spring Intensives is open. Click on the links below to reserve your spot!
Jean Hornung-Starr says
I agree! When I saw your first sentence, Scott, I said to my husband: “But HE started it!” Thank you for recognizing that feedback is not for the greater glory of marketing the program, but for quietly guiding therapists to know which way to lean in future work, and what areas of work they choose to further develop within themselves. Too often the results are supersized into training programs that are mandated for everyone, thus burdening therapists’ time. When the human aspect of connecting with a client is removed from the equation, therapy becomes a revolving door through which clients exit more hopeless than when they came in. When a therapist’s time (and a client’s time) are respected, there can be time for pondering the meaning behind clients’ presentations. When I walk in the morning, or when I’m driving into work, often a thought will pop up that I want to inquire about with a client. I always remember your video of the therapist in England (I think) who worked mostly from curiosity. There is no substitute for human contact in the process of healing.