For most behavioral health professionals, our first encounter with the notion of measuring outcomes occurs during graduate school, in the context of research on psychotherapy effectiveness. Findings on this topic are typically based on a myriad of self-report, therapist-report, psychometric, and survey methods not intended for routine clinical use and apt to yield results not directly comparable to those of other studies. After slogging through a morass of confusing conclusions, the take-away message for many of us is that measurement is practically impossible, given the complexity of the psychotherapeutic change process and the confounds created by endless combinations of patient, therapist, and a host of other variables. On that note, many of us headed off to applied behavioral care careers, leaving the idea of measuring treatment outcomes behind.
Today, perhaps in light of recent evidence that the very act of collecting outcomes data is associated with clinical improvements, some clinicians are now gathering various types of outcomes information not just to satisfy policy or reimbursement requirements, but to improve services and promote consumer engagement. Some of these clinicians maintain that using outcomes data to continuously improve is a professional ethical responsibility. But relatively few go much beyond anecdotal evaluation of their clinical effectiveness; they view the collection of outcomes data as unnecessary or impractical. Many still regard requests for evidence of the results of their work as an unwelcome, and perhaps threatening, invasion of the sanctity of the therapeutic relationship.
The rise of performance measures
When the 1990s heralded the “industrialization” of the behavioral healthcare industry, quantitative evaluation of results established a beachhead in the applied service world. Multidimensional measurement of system performance became a prime ingredient in the explosive growth of the managed care industry. Economic concerns of at-risk managed behavioral care organizations (MBHOs) spawned measurement methods that initially focused on cost and patterns of utilization. While MBHOs quickly became adept at demonstrating their capacity for serving more persons at lower costs, purchasers of MBHO services and their constituents soon also wanted to know what sort of “bang for the buck” they were getting in terms of quality and effectiveness.
During the early 1990s, attempts to gauge MBHO quality relied upon two things:
- Tracking a variety of key performance indicators (KPIs): internally- or customer-defined process measures presumed to reflect quality care and service (e.g., penetration rates, wait times, provider network adequacy, complaints and appeals, claims payments), and
- Tracking satisfaction survey results from consumers and providers.
By the mid-1990s, MBHOs also sought to boost cost-effectiveness by laying the groundwork for what came to be called “provider profiling.” This made it possible for MBHOs to steer more referrals to contracted providers who appeared to best match service delivery needs based upon various ratings and measures (e.g., availability for referrals, average length of stay, timeliness of documentation).
So with KPIs, satisfaction data, and provider profiling as supplements to basic utilization and cost data, MBHOs were able to present convincing quantitative evidence that their internal processes met quality standards, that consumers and providers were generally satisfied with services, and that their systems were delivering clinical services at optimal efficiency.
However, despite all of these advances in performance measurement, one critical question remained unanswered: “Did consumers of behavioral services get better?”
A shift in focus
From the late 1990s through the mid-2000s, various test development, managed care, and even pharmaceutical organizations made early attempts to address the question of clinical effectiveness using patient surveys and provider ratings that sought to assess and compare clinical status at intake and periodically thereafter. Unfortunately, many of these clinical outcomes tools failed to achieve widespread acceptance and use because they were:
- not well validated psychometrically,
- too narrow in clinical focus,
- too time-consuming for routine pre- and post-treatment use,
- limited to certain populations or service settings,
- considered proprietary, and therefore not shareable,
- unable to produce results comparable to those of other measures, or
- unable to produce results that were meaningful to multiple stakeholders.
One promising approach, for example, was advanced by the Ohio Department of Mental Health. In 2001, ODMH led an ambitious effort to implement a large-scale outcomes measurement system specific to behavioral health. ODMH required publicly-funded providers to collect and upload outcomes data across a number of clinically relevant outcome domains (e.g., symptom distress, functioning, safety and health, quality of life, empowerment, satisfaction) for three purposes: