The Michigan Society for Psychoanalytic Psychology
Empirically-Validated Treatments Movement: A Practitioner Perspective
F. Levant, Ed.D., J.D.
(President-Elect, American Psychological Association)
I would like to weigh in on the issue of what has been called,
sequentially, “empirically-validated treatments” (APA
Division of Clinical Psychology, 1995),
“empirically-supported treatments” (Kendall, 1998), and
now “evidence-based practice” (Institute of Medicine,
2001). Empirically-validated treatments are a difficult topic for a practitioner to discuss
with clinical scientists. In my attempts to discuss this
informally, I have found that some clinical scientists
immediately assume that I am anti-science, and others emit a
guffaw, asking incredulously: “What, are you for empirically
invalidated treatments?” McFall (1991, p. 76) reflects this perspective when he
divides the world of clinical psychology into “scientific
and pseudoscientific clinical psychology,” and rhetorically
asks “what is the alternative [to scientific clinical
psychology]? Unscientific clinical psychology.” (see
also Lilienfeld, Lohr, & Morier, 2001).
There are, thus, some ardent clinical scientists (e.g., McFall and
Lilienfeld) who appear to subscribe to a scientistic faith, and
believe that the superiority of the scientific approach is so
marked that other approaches should be excluded. Since this is
a matter of faith rather than reason, arguments would seem to
be pointless. Nonetheless, clinical psychologists have argued
over it at length for the last eight years.
From the practitioner perspective, the controversy seems to stem
from the attempts of some clinical scientists to dominate the
discourse on acceptable practice and to impose very narrow views
of both science and practice.
Let me start with a brief recapitulation of the events. Division 12,
under the leadership of then-President David Barlow, formed a
Task Force “to consider methods to educate clinical
psychologists, third party payors, and the public about
effective psychotherapies” (APA Division of Clinical
Psychology, 1995, p. 3). The Task Force came up with lists of
“Well-Established Treatments” and “Probably Efficacious
Treatments.” Not surprisingly, the lists themselves
emphasized short term behavioral and cognitive-behavioral
approaches, which lend themselves to manualization; longer
term, more complex approaches (e.g., psychodynamic, systemic,
feminist, and narrative) were not well represented.
The empirically-validated treatments movement has had quite an
impact on practitioners. It provided ammunition to managed
care and insurance companies to use in their efforts to
control costs by restricting the practice of psychological
health care (Seligman & Levant, 1998). It has also
influenced many local, state, and federal funding agencies, which
now require the use of empirically-validated treatments.
Moreover, this movement could have an even greater impact on
practitioners in the future. For example, it could create
additional hazards for practitioners in the courtroom if
empirically-validated treatments are held up as the standard
of care in our field. Further, adherence to
empirically-validated treatments could become a major criterion
in accreditation decisions and approval of CE sponsors, as the
Task Force urged (APA Division of Clinical Psychology, 1995,
p. 3). Some clinical scientists have gone so far as to call
for APA and other professional organizations “to impose
stiff sanctions, including expulsion if necessary,” against
practitioners who do not practice empirically-validated
assessments and treatments (Lohr, Fowler & Lilienfeld,
2002, p. 8).
Given all of this fallout, it should be no surprise that the Task
Force report was soon steeped in controversy. Critics argued
first and foremost that the Task Force used a very narrow
definition of empirical research. For example,
Seligman and Levant (1998) argued that efficacy research
programs based on RCTs may have high internal validity, but
they lack external or ecological validity. On the other hand,
effectiveness research, such as the Consumer Reports
study (Seligman, 1995), has much higher external validity and
fidelity to the actual treatment situation as it exists in the
community. Additional effectiveness studies are needed, and
could be conducted by the Practice-Research Networks that have
recently appeared (Borkovec,
Echemendia, Ragusea, & Ruiz, 2001).
Finally, others have pointed out that many treatments
have not been studied empirically, and that there is a big
difference between a treatment that has not been tested
empirically, and one that has been tested but not supported by the
evidence. In 1999, John Norcross, then-President of Division 29
(Psychotherapy), countered by establishing a Task Force on
Empirically Supported Therapy Relationships, which emphasized
the person of the therapist, the therapy relationship and the
non-diagnostic characteristics of the patient
(Norcross, 2001). Lambert and Barley (2001) summarized
this research literature, pointing out that specific
techniques (namely those that were the focus of the studies
underlying the Division 12 Task Force Report) accounted for no
more than 15% of the variance in therapy outcomes. On the
other hand, the therapy relationship and factors common to
different therapies accounted for 30%, patient qualities and
extratherapeutic change accounted for 40%, and expectancy and
the placebo effect accounted for the remaining 15%.
Westen and Morrison (2001) reported a multidimensional meta-analysis
of treatments for depression, panic disorder, and GAD in which
they found that “the majority of patients were excluded from
participating in the average study” due to the presence of
comorbid conditions (p. 880). Approximately two thirds of the
patients in the studies they reviewed were excluded, which
seems like a high percentage, but is actually a bit lower than
national figures for comorbidity. Meichenbaum (2003) noted
that fewer than 20% of mental health patients have only one
clearly definable Axis I diagnosis. Thus, the vast majority of
cases seen by practitioners do not meet the exact diagnostic
criteria used in the RCTs that established efficacy for these
treatments. In addition, the empirically-validated treatments on these lists have
typically been studied using homogeneous samples of white,
middle-class clients, and therefore have not often been shown
to be efficacious with ethnic minority clients.
So what does this all mean? Suppose we had lists of
empirically-validated manualized treatments for all DSM Axis I
diagnoses (which we are actually a long way from). We
would then have treatments for only 20% of the white,
middle-class patients who come to our doors, namely those who meet
the diagnostic criteria used in studies that validated these
treatments. That’s bad enough, but that’s not all. In
order to limit services to only these 20% of the white,
middle-class patients who come to us, the average practitioner would
have to spend many, many hours, perhaps years, in training to
learn these manualized treatments. And if we restricted
ourselves to use only these manualized treatments, we would be
limiting our role to that of a technician. And, in the end,
these treatments would only account for 15% of the variance in
therapy outcomes in these patients. One can readily see why
few practitioners have embraced the empirically-validated
treatments movement. My view is that, although one of psychology’s strengths is its
scientific foundation, the present body of scientific evidence
is not sufficiently developed to serve as the sole foundation
for practice. Practitioners must be prepared to assess and
treat those who seek our services. To be sure, we all get
referrals of clients whom we decide to refer to others because
we don’t think that we are the best clinician for that case,
but those of us in general practice have to work with the
clients who come to us. Whether we operate from a single
theoretical perspective or are more eclectic, we bring to bear
all that we know from the empirical literature, the clinical
case studies literature, and prior experience, as well as our
clinical skills and attitudes, to help the client that is
sitting in front of us. This is what is often referred to as
clinical judgement. Some condemn clinical judgement as
subjective. To them I say that clinical judgement is simply
the sum total of the empirical and clinical knowledge and
practical experience and skill which clinicians bring to bear
when it is our job to understand and treat a particular and
unique person.
Fox (2003) goes even further, pointing out that, in many learned
fields, science and practice are often separate endeavors, and
that practice often has to precede science. Physicians were
treating cancer long before they had much of an idea of what
it was, and were using pharmaceutical agents like aspirin long
before the pharmacodynamics were known (Fox, 2003).
I welcome your thoughts on this column. You can most easily
contact me via email: email@example.com
F. Levant, Ed.D., ABPP, is a fellow of Division 39 and APA
President-elect for 2004. He served as the Chair of the APA
Committee for the Advancement of Professional Practice (CAPP)
from 1993-95, a member at large of the APA Board of Directors
(1995-97), and two terms as APA Recording Secretary
(1998-2003). He is Dean of the Center for Psychological Studies. This
article was first published in the Psychologist/Psychoanalyst,
the newsletter of Division 39.
It is reprinted here with permission.
References
American Psychological Association, Division of Clinical Psychology
(1995). Training in and dissemination of empirically-validated
psychological treatments: Report and recommendations. The
Clinical Psychologist, 48, 3-27.
Borkovec, T. D., Echemendia, R. J., Ragusea, S. A., & Ruiz, M. (2001).
The Pennsylvania Practice Research Network and possibilities
for clinically meaningful and scientifically rigorous
psychotherapy effectiveness research. Clinical Psychology:
Science and Practice, 8, 155-167.
P. H. (2003). Remembering our fundamental societal mission. Public
Service Psychology, 28.
Fox, R. E. (2003, August). Toward creating a real profession of
psychology. Paper presented at the Annual Meeting of the
American Psychological Association, Toronto, Ontario, Canada.
J. J., Ringeisen, H. L., & Chambers, D. A. (2002). Clinical
Psychology: Science and Practice, 9, 204-220.
Institute of Medicine (2001). Crossing the Quality Chasm: A New
Health System for the 21st Century. Washington, DC: Author.
Kendall, P. C. (1998). Empirically supported psychological therapies. Journal
of Consulting and Clinical Psychology, 66, 3-6.
Lambert, M. J., & Barley, D. E. (2001). Research summary on the
therapeutic relationship and psychotherapy outcome.
Psychotherapy: Theory/Research/Practice/Training, 38.
Lilienfeld, S. O., Lohr, J. M., & Morier, D. (2001). The teaching of
courses in the science and pseudoscience of psychology: Useful
resources. Teaching of Psychology, 28, 182-191.
Lohr, J. M., Fowler, K. A., &
Lilienfeld, S. O. (2002). The dissemination and promotion of
pseudoscience in clinical psychology: The challenge to
legitimate clinical science. The Clinical Psychologist, 55.
McFall, R. M. (1991). Manifesto for a science of clinical psychology.
The Clinical Psychologist, 44, 75-88.
Meichenbaum, D. (2003, May). Treating Individuals with Angry and
Aggressive Behaviors: A Life-Span Cultural Perspective.
Paper presented at the Annual Meeting of the Georgia
Psychological Association, Atlanta, GA.
Norcross, J. C.
(2001). Purposes, processes, and products of the Task Force on
Empirically Supported Therapy Relationships. Psychotherapy:
Theory/Research/Practice/Training, 38, 345-356.
Seligman, M. E. P. (1995). The effectiveness of psychotherapy. American
Psychologist, 50, 965-974.
Seligman, M. E. P., & Levant, R. F. (1998). Managed care policies rely
on inadequate science. Professional Psychology: Research and
Practice, 29, 211-212.
Westen, D., & Morrison, K. (2001). A multidimensional meta-analysis of
treatments for depression, panic, and generalized anxiety disorder:
An empirical examination of the status of empirically supported
therapies. Journal of Consulting and Clinical Psychology, 69, 875-899.