
The Outcome of Psychotherapy: Yesterday, Today, and Tomorrow

By: Scott D. Miller, Mark A. Hubble, Daryl L. Chow, and Jason A. Seidel
International Center for Clinical Excellence

Acknowledgement:

The
progress of science is the work of creative minds. Every creative mind that
contributes to scientific advances works, however, within two limitations. It
is limited, first, by ignorance, for one discovery waits upon that other which
opens the way to it. Discovery and its acceptance are, however, limited by the
habits of thought that pertain to the culture of any region and period.—E. G.
Boring

If
we want to solve a problem that we have never solved before, we must leave the
door to the unknown ajar.—Richard P. Feynman

In
1963, the population of the United States was approaching 190 million. The
average worker earned just under $6,000 per annum. A first class stamp cost 4
cents, a gallon of gas, 29. The national debt stood at $310 billion. Around the
country, Americans were tuning into The Beverly Hillbillies, the
nation’s number one rated TV program. ZIP codes were introduced by the U.S.
Postal Service and the Beatles released their first album, Please Please Me.
A war in Vietnam was on, but few knew where the country was or what the
fighting was all about.

In
that year, membership of the American Psychological Association stood at 17,000
(Hilgard, 1987). The Diagnostic and
Statistical Manual (DSM) was 130 pages in length, and listed 106 mental
disorders. Treatment models numbered fewer than 40 (Miller,
Duncan, & Hubble, 1997; Wampold, 2001).
The number of states granting licenses to practice psychology was on the rise.
In August, the same month that Reverend Martin Luther King delivered his “I
Have a Dream” speech from the steps of the Lincoln Memorial, the inaugural
issue of Psychotherapy was published. Three months later, in Dallas,
Texas, President John F. Kennedy was assassinated. In the tumultuous years that
followed, the American experience and identity would be transformed. So, too,
would the field of psychotherapy.

In
the decades preceding Psychotherapy’s appearance, practice was mostly
limited to physicians, and psychoanalysis and psychodynamic approaches
predominated (Frank, 1992; VandenBos, Cummings,
& DeLeon, 1992). Beginning in the 1950s, the prevailing paradigm
came under scrutiny. Researchers within the emerging behavioral school were
harshly critical, challenging the scientific basis of Freudian theories and
concepts. Hans J. Eysenck (1952) published a review of 24 studies
concluding that psychotherapy was not only ineffective, but potentially
harmful. The conclusions provoked considerable public and professional attention,
and were immediately disputed by proponents of psychotherapy (Luborsky,
1954; Rosenzweig, 1954).

Strupp’s
(1963) article in the first issue of Psychotherapy, and Eysenck’s
(1964) response, revisited the still unsettled debate. Although the
efficacy of psychotherapy would remain in doubt for some time to come, the back
and forth between the two sides served to highlight both the “staggering
research problems” (Strupp, 1963, p. 2) confronting investigators
and the “necessity of properly planned and executed experimental studies into
this important field” (Eysenck, 1964, p. 97).

Fifty
years later, much has changed. The U.S. population has increased by 40%. Owing
to the frequent change in the cost of a first class stamp, the printed price
has been replaced with the word, “Forever.” At the time of writing this
article, a gallon of gas fetches $4.50, and the national debt is quickly
approaching $17 trillion. Only two members of the Fab Four are still
alive. Vietnam, once an implacable enemy, is now a trading partner of the
United States, and the two countries conduct joint naval training exercises.

Today,
the American Psychological Association has 137,000 members. Licenses are
required to practice independently as a psychologist in every state. More than
800,000 professionals are able to bill third party payers for mental health
services (Brown & Minami, 2010). The Substance Abuse and
Mental Health Services Administration’s (SAMHSA) Web site lists 145
manualized treatments for 51 of the 365 mental disorders now contained in the DSM.
This volume, in its fourth edition, has reached an astonishing 943 pages. A
fifth edition is in the works, and many psychologists, including the APA
president, are calling for the abandonment of the DSM and transition to the
World Health Organization’s International Classification of Diseases (Bradshaw,
2012; Clay, 2012).

The
principal disagreement between Strupp and Eysenck recorded in the first volume
of Psychotherapy has been resolved. Not only is the efficacy of
psychotherapy well established, but so is its effectiveness in real world
clinical settings (American Psychological Association, 2012; Duncan,
Miller, Wampold, & Hubble, 2010; Wampold, 2001).
Despite the consistent findings substantiating the field’s worth, a significant
question remains unanswered: How does psychotherapy work? In Strupp’s
words (1963, p. 2), the field would “not be satisfied with studies of
therapeutic outcomes until (it) succeed(ed) in becoming more explicit about the
independent variable”—in particular, the contributions made by the client, the
therapist, the treatment method, and commerce between the participants. Here,
debate continues to divide the profession.

Gathered
on one side are those who have long argued that psychotherapy is analogous to
medicine. From this point of view, psychologically informed interventions work
in much the same way penicillin treats infection. The hallmark of their
position is that effective treatments must contain specific ingredients
remedial to the condition being treated. For this group, randomized clinical
trials (RCTs) are the principal means of investigation, the findings of which
are used to generate treatment guidelines, manuals, and lists of “empirically
supported” or “validated” therapies (e.g., Barlow, 2004; Chambless
& Hollon, 1998). They contend that for psychotherapy to advance as a
science, psychologists must operationalize falsifiable hypotheses using
specific methods (discrete independent variables), test those hypotheses, and
teach students those methods that stand up to rigor and replication (Gambrill,
1990; Zuriff, 1985). The critical argument
supporting this approach is that different therapies are differentially effective,
and specific therapies are more effective than nonspecific treatment-as-usual
(TAU).

Exponents
of the other side insist that any suggestion that psychotherapy is comparable with
a medical intervention is grossly inaccurate (Frank & Frank, 1991; Miller,
Duncan, & Hubble, 2004). Instead of focusing on specific methods, they
insist that mechanisms common to all approaches, no matter the theory or
technique, are responsible for change. In addition to the instillation of hope,
provision of a therapeutic rationale, and strategies for achieving change, the
therapeutic relationship is most often cited as one of the most, if not the most, potent
transtheoretical ingredients of psychotherapy (Bachelor & Horvath,
1999; Grencavage & Norcross, 1990; Norcross,
2010). Three converging lines of research are cited in support of
these nonspecific factors as the most significant independent variables
responsible for client change: (1) the absence of differential effectiveness
when specific approaches are directly compared and when researcher allegiance
and other biasing variables are controlled (Wampold, 2001);
(2) dismantling studies that show the contribution of specific techniques to
treatment outcome is negligible (Duncan et al., 2010);
and (3) research showing consistently greater variance in outcomes between
psychotherapists in a given study than between the types of therapy they are
practicing (Benish, Imel, & Wampold, 2008; Beutler
et al., 2004; Crits-Christoph & Mintz, 1991; Crits-Christoph
et al., 1991; Imel, Wampold, Miller, & Fleming,
2008; Kim, Wampold, & Bolt, 2006; Luborsky
et al., 1986; Lutz, Leon, Martinovich, Lyons,
& Stiles, 2007; Okiishi, Lambert, Eggett,
Nielsen, Dayton, & Vermeersch, 2006; Shapiro, Firth-Cozens,
& Stiles, 1989; Wampold & Bolt, 2006; Wampold,
Mondin, Moody, & Ahn, 1997).

The
failure to reach agreement about how psychotherapy works is not without
consequence. To begin, how will the outcome of psychotherapy ever improve if
the two major explanatory paradigms are in continuous dispute and the causal
variables defy consensus? On that score, meta-analytic evidence shows outcome
has changed little over the past 40 years despite overwhelming support of
psychotherapy and a dramatic increase in the number of diagnoses and treatment
approaches (cf., APA, 2012; Smith & Glass, 1977; Wampold,
Mondin, Moody, & Ahn, 1997; Wampold, Mondin, Moody,
et al., 1997).

The
polarization among researchers and inability to answer basic questions about
the internal workings of psychotherapy also undermine the standing of the
profession within the world of health care, especially among consumers.
Nationwide surveys of potential users of psychotherapy find that a clear
majority (77%) doubt its efficacy (APA, 2004; Therapy
in America, 2004). Moreover, although 90% of people report they would
prefer to talk about their problems rather than take medication, use of
psychotropic drugs has continued to rise, whereas visits to psychotherapists
have steadily declined (Duncan, Miller, Wampold, & Hubble, 2010).

Some
contend that the threat to the field’s survival is so grave the profession’s
interest would best be served by setting the scientific issues aside and acting
as though the medical model applies (Nathan, 1997).
“Moving aggressively in the direction of developing and implementing
empirically validated treatment methods,” Wilson (1995)
argues, “would seem imperative in securing the place of psychological therapy
in future health care policy” (p. 163). Doing otherwise, it is claimed, risks
exclusion. Such assertions are entirely understandable. Economic pressures on
practitioners are powerful and real. Without a doubt, debate does not put food
on the table.

For
all that, an equally passionate call comes from the other side. “The
medicalization of psychotherapy,” Wampold (2001, p.
2) protests, “might well destroy talk therapy as a beneficial treatment of
psychological and social problems.” On the face of it, the premise has merit.
Therapy is a fluid, dynamic process, one involving a complex and nuanced series
of interchanges. Forcing clinicians to adopt “truncated and prescriptive”
treatments may well strip therapy of the very interpersonal processes critical
to its success.

To
resolve the predicament in which the profession remains mired, three possible
solutions are immediately apparent. First, both sides can continue to conduct
more of the same type of research in the hope that new findings will emerge
vindicating one, while forcing the other to capitulate. Second, end the problem
by legislative fiat. In effect, owing to the pressing financial and political
considerations, declare a winner, of necessity placing expedience above
science. Third, find a middle way. In this scenario, the two warring camps
finally move to the center, integrating their beliefs and best practices.

On
review, each of these approaches is empirically plausible. It is the case,
though, that if they have not already failed, they seem destined to do so. Taking
each of the three solutions in order, the hope that with the right research
design or line of investigation, a clear victor will come forth is—to put it
bluntly—akin to an alchemist’s optimism. After 50 years, and a massive
expenditure of time, effort, and money, had one side or the other been right,
lead would have been transformed into empirical gold long ago (Duncan
et al., 2010). Numerous replications, meta-analyses, and critiques
supporting both sides have been hailed as high truth on one side, and so much
sound and fury on the other. Few have been sufficiently swayed to give up their
claims or view of the evidence.

The
second solution of defining practice by statute is well underway. In 2009,
Cooper and Aratani (2009) found that 90% of
states were implementing strategies to support the use of “evidence-based
practices” (EBPs). With few exceptions, such efforts have equated EBP with
lists of specific treatments for specific disorders (e.g., Addiction
& Mental Health Services, 2011). In turn, reimbursement has
been made contingent on adherence to officially sanctioned therapies. At
present, one looks in vain for evidence that these policies have ended
divisions among researchers and clinicians regarding what constitutes a “best
practice,” improved either outcome or access to care (Bohanske
& Franczak, 2010), bolstered consumer confidence, or secured financial
stability for clinicians. As for the latter, in the same period, psychologists’
incomes have been in decline (APA Monitor, 2010; Cummings
& O’Donohue, 2008).

Finally,
what of the hope for finding a middle way? If the success of an integrative
movement could be measured by the number of books and articles published,
professional meetings held, or rhetorical eloquence of the advocates, then it
would be reasonable to conclude a new age of cooperation and unity has already
arrived. Of course, this has not happened at all. Far from unifying the
profession, an entirely new movement has come on the scene, burdened by its own
disagreements about what integration actually means and, at street level, how
to put it into practice (Miller et al., 2004; Norcross,
1997). Outside of the laboratory and the halls of academia, theories
and techniques are used idiosyncratically rather than systematically,
accumulated rather than integrated on any level but that of the individual
clinician. Like it or not, that is the reality on the ground.

The Way Out

After
50 years, and little success in deciding how psychotherapy works, we return to Strupp’s
(1963) proposition. Once more, “It seems to me that we shall not be
satisfied with studies of therapeutic outcomes until we succeed in becoming
more explicit about the independent variable” (p. 2). Hands down, for all
concerned, the independent variable of consuming interest has been
psychotherapy—the treatment philosophy, theoretical constructions regarding
etiology and cure, and associated procedures and techniques. Of slightly lesser
interest have been the recipients of care; in particular, their diagnosis or
pathology, personality formation and malformations, life situation,
socioeconomic status, environmental supports and stressors and, in more recent
years, gender and ethnicity.

Although
identified by Strupp (1963), far less attention has been
paid to the contribution of the therapist (Beutler et al., 2004; Kim
et al., 2006; Wampold, 2010).
Doing, performing, and delivering have consistently overshadowed the doer,
performer, and deliverer. Looking past the therapist’s contribution has been
and continues to be an egregious error. Available evidence documents that the
therapist is one of the most robust predictors of outcome among factors
studied. Indeed, the variance of outcomes attributable to therapists (5%–9%) is
larger than the variability among treatments (0%–1%), the alliance (5%), and
the superiority of an empirically supported treatment to a placebo treatment
(0%–4%) (Duncan et al., 2010; Lutz et al., 2007; Wampold,
2005).
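
For readers who want to see how such a figure is arrived at, the sketch below (ours, not drawn from any of the studies cited) estimates the proportion of outcome variance attributable to therapists—in effect, an intraclass correlation—using a one-way random-effects breakdown of a purely synthetic, balanced data set. Every number in the example is hypothetical.

```python
"""Illustrative sketch: share of outcome variance lying between therapists,
estimated from synthetic data with a one-way random-effects ANOVA."""
import random
import statistics

random.seed(1)

# Synthetic, balanced caseloads: 20 therapists, 30 clients each. The spread
# parameters are hypothetical, chosen to put the true therapist share near 6%.
n_therapists, n_clients = 20, 30
therapist_sd, residual_sd = 2.5, 10.0

outcomes = []
for _ in range(n_therapists):
    therapist_effect = random.gauss(0, therapist_sd)
    outcomes.append([random.gauss(therapist_effect, residual_sd)
                     for _ in range(n_clients)])

# Variance components for a balanced one-way random-effects design.
grand_mean = statistics.mean(x for group in outcomes for x in group)
ms_between = n_clients * sum((statistics.mean(g) - grand_mean) ** 2
                             for g in outcomes) / (n_therapists - 1)
ms_within = statistics.mean(statistics.variance(g) for g in outcomes)

var_between = max((ms_between - ms_within) / n_clients, 0.0)
icc = var_between / (var_between + ms_within)
print(f"Share of outcome variance attributable to therapists: {icc:.1%}")
```

Published estimates of the therapist effect are typically derived from multilevel models that accommodate unequal caseloads and client covariates; the simplified calculation above is intended only to make the 5%–9% figure concrete.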

Beginning
in 1997, Garfield and other notable researchers, including Strupp (Strupp
& Anderson, 1997; Luborsky, McLellan, Woody,
O’Brien, & Auerbach, 1985; Luborsky, McLellan,
Diguer, Woody, & Seligman, 1997; Okiishi, Lambert,
Nielsen, & Ogles, 2003), brought the therapist back to the table, in an
emphatic critique of the profession’s focus on treatment models and techniques.
Not surprisingly, for those who believe that psychotherapy is analogous to
medicine, therapist differences are considered a “nuisance variable,” noise to
be filtered out via strict adherence to the treatment protocol. On the other
side, the therapist is not only an interventionist, but also an intrinsic part
of the intervention; not just the delivery mechanism, but an important part of
what is delivered. Effectiveness, it is believed, results from a combination of
therapists’ “desirable personal requisites” (Garfield, 1997, p.
41) and their ability to use whatever methods empower the core conditions
shared by all healing practices (cf., Duncan, 2010).
Simply put, one cannot remove the effect of the therapist without undermining
the therapy.

Strupp
(1963) foresaw the variability between therapists before the
collection of the evidence that confirmed it: “Let us stay, however, with the
method of treatment and consider further its relation to outcomes. For this
purpose let us disregard (what in reality cannot be disregarded) therapist
variables and socioenvironmental factors” (p. 2). Although Eysenck
(1964) emphasized the need for clarity and precision in methods and
measurement, Strupp (1963) grappled with the importance of
the contextual nuances unfortunately reflected in “crude… quasi documentation
which has hopelessly befogged the issue” (p. 2).

Fortunately,
a large body of research outside of psychotherapy now provides a new, clearer
direction that takes into account both the need for clear measurement and the
importance of contextual influences on methodology that drive better outcomes (Colvin,
2008; Ericsson, 2009b; Ericsson, Charness,
Feltovich, & Hoffman, 2006). These findings are less
concerned with the particulars of a given area of performance than how mastery
of any human endeavor is acquired. Across a variety of fields, including
sports, music, medicine, mathematics, teaching, computer programming, and more,
the subject of these studies has been the individual performer, and the
question of interest has been, Why are some better than others?

In
sharp contrast to the field of psychotherapy—with its rival paradigms,
competing schools, and disparate conclusions—investigations reveal a single
underlying trait shared by top performers: deep domain-specific knowledge. In
short, the best know more, perceive more, and remember more than their average
counterparts. The same research identifies a universal set of processes that
both account for how domain-specific knowledge is acquired and furnish
step-by-step directions anyone can follow to improve their performance within a
particular discipline (Ericsson et al., 2006).

In
summary, no matter one’s allegiance, the hope has been that knowing how
psychotherapy works would give rise to a universally accepted standard of care
which, in turn, would yield more effective and efficient treatment. However, if
the outcome of psychotherapy is in the hands of the person who delivers it,
then attempts to reach accord regarding the essential nature, qualities, or
characteristics of the enterprise are much less important than knowing how the
best accomplish what they do.

Looking
to the future, the application of research methods and findings from the field
of expertise and expert performance provides the way out of the field’s current
balkanization and stalemate. Such research is already underway, and the initial
results are informative and provocative (Miller & Hubble, 2011; Miller,
Hubble, & Duncan, 2007; Miller, Hubble, Duncan, &
Wampold, 2010).

The “Road Best Traveled”: Improving Outcomes One Therapist at a Time

A
fundamental finding of the research on superior performance is that talent is
not a function of genetics, degrees earned, title, privilege, or experience. In
short, talent is made. It results from a process of an altogether different
nature, beyond traditional professional preparation and the mere investment of
time.

Informed
by findings reported by researchers (Ericsson, 1996; Ericsson,
2009a, 2009b; Ericsson et al., 2006; Ericsson,
Krampe, & Tesch-Romer, 1993) and writers (Colvin,
2008; Coyle, 2009; Shenk, 2010; Syed,
2010) on the subject of expertise, Miller et al. (2007)
identified three components critical for superior performance. Working in
tandem to create a “cycle of excellence,” these include: (1) determining a
baseline level of effectiveness; (2) obtaining systematic, ongoing, formal
feedback; and (3) engaging in deliberate practice. Each is discussed in turn.

To
be the best requires knowing how one fares in a given practice domain.
Interestingly enough, the exact methods by which top performers determine their
baseline are highly variable, defying any simple attempt at classification and
replication (Miller et al., 2007). What can be said
with certainty is that the best are constantly comparing what they do to their
own “personal best,” the performance of others, and existing standards or
baselines (Ericsson, 2006). Fortunately, in the realm of
psychotherapy, numerous well-established outcome measures are available to
clinicians for assessing their baseline (cf., Froyd & Lambert,
1989; Ogles, Lambert, & Masters, 1996).
Additionally, computerized databases exist that allow therapists to make real-time
comparisons of their results with national and international norms (Lambert,
2012; Miller, Duncan, Sorrell, & Brown, 2005).
It is also worth noting that since the time of the debate between Strupp
(1963, 1964) and Eysenck
(1964), several methods have emerged for operationalizing and
standardizing the concepts of clinical improvement and treatment failure (cf., Hedges
& Olkin, 1985; Jacobson & Truax, 1991; Ogles,
Lambert, & Fields, 2002). Although each conceptualization
and measurement scheme has both benefits and drawbacks, these techniques show a
considerable improvement beyond the “befogged” understandings and
interpretations of 50 years ago (Strupp, 1963).
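
To make the operationalization of improvement and failure concrete, the following sketch implements the reliable-change logic associated with Jacobson and Truax (1991). The instrument statistics used here—pretest standard deviation, test–retest reliability, and clinical cutoff—are hypothetical placeholders rather than values belonging to any particular measure.

```python
# A minimal sketch of the Jacobson & Truax (1991) reliable-change logic.
# The instrument statistics below are hypothetical placeholders.
from math import sqrt

SD_PRE = 7.0        # assumed pretest standard deviation of the measure
RELIABILITY = 0.85  # assumed test-retest reliability of the measure
CUTOFF = 25.0       # assumed boundary between clinical and nonclinical ranges

def reliable_change_index(pre: float, post: float) -> float:
    """RCI = (post - pre) / S_diff, where S_diff reflects measurement error."""
    se_measurement = SD_PRE * sqrt(1 - RELIABILITY)
    s_diff = sqrt(2) * se_measurement
    return (post - pre) / s_diff

def classify(pre: float, post: float, higher_is_better: bool = True) -> str:
    """Combine reliable change with the clinical cutoff to label a case."""
    rci = reliable_change_index(pre, post)
    if abs(rci) < 1.96:                  # change not beyond measurement error
        return "no reliable change"
    improved = rci > 0 if higher_is_better else rci < 0
    crossed = post >= CUTOFF if higher_is_better else post <= CUTOFF
    if improved and crossed:
        return "recovered (reliable and clinically significant change)"
    return "reliably improved" if improved else "reliably deteriorated"

print(classify(pre=18, post=29))  # hypothetical client crossing the cutoff
```

Under this logic, a client counts as “recovered” only when the change is both statistically reliable and carries the score across the clinical cutoff—the distinction on which benchmarking of improvement and failure rests.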

Nevertheless,
though measures and norms are now widely available, surveys indicate that few
clinicians actually use them in their day-to-day work (Phelps,
Eisman, & Kohout, 1998). Indeed, the collection of outcome data of any sort
is rare. Curiously, despite this low use, Bickman and associates (2000)
found in their own survey that a large percentage of
therapists are interested in receiving regular reports of client progress.
Later, Hatfield and Ogles (2004) conducted a survey with a
national sample of licensed psychologists to investigate this discrepancy. As
before, clinicians expressed interest in having reliable outcome information.
Among the reasons given by those choosing not to use outcome measures, the top
two were, “practical (e.g., cost and time) and philosophical (e.g., relevance)
barriers” (p. 485).

Fully
aware of the realities of clinical practice, and in an effort to overcome the
obstacles to routine outcome measurement, Miller and Duncan (2000)
developed, tested, and disseminated two brief, four-item measures (Duncan
et al., 2003; Miller, Duncan, Brown, Sparks,
& Claud, 2003). The first, the Outcome Rating
Scale (ORS), assesses client progress and, when aggregated, can be used to
determine a therapist’s overall effectiveness. The second, the Session
Rating Scale (SRS), measures the quality of the therapeutic relationship, a
key element of effective therapy (Bachelor & Horvath, 1999; Norcross,
2010). Written and oral forms are available at no cost and have been
translated into 20 different languages. Both scales take less than a minute to
complete and score. Owing to their brevity and simplicity, adoption and usage
rates among therapists have been shown to be dramatically higher (89%) as
compared with other assessment tools (20%–25%; Miller, Duncan, Brown,
Sorrell, & Chalk, 2006; Miller et al., 2003).
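
As a rough illustration of how aggregated scores from a brief measure of this kind might be used to establish a therapist’s baseline, the sketch below sums four visual-analogue marks into a 0–40 total and expresses caseload-level change as a simple pre–post effect size. The scoring shortcut, the example caseload, and the choice of effect size are assumptions made for illustration; they are not the official scoring or benchmarking procedure for the ORS.

```python
# Illustrative only: simplified ORS-style scoring and caseload aggregation.
import statistics

def ors_total(marks_cm):
    """Sum four visual-analogue marks (0-10 cm each) into a 0-40 total."""
    if len(marks_cm) != 4 or any(not 0 <= m <= 10 for m in marks_cm):
        raise ValueError("expected four marks between 0 and 10 cm")
    return sum(marks_cm)

def caseload_effect_size(intake_totals, final_totals):
    """Mean pre-post change divided by the SD of intake scores (a d-type index)."""
    changes = [post - pre for pre, post in zip(intake_totals, final_totals)]
    return statistics.mean(changes) / statistics.stdev(intake_totals)

# Hypothetical caseload of five closed cases (intake vs. final-session marks).
intake = [ors_total(m) for m in ([4, 5, 3, 6], [2, 3, 4, 3], [5, 5, 6, 5],
                                 [3, 2, 2, 4], [6, 7, 5, 6])]
final = [ors_total(m) for m in ([7, 8, 6, 8], [5, 6, 6, 5], [6, 6, 7, 6],
                                [4, 5, 4, 6], [8, 8, 7, 8])]
print(f"Baseline effect size for this caseload: "
      f"{caseload_effect_size(intake, final):.2f}")
```

In practice, a baseline of this sort is compared against published norms or severity-adjusted benchmarks rather than interpreted in isolation.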

The
second element in fostering superior performance is obtaining feedback. Howard,
Moras, Brill, Martinovich, and Lutz (1996) were among the first to suggest
that formal routine measurement of client progress could be used for optimizing
treatment. In 2001, Lambert and colleagues (Lambert et al., 2001)
reported results demonstrating that providing feedback to clinicians about
client progress doubled the rate of clinically significant and reliable change,
decreased deterioration by 33%, and reduced the overall number of treatment
sessions. Over the past decade, research has continued and accelerated. For
example, studies involving the ORS and SRS have shown that exposure to feedback
as much as triples the rate of reliable change while cutting deterioration
rates in half (Anker, Duncan, & Sparks, 2009; Lambert
& Shimokawa, 2011; Reese, Norsworthy, &
Rowlands, 2009; Reese, Toland, Slone, &
Norsworthy, 2010). According to Lambert (2010),
“it is time (for clinicians) to routinely track client outcome” (p. 260).

Lambert’s
proprietary outcome management system has been approved as evidence-based by
the Substance Abuse and Mental Health Services Administration National Registry of
Evidence-based Programs and Practices (SAMHSA NREPP). The ORS and SRS, interpretive
algorithms, and normative database, collectively known as “Feedback-Informed
Treatment” (FIT), are currently under review by SAMHSA. In 2012, moreover, the
International Center for Clinical Excellence (ICCE) released a series of six
“how-to” manuals for implementing routine outcome measurement in individual and
agency settings (Bertolino & Miller, 2012).
The process summarized in the manuals conforms to the American Psychological
Association’s (APA) definition of evidence-based practice. Of note, the definition
combines “the integration of the best available research” with clinical
expertise in “the monitoring of patient progress (and of changes in the
patient’s circumstances—e.g., job loss, major illness) that may suggest the
need to adjust the treatment (e.g., problems in the therapeutic relationship or
in the implementation of the goals of the treatment)” (APA
Presidential Task Force on Evidence-Based Practice, 2006,
pp. 273, 276–277).

As
powerful an effect as feedback exerts on outcome, it is not enough for the
development of expertise. As the literature on superior performance shows in
other fields, more is needed to enable clinicians to learn from the information
provided. De Jong, van Sluis, Nugter, Heiser, and Spinhoven (2012)
found, for instance, that not all therapists benefit from feedback. In
addition, Lambert reports that practitioners do not get better at detecting
when they are off track or their cases are at risk for dropout or
deterioration, despite being exposed to “feedback on half their cases for over
3 years” (Miller et al., 2004, p. 16). In effect, feedback
functions like a GPS, pointing out when the driver is off track and even
suggesting alternate routes, while not necessarily improving overall navigation
skills or knowledge of the territory and, at times, being completely ignored.

Learning
from feedback requires an additional step: engaging in deliberate practice (Ericsson,
1996; Ericsson, 2006; Ericsson, 2009a; Ericsson,
Krampe, & Tesch-Romer, 1993). Deliberate practice means
setting aside time for reflecting on feedback received, identifying where one’s
performance falls short, seeking guidance from recognized experts, and then
developing, rehearsing, executing, and evaluating a plan for improvement.
Research indicates that elite performers across many different domains devote
the same amount of time to this process, on average, every day. In a study of
violinists, for example, Ericsson et al. (1993) found that the top
performers had devoted two times as many hours (10,000) to deliberate practice
as the next best players and 10 times as many as the average musician. In
addition to helping refine and extend specific skills, engaging in prolonged
periods of reflection, planning, and practice engenders the development of
mechanisms enabling top performers to use their knowledge in more efficient,
nuanced, and novel ways than their more average counterparts (Ericsson
& Staszewski, 1989).

Turning
to psychotherapy, research on the alliance is illustrative. Studies have consistently
found a moderate, yet robust, correlation between the quality of the
therapeutic relationship and outcome (Baldwin, Wampold, & Imel,
2007; Horvath, Del Re, Fluckiger, & Symonds, 2011).
At the same time, neither training in the alliance nor experience conducting
therapy has proven particularly predictive of clinician effectiveness (Horvath,
2001; Anderson, Ogles, Patterson, Lambert, & Vermeersch,
2009). In attempting to “untangle the alliance–outcome correlation,”
Baldwin et al. (2007) examined a group of 81
clinicians and found that 97% of the difference in outcome between the
practitioners was attributable to therapist variability in the alliance. By
contrast, client variability was unrelated to outcome. The results show that
some therapists are consistently better at establishing and maintaining helpful
relationships than others. Evidence that the difference is attributable to
their possession of deeper domain-specific knowledge can be found in a related
study by Anderson et al. (2009).

In
brief, Anderson et al. (2009) examined therapist effects using
a sample of 25 providers treating clients in a university counseling center.
The clinicians were asked to respond to a series of video simulations to test
for “facilitative interpersonal skills” (FIS). Each simulation presented a
difficult clinical situation, complicated by a client’s anger, dependency,
passivity, confusion, or need to control the interaction. Differences in client
outcomes between therapists were found to be unrelated to therapist gender,
theoretical orientation, professional experience, and overall social skills.
Instead, the best results were obtained by those who exhibited deeper, broader,
more accessible, interpersonally nuanced knowledge as measured on the FIS task.
No matter the client’s presenting problem or style of relating, top performers
were able to respond collaboratively and empathically, and were far less likely to
make remarks or comments that distanced or offended a client.

Acquiring
such understanding, perception, and sensitivity is a common goal for
clinicians. Researchers have found that “healing involvement”—a practitioner’s
experience of engaging, affirming, being highly empathic, staying flexible, and
dealing constructively with difficulties encountered in the therapeutic
interaction—is the pinnacle of therapists’ aspirations (Orlinsky
& Ronnestad, 2005). And yet, the study by Anderson
et al. (2009) suggests that some end up having such
knowledge while others, of equal experience and social ability, do not.

Two
research projects conducted by members of the ICCE community are underway. One is a
randomized clinical trial of deliberate practice applied to training
therapists—a longitudinal study being conducted at the University of North
Carolina Wilmington School of Social Work. Upon entry to the 2-year program,
beginning students are being given a battery of assessments, including (a) the
FIS inventory, a video-interactive tool designed to measure alliance building,
(b) the Values in Action Inventory of Strengths (VIA-IS), which measures
character strengths, and (c) a demographic questionnaire. During their first
year, all students receive the traditional training curriculum. In year two,
students are randomly split into two groups, with group one continuing the
traditional training, and the other, experimental group, receiving the
traditional training plus a program of deliberate practice aimed at improving
trainees’ skills in alliance formation and maintenance (i.e., ongoing
measurement, feedback, and practice opportunities under varying conditions).
The hypothesis of the study is that hours spent in deliberate practice
activities will be more predictive of outcome than participation in traditional
training, clinician character strengths, and other demographic variables. It is
hoped that this RCT will address, in part, Strupp’s (1963)
question regarding the “variance introduced by the person of the therapist
practicing them—his degree of expertness, his personality, and attitudes” (pp.
1–2). Results are not yet available.

The
second research project examines the relationship between outcome and
practitioner demographic variables, work practices, participation in
professional development activities, beliefs regarding learning, and personal
appraisals of therapeutic effectiveness. Although preliminary, results from
this study are in line with earlier research on the factors that account for
expertise. Similar to Anderson et al. (2009) and others (Wampold
& Brown, 2005), therapist gender, qualifications, professional
discipline, years of experience, and time spent conducting therapy are
unrelated to outcome or therapist standing within the study sample. Similar to
findings reported by Walfish, McAlister, O’Donnell, and Lambert (2012),
therapist self-appraisal is not a reliable measure of effectiveness. The
findings also provide preliminary support for the key role deliberate practice
plays in the development of expertise among highly effective clinicians;
specifically, the amount of time therapists reported spending engaged in solitary
activities intended to improve their skills was related to outcome (Chow,
Miller, Kane, & Thornton, n.d.).

In
all, the evidence at hand indicates that the findings from the expertise
literature likely apply to the domain of psychotherapy. Furthermore, the three
activities—knowing one’s baseline, obtaining feedback, and engaging in
deliberate practice—likely provide the means for achieving the gains in outcome
that have for so long eluded the field. If the results reported here hold up to
further investigation, it would suggest that a shift in focus is required.
Instead of trying to improve outcomes merely through the study of
psychotherapies in general (i.e., premises, models, and associated procedures),
the future of the profession may be better served by working to improve the
outcome of each and every therapist.

Summary and Conclusions

The
question that gave rise to the exchange between Strupp (1963) and
Eysenck (1964) in the inaugural issue of this
journal has been settled by the accumulation of five decades of evidence,
including a correction of what Eysenck criticized as a lack of “a set of
reasonable criteria which have a certain degree of reliability and objectivity”
(p. 99). The efficacy and effectiveness of psychotherapy are well established,
based on “standards stated and follow-ups carried out” (Eysenck,
1964, p. 99), and benefiting from continual refinements of what
constitutes effectiveness, whether in the behavioral terms preferred by Eysenck
or the intrapsychic judgments of clients preferred by Strupp (Eid
& Larsen, 2008). The second question of how it works—in
particular, the independent variable of importance—far from moving the
profession forward, has fragmented the field, leaving outcomes unchanged for
just as many decades. In point of fact, no matter how the curative elements of
psychotherapy have been construed or taught, be they specific technical
operations, transtheoretical healing factors, or some combination thereof, the
field has not created new generations of superior clinicians.

The
way out as proposed in this article necessitates setting aside historical
perspectives, traditions, and even biases—and embracing a different view of
psychotherapy. As Norcross (1999) has observed, the “ideological
cold war may have been a necessary developmental state, (but) its days have
come and passed” (p. xvii). Indeed, once attention is turned to the performance
of the individual practitioner, as the weight of the research on expertise is
directing, then it would make eminent sense to regard therapeutic practice as
a craft.

A
craft is defined as “a collection of learned skills accompanied by experienced
judgment” (Moore, 1994, p. 1). Consistent with both the research on
psychotherapy and the literature on the acquisition of expertise, no particular
personal qualities or talents are required for entry (Ericsson,
Krampe, & Tesch-Romer, 1993). Anyone, with a modicum of
instruction, can learn how to do the basic tasks and achieve outcomes
commensurate with professionals already practicing (Atkins
& Christensen, 2001; Nyman, Nafziger, & Smith,
2010). No amount of theory, coursework, continuing education, or
on-the-job experience will lead to the development of the “experienced
judgment” required for superior performance. For that, it appears that
practitioners must be engaged in the process outlined above—in essence,
continuously reaching for objectives just beyond their current ability (Miller,
Hubble, & Duncan, 2007).

The
implications for the future of research, professional preparation and
development, and licensure and certification are nothing less than major. From a
craft perspective, professional training would emphasize the development of
evidence-based therapists at least as much as, if not more than, the
dissemination of the evidence base for specific therapies, what Strupp
(1963) called “the person of the therapist practicing them” (p. 1). In
practice, this could translate into easing admission criteria so that a larger
number of candidates may enter training programs. Prospective matriculants into
graduate programs focused on producing the best clinicians that psychology has
to offer might learn that graduation depends not only on learning about
psychotherapy but also on being capable of reliably producing positive results.
To that end, trainees would be exposed to clients early in their training,
routinely measured, and given ample opportunity to practice basic skills (e.g.,
alliance formation) under varying conditions (e.g., Anderson
et al., 2009).

In
addition, educators may improve the readiness of their incoming graduate
students by experimenting with undergraduate psychology curricula oriented to
elements of clinical quality beyond the learning of facts and methods. Such
curricula might include opportunities for clinical volunteer experiences (e.g., crisis
hotlines, safe houses, residential treatment) for those who express interest in
clinical training, want to begin assessing their performance as budding
clinicians, and wish to learn the discipline of continually assessing and finding
ways to improve their clinical outcomes.

Similarly,
licensure to practice psychotherapy or quality certifications could be granted,
in part, on achieving and maintaining a baseline level of performance equal to
established outcome benchmarks. Postgraduate training would also change. As Neimeyer,
Taylor, and Wear (2009) point out, “If continuing education is a natural
expression of a profession’s ongoing evolution, then professional psychology
can be viewed as suffering a significant developmental delay” (p. 617).
Although most states, for example, mandate a number of continuing education
hours to maintain licensure to practice independently, the process is largely
self-regulated. With a few notable exceptions (e.g., ethics), practitioners
select the events they attend. Direct measures of learning are uncommon, and
performance measures for the participants are completely absent. No process is in
place for identifying skill or knowledge deficits in need of remediation, and
no concrete plan is required for continual professional development or the assessment
of whether such a plan results in any change in clinical outcomes. From an
expertise perspective, the current system is at best ineffective and, at worst,
perilous. It reinforces clinicians’ well-documented propensity to inflate their
effectiveness and see themselves as developing professionally when, in fact,
they are not (Walfish et al., 2012; Orlinsky
& Ronnestad, 2005). Considering the potential lag (likely a year or
more for many full-time psychotherapists) between clinical training and the accumulation
of sufficient data to determine whether such training has been successful, it
is especially important that these efforts are systematically tracked and
clinician data pooled together to develop better methods for assessing and
improving the impact of these activities.

With
regard to research, the application of findings from the field of expertise to
psychotherapy is in its infancy. As a result, the potential areas for
investigation are numerous. For example, available evidence makes clear that superior
performance does not occur in a vacuum. The best flourish in supportive
communities—what have been termed “cultures of excellence” or “communities of
practice” (Miller & Hubble, 2011). Although some aspects (e.g.,
error-centric learning environment, opportunities for reflection and deliberate
practice built into daily workflow) are known, more research is needed to
identify the characteristics of settings that prove optimal for the development
and maintenance of expert performance.

Another
potentially promising line of research would explore the practice patterns of
top performing therapists. A study by Najavits and Strupp (1994)
found, for instance, that effective therapists report making more mistakes and
being more self-critical than their less effective counterparts. Other research
shows that clinicians’ experience of difficulties in practice accounts for most
therapist variance in alliance ratings (Nissen-Lie, Monsen, &
Ronnestad, 2010). Results such as these immediately suggest the
possibility of studies exploring methods for helping practitioners develop an
open, even welcoming, attitude toward errors.

In
December 2009, the ICCE was launched (www.centerforclinicalexcellence.com).
Similar to sermo.com for physicians, the site provides a free, international,
web-based community for clinicians and researchers dedicated to excellence in
behavioral health. Members can choose to participate in any of the 100-plus
forums, create their own discussion groups, immerse themselves in a library of
documents and how-to videos, access outcome tools, and most important, request
and receive performance-oriented feedback from their peers.

The
following year a task force within the organization created and published a
document detailing four “core competencies” for applying the findings from the
expertise literature to the practice of psychotherapy (Miller,
Maeschalck, Axsen, & Seidel, 2011). The first core competency is
in the research foundations of FIT, including familiarity with research on the
therapeutic alliance; behavioral health care outcomes; expert performance and
its application to clinical practice; and the properties of valid, reliable,
and feasible alliance and outcome measures. The second competency is in FIT
implementation: integrating consumer-reported outcome and alliance data into
clinical work; collaborating with consumers about collecting feedback regarding
alliance and outcome; and ensuring that the course and outcome of behavioral
health care services are informed by consumer preferences. The third
competency, measurement and reporting, focuses on measuring and documenting the
therapeutic alliance and outcome of clinical services on an ongoing basis with
consumers, and on providing details in reporting outcomes sufficient to assess
the accuracy and generalizability of the results. The fourth competency is
continuous professional improvement: determining one’s baseline level of
performance; comparing one’s baseline level of performance to the best
available norms, standards, or benchmarks; developing and executing a plan for
improving baseline performance; and seeking performance excellence by
developing and executing a plan of deliberate practice for improving
performance to levels superior to national norms, standards, and benchmarks.
Researchers are already using the site to formulate research questions, solicit
participants for studies on expertise in psychotherapy, and use software to
investigate interesting outcome patterns as well as the conversational data
generated by clinicians interacting on the site.

Strupp
and Eysenck began a pointed debate 50 years ago on matters of consequence
facing the field. Their exchange revealed important weaknesses in need
of redress. Some, such as the general efficacy of psychotherapy, have been
successfully addressed. Others, including how it works and can work better,
continue to divide the field. Beyond that, psychotherapy as a whole, and
individual practitioners in particular, face a number of stark challenges in
the future, not the least of which is remaining competitive. The authors
believe that focusing on what makes for a great performance currently holds the
most promise for meeting these challenges and advancing the understanding and practice
of psychotherapy.

Footnotes

1 The ORS was developed following the first
author’s long use of the Outcome Questionnaire 45 (OQ), a tool developed
by his professor, Michael J. Lambert, Ph.D. At a workshop Miller was teaching
on routine outcome measurement in Israel, he mentioned the time the measure
took to administer as well as the difficulty many of his clients experienced
completing the tool owing to its required literacy level. A psychologist in
attendance, Haim Omer, Ph.D., suggested bypassing the language-dependent items
and using a visual analogue scale to capture the major domains assessed by the
longer tool. Miller’s experience with the Line Bisection Test (Schenkenberg,
Bradford, & Ajax, 1980) during his neuropsychology internship and subsequent
work on the development of scaling questions at the Brief Family Therapy
Center (Berg and Miller, 1992; Miller
and Berg, 1995) led him to suggest to his colleague, Barry Duncan,
Psy.D., that a measure be created with four lines, each 10 centimeters in length,
representing the four domains of client functioning assessed by the OQ 45 (Miller,
2010a). A similar process led to the creation of the SRS (Miller,
2010b). Once again, a mentor and supervisor, Lynn Johnson, Ph.D.,
developed a 10-item Likert scale for assessing the quality of the therapeutic
interaction (including alliance [Johnson, 1995]).
The author had used the scale but wanted a simpler, briefer scale to fit with
the demands of an inner city clinic. The measure was shortened and converted
into a visual analogue scale capturing the major elements of a good therapeutic
alliance as originally defined by Bordin (1979).
Together with Barry Duncan, Psy.D., and others, Miller then added measures for children, young
children, and groups, which were tested for reliability, validity, and
feasibility.

References

Addiction
and Mental Health Services. (2011). AMH approved practices and process.
Retrieved from http://www.oregon.gov/oha/amh/pp./ebp/practices.aspx

American
Psychological Association. (2004). Communicating the value of psychology to
the public. Washington, DC: American Psychological Association.

American
Psychological Association. (2012, August 9). Resolution on the recognition of
psychotherapy effectiveness. American Psychological Association. Retrieved
from http://www.apa.org/news/press/releases/2012/08/resolution-psychotherapy.aspx

American
Psychological Association Presidential Task Force on Evidence-Based Practice.
(2006). Evidence-based practice in psychology. American Psychologist, 61,
271–285. doi:10.1037/0003-066X.61.4.271

Anderson,
T., Ogles, B., Patterson, C., Lambert, M., & Vermeersch, D. (2009).
Therapist effects: Facilitative interpersonal skills as a predictor of
therapist success. Journal of Clinical Psychology, 65, 755–768.
doi:10.1002/jclp.20583

Anker,
M. G., Duncan, B. L., & Sparks, J. A. (2009). Using client feedback to
improve couple therapy outcomes: A randomized clinical trial in a naturalistic
setting. Journal of Consulting & Clinical Psychology, 77,
693–704. doi:10.1037/a0016062

APA
Monitor. (2010, April). Psychology salaries decline. Monitor on Psychology,
41, 11.

Atkins,
D. C., & Christensen, A. (2001). Is professional training worth the
bother? A review of the impact of psychotherapy training on client outcome. Australian
Psychologist, 36, 122–130. doi:10.1080/00050060108259644

Bachelor,
A., & Horvath, A. (1999). The therapeutic relationship. In M. A. Hubble, B.
L. Duncan, & S. D. Miller (Eds.), The heart and soul of change: What works
in therapy (pp. 133–178). Washington, DC: American Psychological
Association. doi:10.1037/11132-004

Baldwin,
S., Wampold, B., & Imel, Z. (2007). Untangling the alliance-outcome
correlation: Exploring the relative importance of therapist and patient
variability in the alliance. Journal of Consulting and Clinical Psychology,
75, 842–852.

Barlow,
D. H. (2004). Psychological treatments. American Psychologist, 59,
869–878. doi:10.1037/0003-066X.59.9.869

Benish,
S. G., Imel, Z., & Wampold, B. (2008). The relative efficacy of bona fide
psychotherapies for treating posttraumatic stress disorder: A meta-analysis of
direct comparisons. Clinical Psychology Review, 28, 746–758.
doi:10.1016/j.cpr.2007.10.005

Berg,
I. K., & Miller, S. D. (1992). Working with the problem drinker: A
solution-focused approach. New York: Norton.

Bertolino,
B., & Miller, S. D. (Eds.) (2012). ICCE manuals on feedback-informed
treatment (Vol. 1–6). Chicago, IL: ICCE Press.

Beutler,
L. E., Malik, M., Alimohamed, S., Harwood, T. M., Talebi, H., Noble, S., &
Wong, E. (2004). Therapist variables. In M. J. Lambert (Ed.), Bergin and
Garfield’s handbook of psychotherapy and behavior change (5th ed., pp.
227–306). New York: Wiley.

Bickman,
L., Rosof-Williams, J., Salzer, M. S., Summerfelt, W., Noser, K., Wilson, S.
J., & Karver, M. S. (2000). What information do clinicians value for
monitoring adolescent client progress and outcomes? Professional Psychology:
Research and Practice, 31, 70–74. doi:10.1037/0735-7028.31.1.70

Bohanske,
R., & Franczak, M. (2010). Transforming public behavioral health care: A
case example of consumer-directed services, recovery, and the common factors.
In B. Duncan, S. Miller, B. Wampold, & M. Hubble (Eds.), The heart and soul
of change: Delivering what works in therapy (2nd ed., pp. 299–322).
Washington, DC: APA Press. doi:10.1037/12075-010

Bordin,
E. S. (1979). The generalizability of the psychoanalytic concept of the working
alliance. Psychotherapy: Theory, Research, and Practice, 16,
252–260. doi:10.1037/h0085885

Bradshaw,
J. (2012, September). Petition seeks to dump DSM and adopt ICD. National
Psychologist. Retrieved from http://nationalpsychologist.com/2012/09

Brown,
G. S., & Minami, T. (2010). Outcomes management, reimbursement, and the
future of psychotherapy. In B. Duncan, S. Miller, B. Wampold, & M. Hubble
(Eds.), The heart and soul of change: Delivering what works in therapy
(2nd ed., pp. 267–297). Washington, DC: APA Press. doi:10.1037/12075-009

Chambless,
D. L., & Hollon, S. (1998). Defining empirically supported therapies. Journal
of Consulting and Clinical Psychology, 66, 7–18.
doi:10.1037/0022-006X.66.1.7

Chow,
D., Miller, S. D., Kane, R., & Thornton, J. (n.d.). The study of
supershrinks: Development and deliberate practices of highly effective
psychotherapists. Manuscript in preparation.

Clay,
R. A. (2012). Improving disorder classification, worldwide. Monitor on
Psychology, 43, 40.

Colvin,
G. (2008). Talent is overrated: What really separates world-class performers
from everybody else. New York: Penguin.

Cooper,
J. L., & Aratani, Y. (2009). The status of states’ policies to support
evidence-based practices in children’s mental health. Psychiatric Services,
60, 1672–1675. doi:10.1176/appi.ps.60.12.1672

Coyle,
D. (2009). The talent code: Greatness isn’t born. It’s grown. Here’s how.
New York: Bantam Dell.

Crits-Christoph,
P., Baranackie, K., Kurcias, J., Beck, A. T., Carroll, K., Perry, K., . .
.Zitrin, C. (1991). Meta-analysis of therapist effects in psychotherapy outcome
studies. Psychotherapy Research, 1, 81–91.
doi:10.1080/10503309112331335511

Crits-Christoph,
P., & Mintz, J. (1991). Implications of therapist effects for the design
and analysis of comparative studies of psychotherapies. Journal of
Consulting and Clinical Psychology, 59, 20–26.
doi:10.1037/0022-006X.59.1.20

Cummings,
N., & O’Donohue, W. (2008). Eleven blunders that cripple psychotherapy
in America: A remedial unblundering. New York: Routledge.

de
Jong, K., van Sluis, P., Nugter, M. A., Heiser, W. J., & Spinhoven, P.
(2012). Understanding the differential impact of outcome monitoring: Therapist
variables that moderate feedback effects in a randomized clinical trial. Psychotherapy
Research, 22, 464–474. doi:10.1080/10503307.2012.673023

Duncan,
B. (2010). On becoming a better therapist. Washington, DC: APA Press.
doi:10.1037/12080-000

Duncan,
B. L., Miller, S. D., Reynolds, L., Sparks, J., Claud, D., Brown, J., &
Johnson, L. D. (2003). The session rating scale: Preliminary psychometric
properties of a “working” alliance scale. Journal of Brief Therapy, 3,
3–12.

Duncan,
B. L., Miller, S. D., Wampold, B. E., & Hubble, M. A. (Eds.). (2010). The
heart and soul of change: Delivering what works in therapy (2nd ed.)
Washington, DC: APA Press. doi:10.1037/12075-000

Eid,
M., & Larsen, R. J. (2008). The science of subjective well-being.
New York: Guilford.

Ericsson,
K. A. (1996). The acquisition of expert performance: An introduction to some of
the issues. In K. A. Ericsson (Ed.), The road to excellence: The acquisition
of expert performance in the arts and sciences, sports, and games (pp.
1–50). Mahwah, NJ: Lawrence Erlbaum Associates.

Ericsson,
K. A. (2006). The Influence of experience and deliberate practice on the
development of superior expert performance. In K. A. Ericsson, N. Charness, P.
J. Feltovich, & R. R. Hoffman (Eds.), The Cambridge handbook of expertise
and expert performance (pp. 683–703). Cambridge: Cambridge University
Press. doi:10.1017/CBO9780511816796.038

Ericsson,
K. A. (2009a). Enhancing the development of professional performance:
Implications from the study of deliberate practice. In Development of
professional expertise: Toward measurement of expert performance and design of
optimal learning environments (pp. 405–431). New York: Cambridge University
Press.

Ericsson,
K. A. (Ed.). (2009b). Development of professional expertise: Toward
measurement of expert performance and design of optimal learning environments.
New York: Cambridge University Press. doi:10.1017/CBO9780511609817

Ericsson,
K. A., Charness, N., Feltovich, P. J., & Hoffman, R. R. (2006). The
Cambridge handbook of expertise and expert performance. Cambridge:
Cambridge University Press. doi:10.1017/CBO9780511816796

Ericsson,
K. A., Krampe, R. T., & Tesch-Romer, C. (1993). The role of deliberate
practice in the acquisition of expert performance. Psychological Review,
100, 363–406. doi:10.1037/0033-295X.100.3.363

Ericsson,
K. A., & Staszewski, J. (1989). Skilled memory and expertise: Mechanisms of
exceptional performance. In D. Klahr & K. Kotovsky (Eds.), Complex
information processing: The Impact of Herbert A. Simon (pp. 265–268).
Hillsdale, NJ: Lawrence Erlbaum.

Eysenck,
H. J. (1952). The effects of psychotherapy: An evaluation. Journal of
Consulting Psychology, 16, 319–324. doi:10.1037/h0063633

Eysenck,
H. (1964). The outcome problem in psychotherapy: A reply. Psychotherapy:
Theory, Research & Practice, 1, 97–100. doi:10.1037/h0088591

Frank,
J. D. (1992). Historical developments in research centers: The Johns Hopkins
Psychotherapy Research Project. In D. K. Freedheim (Ed.), A history of
psychotherapy: A century of change (pp. 392–396). Washington, DC: APA
Press.

Frank,
J. D., & Frank, J. B. (1991). Persuasion and healing: A comparative
study of psychotherapy. Baltimore, MD: Johns Hopkins University Press.

Froyd,
J., & Lambert, M. (1989, May). A 5-year survey of outcome measures in
psychotherapy research. Paper presented at the Western Psychological
Association Conference, Reno, NV.

Gambrill,
E. (1990). Critical thinking in clinical practice. San Francisco, CA:
Jossey-Bass.

Garfield,
S. L. (1997). The therapist as a neglected variable in psychotherapy research. Clinical
Psychology: Science & Practice, 4, 40–43.
doi:10.1111/j.1468-2850.1997.tb00097.x

Grencavage,
L. M., & Norcross, J. C. (1990). Where are the commonalities among the
therapeutic common factors? Professional Psychology: Research and Practice,
21, 372–378. doi:10.1037/0735-7028.21.5.372

Hatfield,
D. R., & Ogles, B. M. (2004). The use of outcome measures by psychologists
in clinical practice. Professional Psychology: Research and Practice, 35,
485–491. doi:10.1037/0735-7028.35.5.485

Hedges,
L. V., & Olkin, I. (1985). Statistical methods for meta-analysis.
San Diego, CA: Academic Press.

Hilgard,
E. (1987). Psychology in America: A historical survey. New York: HBJ.

Horvath,
A. (2001). The alliance. Psychotherapy: Theory, Research, Practice, Training, 38, 365–372. doi:10.1037/0033-3204.38.4.365

Horvath,
A. O., Del Re, A., Fluckiger, C., & Symonds, D. (2011). Alliance in
individual psychotherapy. Psychotherapy, 48, 9–16.
doi:10.1037/a0022186

Howard,
K. I., Moras, K., Brill, P. L., Martinovich, Z., & Lutz, W. (1996).
Efficacy, effectiveness, and patient progress. American Psychologist, 51,
1059–1064. doi:10.1037/0003-066X.51.10.1059

Imel,
Z. E., Wampold, B. E., Miller, S. D., & Fleming, R. R. (2008). Distinctions
without a difference: Direct comparisons of psychotherapies for alcohol use
disorders. Psychology of Addictive Behaviors, 22, 533–543. doi:10.1037/a0013171

Jacobson,
N. S., & Truax, P. (1991). Clinical significance: A statistical approach to
defining meaningful change in psychotherapy research. Journal of Consulting
and Clinical Psychology, 59, 12–19. doi:10.1037/0022-006X.59.1.12

Johnson,
L. D. (1995). Psychotherapy in the age of accountability. New York:
Norton.

Kim,
D.-M., Wampold, B. E., & Bolt, D. M. (2006). Therapist effects in
psychotherapy: A random-effects modeling of the National Institute of Mental
Health Treatment of Depression Collaborative Research Program data. Psychotherapy
Research, 16, 161–172. doi:10.1080/10503300500264911

Lambert,
M. J. (2010). Yes, it is time for clinicians to routinely monitor treatment outcome. In B. Duncan, S. Miller, B. Wampold, & M. Hubble (Eds.), The heart and soul of change: Delivering what works in therapy (pp. 239–266). Washington, DC: APA Press. doi:10.1037/12075-008

Lambert,
M. J. (2012). Helping clinicians to use and learn from research-based systems: The OQ-Analyst. Psychotherapy, 49, 109–114. doi:10.1037/a0027110

Lambert,
M. J., & Shimokawa, K. (2011). Collecting client feedback. Psychotherapy,
48, 72–79. doi:10.1037/a0022238

Lambert,
M. J., Whipple, J. L., Smart, D. W., Vermeersch, D. A., Nielsen, S. L., & Hawkins, E. J. (2001). The effects of providing therapists with feedback on patient progress during psychotherapy: Are outcomes enhanced? Psychotherapy Research, 11, 49–68. doi:10.1080/713663852

Luborsky,
L. (1954). Selecting psychiatric residents: Survey of the Topeka research. Bulletin
of the Menninger Clinic, 18, 252–259.

Luborsky,
L., Crits-Christoph, P., McLellan, A. T., Woody, G., Piper, W., Liberman, B., Imber, S., & Pilkonis, P. (1986). Do therapists vary much in their success? Findings from four outcome studies. American Journal of Orthopsychiatry, 56, 501–512. doi:10.1111/j.1939-0025.1986.tb03483.x

Luborsky,
L., McLellan, A. T., Diguer, L., Woody, G., & Seligman, D. A. (1997). The psychotherapist matters: Comparison of outcomes across twenty-two therapists and seven patient samples. Clinical Psychology: Science and Practice, 4, 53–65. doi:10.1111/j.1468-2850.1997.tb00099.x

Luborsky,
L., McLellan, A. T., Woody, G. E., O’Brien, C. P., & Auerbach, A. (1985). Therapist success and its determinants. Archives of General Psychiatry, 42, 602–611. doi:10.1001/archpsyc.1985.01790290084010

Lutz,
W., Leon, S. C., Martinovich, Z., Lyons, J. S., & Stiles, W. B. (2007).
Therapist effects in outpatient psychotherapy: A three-level growth curve
approach. Journal of Counseling Psychology, 54, 32–39.
doi:10.1037/0022-0167.54.1.32

Miller,
S. D. (2010a). Finding feasible measures for practice-based evidence. Top
Performance Blog. Retrieved from http://www.scottdmiller.com/?q=taxonomy/term/70

Miller,
S. D. (2010b). Feedback, friends, and outcome in behavioral health. Top
Performance Blog. Retrieved from http://www.scottdmiller.com/?q=taxonomy/term/70

Miller,
S. D., & Berg, I. K. (1995). The miracle method: A radically new
approach to problem drinking. New York: Norton.

Miller,
S. D., & Duncan, B. L. (2000). The outcome and session rating scales.
Chicago, IL: International Center for Clinical Excellence. Retrieved from http://www.scottdmiller.com/?q=node/6

Miller,
S. D., Duncan, B. L., Brown, J., Sorrell, R., & Chalk, M. B. (2006). Using
formal client feedback to improve retention and outcome: Making ongoing
real-time assessment feasible. Journal of Brief Therapy, 5, 5–22.

Miller,
S. D., Duncan, B. L., Brown, J., Sparks, J., & Claud, D. (2003). The
outcome rating scale: A preliminary study of the reliability, validity, and
feasibility of a brief visual analog measure. Journal of Brief Therapy, 2,
91–100.

Miller,
S. D., Duncan, B. L., & Hubble, M. A. (1997). Escape from Babel: Toward
a unifying language for psychotherapy practice. New York: Norton.

Miller,
S. D., Duncan, B. L., & Hubble, M. A. (2004). Beyond integration: The
triumph of outcome over process in clinical practice. Psychotherapy in
Australia, 10, 2–19.

Miller,
S. D., Duncan, B. L., Sorrell, R., & Brown, J. (2005). The partners for
change outcome management system. Journal of Clinical Psychology, 61,
199–208. doi:10.1002/jclp.20111

Miller,
S. D., & Hubble, M. (2011). The road to mastery. Psychotherapy Networker,
35(2), 22–60.

Miller,
S. D., Hubble, M. A., & Duncan, B. L. (2007). Supershrinks. Psychotherapy
Networker, 31, 26–35, 56.

Miller,
S. D., Hubble, M. A., Duncan, B. L., & Wampold, B. (2010). Delivering what works. In B. L. Duncan, S. D. Miller, B. E. Wampold, & M. A. Hubble (Eds.), The heart and soul of change: Delivering what works in therapy (pp. 421–429). Washington, DC: APA Press. doi:10.1037/12075-014

Miller,
S. D., Maeschalck, C., Axsen, R., & Seidel, J. (2011). The international
center for clinical excellence core competencies. Retrieved from http://centerforclinicalexcellence.com/wp-content/plugins/buddypress-group-documents/documents/1281032711-CoreCompetencies.pdf

Moore,
D. S. (1994). The craft of teaching. Address at the award ceremony for
distinguished college or university teaching of mathematics. San Francisco,
CA. Retrieved from http://www.stat.purdue.edu/~dsmoore/articles/Craft.pdf

Najavits,
L., & Strupp, H. (1994). Differences in the effectiveness of psychodynamic
therapies: A process-outcome study. Psychotherapy, 31, 114–123.
doi:10.1037/0033-3204.31.1.114

Nathan,
P. E. (1997). Fiddling while psychology burns? Register Report, 23, 1, 4–5, 10.

Neimeyer,
G., Taylor, J., & Wear, D. (2009). Continuing education in psychology:
Outcomes, evaluation, and mandates. Professional Psychology: Research and
Practice, 40, 617–624. doi:10.1037/a0016655

Nissen-Lie,
H. A., Monsen, J. T., & Ronnestad, M. H. (2010). Therapist predictors of
early patient-rated working alliance: A multilevel approach. Psychotherapy
Research, 20, 627–646. doi:10.1080/10503307.2010.497633

Norcross,
J. C. (1997). Emerging breakthroughs in psychotherapy integration: Three
predictions and one fantasy. Psychotherapy: Theory, Research, Practice,
Training, 34, 86–90. doi:10.1037/h0087757

Norcross,
J. (1999). Foreword. In M. A. Hubble, B. L. Duncan, & S. D. Miller (Eds.), The heart and soul of change (pp. xvii–xix). Washington, DC: American Psychological Association.

Norcross,
J. C. (2010). The therapeutic relationship. In B. L. Duncan, S. D. Miller, B. E. Wampold, & M. A. Hubble (Eds.), The heart and soul of change: Delivering what works in therapy (2nd ed., pp. 113–142). Washington, DC: American Psychological Association. doi:10.1037/12075-004

Nyman,
S., Nafziger, M., & Smith, T. (2010). Client outcomes across counselor
training level within a multitiered supervision model. Journal of Counseling
& Development, 88, 204–209.
doi:10.1002/j.1556-6678.2010.tb00010.x

Ogles,
B. M., Lambert, M. J., & Fields, S. (2002). Essentials of outcome
assessment. New York: John Wiley & Sons.

Ogles,
B., Lambert, M., & Masters, K. (1996). Assessing outcome in clinical
practice. Needham Heights, MA: Allyn & Bacon.

Okiishi,
J. C., Lambert, M. J., Eggett, D., Nielsen, S. L., Dayton, D. D., &
Vermeersch, D. A. (2006). An analysis of therapist treatment effects: Toward
providing feedback to individual therapists on their patients’ psychotherapy
outcome. Journal of Clinical Psychology, 62, 1157–1172.
doi:10.1002/jclp.20272

Okiishi,
J. C., Lambert, M. J., Nielsen, S. L., & Ogles, B. M. (2003). Waiting for
supershrink: An empirical analysis of therapist effects. Clinical Psychology
& Psychotherapy, 10, 361–373. doi:10.1002/cpp.383

Orlinsky,
D. E., & Ronnestad, M. H. (2005). How psychotherapists develop: A study
of therapeutic work and professional growth. Washington, DC: American
Psychological Association. doi:10.1037/11157-000

Phelps,
R., Eisman, E., & Kohout, J. (1998). Psychological practice and managed
care: Results of the CAPP practitioner survey. Professional Psychology:
Research and Practice, 29, 31–36. doi:10.1037/0735-7028.29.1.31

Reese,
R. J., Norsworthy, L. A., & Rowlands, S. R. (2009). Does a continuous feedback system improve psychotherapy outcome? Psychotherapy: Theory, Research, Practice, Training, 46, 418–431. doi:10.1037/a0017901

Reese,
R. J., Toland, M. D., Slone, N. C., & Norsworthy, L. A. (2010). Effect of client feedback on couple psychotherapy outcomes. Psychotherapy: Theory, Research, Practice, Training, 47, 616–630.

Rosenzweig,
S. (1954). A transvaluation of psychotherapy: A reply to Hans Eysenck. The
Journal of Abnormal and Social Psychology, 49, 298–304.

Schenkenberg,
T., Bradford, D., & Ajax, E. (1980). Line bisection and unilateral visual neglect in patients with neurological impairment. Neurology, 30, 509–517. doi:10.1212/WNL.30.5.509

Shapiro,
D. A., Firth-Cozens, J., & Stiles, W. B. (1989). The question of
therapists’ differential effectiveness: A Sheffield Psychotherapy Project
addendum. British Journal of Psychiatry, 154, 383–385.
doi:10.1192/bjp.154.3.383

Shenk,
D. (2010). The genius in all of us: Why everything you’ve been told about
genetics, talent, and IQ is wrong. New York: Random House.

Smith,
M. L., & Glass, G. V. (1977). Meta-analysis of psychotherapy outcome
studies. American Psychologist, 32, 752–760.
doi:10.1037/0003-066X.32.9.752

Strupp,
H. (1963). The outcome problem in psychotherapy revisited. Psychotherapy:
Theory, Research & Practice, 1, 1–13. doi:10.1037/h0088565

Strupp,
H. (1964). The outcome problem in psychotherapy: A rejoinder. Psychotherapy:
Theory, Research & Practice, 1, 101. doi:10.1037/h0088579

Strupp,
H., & Anderson, T. (1997). On the limitations of therapy manuals. Clinical
Psychology: Science and Practice, 4, 76–82. doi:10.1111/j.1468-2850.1997.tb00101.x

Syed,
M. (2010). Bounce: Mozart, Federer, Picasso, Beckham, and the science of
success. New York: Harper Collins.

Therapy
in America. (2004). A survey conducted by Harris Interactive on behalf of
Psychology Today and Pacificare Behavioral Health. Retrieved from http://www.napabipolardepression.org/images/therapy_in_america.pdf

VandenBos,
G. R., Cummings, N., & DeLeon, P. H. (1992). A century of psychotherapy: Economic and environmental influences. In D. K. Freedheim (Ed.), A history of psychotherapy: A century of change. Washington, DC: APA Press. doi:10.1037/10110-002

Walfish,
S., McAlister, B., O’Donnell, P., & Lambert, M. J. (2012). An investigation
of self-assessment bias in mental health providers. Psychological Reports,
110, 639–644. doi:10.2466/02.07.17.PR0.110.2.639-644

Wampold,
B. E. (2001). The great psychotherapy debate: Models, methods, and findings.
Mahwah, NJ: Erlbaum.

Wampold,
B. E. (2005). Establishing specificity in psychotherapy scientifically: Design and evidence issues. Clinical Psychology: Science & Practice, 12, 194–197.

Wampold,
B. E. (2010). The research evidence for the common factor models: A historically situated perspective. In B. L. Duncan, S. D. Miller, B. E. Wampold, & M. A. Hubble (Eds.), The heart and soul of change: Delivering what works in therapy (2nd ed., pp. 49–82). Washington, DC: American Psychological Association. doi:10.1037/12075-002

Wampold,
B. E., & Bolt, D. M. (2006). Therapist effects: Clever ways to make them
(and everything else) disappear. Psychotherapy Research, 16,
184–187. doi:10.1080/10503300500265181

Wampold,
B. E., & Brown, G. S. (2005). Estimating variability in outcomes
attributable to therapists: A naturalistic study of outcomes in managed care. Journal
of Consulting and Clinical Psychology, 73, 914–923.
doi:10.1037/0022-006X.73.5.914

Wampold,
B. E., Mondin, G. W., Moody, M., & Ahn, H.-n. (1997). The flat earth as a
metaphor for the evidence for uniform efficacy of bona fide psychotherapies:
Reply to Crits-Christoph (1997), and Howard et al. (1997). Psychological
Bulletin, 122, 226–230. doi:10.1037/0033-2909.122.3.226

Wampold,
B. E., Mondin, G. W., Moody, M., Stich, F., Benson, K., & Ahn, H.-n. (1997). A meta-analysis of outcome studies comparing bona fide psychotherapies: Empirically, “all must have prizes.” Psychological Bulletin, 122, 203–215. doi:10.1037/0033-2909.122.3.203

Wilson,
G. T. (1995). Empirically validated treatments as a basis for clinical practice: Problems and prospects. In S. C. Hayes, V. M. Follette, R. M. Dawes, & K. E. Grady (Eds.), Scientific standards of psychological practice: Issues and recommendations (pp. 163–196). Reno, NV: Context Press.

Zuriff,
G. E. (1985). Behaviorism: A conceptual reconstruction. New York:
Columbia University Press.

Submitted: October 16, 2012. Accepted: October 17, 2012.


Source: Psychotherapy, 50(1), March 2013, pp. 88–97.

"Do you need a similar assignment done for you from scratch? We have qualified writers to help you with a guaranteed plagiarism-free A+ quality paper. Discount Code: SUPER50!"

order custom paper