Should We Train Applied Behavior Analysts to Be Researchers?
Richard W. Malott
Behavior Analysis Program
Department of Psychology
Western Michigan University
Note: The original version of this article appeared in the Journal
of Applied Behavior Analysis.
Abstract
Should we continue the tradition of training nearly all our masters
and doctoral students to be research scientists, or should we provide
different training for those who wish to be practitioners? In searching
for an answer to this question, the present paper involves informal
use of two general approaches of behavioral systems analysis: front-end
analysis and feasibility analysis.
Behavioral Systems Analysis
To do a behavioral systems analysis, the practitioner should systematically
perform the following steps: Do a front-end analysis of the behavioral
system. Specify the goals of the system. Design the system. Implement
it. Evaluate it. And recycle through the preceding steps until the
goals are obtained (Malott, 1974; Mechner & Cook, 1988; Redmon,
1991).
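The recycling loop described above can be sketched informally. The toy "system" below (a training program whose pass rate improves with each redesign) and all of its names and numbers are invented for illustration; a real behavioral systems analysis is judgment-laden, not mechanical.

```python
# Illustrative sketch of the cycle described above: front-end analysis,
# specify goals, design, implement, evaluate, and recycle until the
# goals are attained. The fixed per-cycle improvement is a stand-in
# for the effect of a genuine redesign.

def evaluate(pass_rate, goal):
    """Evaluate the implemented system against the specified goal."""
    return pass_rate >= goal

def run_analysis_cycle(initial_pass_rate, goal=0.90, improvement=0.05,
                       max_cycles=20):
    """Recycle through design/implement/evaluate until the goal is met."""
    pass_rate = initial_pass_rate
    for cycle in range(1, max_cycles + 1):
        # Design and implement a revised system (modeled as a fixed gain).
        pass_rate = min(1.0, pass_rate + improvement)
        if evaluate(pass_rate, goal):   # stop when the goal is attained
            return cycle, pass_rate
    return max_cycles, pass_rate

cycles, final_rate = run_analysis_cycle(0.70)
```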
[In a behavioral system] the principal components
are organisms, usually human beings, working together to accomplish
some set of ultimate goals or objectives. Organizations are behavioral
systems – for example, a factory, a hospital, a school, a city
government. But there are some behavioral systems that, by convention,
we do not usually call organizations – for example, on a large
scale, an entire country; on an intermediate scale, a department or
division of an organization; on a smaller scale, a family; and on
a tiny scale, we may consider individual people as behavior systems,
though not as organizations. In this latter case, the system’s
components might consist of various tasks the individual does. (Malott
& Garcia, 1987, p. 128)
We do the front-end analysis before designing and implementing our
intervention. It includes a goal analysis and a task analysis. The
goal analysis helps us select the goals for our system (Mager, 1984;
Malott & Garcia, 1987). Therefore, in designing an instructional
system to train applied behavior analysts, our goal analysis might
involve both a market analysis and a needs analysis. Furthermore,
Malott and Garcia (1987) argued that all systems should have the well-being
of humanity as their ultimate goal and that intermediate goals should
be selected so that they lead to the ultimate goal. This suggests
that in our goal analysis we should consider formally the relation
between our training of behavior analysts and the needs of humanity;
we should not take that relation for granted.
Recently, several behavior analysts have suggested, either directly
or indirectly, that we include a market analysis when we do a front-end
analysis to insure that people will use our products once we have
produced them. Geller (1991a) pointed to the importance of market
analysis by noting that W. Edwards Deming, credited with revolutionizing
Japan’s quality control systems, stressed the importance of
front-end market research in his 4-day seminar on quality enhancement.
Redmon (1991) also illustrated the need for market analysis, suggesting
that interventions are maintained only to the extent that their maintenance
benefits the decision makers in an organization and to the extent
that the benefits are apparent to those decision makers. As examples,
he cited the failure of management to maintain a refuse packaging
program that apparently benefited the garbage pickup crew but not
the managers. Similarly, a utility company failed to maintain a program
that successfully reduced electricity use, possibly because there
were no apparent benefits for the company, even though that program
might have benefited society in general.
Bailey (1991) also supported the need for front-end market analysis,
suggesting that much consumer resistance to behavior analysis has
occurred because “We did not do the front-end analysis with
potential consumers to discover exactly what they were looking for,
what form it should take, how it should be packaged and delivered,
and so forth” (p. 446).
In discussions of social validity, behavior analysts have argued for
the importance of doing front-end goal-directed needs analyses. Wolf
(1978) stressed the importance of subjective evaluations of the social
significance of intervention goals. However, Geller (1991b) implied
that the consumer’s subjective evaluations of appropriate goals
and procedures might not always be our best guide: “In the domain
of road safety, for example, most consumers would prefer increased
speed limits and no enforcement of safety belt laws. In the industrial
setting, most workers would vote to eliminate requirements to wear
uncomfortable and inconvenient personal protective equipment (e.g.,
safety glasses, hard hats, ear plugs, and face shields)” (p.
182). Geller (1991b) further suggested that we should not rely on
“personal (or celebrity) opinion to determine allocation of priorities
[in goal selection] . . . Surely it would be more appropriate to determine
a priority ranking of socially significant problems [goals] by systematically
applying epidemiological statistics, cost-benefit ratios, and intervention
effectiveness data, as well as information about the availability
of pertinent resources and socially valid solutions” (pp. 183
– 184). In other words, Geller recommended a behavioral systems
approach to goal selection prior to intervention.
In addition to a goal analysis (market analysis and needs analysis),
a front-end analysis can include a task analysis to determine the
tasks and supporting skills needed to achieve the goals. Mager (1988)
said, “Task analysis is the name given to a collection of techniques
used to help make the components of competent performance visible
. . . Every job is made up of a collection of tasks. . . . A task
is a series of steps leading to a meaningful outcome . . . . A step
in a task . . . would be something like tighten a nut [or] pick up
a scalpel” (pp. 29-30). For each task, the analysis specifies
the occasion for the task, the steps in the task, and the criteria
for successful completion of the task. In turn, an analysis of the
steps and of the student’s entering repertoire suggests the
skills the training program should establish. Although applied behavior
analysts use detailed task analyses in the design of programs to train
workers in industry and even programs to train the developmentally
disabled, they seem to make little use of such analyses in the design
of programs to train other applied behavior analysts.
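The three elements a task analysis records for each task (the occasion, the steps, and the completion criteria) can be sketched as a simple record; the example task and all of its field values below are invented for illustration and are not drawn from Mager or the other cited sources.

```python
# A minimal sketch of a task-analysis record: for each task, the
# occasion for the task, the steps in the task, and the criteria for
# successful completion. The example content is hypothetical.

from dataclasses import dataclass

@dataclass
class TaskAnalysis:
    name: str        # the task itself
    occasion: str    # when the task should be performed
    steps: list      # ordered steps leading to a meaningful outcome
    criteria: str    # what counts as successful completion

example = TaskAnalysis(
    name="Conduct a preference assessment",
    occasion="Before designing a reinforcement-based intervention",
    steps=[
        "Select candidate stimuli",
        "Present stimuli in paired trials",
        "Record selections",
        "Rank stimuli by selection percentage",
    ],
    criteria="A ranked list of stimuli with recorded selection data",
)
```

Comparing the steps against a student's entering repertoire would then suggest which skills a training program must establish.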
Finally, the importance of feasibility analyses is just beginning
to receive formal recognition by behavior analysts. This is the essence
of Geller’s (1991b) recommendation that our efforts be guided
by “cost-benefit ratios and intervention effectiveness data,
as well as information about availability of pertinent resources”
(p. 184). In some senses we might consider such a feasibility analysis
to be part of a front-end analysis – something done before the
intervention; however, most often we must have some data from an intervention
before we can reasonably assess the feasibility of continuing that
intervention or implementing similar ones in the future.
Front-End Analysis
Goal Analysis
We can now use these concepts from behavioral systems analysis to
consider whether programs to train applied behavior analysts should
emphasize the training of research skills.
Market analysis
To get a rough idea of the job market for behavior analysts, I used
a printout of the nonstudent membership of the Association for Behavior
Analysis. The practitioners constituted 38% of the PhDs, 52% of the
EdDs, and 86% of the MAs, although the sample may be biased in favor
of university teachers and researchers, who may join and maintain
memberships more often than practitioners. (By practitioner, I mean
anyone other than a professor or researcher.)
This preliminary market analysis implies that a large percentage of
behavior-analyst alumni of our graduate programs work mainly as practitioners,
not as teachers and researchers. So what should we teach the large
percentage of our graduate students who will become practitioners
so that they can better contribute to the well-being of humanity?
Needs analysis
We already have many effective applied behavior analysis procedures,
but few non-behavior analysts use them. Perhaps our main problem is
getting children and parents, students and teachers, employees and
employers, clients and therapists, and the governed and the government
to use what we already know. As Stolz (1981) pointed out, “Applied
researchers develop useful innovative technologies experimentally,
and yet few of these technologies enjoy widespread adoption by our
society” (p. 491). Here is an infamous example: The national
education establishment failed to adopt the technology of direct instruction,
although “the largest experiment in history on instructional
methods” had shown it to be dramatically superior to eight other
popular methods of instruction in elementary education (Watkins,
1988, p. 10). As another example, Reid (1991) pointed out,
Even in the field of developmental disabilities, .
. . the actual impact of behavior analysis is well below its potential
impact. There is a serious gap in typical service settings between
state-of-the-art services, as represented in the professional literature,
versus existing services. Indeed, most people who work in developmental
disabilities are not very well skilled, or skilled at all, in applied
behavior analysis. (p.438)
So we might spread the use of behavioral technology more reliably
by simply increasing the number of practitioners we graduate rather
than the number of researchers who generate more technology.
Traditionally, we train even our applied graduate students to be research
scientists rather than the staff managers and program administrators
that many, if not most, will become. We train them to value research
highly and to value those who produce it. Then the new graduates get
jobs as practitioners or as managers and administrators and find themselves
poorly trained to do the job they were not taught to value. In other
words, most of the people paying the pipers are calling for one set
of tunes, but the graduate students are teaching their students to
play and value a different set. Furthermore, the graduate schools
often fail to teach such an invaluable skill as behavioral systems
analysis.
Conclusions of the goal analysis
This analysis suggests we should train fewer scientists and more practitioners.
But this does not mean practitioners and managers should not empirically
evaluate their work and the systems they manage, nor does it mean
they should not make their decisions as data-based as possible. It
only means applied settings need a special sort of program evaluation
and systems analysis research, and this systems evaluation and research
is rarely of the sort that meets the standards of novelty and experimental
control required for publication in prestigious research
journals.
Task Analysis
One useful rule of thumb from behaviorally oriented trainers in industry
is to teach only the repertoires essential to the job and the empirically
demonstrated prerequisites for acquiring those repertoires. How many
nonessential, and thus easily lost repertoires are we teaching our
graduate students in the name of science or in the name of the scientist/practitioner
model or in the name of education (as opposed to training)? For instance,
experienced task analysts suggest we look skeptically at the history
and theory parts of most curricula.
Advocates of training practitioners as scientists argue that the scientist’s
critical, data-based empirical analysis skills transfer to decision
making in the professional and personal lives of science-trained practitioners.
My frequent but informal observations suggest that most scientists
show little evidence of their scientific training when making decisions
outside their areas of expertise.
Another common argument is that scientific training will allow the
practitioner to read professional journals and stay abreast of the
latest empirically based behavioral technology. Again, my informal
observations suggest otherwise; I think, at most, practitioners usually
only skim a few behavior analysis textbooks or handbooks when searching
for a new technique – a more efficient technique than scouring
and critically evaluating the professional journals. Even if practitioners
do read scholarly journals, it may not be cost effective for them
to be trained as scientists for the purpose of weeding out poorly
conducted and analyzed research; the journal editors have had much
more experience doing that.
The sorts of systems analyses and program evaluations appropriate
to applied settings often depart greatly from typical research methodology:
The scientist carefully manipulates an independent variable to measure
its effects on a dependent variable. The practitioner must use intervention
or treatment packages to force the dependent variable into an acceptable
range as quickly as possible, with little concern for isolating the
crucial values of the independent variables.
So before designing our curriculum, we need to analyze what tasks
practitioners should do, as well as what they actually do. Those tasks
are the essentials. In stressing the essentials, we might reduce the
emphasis on history, theory, and methods of science, as well as experimental
theses and dissertations for most practitioners. We could then stress
areas such as basic quantitative concepts, program evaluation, empirical
behavioral systems analysis, social skills, accounting, computer use,
project management, management information systems, public speaking,
marketing (Bailey, 1991; Lindsley, 1991; Schwartz, 1991) and behavior
analysis. Johnston (1991) made a related argument:
We should make a clear distinction between technological
research and technological application . . . . Technological application
should not have to focus on asking experimental questions at all,
although these will sometimes arise when procedures fail to produce
the desired effects . . . . We should represent the different needs
of applied research versus practice in how we accept students into
graduate programs, how we train them, and how they are employed .
. . . It might even be argued that practitioners should receive training
that is more service oriented than research oriented. The scientist-practitioner
philosophy we seem to have uncritically borrowed from clinical psychology
. . . may be counter-productive for this new model. . . . Few careers
fit its assumptions very well. Not only are most holders of the doctorate
in psychology apparently uninterested in being both researchers and
practitioners, it is difficult to do both well. . . .As a general
approach to training practitioners the scientist-practitioner model
is easy to argue against. . . . The model I have suggested . . . should
be seen as enhancing rather than diminishing the role of practitioners.
. . . We would no longer need to define their value by such academic
credentials as research publications. (pp. 426 – 427)
Feasibility Analysis
I suggested some issues involved in deciding what we should teach.
That was part of an informal front-end analysis. Now we might consider
what we can feasibly teach. Even if we should train most practitioners
to be scientists, can we do so?
How Feasible Is It to Train Successful Publishers?
How well do we train practitioner/scientists? An applied student of
behavior analysis might spend the equivalent of 2 to 4 years learning
to be a scientist – a heavy investment for all concerned. What
concrete returns does this investment produce? Should such extensive
training result in graduates publishing frequently in our most important
journal, the Journal of Applied Behavior Analysis (JABA)? It does
not. During JABA's second decade, only 26 people published five or
more articles there – one article every 2 years.
Of the 784 applied behavior analysts at the doctoral level in the
Association for Behavior Analysis, only 2% are frequent publishers
in JABA. I took JABA to be the journal of first choice for publication
of experimental work by applied behavior analysts, although that may
not always be the case. However, even if considering frequent publication
of empirical research in other prestigious journals tripled this estimate,
the percentage of frequent publishers of experimental work would be
only 6%. We invest much effort in training applied behavior analysts
to be scientists; but applied behavior analysts seem to have a low
rate of generating research of a type or quality adequate for publication
in JABA.
Who Can Train Experimental Scientists?
To acquire reliably the complex and subtle repertoire of a productive
experimental scientist, the student may need to apprentice with a
teacher who is a productive experimental scientist; book and classroom
learning may not be nearly enough. In this regard, who did the frequent
publishers of JABA’s second decade study with? At least 50%
(13 of 26) studied with people who themselves were frequent publishers
in either JABA’s first or second decade. And, if I may use a
double standard for productive research, at least two others studied
with a major research publisher, although he was not a frequent publisher
in JABA. Of course, several confounding factors can contribute to
these results; but in any event, the odds are low that someone who
is not a productive researcher will train a student who will become
a productive researcher. And 22 of the 26 frequent publishers were
university professors; so only 22 professors had the skills for productive
research in their active repertoire during JABA’s second decade.
If my criterion is too restrictive, we could triple the number and
still there would be only 66 of such professors. So what about the
great majority of the professors of applied behavior analysis?
If It Is Not Feasible for Most of Us to Train Scientists, Who Can We Train?
Many poor scientists may be excellent practitioners (and of course
many excellent scientists may be poor practitioners). We should recognize
the value of our practitioner, teacher, and administrator skills and
teach those skills, without apologizing and without cloaking them
in the guise of scientific research. These are the skills most of
our graduates will be paid to use. If we cannot practice what we preach,
at least we should preach what we practice.
This is not an anti-intellectual, antiscience argument. It is merely
an argument that we should leave the training of scientists to those
who have science skills in their currently active repertoires; the
rest of us should concentrate on training practitioners in whatever
areas we effectively practice, whether it be education, industry,
the clinic or other areas. (One of the reviewers of this manuscript
raised the following point: “Do we need practitioner skills
in our repertoires to teach this? Some of my colleagues have neither practitioner nor research skills at the exemplary levels of excellence
advocated here.”)
Theses and Dissertations
As part of this preliminary front-end analysis, we have glanced at
our goals and a few of the relevant tasks needed to achieve those
goals. We have also considered the feasibility of teaching the various
repertoires. Now we examine the implications of this analysis for
theses and dissertations.
Applied students need high-quality training leading to the acquisition
and demonstration of professionally relevant repertoires, for example,
the skills of doing behavioral systems analysis in applied settings.
However, in many programs, when students attempt to do applied theses
and dissertations, they must distort their practical intentions to
create the illusion of science.
Proponents of the experimental dissertation often argue that the PhD
degree is a degree for scholars, not practitioners. Therefore the
dissertation must demonstrate scholarship, not practical skills. These
proponents of the experimental dissertation seem to imply that if
students want to be mere practitioners, then let them get PsyD degrees.
But the PhD is no longer just a degree for people who will become
professional scholars. I suspect most PhDs in applied behavior analysis
do not become professional researchers and scholars. And even if the
PsyD degree has the status of a PhD, few universities offer PsyD degrees
in applied behavior analysis. Perhaps this should change, at least
according to this preliminary needs analysis.
In considering the curricula for practitioners, one reviewer referred
to Redmon’s (1991) suggestion that interventions are maintained
only to the extent that their maintenance benefits the decision makers
in an organization: “Teachers and researchers only want to train
future teachers and researchers because of the benefits to them (e.g.,
publishing partners).”
Conclusion
The present analysis suggests that those few who are successfully
training productive scientists should be encouraged to train even
more. But the rest of us should take pride in concentrating on teaching
whatever useful skills we now possess (e.g., college teaching, one-on-one
clinical practice, behavioral systems analysis, or departmental administration).
And we should redesign our thesis and dissertation requirements
to help our students acquire skills more relevant to practice rather
than skills more relevant to publication.
References
Bailey, J.S. (1991). Marketing behavior analysis requires different
talk. Journal of Applied Behavior Analysis, 24, 445
– 448.
Geller, E.S. (1991a). Is applied behavior analysis technological to
a fault? Journal of Applied Behavior Analysis, 24, 401 –
406.
Geller, E.S. (1991b). Where’s the validity in social validity? Journal
of Applied Behavior Analysis, 24, 179 – 184.
Johnston, J.M. (1991). We need a new model of technology. Journal
of Applied Behavior Analysis, 24, 425 – 427.
Lindsley, O.R. (1991). From technical jargon to plain English for
application. Journal of Applied Behavior Analysis, 24, 449
– 458.
Mager, R.F. (1984). Goal analysis. Belmont, CA: David S.
Lake Publishers.
Mager, R.F. (1988). Making instruction work. Belmont, CA:
David S. Lake Publishers.
Malott, R.W. (1974). A behavioral-systems approach to the design of
human services. In D. Harshbarger & R. F. Maley (Eds.), Behavior
analysis and systems analysis: An integrative approach to mental health
programs (pp. 319 – 342). Kalamazoo, MI: Behaviordelia.
Malott, R.W., & Garcia, M.E. (1987). A goal directed model approach
for the design of human performance systems. Journal of Organizational
Behavior Management, 9, 125 – 159.
Mechner, F., & Cook, D.A. (1988). Performance analysis. Youth
Policy, 19(7), 36 – 42.
Redmon, W.K. (1991). Pinpointing the technological fault in applied
behavior analysis. Journal of Applied Behavior Analysis,
24, 441 – 444.
Reid, D.H. (1991). Technological behavior analysis and social impact:
A human services perspective. Journal of Applied Behavior Analysis,
24, 437 – 439.
Schwartz, I.S. (1991). The study of consumer behavior and social validity:
An essential partnership for applied behavior analysis. Journal
of Applied Behavior Analysis, 24, 241 – 244.
Stolz, S.B. (1981). Adoption of innovations from applied behavioral
research: “Does anybody care?” Journal of Applied
Behavior Analysis, 14, 491 – 505.
Watkins, C.L. (1988). Project Follow Through: A story of the identification
and neglect of effective instruction. Youth Policy, 10(7),
7 – 11.
Wolf, M.M. (1978). Social validity: The case for subjective measurement,
or how behavior analysis is finding its heart. Journal of Applied
Behavior Analysis, 11, 203 – 214.