Chapters 2, 3, 4, and 5

Advanced Study Objectives
for
Principles of Behavior
Richard W. Malott

7/28/17

Advanced Study Objectives1
Introduction
Goal: If you master these objectives, you will have an excellent understanding of the most
commonly confused issues in the field of behavior analysis, issues about which even many
professional behavior analysts seem confused. (Incidentally, the confusion usually takes the
form of erroneously classifying two different processes, phenomena, or procedures as if they
were the same or treating an event as if it were an example of one process, phenomenon, or
procedure when it’s really an example of a different one.) Graduate students are able to become
fairly fluent in their mastery of these objectives in an introductory seminar on behavior analysis.
In addition to written quizzes, I give oral quizzes where I ask each grad student to answer a
sample objective orally, fluently, without hesitancy. We repeat the assignment until all are fluent.
But these objectives may be above and beyond what undergrad students can achieve in the time
normally available for an undergraduate course; however, if they put in the extra time, they
should also be able to achieve such mastery.2
In some cases, even for grad students, the professor may need to supplement these study
objectives. For example, I often ask, "What is the common confusion?" and the answer isn't
always clear just from reading EPB. Think of this material as guided lecture notes. The notes
are not self-contained, but they should help you figure out what you don't yet know, so you can
check the book and ask your fellow students and your instructor for the answers you can't figure
out yourself. Go for it.
Overall Instructions: Compare and contrast concepts and procedures. When appropriate,
construct compare and contrast tables and diagram Skinner box, behavior modification3, and
everyday examples. Of course, you should also be able to define perfectly all of the relevant
technical terms.
1 Judi DeVoe originally developed these objectives as part of her doctoral comprehensive examination in 1993. Bill
Boetcher revised them for our MA seminar in the principles of behavior in 1994. I revised them for the EPB 3.0
instructor's manual in 1996.
2 I'll tell you, I'm really pleased with these objectives. They've evolved out of successive iterations of EPB and my
grad courses where I've used EPB. Contrary to our rhetoric that we determine our course objectives and then design
our textbooks and courses to accomplish those objectives, I find that I teach a course for a few years and only
gradually figure out what I'm trying to teach; it's too hard or I'm too lazy to get it right in advance (I use the ready,
fire, aim approach to behavioral-systems design). In any event, I'm pleased to see what I think are high-level
objectives evolving out of EPB and yet not interfering with the use of EPB at a more basic level.
3 By behavior-modification example, I mean a performance-management example, an added contingency where we
are intentionally trying to manage or modify someone's behavior, for example, where we give an M&M to the girl
every time she completes an arithmetic problem.
When asked to compare and contrast using examples, make the examples as identical as
possible except for the crucial contrasting difference or differences. (When you're
teaching the similarities and differences between concepts, that's also the best way to do
it.)
Common confusion: The common confusion almost always, if not always, involves
treating two different phenomena as if they were the same or treating two similar
phenomena as if they were different. Usually it's the former: failing to see important
differences between two phenomena, procedures, or concepts.
The goal of science: The goal of science is the understanding of the world and the way it
works. Major intellectual breakthroughs in science often involve figuring out what’s the
same and what’s not (i.e., unconfusing the common confusions).
• Throughout these objectives, keep in mind what I'm suggesting is the goal of
science and the basis for major intellectual breakthroughs. And, while I don't
want to imply that these objectives and their related material constitute major
intellectual breakthroughs in science, they might be small breakthroughs for
behavior analysis. So be able to express a warm feeling of appreciation for each
of these humble little breakthroughs as we go along.
NOTES PAGE
You’ll find that every other page is blank. This is so you can take notes and make
comments.
Chapters 2, 3, 4, and 5
The Four Basic Contingencies
1. Reinforcement by the presentation of a reinforcer, reinforcement by the removal of an
aversive condition, punishment by the presentation of an aversive condition (or simply
punishment), and punishment by the loss of a reinforcer (penalty).

   a. Construct the 2x2 contingency table.

2. Positive and negative reinforcement and positive and negative punishment.

   a. Compare and contrast in terms of the preferred nomenclature (names) in EPB.

   b. Diagram the three types of examples (Skinner box, behavior modification
      [performance-management contingency], and everyday [natural contingency]) for
      each of the four basic contingencies. Note 1: In this course, always include the
      reinforcement contingency for the response of interest when you diagram a
      punishment or penalty contingency. Note 2: When we ask for the natural
      contingency, we do not necessarily mean the ineffective natural contingency; it
      would probably be an effective natural contingency (unless we ask for the
      ineffective natural contingency).

   c. What's the common confusion?

3. According to the toothpaste theory, what's wrong with talking about expressing things,
   not only expressing anger but even expressing love? Beware of the verb to express; it
   will almost always lead you away from the contingencies controlling the behavior of
   concern.
Chapters 2, 3, 4, and 5
The Four Basic Contingencies
ANSWERS

1. Reinforcement by the presentation of a reinforcer, reinforcement by the removal of an
aversive condition, punishment by the presentation of an aversive condition (or simply
punishment), and punishment by the loss of a reinforcer (penalty).

   a. Construct the 2x2 contingency table.

      Stimulus/Event or Condition   Present          Remove
      Reinforcer                    Reinforcement    Penalty
      Aversive condition            Punishment       Escape

2. Positive and negative reinforcement and positive and negative punishment.

   a. Compare and contrast in terms of the preferred nomenclature (names) in EPB.

      Traditional                  Ours
      Positive reinforcer          Reinforcer
      Positive reinforcement       Reinforcement by the presentation of a reinforcer
      Negative reinforcer          Aversive condition
      Negative reinforcement       Reinforcement by the removal of an aversive condition

   b. Diagram the three types of examples (Skinner box, behavior modification
      [performance-management contingency], and everyday [natural contingency]) for
      each of the four basic contingencies.

      REINFORCEMENT
      Skinner box: Rudolph has no water → Rudolph presses lever → Rudolph has water
      Behavior modification: No candy → Use toilet → Candy
      Performance management: No points → Write one page → Points

      ESCAPE
      Skinner box: Shock → Rudolph presses lever → No shock
      Behavior modification: Slap → Use toilet → No slap
      Performance management: Aversive thoughts about losing points → Write one page →
      No aversive thoughts about losing points

      PUNISHMENT
      Skinner box: No shock → Rudolph presses lever → Shock
      Behavior modification: No slap → Wet pants → Slap
      Performance management: No criticism → Watch T.V. → Criticism

      PENALTY
      Skinner box: Rudolph has water → Rudolph presses lever → Rudolph has no water
      Behavior modification: Candy → Wet pants → No candy
      Performance management: Points → Watch T.V. → No points

   c. What's the common confusion?
      Answer: People think negative reinforcement will decrease behavior and positive
      punishment will increase behavior.

3. According to the toothpaste theory, what's wrong with talking about expressing things,
   not only expressing anger but even expressing love?
   ANSWER: Beware of the verb to express; it will almost always lead you away from the
   contingencies controlling the behavior of concern.
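As a study aid, the 2x2 table can be encoded as a tiny lookup. This is a sketch of my own (the encoding and the function name are mine, not EPB's): crossing the kind of stimulus with the operation performed on it yields the four basic contingencies.

```python
# A sketch of the 2x2 contingency table as a lookup (my own encoding):
# (kind of stimulus, operation on it) -> name of the basic contingency.

CONTINGENCY = {
    ("reinforcer", "present"): "reinforcement",
    ("reinforcer", "remove"): "penalty",
    ("aversive condition", "present"): "punishment",
    ("aversive condition", "remove"): "escape",
}

def classify(stimulus: str, operation: str) -> str:
    """Name the basic contingency for a response-contingent stimulus change."""
    return CONTINGENCY[(stimulus, operation)]

print(classify("reinforcer", "present"))        # reinforcement
print(classify("aversive condition", "remove")) # escape
```

Note that the lookup also captures the common confusion: "remove" paired with an aversive condition is still reinforcement (escape), not punishment.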
Chapter 6
Extinction

4. Penalty contingency vs. extinction following reinforcement (with the relevant
   reinforcement contingencies for the response of interest as well).

   a. Examples from the three areas:
      i. Skinner box
      ii. Behavior mod
      iii. Everyday life

   b. Using the preceding examples, distinguish between not giving the reinforcer
      maintaining the behavior and contingently removing a separate reinforcer.

   c. What's the common confusion?

   d. Be able to construct, describe, explain, and illustrate the following table showing
      the differences between extinction, response cost, and time-out.

      Differences Between Extinction Following Reinforcement, Response Cost, and Time-Out

                       Procedure                                            Process or Results
      Extinction       Stop giving the reinforcer                           Response frequency decreases
      Response cost    Contingent loss of a reinforcer currently possessed  Rate may decrease rapidly
      Time-out         Contingent removal of access to a reinforcer         Rate may decrease rapidly

5. Extinction of escape vs. not presenting the aversive before condition (e.g., not turning
   on the shock in the Skinner box).

   a. What's the common confusion?

6. Extinction of cued avoidance (answer this after reading Chapter 15).

   a. What's the common confusion?

7. An everyday example of resistance to extinction.

8. Diagram extinction of avoidance.
Chapter 6
Extinction
ANSWERS

4. Penalty contingency vs. extinction following reinforcement.

   a. Examples from the three areas (Skinner box, behavior mod., and everyday), with the
      relevant reinforcement contingencies for the response of interest as well.

      Partial answer – behavior mod: the absence of a reinforcer is made contingent on the
      response. For example (notice that when we do compare-and-contrast examples, we
      make the pair as identical as possible except for the crucial difference; that's
      standard, good instructional technology; you should do the same):

      Dysfunctional reinforcement contingency:
      Helen has no attention. → Helen walks into nurses' office. → Helen has attention.

      Performance-management penalty contingency:
      Helen has tokens. → Helen walks into nurses' office. → Helen has fewer tokens.

      Performance-management extinction "contingency":
      Helen has no attention. → Helen walks into nurses' office. → Helen still has no attention.

   b. Using the preceding examples, distinguish between not giving the reinforcer
      maintaining the behavior and contingently removing a separate reinforcer.

   c. What's the common confusion?
      Answer: People often erroneously offer a penalty contingency as an example of
      extinction. Their mistake is that, in a penalty contingency, the response still has an
      effect – it removes a separate reinforcer:

      Reinforcement:
      Before: Tom sees no shocked expression. → Behavior: Tom makes obnoxious remark. →
      After: Tom sees shocked expression.

      Penalty:
      Before: Tom has attention of victim. → Behavior: Tom makes obnoxious remark. →
      After: Tom has no attention of victim.

      But the following would be the correct example of extinction:

      Extinction:
      Before: Tom sees no shocked expression. → Behavior: Tom makes obnoxious remark. →
      After: Tom sees no shocked expression.

      The acid test for extinction is to remember that it's like disconnecting the lever in
      the Skinner box. During extinction, the response has no effect.

   d. Be able to construct, describe, explain, and illustrate the following table showing
      the differences between extinction, response cost, and time-out.

      Differences Between Extinction Following Reinforcement, Response Cost, and Time-Out

                       Procedure                                            Process or Results
      Extinction       Stop giving the reinforcer                           Response frequency decreases
      Response cost    Contingent loss of a reinforcer currently possessed  Rate may decrease rapidly
      Time-out         Contingent removal of access to a reinforcer         Rate may decrease rapidly

5. Extinction of escape vs. not presenting the aversive before condition (e.g., not turning
   on the shock in the Skinner box).

   a. What's the common confusion?
      Answer: People think not presenting the aversive before condition is extinction of
      escape; but in extinction, we would have: shock on, press lever, shock stays on.
      Remember: in extinction, the response must still occur, but it no longer produces the
      outcome.

6. Extinction of cued avoidance (answer this after reading Chapter 15).

   a. What's the common confusion?
      Answer: The confusion is that people think extinction of cued avoidance involves not
      presenting the warning stimulus.

7. Here's my better (I hope) everyday example of resistance to extinction:

      Dan lives in an old house. The basement light has a defective switch, and he usually
      has to flip it several times for the light to come on – sometimes 2 times, sometimes
      5, even up to 8 times in a row. The other light switches in the house work fine: every
      time he flips a light switch, the lights come on. A week ago, his kitchen light didn't
      come on when he flipped the switch; he tried it two more times and then gave up; he
      decided that the light bulb must have burned out. (And it was.) Yesterday, as Dan
      tried to turn on the light in the basement, he flipped the switch not just 4 times,
      not 8 times, not even 12 times, but 18 times in a row (!) before he gave up and
      checked the light bulb – it was burned out, too! – Peter Dams, P610, Fall 1998

   a. Has Peter got it? If not, why not?
   b. You got a good one?

8. Diagram extinction of avoidance.

      Cued avoidance:
      SD: Buzzer on. Before: Shock in 3 seconds. → Behavior: Press lever. →
      After: No shock in 3 seconds. SΔ: Buzzer off.

      Extinction of cued avoidance:
      SD: Buzzer on. Before: Shock in 3 seconds. → Behavior: Press lever. →
      After: Shock in 3 seconds. SΔ: Buzzer off.
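The acid test lends itself to a toy sketch, entirely my own (the state dictionary and numbers are hypothetical, not from EPB): in extinction the response changes nothing at all, while in a penalty contingency the response still has an effect, removing a separate reinforcer.

```python
# A toy model of the acid test for extinction (hypothetical state and numbers):
# extinction = the lever is "disconnected," so responding changes nothing;
# penalty = responding still has an effect, removing a separate reinforcer.

def extinction(state: dict) -> dict:
    """The lever is 'disconnected': responding changes nothing."""
    return dict(state)

def penalty(state: dict) -> dict:
    """Responding contingently removes a separate reinforcer (a token)."""
    new = dict(state)
    new["tokens"] = max(0, new["tokens"] - 1)
    return new

before = {"attention": 0, "tokens": 5}
assert extinction(before) == before  # no effect: passes the acid test
assert penalty(before) != before     # still an effect: not extinction
```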
Chapter 7
Differential Reinforcement and Differential Punishment

9. Differential reinforcement vs. plain-vanilla, old-fashioned reinforcement.

   a. Compare and contrast.

   b. Illustrate the relation between differential reinforcement and plain reinforcement
      using diagrams of a pair of examples based on the presentation of reinforcers.

10. Differential escape vs. plain-vanilla, old-fashioned escape.

   a. Compare and contrast.

11. Illustrate the relation between differential escape (the removal of an aversive
    condition) and plain, old-fashioned escape using diagrams of a Skinner-box example.

12. Differential reinforcement vs. differential penalizing.

   a. Diagram a pair of Skinner-box examples showing the differences.
Chapter 7
Differential Reinforcement and Differential Punishment
ANSWERS

9. Differential reinforcement vs. plain-vanilla, old-fashioned reinforcement.

   a. Compare and contrast.
      Compare: They are essentially identical: both produce a high frequency of one
      response class.
      Contrast: Usually differential reinforcement also involves decreasing the frequency
      of a previously reinforced response class.

   b. Illustrate the relation between differential reinforcement and plain-vanilla,
      old-fashioned reinforcement using diagrams of a pair of examples based on the
      presentation of reinforcers.
      Answer:

      Differential Reinforcement
      Before: Rudolph has no water. → Behavior: Rudolph presses lever forcefully. →
      After: Rudolph has water.
      Before: Rudolph has no water. → Behavior: Rudolph presses lever weakly. →
      After: Rudolph has no water.

      Plain-Vanilla Reinforcement
      Before: Rudolph has no water. → Behavior: Rudolph presses lever. →
      After: Rudolph has water.

      In other words, we call the contingency differential reinforcement if there is a
      non-zero value (along a response dimension of interest) of the unreinforced set of
      responses. To illustrate what we mean by "non-zero" value: Suppose we're going to
      differentially reinforce lever pressing of 20-g force. Anything under 20 g will not
      be reinforced. For example, a 19-g lever press will not be reinforced (19 g is not
      very close to zero, so that's a non-zero value). But if we put a naïve rat in the
      Skinner box and reinforced all lever presses, that wouldn't be considered
      differential reinforcement; it would be just plain-vanilla reinforcement. Actually,
      the rat might even touch the lever or press it with ½ mg of force and not receive a
      reinforcer, because that isn't enough force to activate the programming equipment and
      deliver the reinforcer. By convention, we would not call this differential
      reinforcement; again, that's just plain-vanilla reinforcement. In other words, the
      difference between not pressing the lever at all and pressing it with ½ mg of force
      is so trivial that we just count it all as not pressing the lever.

10. Differential escape vs. plain escape.
    Compare: The contingency is the same for both escape and differential escape.
    Contrast: The only difference is the "behavior" that isn't reinforced. For example,
    for differential escape reinforcement in the Skinner box, the unreinforced behavior
    might be lever presses that are too wimpy. But for simple escape reinforcement, the
    only time escape doesn't occur is when Rudolph simply fails to press the lever at all.

11. Illustrate the relation between differential escape (the removal of an aversive
    condition) and plain, old-fashioned escape using diagrams of a Skinner-box example.

      Differential Escape
      Before: Shock is on. → Behavior: Rudolph presses lever forcefully. →
      After: Shock is off.
      Before: Shock is on. → Behavior: Rudolph presses lever weakly. →
      After: Shock is on.

      Plain-Vanilla Escape
      Before: Shock is on. → Behavior: Rudolph presses lever. → After: Shock is off.

12a. Differential reinforcement vs. differential penalizing. Diagram a pair of Skinner-box
     examples showing the differences.

      Differential Reinforcement
      Before: Rudolph has no water. → Behavior: Rudolph presses lever forcefully. →
      After: Rudolph has water.
      Before: Rudolph has no water. → Behavior: Rudolph presses lever weakly. →
      After: Rudolph has no water.

      That was differential reinforcement. Now the next example is a completely different
      example. In other words, the preceding contingencies are not concurrent with the
      following contingencies; in fact, these are two completely different rats. The next
      rat's name is Ralph.

      Before we look at the penalty contingency, we must look at the reinforcement
      contingency, the one causing Ralph to press the lever in the first place. And just
      like the example above with Rudolph, water-deprived Ralph's lever pressing is
      reinforced with water.

      Reinforcement
      Before: Ralph has no water. → Behavior: Ralph presses lever. →
      After: Ralph has water.

      But in the case of this reinforcement contingency, Ralph will get water reinforcers
      regardless of the force of his lever press; in other words, this is simple
      reinforcement, not differential reinforcement. (Also note that the force of the lever
      press is irrelevant with regard to the water-reinforcement contingency, though not
      with regard to the penalty contingency that's sneaking onto the scene next.)

      Differential Penalty
      We've put a bowl of food pellets in the Skinner box, and Ralph can eat them whenever
      he likes. However, if Ralph presses the water-reinforced lever too weakly, he loses
      his food – penalty. But if he presses the lever forcefully, or if he doesn't press it
      at all, the food stays in the Skinner box.

      Before: Ralph has food. → Behavior: Ralph presses lever forcefully. →
      After: Ralph has food.
      Before: Ralph has food. → Behavior: Ralph presses lever weakly. →
      After: Ralph has no food.

      • Differential reinforcement: Put ATM card in slot, push wrong buttons, and don't
        get money.
      • Differential penalty: Put ATM card in slot, push wrong buttons, and lose card.
      • Or take it to teaching the kids table manners at McDonald's.
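The 20-g criterion above can be sketched as code. The threshold values come from the text; the function names and the tiny measurement floor are my own assumptions.

```python
# A sketch of the non-zero-value convention: differential reinforcement has a
# criterion (20 g here, per the text) with unreinforced presses well above zero;
# plain-vanilla reinforcement pays off for any press the equipment can register.

MEASUREMENT_FLOOR_G = 0.001  # presses around 1/2 mg don't register as presses
CRITERION_G = 20.0

def differential_reinforcement(force_g: float) -> bool:
    """Reinforce only presses at or above the 20-g criterion."""
    return force_g >= CRITERION_G

def plain_reinforcement(force_g: float) -> bool:
    """Reinforce any press hard enough to operate the programming equipment."""
    return force_g > MEASUREMENT_FLOOR_G

assert not differential_reinforcement(19.0)  # 19 g: a non-zero, unreinforced value
assert plain_reinforcement(19.0)             # plain vanilla: any real press pays off
assert not plain_reinforcement(0.0005)       # ~1/2 mg: counts as not pressing at all
```

The last line is the book's convention in miniature: the ½-mg "press" falls below the equipment's floor, so by convention it counts as not pressing, and the contingency stays plain vanilla.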
Chapter 8 – Shaping

13. The differential-reinforcement procedure vs. shaping.

   a. Compare and contrast with a pair of Skinner-box examples. (Use force of the lever
      press as the response dimension.)

14. Fixed-outcome shaping vs. variable-outcome shaping.

   a. Give a pair of contrasting Skinner-box examples (you'll have to create your own
      Skinner-box example of variable-outcome shaping, but I'll bet you can do it).

   b. Also create a pair of contrasting human examples.

   c. Fill in the compare-and-contrast table.

15. Shaping with reinforcement vs. shaping with punishment.

   a. Give contrasting Skinner-box examples using force of the lever press.
Chapter 8 – Shaping
ANSWERS

13. The differential-reinforcement procedure vs. shaping.

   a. Compare and contrast with a pair of Skinner-box examples. (Use force of the lever
      press as the response dimension.)

      Differential Reinforcement (criterion fixed at 20 g from the start)
      Forceful press (20 g): 1 drop H2O
      Weak press (5 g, 10 g, or 15 g): No H2O

      Shaping (criterion raised in steps)
      Step 1 – criterion 10 g: 10-g press → H2O; 5-g press → No H2O
      Step 2 – criterion 15 g: 15-g press → H2O; 10-g press → No H2O
      Step 3 – criterion 20 g: 20-g press → H2O; 15-g press → No H2O

14. Fixed-outcome shaping vs. variable-outcome shaping.

   a. Give a pair of contrasting Skinner-box examples.

      Fixed-Outcome Shaping
      Criterion: 10 g, then 15 g, then 20 g
      Press meets the current criterion: 1 drop H2O (the same outcome at every step)
      Press below the current criterion (5 g, 10 g, 15 g): No H2O

      Variable-Outcome Shaping
      Criterion: 10 g, then 15 g, then 20 g
      Press meets the current criterion: 1 drop, then 2 drops, then 3 drops of H2O
      Press below the current criterion (5 g, 10 g, 15 g): No H2O

   b. Also create a pair of contrasting human examples.

   c. Fill in the compare-and-contrast table.

                                      Fixed               Variable
      Number of outcome sizes         One                 Many
      Regressions to earlier levels   No reinforcers      Weaker reinforcers
      Usual source of shaping         Behavior modifier   Nature

15. Shaping with reinforcement vs. shaping with punishment.

   a. Give contrasting Skinner-box examples using force of the lever press.

      Shaping with Punishment
      Criterion: 10 g, then 15 g, then 20 g
      Press meets the current criterion (10 g, 15 g, 20 g): No shock
      Press below the current criterion (5 g, 10 g, 15 g): Shock
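The fixed- vs. variable-outcome contrast can be sketched as two small functions. This is my own framing (function names and the step index are mine); the successive force criteria of 10 g, 15 g, and 20 g come from the answer above.

```python
# A toy sketch of fixed- vs. variable-outcome shaping across the successive
# force criteria from the answer: 10 g, then 15 g, then 20 g. The return value
# is the number of water drops delivered.

CRITERIA_G = [10.0, 15.0, 20.0]

def fixed_outcome(press_g: float, step: int) -> int:
    """Meeting the current criterion always earns the same outcome: 1 drop."""
    return 1 if press_g >= CRITERIA_G[step] else 0

def variable_outcome(press_g: float, step: int) -> int:
    """Closer approximations earn bigger outcomes: 1, 2, then 3 drops."""
    return (step + 1) if press_g >= CRITERIA_G[step] else 0

assert fixed_outcome(15.0, 1) == fixed_outcome(20.0, 2) == 1  # always one drop
assert variable_outcome(20.0, 2) == 3                         # outcome grows
assert variable_outcome(12.0, 1) == 0                         # below criterion
```

The one-line difference between the two functions is the whole contrast: in fixed-outcome shaping only the criterion moves; in variable-outcome shaping the size of the reinforcer moves with it.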
Chapter 9 – Motivating Operations

16. Compare and contrast the EPB definition of motivating operation (MO) with Michael's by
    constructing and explaining this table. (Note that Michael and Malott are saying the
    same thing, except Michael's saying a little bit more.)

      Definitions of Motivating Operation

      Affects learning –
         EPB: how fast or how well it is learned (the DV)
         Michael: reinforcing effectiveness of other events (IV) (which is really measured
         by how fast or well the behavior is learned)

      Affects performance –
         EPB: after behavior is learned (the DV)
         Michael: the frequency of occurrence of the type of behavior consequated by those
         events (DV) (that is, after behavior is learned); this is Michael's evocative
         effect
17. Now, please illustrate each cell of the EPB/Michael comparison table with a
    Skinner-box, water-reinforcer example. Do so by filling in the following table.

      The Motivating Operation and Learning
      (columns: Learning – Monday; Performance – Tuesday, the measurement session)

      Procedure
         Monday: Only one lever press is reinforced by the ________ ______________; then
         the session is ended.
         Tuesday: Describe: ______________________. Why? ______________________.

      Deprivation (motivating operation)
         Monday, Group 1: naïve, 24-hour water-deprived rats.
         Tuesday: Both groups are water deprived for ______________, so that _____________
         __________________. This allows us to see _____________________. Compared to the
         other group, the mean latency for this group will be (a) longer / (b) shorter.
         Monday, Group 2: naïve, 6-hour water-deprived rats.
         Tuesday: Compared to the other group, the mean latency for this group will be
         (a) longer / (b) shorter.

      Demonstration
         This demonstrates the effect of the motivating operation on learning (in Malott's
         terms) or the effect of the motivating operation on the reinforcing effectiveness
         of ___________, in Michael's terms. Because the ______________ made the water a
         more effective reinforcer on Monday, the group with higher _________ on Monday had
         greater learning on Monday. This is demonstrated by the ______________ on Tuesday.
         The ______________ on Tuesday shows what was learned on Monday. The reason we just
         look at the latency of the first response is that we're looking at __________,
         before there is any chance for _________ on Tuesday, before any reinforcement or
         extinction can occur on Tuesday.
      The Motivating Operation and Performance
      (columns: Learning – Monday; Performance – Tuesday, the measurement session)

      Procedure
         Monday: Only one lever press is reinforced; then the session is ended.
         Tuesday: We measure the latency of the first response, from the time we put the
         rat into the Skinner box. The shorter the mean latency for each group of rats on
         Tuesday, the more effective was that deprivation level on Tuesday, because
         everything was the same for the two groups on Monday during the learning.

      Deprivation (motivating operation)
         Monday, Group 1: naïve, 24-hour water-deprived rats. In other words, rats in this
         experiment aren't the same ones as in the previous experiment.
         Tuesday: Only Group 1 is deprived for 24 hours today. This allows us to see the
         effect of the differences in deprivation during performance on Tuesday. Compared
         to the other group, the mean latency for this group will be
         (a) longer / (b) shorter.
         Monday, Group 2: naïve, 24-hour water-deprived rats – just like Group 1.
         Tuesday: Only Group 2 is deprived for only 6 hours today. Compared to the other
         group, the mean latency for this group will be (a) longer / (b) shorter.

      Demonstration
         This demonstrates the effect of the motivating operation on performance (in
         Malott's terms) or the effect of the motivating operation on the frequency
         (latency) of occurrence of the type of behavior (lever presses) consequated
         (reinforced) by those events (water), in Michael's terms.
         Because water deprivation affected performance on Tuesday, the group with longer
         deprivation on Tuesday had shorter latencies on Tuesday. The reason we just look
         at the latency of the first response is that we're looking at performance, before
         there is any chance for learning on Tuesday, before any reinforcement or
         extinction can occur on Tuesday.
18. Now, please illustrate each cell of the EPB/Michael comparison table with a
    Skinner-box, shock-escape example. Do so by filling in the following table.

      The Motivating Operation and Learning
      (columns: Learning – Monday; Performance – Tuesday, the measurement session)

      Procedure
         Monday: Only one lever press is reinforced by the ________ ______________; then
         the session is ended.
         Tuesday: Describe: ______________________. Why? ______________________.

      Deprivation (motivating operation)
         Monday, Group 1: naïve, high-intensity shock.
         Tuesday: Both groups receive the ______ ________, so that _____________
         __________________. This allows us to see _____________________. Compared to the
         other group, the mean latency for this group will be (a) longer / (b) shorter.
         Monday, Group 2: naïve, low-intensity shock.
         Tuesday: Compared to the other group, the mean latency for this group will be
         (a) longer / (b) shorter.

      Demonstration
         This demonstrates the effect of the motivating operation on learning (in Malott's
         terms) or the effect of the motivating operation on the reinforcing effectiveness
         of ___________, in Michael's terms. Because the ______________ made the shock
         termination a more effective reinforcer on Monday, the group with higher
         _________ on Monday had greater learning on Monday. This is demonstrated by the
         ______________ on Tuesday. The ______________ on Tuesday shows what was learned on
         Monday. The reason we just look at the latency of the first response is that we're
         looking at __________, before there is any chance for _________ on Tuesday, before
         any reinforcement or extinction can occur on Tuesday.
      The Motivating Operation and Performance with Shock Escape
      (columns: Learning – Monday; Performance – Tuesday, the measurement session)

      Procedure
         Monday: Only one lever press is _______; then the session is __________.
         Tuesday: We measure the latency of ___ _________, from the time we put the rat
         into the Skinner box. The shorter the mean latency for each group of rats on
         Tuesday, the more effective was that __________ level on Tuesday, because
         everything was the ________ for the two groups on Monday during the learning.

      Deprivation (motivating operation)
         Monday, Group 1: naïve, _________ ______________________.
         Tuesday: Only Group 1 receives __________ ____________________________ today. This
         allows us to see the effect of the differences in ______ ___________ during
         __________ on Tuesday. Compared to the other group, the mean latency for this
         group will be (a) longer / (b) shorter.
         Monday, Group 2: naïve, _________ ______________________.
         Tuesday: Only Group 2 _________________ ____________________________ today.
         Compared to the other group, the mean latency for this group will be
         (a) longer / (b) shorter.

      Demonstration
         This demonstrates the effect of the motivating operation on ____________ (in
         Malott's terms) or the effect of the motivating operation on the frequency
         (latency) of occurrence of the type of behavior (lever presses) consequated
         (reinforced) by those events (_________________________), in Michael's terms.
         Because _____________ affected performance on Tuesday, the group with __________
         __________ ____________ on Tuesday had shorter latencies on Tuesday. The reason we
         just look at the latency of the first response is that we're looking at
         __________, before there is any chance for _____________ on Tuesday, before any
         _______________ or ___________ can occur on Tuesday.
19. Approaching7 (or getting) reinforcers vs. escaping aversive conditions.

   a. Diagram and comment on the implications of these contingencies:

      • the thermostat
        Before: ________ → Behavior: ________ → After: ________
        Before: ________ → Behavior: ________ → After: ________

      • the cake and the blizzard
        Before: ________ → Behavior: ________ → After: ________
        Before: ________ → Behavior: ________ → After: ________

      • the real rat experiment
        Before: ________ → Behavior: ________ → After: ________
        Before: ________ → Behavior: ________ → After: ________

      • the imaginary rat experiment
        Before: ________ → Behavior: ________ → After: ________
        Before: ________ → Behavior: ________ → After: ________

   b. Conclusion from these diagrams?

7 In traditional approaches to learning, they often find it convenient to talk about
approach/avoidance. That means approaching or being attracted to or doing what it takes to
get reinforcers, and avoiding or escaping aversive conditions. And for similar reasons of
convenience, I use approach in the above context.
Chapter 9 – Motivating Operations8,9
ANSWERS

16. Compare and contrast the EPB definition of motivating operation (MO) with Michael's by
    constructing and explaining this table. (Note that Michael and Malott are saying the
    same thing, except Michael's saying a little bit more.)

      Definitions of Motivating Operation

      Affects learning –
         EPB: how fast or how well it is learned (the DV)
         Michael: reinforcing effectiveness of other events (IV)10 (which is really
         measured by how fast or well the behavior is learned)

      Affects performance –
         EPB: after behavior is learned (the DV)
         Michael: the frequency of occurrence of the type of behavior consequated by those
         events (DV) (that is, after behavior is learned); this is Michael's evocative
         effect

8 Since the 5th edition of Principles of Behavior was published, Michael and his colleagues
have published an article advocating the use of "motivating operation" in place of the old
term "establishing operation." Therefore, we are moving toward using "motivating" rather
than "establishing" operation. Furthermore, almost everyone seems to agree that "motivating
operation" is easier to understand than "establishing operation."

9 These headings refer to topics, not chapter titles.

10 Reinforcing effectiveness is an independent variable, but it in turn is controlled by
another independent variable, level of deprivation. Level of deprivation (independent
variable 1) affects reinforcer effectiveness (independent variable 2), which in turn
affects the dependent variable (behavior – either the amount of learning or the degree of
the performance).
17. Illustrate each cell of the previous table with a Skinner-box, water-reinforcer
    example:

Answer:

      The Motivating Operation and Learning
      (columns: Learning – Monday; Performance – Tuesday, the measurement session)

      Procedure
         Monday: Only one lever press is reinforced; then the session is ended. We want to
         see the effects of this single instance of reinforcement (learning) on
         performance the next day.
         Tuesday: We measure the latency of the first response, from the time we put the
         rat into the Skinner box. The shorter the mean latency for each group of rats on
         Tuesday, the more effective was that single reinforcement on Monday.

      Deprivation (motivating operation)
         Monday, Group 1: naïve, 24-hour water-deprived rats.
         Tuesday: Both groups are deprived for 24 hours today, so that the conditions are
         identical for both groups. This allows us to see the effect of the differences in
         deprivation during the learning on Monday. Compared to the other group, the mean
         latency for this group will be (b) shorter.
         Monday, Group 2: naïve, 6-hour water-deprived rats.
         Tuesday: Compared to the other group, the mean latency for this group will be
         (a) longer.

      Demonstration
         This demonstrates the effect of the motivating operation on learning (in Malott's
         terms) or the effect of the motivating operation on the reinforcing effectiveness
         of water presentation, in Michael's terms. Because the water deprivation made the
         water a more effective reinforcer on Monday, the group with longer deprivation on
         Monday had greater learning on Monday. This is demonstrated by the shorter latency
         on Tuesday. The shorter latency on Tuesday shows what was learned on Monday. The
         reason we just look at the latency of the first response is that we're looking at
         performance, before there is any chance for learning on Tuesday, before any
         reinforcement or extinction can occur on Tuesday.
The Motivating Operation and Performance

Day
- Monday (learning)
- Tuesday (performance; measurement session)

Procedure
- Monday: Only one lever press is reinforced; then the session is ended.
- Tuesday: We measure the latency of the first response, from the time we put the rat into the Skinner box. The shorter the mean latency for each group of rats on Tuesday, the more effective was that deprivation level on Tuesday, because everything was the same for the two groups on Monday during the learning.

Deprivation (motivating operation)
- Group 1: naïve, 24-hour water-deprived rats. (In other words, the rats in this experiment aren't the same ones as in the previous experiment.) On Tuesday, only Group 1 is deprived for 24 hours. This allows us to see the effect of the differences in deprivation during performance on Tuesday. Compared to the other group, the mean latency for this group will be b. shorter.
- Group 2: naïve, 24-hour water-deprived rats, just like Group 1. On Tuesday, only Group 2 is deprived for only 6 hours. Compared to the other group, the mean latency for this group will be a. longer.

Demonstration: This demonstrates the effect of the motivating operation on performance (in Malott's terms) or the effect of the motivating operation on the frequency (latency) of occurrence of the type of behavior (lever presses) consequated (reinforced) by those events (water), in Michael's terms. Because water deprivation affected performance on Tuesday, the group with longer deprivation on Tuesday had shorter latencies on Tuesday. The reason we look only at the latency of the first response is that we're looking at performance before there is any chance for learning on Tuesday, before any reinforcement or extinction can occur on Tuesday.
18. Now, please illustrate each cell of the EPB/Michael comparison table with a Skinner-box, shock-escape example. Do so by filling in the following table.
The Motivating Operation and Learning (shock escape)

Day
- Monday (learning)
- Tuesday (performance; measurement session)

Procedure
- Monday: Only one lever press is reinforced by the termination of the shock; then the session is ended. Why? We want to see the effects of this single instance of reinforcement on learning the next day.
- Tuesday: Describe: We measure the latency between shock onset and the response.

Shock intensity (motivating operation)
- Group 1: naïve, high-intensity shock. On Tuesday, both groups receive the high intensity, so that conditions are identical. This allows us to see the effects of learning on Monday. Compared to the other group, the mean latency for this group will be b. shorter.
- Group 2: naïve, low-intensity shock. Compared to the other group, the mean latency for this group will be a. longer.

Demonstration: This demonstrates the effect of the motivating operation on learning (in Malott's terms) or the effect of the motivating operation on the reinforcing effectiveness of shock termination (in Michael's terms). Because the high intensity made the shock termination a more effective reinforcer on Monday, the group with the higher shock on Monday had greater learning on Monday. This is demonstrated by the shorter latency on Tuesday. The performance on Tuesday shows what was learned on Monday. The reason we look only at the latency of the first response is that we're looking at performance before there is any chance for learning on Tuesday, before any reinforcement or extinction can occur on Tuesday.
The Motivating Operation and Performance with Shock Escape

Day
- Monday (learning)
- Tuesday (performance; measurement session)

Procedure
- Monday: Only one lever press is reinforced; then the session is ended.
- Tuesday: We measure the latency of the first response, from the time we put the rat into the Skinner box. The shorter the mean latency for each group of rats on Tuesday, the more effective was that intensity level on Tuesday, because everything was the same for the two groups on Monday during the learning.

Shock intensity (motivating operation)
- Group 1: naïve, high intensity. On Tuesday, only Group 1 receives the high intensity. This allows us to see the effect of the differences in shock intensity during performance on Tuesday. Compared to the other group, the mean latency for this group will be b. shorter.
- Group 2: naïve, high intensity, just like Group 1. On Tuesday, only Group 2 receives low-intensity shock. Compared to the other group, the mean latency for this group will be a. longer.

Demonstration: This demonstrates the effect of the motivating operation on performance (in Malott's terms) or the effect of the motivating operation on the frequency (latency) of occurrence of the type of behavior (lever presses) consequated (reinforced) by those events (termination of shock), in Michael's terms. Because shock intensity affected performance on Tuesday, the group with high-intensity shock on Tuesday had shorter latencies on Tuesday. The reason we look only at the latency of the first response is that we're looking at performance before there is any chance for learning on Tuesday, before any reinforcement or extinction can occur on Tuesday.

19. Approaching (or getting) reinforcers vs. escaping aversive conditions
a. Diagram and comment on the implications of these contingencies:
 the thermostat
Before → Behavior → After
 the real rat experiment
Before → Behavior → After
Before → Behavior → After
Before → Behavior → After
 the imaginary rat experiment
Before → Behavior → After
Before → Behavior → After
 the cake and the blizzard
Before → Behavior → After
Before → Behavior → After
19b. Conclusions from these diagrams?
The answer: If the EO is an SR, then the contingency that the EO supports is a reinforcement contingency, not an escape contingency. For example, if you could take a pill that would make you hungry all the time, just so you can eat cake, then cake is a reinforcer and it is a reinforcement contingency. Likewise, if you could take a pill to make yourself cold all the time just so you can get warm, then that too is a reinforcement contingency; if not, it is an escape contingency.
Chapter 11
Learned Reinforcers and Learned Aversive Conditions
20. Extinction of a previously reinforced response vs. removing the value of learned reinforcers and aversive conditions by stopping the pairing procedure.
a. What's the common confusion?
Answer:_____________________
b. Diagram reinforcement contingencies and pairing procedures in the Skinner box to show the crucial differences.
c. Diagram escape contingencies and pairing procedures in the Skinner box to show the crucial differences.
Reinforcement (with water)
Before → Behavior → After
Extinction
Before → Behavior → After
Pairing Procedure with click and water to establish the click as a learned reinforcer
Breaking the Pairing Procedure (unpairing); the learned reinforcer loses its value

Escape (from shock)
Before → Behavior → After
Extinction
Before → Behavior → After
Pairing Procedure with a buzzer and shock to establish the buzzer as a learned aversive condition
Breaking the Pairing Procedure (unpairing); the learned aversive condition loses its value
d. Compare and contrast extinction vs. removing the value of learned reinforcers
(converting them back to neutral stimuli).
21. What's the motivating operation for a learned reinforcer?
a. The common confusion?
b. How do we answer that one?
c. Now apply that same analysis to people.
22. Hedonic and instrumental learned reinforcers. Find out from Malott what they are and then reread the previous material.
a. Describe the Zimmerman experiment and show its relation to hedonic reinforcers.
b. Show how the smile from the passing stranger illustrates the hedonic reinforcer.
c. And how the finger from the passing stranger illustrates the hedonic aversive condition.
23. Is it possible to satiate on learned reinforcers?
24. The rat presses the lever and gets a drop of water. As a result he learns to press the lever; so the water is a learned reinforcer, right? This is a common student error, not a professional error. What's wrong with this?
Chapter 11
Learned Reinforcers and Learned Aversive Conditions
ANSWERS
20. Extinction of a previously reinforced response vs. removing the value of learned reinforcers and aversive conditions by stopping the pairing procedure
a. What's the common confusion?
Answer: That removing the value of the learned reinforcer is an example of extinction.
b. Diagram reinforcement contingencies and pairing procedures in the Skinner box to show the crucial differences.

Reinforcement (with water)
Before: No Water → Behavior: Press Lever → After: Water
Extinction
Before: No Water → Behavior: Press Lever → After: No Water
Pairing Procedure with the dipper click and water to establish the click as a learned reinforcer
Click → Water
No Click → No Water
Breaking the Pairing Procedure (unpairing); the learned reinforcer loses its value
Click → No Water
No Click → No Water

c. Diagram escape contingencies and pairing procedures in the Skinner box to show the crucial differences.

Escape (from shock)
Before: Shock → Behavior: Press Lever → After: No Shock
Extinction
Before: Shock → Behavior: Press Lever → After: Shock
Pairing Procedure with a buzzer and shock to establish the buzzer as a learned aversive condition
Buzzer → Shock
No Buzzer → No Shock
Breaking the Pairing Procedure (unpairing); the learned aversive condition loses its value
Buzzer → No Shock
No Buzzer → No Shock
21. What's the motivating operation for a learned reinforcer?
a. The common confusion?
Answer: Even behavior analysts often think deprivation of the learned reinforcer will make that learned reinforcer more effective.
b. How do we answer that one?
Take it to the Skinner box, of course. The learned reinforcer is the dipper click that's been paired with water. So now we want to make sure the dipper click is a learned reinforcer. What do we do? We don't deprive Rudolph of the dipper click. We deprive him of the water on which the reinforcing value of the dipper click is based. The more water-deprived Rudolph is, the more effective the dipper click should be as a learned reinforcer to increase both learning and performance. No water deprivation? Then forget it.
c. Now apply that same analysis to people.
 If the baby is not food deprived, the sight of Mother might not reinforce crying. In
other words, the sight of Mother is a learned reinforcer because it has been paired
with the delivery of the unlearned reinforcer, food, among other things. So if the
baby is not deprived of food, the sight of Mother will not function as a learned
reinforcer at that moment, and the sight will not reinforce crying; so the baby won’t
cry.
 Of course, the baby’s behavioral history may be such that the sight of Mother is a
generalized learned reinforcer, not just a simple learned reinforcer; in other words,
other deprivations will still ensure that the sight of Mother is functioning as a learned
reinforcer. Like Mother has also been paired with the unlearned reinforcers of
caressing and rocking. And that explains why it may look like attention deprivation is
an effective motivating operation, when it really ain’t. In other words, even though
the baby just drank a big meal, she still isn’t satiated on caressing and rocking; so
attention will still act like a reinforcer.
 Well then, what about attention deprivation?
Tentative, lame answer: I know that attention seems like such a powerful and pervasive reinforcer that we want it even when we don't need it, even when we've got everything else we could possibly want. But that's what I think the Skinner-box analysis predicts.
 But, maybe, money is not a learned reinforcer, really, not a hedonic reinforcer
that you like to just look at, like you do at a friendly smile. Instead, maybe it’s a
rule-governed analog to a learned reinforcer that is only instrumental in
obtaining real reinforcers. For example I might tell you that these tokens are only
good for food. And you just had 2 Big Macs. But they will be the only way
you’ll get food tomorrow, and if you don’t press the lever in five minutes, you’re
screwed. Would you press the lever? Have those tokens been paired with food?
(Hedonic is like hedonistic, like for pleasure.)
Hedonic and instrumental learned reinforcers. Find out from Malott what they are and then reread the previous material.
c. Describe the Zimmerman experiment and show its relation to hedonic reinforcers.
d. Show how the smile from the passing stranger illustrates the hedonic reinforcer.
Chapter 12
Discrimination Training
25. Discrimination training based on reinforcement vs. discrimination training based on escape.
a. Compare and contrast.
b. Diagram contrasting Skinner box and everyday examples.

26. Discrimination training based on reinforcement vs. discrimination training based on punishment.
a. Compare and contrast.
b. Diagram contrasting Skinner box and everyday examples.

27. Discriminative stimulus (SD) vs. the before condition.
a. Diagram discriminated escape and use the diagram to explain the difference between the SD and the before condition.
b. Construct the compare and contrast table.
c. What's the common confusion?
d. What's the official EPB position on antecedent stimuli?

28. Discriminated vs. undiscriminated contingency.
a. What's the common confusion?
Answer: That there's always an SD
b. What is the S∆ test?
c. Illustrate the S∆ by diagramming a contingency without an S∆ and therefore without an SD.
Describe your example & diagram it:
Before → Behavior → After
Describe your example & diagram it:
SD → Before → Behavior → After
S∆ → After
d. What is the operandum test?
e. What's the purpose of the operandum test?
f. Illustrate the operandum test with an example where the operandum clearly differs from the SD, thus showing that the mere presence of an operandum doesn't guarantee an SD or a discriminated contingency.
Chapter 12
Discrimination Training
Answers
25. Discrimination training based on reinforcement vs. discrimination training based on escape.
a. Compare and contrast.
Reinforcement is the presentation of a reinforcer, and escape is the removal of an aversive condition. Both reinforce the behavior.
b. Diagram contrasting Skinner box and everyday examples.

Discriminated reinforcement (with water)
SD: Light On → Before: No Water → Behavior: Press Lever → After: Water
S∆: Light Off → Before: No Water → Behavior: Press Lever → After: No Water

Discriminated escape (from shock)
SD: Light On → Before: Shock → Behavior: Press Lever → After: No Shock
S∆: Light Off → Before: Shock → Behavior: Press Lever → After: Shock

26. Discrimination training based on reinforcement vs. discrimination training based on punishment.
a. Compare and contrast.
Punishment is the presentation of an aversive condition.
b. Diagram contrasting Skinner box and everyday examples.

Discriminated punishment
SD: Light On → Before: No Shock → Behavior: Lever Press → After: Shock
S∆: Light Off → Before: No Shock → Behavior: Lever Press → After: No Shock

27. Discriminative stimulus (SD) vs. the before condition.
a. Diagram discriminated escape and use the diagram to explain the difference between the SD and the before condition.
Answer: The SD signals the differential availability of the reinforcer, and the before condition makes the after condition reinforcing.
b. Construct the compare and contrast table.

                                        Before condition | SD
Occurs                                  Before           | Before
Effects                                 Increase bx      | Decrease bx
Makes the after reinforcing             Yes              | No
Increases likelihood of reinforcement   No               | Yes

c. What's the common confusion?
That they are the same thing.
d. What's the official EPB position on antecedent stimuli?

28. Discriminated vs. undiscriminated contingency.
a. What's the common confusion?
Answer: That there's always an SD.
b. What is the S∆ test?
Is there also an S∆? If not, then you also don't have an SD.
c. Illustrate the S∆ by diagramming a contingency without an S∆ and therefore without an SD.
Describe your example & diagram it:
Before → Behavior → After (no SD, no S∆)
Describe your example & diagram it:
SD: Light On → Before: No Water → Behavior: Lever Press → After: Water
S∆: Light Off → After: No Water
d. What is the operandum test?
Does the SD differ from the operandum?
e. What's the purpose of the operandum test?
Answer: To show that the SD and operandum (manipulandum) are not normally the same.
(Of course, the operandum can function as an SD in the presence of which, for instance, downward movement of the paws will be reinforced, because it is possible for the rat to make the lever-pressing movements in the absence of the lever. But that's a level of subtlety that probably would cause more confusion than help. Just wanted to let you know it hasn't escaped our attention.)
f. Illustrate the operandum test with an example where the operandum clearly differs from the SD, thus showing that the mere presence of an operandum doesn't guarantee an SD or a discriminated contingency.
Chapter 13
Stimulus Generalization
29. Stimulus generalization gradients
a. Describe the training procedure and contingency. And what's the rationale?
b. Describe the testing procedure and contingency. And what's the rationale?
c. Why is the testing "contingency" extinction?
d. Graph the results for complete generalization.
e. Graph the results for complete discrimination.
f. Graph more typical intermediate results.
g. Please be able to talk fluently about the following graph.

[Figure: Stimulus-generalization gradient. Y-axis: Number of Key Pecks (0 to 350); x-axis: Test Colors (blue, yellow-green, red). A pigeon's key pecks were reinforced in the presence of a yellow-green light and then tested in the presence of yellow-green and other colors.]
Chapter 13
Stimulus Generalization
Answers
29. Stimulus generalization gradients
a. Describe the training procedure and contingency. And what's the rationale?
A pigeon was trained in the presence of a yellow-green light and received intermittent reinforcement for key pecks. They used intermittent reinforcement to increase the pigeon's resistance to extinction.
b. Describe the testing procedure and contingency. And what's the rationale?
The test procedure involved presenting novel stimuli and the test stimuli to the pigeon in an extinction condition. It is important to use extinction so that the pigeon is not reinforced for any responses.
c. Why is the testing "contingency" extinction?
To reduce the effects of discrimination training.
d. Graph the results for complete generalization.
The graph should show uniform responding across all test colors.
e. Graph the results for complete discrimination.
The graph should show responding only in the presence of the yellow-green light.
f. Graph more typical intermediate results.
The typical results look like the graph below.
g. Please be able to talk fluently about the following graph.

[Figure: Stimulus-generalization gradient. Y-axis: Number of Key Pecks (0 to 350); x-axis: Test Colors (blue, yellow-green, red). A pigeon's key pecks were reinforced in the presence of a yellow-green light and then tested in the presence of yellow-green and other colors.]
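The three possible test outcomes (complete generalization, complete discrimination, and the typical intermediate gradient) can be made concrete with a small sketch. The peck counts below are hypothetical, invented only to illustrate the shapes of the three graphs; nothing in the text specifies exact numbers:

```python
# Hypothetical peck counts for the three possible outcomes of the
# generalization test (training stimulus: yellow-green).
complete_generalization = {"blue": 300, "yellow-green": 300, "red": 300}
complete_discrimination = {"blue": 0, "yellow-green": 300, "red": 0}
typical_gradient = {"blue": 120, "yellow-green": 300, "red": 90}

def peaks_at_trained(data, trained="yellow-green"):
    """True if responding is maximal at the training stimulus."""
    return data[trained] == max(data.values())

def is_flat(data):
    """Complete generalization: uniform responding across test colors."""
    return len(set(data.values())) == 1

# The typical intermediate result is a gradient: maximal responding to
# the training stimulus, falling off for less similar stimuli.
assert peaks_at_trained(typical_gradient) and not is_flat(typical_gradient)
assert is_flat(complete_generalization)
```

The point of the sketch: all three outcomes peak at (or include) the training stimulus, but only the flat function shows complete generalization, and only the all-or-none function shows complete discrimination.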
Chapter 14
Imitation
30. The theory of generalized imitation.
a. Explain the need for this theory.
b. Explain the theory.
c. Diagram the relevant procedure and explain it.
Chapter 14
Imitation
Answers
30. The theory of generalized imitation.
a. Explain the need for this theory.
We need a theory to explain why we get generalized imitation, even when the imitator knows the experimenter will provide no added reinforcers contingent on such generalized imitation. A person can perform specific imitative responses without direct reinforcement of that response, but reinforcement of some other imitative response must have occurred.
b. Explain the theory.
Generalized imitative responses occur because they automatically produce imitative reinforcers. Imitative reinforcers: stimuli arising from the match between the behavior of the imitator and the behavior of the model that function as reinforcers.
c. Diagram the relevant procedure and explain it.

SD: Model makes novel response → Before: Student has no learned imitative Sr+ → Behavior: Student makes the novel response → After: Student has a learned imitative Sr+
S∆: Model makes no novel response → After: Student has no learned imitative Sr+
Chapter 15
Avoidance
31. Extinction of cued avoidance.
a. Diagram cued avoidance and then diagram extinction of cued avoidance. Remember these two general rules:
 In extinction, after = before.
 The extinction procedure consists of disconnecting the operandum. Rudolph can press that lever 'til the cows come home, but those presses will have no effect.

Cued Avoidance
Escape: Before → Behavior → After
Avoidance: Before → After

Extinction of Cued Avoidance
Escape: Before → Behavior → After
Avoidance: Before → After

b. What's the common confusion?

32. SD vs. warning stimulus.
a. Diagram discriminated cued avoidance.
b. Use the diagram to explain the general difference between discriminative stimuli and warning stimuli.
c. What's the common confusion?

33. Avoidance of an aversive condition vs. punishment by the presentation of an aversive condition.
a. Compare and contrast.
b. Diagram the two contingencies in the Skinner box showing how they differ.
c. What's the common confusion?
d. When are they essentially the same?

34. Avoidance of the loss of a reinforcer and punishment by the removal of a reinforcer.
a. Compare and contrast.
b. Diagram the two contingencies in the Skinner box showing how they differ.

35. Analyze cued avoidance in terms of the component escape and avoidance contingencies.
a. Diagram the contingencies in a Skinner-box example.
b. Diagram the creation of the learned aversive condition.
c. Explain why we say the only role of an avoidance contingency is its function as a pairing procedure.

35.1 Define, give an example, and discuss the Two Factor Theory of avoidance.
35.2 Define, give an example, and discuss the molar law of effect.
35.3 Define, compare and contrast, illustrate, and give your preference for molecular vs. molar analysis.
Chapter 15
Avoidance
ANSWERS
31a. Diagram cued avoidance and then diagram extinction of cued avoidance.

Cued Avoidance
Escape: Before: Buzzer On → Behavior: Press Lever → After: Buzzer Off
Avoidance: Before: Shock in 3" → After: No Shock in 3"

Extinction of Cued Avoidance (remember: in extinction, after = before)
Escape: Before: Buzzer On → Behavior: Press Lever → After: Buzzer On
Avoidance: Before: Shock in 3" → After: Shock in 3"

31b. What's the common confusion?
Behavior analysts often think you extinguish avoidance by not turning on the buzzer or shock.

32. SD vs. warning stimulus.
a. Diagram discriminated cued avoidance.
See above.
b. Use the diagram to explain the general difference between discriminative stimuli and warning stimuli.
Answer: A warning stimulus signals when an aversive condition will occur, and an SD signals the differential availability of a reinforcer.
c. What's the common confusion?
As usual, behavior analysts often fail to make the distinction, thus illustrating the dangers of a generic technical term like antecedent, which tends to gloss over these important distinctions.

33. Avoidance of an aversive condition vs. punishment by the presentation of an aversive condition.
a. Compare and contrast.
Avoidance occurs when an organism responds before contacting an aversive condition, and punishment occurs when the organism responds and then contacts the aversive condition.
b. Diagram the two contingencies in the Skinner box showing how they differ.

Avoidance
Before: Shock in 3" → Behavior: Press Lever → After: No Shock in 3"
Punishment
Before: No Shock → Behavior: Press Lever → After: Shock

c. What's the common confusion?
Answer: There are two, and they're just the opposite of each other. Traditionally, psychologists and lay people alike often say punishment is avoidance (they call it passive avoidance); for example, the rat avoids pressing the lever, because that response is punished. They say that the rat makes non-punished responses (non-lever-press responses) because those responses avoided the aversive condition (the shock). But that response class, everything but the lever pressing, is too large to be useful; or we might say dead men can make non-lever-press responses. However, the thoughtful student often suffers the opposite confusion, saying that the avoidance contingency is really a punishment contingency; for example, that all but the lever-press responses are being punished. But the non-lever-press response class is again too large to be useful; or, again, we might say dead men can make non-lever-press responses.
d. When are they essentially the same?
When there are only two response options; in other words, those response options are exhaustive and mutually exclusive. For example, in a "forced choice" procedure where the rat is forced to jump from a home platform to one of two other platforms, and one of those two receiving platforms is shocked, we can't distinguish between jumping to the shock platform being punished and avoidance of the shock platform by jumping to the non-shock platform.

34. Avoidance of the loss of a reinforcer and punishment by the removal of a reinforcer.
a. Compare and contrast.

           Involves removal | Removal is contingent | Keeping is contingent | Frequency of response
Avoidance  Yes              | No                    | Yes                   | Increases
Removal    Yes              | Yes                   | No                    | Decreases

b. Diagram the two contingencies in the Skinner box showing how they differ.

Avoidance
Before: Will lose food in 3" → Behavior: Press Lever → After: Won't lose food in 3"
Removal
Before: Food → Behavior: Press Lever → After: No food

35. Analyze cued avoidance in terms of the component escape and avoidance contingencies.
a. Diagram the contingencies in a Skinner-box example.
b. Diagram the creation of the learned aversive condition.
c. Explain why we say the only role of an avoidance contingency is its function as a pairing procedure.

35.1 Define, give an example, and discuss the Two Factor Theory of avoidance.
The warning stimulus becomes a learned aversive stimulus, through pairing with the original aversive stimulus; and the so-called avoidance response is really reinforced by the contingent termination of the warning stimulus and not by the avoidance of the original aversive stimulus.

35.2 Define, give an example, and discuss the molar law of effect.
With an avoidance contingency, the reduction in the overall frequency of aversive stimulation reinforces the "avoidance response."

35.3 Define, compare and contrast, illustrate, and give your preference for molecular vs. molar analysis.
Chapter 16
Punishment by prevention
36. Why is punishment by prevention of the presentation of a reinforcer a better label than differential reinforcement of other behavior (DRO)? Don't forget the dead man. (See the Advanced Enrichment material for Chapter 16.)
a. Explain with a behavior modification (applied behavior analysis) example.

37. Punishment by prevention of the presentation of a reinforcer vs. punishment by the removal of a reinforcer.
a. Explain with diagrams of the two contingencies in the Skinner box.

38. The four prevention contingencies.
a. Construct the 2x2 contingency table.
b. Compare and contrast.
c. Diagram the four contingencies in the Skinner box showing how they differ.
Chapter 16
Punishment by prevention
Answers
36. Why is punishment by prevention of the presentation of a reinforcer a better label than differential reinforcement of other behavior (DRO)? Don't forget the dead man. (See the Advanced Enrichment material for Chapter 16.)
a. Explain with a behavior modification (applied behavior analysis) example.
It is better to treat a specific behavior than a nonbehavior.

37. Punishment by prevention of the presentation of a reinforcer vs. punishment by the removal of a reinforcer.
a. Explain with diagrams of the two contingencies in the Skinner box.

Removal
Before: Food → Behavior: Press Lever → After: No Food
Prevention
Before: Will have food in 3" → Behavior: Pull Chain → After: Won't have food in 3"

38. The four prevention contingencies.
a. Construct the 2x2 contingency table.

            Prevent Presentation     | Prevent Removal
Reinforcer  Punishment (bx decrease) | Avoidance (bx increase)
Aversive    Avoidance (bx increase)  | Punishment (bx decrease)

b. Compare and contrast.
c. Diagram the four contingencies in the Skinner box showing how they differ.
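One way to check fluency with the 2x2 table is to encode it directly. This little lookup is my own sketch, not from EPB; the string labels are just mnemonics for the table's cells:

```python
def prevention_contingency(event, prevented):
    """Classify the four prevention contingencies from the 2x2 table:
    what kind of event is involved (reinforcer or aversive condition),
    and whether its presentation or its removal is prevented."""
    table = {
        ("reinforcer", "presentation"): ("punishment", "bx decrease"),
        ("reinforcer", "removal"):      ("avoidance",  "bx increase"),
        ("aversive",   "presentation"): ("avoidance",  "bx increase"),
        ("aversive",   "removal"):      ("punishment", "bx decrease"),
    }
    return table[(event, prevented)]

# Preventing the presentation of a reinforcer punishes (decreases) behavior.
assert prevention_contingency("reinforcer", "presentation") == ("punishment", "bx decrease")
# Preventing the presentation of an aversive condition is avoidance.
assert prevention_contingency("aversive", "presentation") == ("avoidance", "bx increase")
```

Note the symmetry the table captures: preventing something good, or failing to prevent something bad, decreases behavior; preventing something bad, or protecting something good, increases it.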
Chapters 17 and 18
Schedules
39. Ratio schedules vs. interval
schedules of reinforcement and
punishment.
a. Compare and contrast in the
Skinner box.
Hint: An intermittent schedule of
punishment might be continuous
reinforcement with intermittent
punishment for the same
response. However, we could
also have a concurrent schedule
of VI reinforcement combined
with FR punishment.
41. Fixed-interval vs. fixed-time
schedules.
a. Compare and contrast in the
Skinner box.
42. Fixed-interval schedules in the
Skinner box vs. term-paper schedules.
a. What’s the common confusion?
b. Compare and contrast.
40. Variable-ratio schedules in the
Skinner box vs. the Las Vegas schedule.
a. What’s the common confusion?
b. Compare and contrast.
43. Fixed-interval schedules in the
Skinner box vs. the congressional
schedule of passing legislation.
a. What’s the common confusion?
b. Compare and contrast.
44. Limited hold vs. deadline.
a. Compare and contrast.
Partial answer: Note that a
deadline allows for the
reinforcement of premature
responding.
b. What’s the common confusion?
45. Why does intermittent reinforcement
increase resistance to extinction?
Chapters 17 and 18
Schedules
ANSWERS
39. Ratio schedules vs. interval schedules of reinforcement and punishment.
a. Compare and contrast in the Skinner box.
Hint: An intermittent schedule of punishment might be continuous reinforcement with intermittent punishment for the same response. However, we could also have a concurrent schedule of VI reinforcement combined with FR punishment.

Fixed Ratio
Before: No Food → Behavior: Every 5th Press → After: Food
Fixed Interval
Before: No Food → Behavior: First press after 5' → After: Food

40. Variable-ratio schedules in the Skinner box vs. the Las Vegas schedule.
a. What's the common confusion?
That Las Vegas is an example of VR schedules.
b. Compare and contrast.
Las Vegas is more or less continuous reinforcement, using added learned reinforcers. In the Skinner box there are no added learned reinforcers, and the amount does not change from ratio to ratio.

41. Fixed-interval vs. fixed-time schedules.
a. Compare and contrast in the Skinner box.
A fixed-time schedule is independent of the response. No response is necessary for the reinforcer to be delivered.

42. Fixed-interval schedules in the Skinner box vs. term-paper schedules.
a. What's the common confusion?
The confusion is that the term paper is a good example of an FI schedule.
b. Compare and contrast.
The term paper is not an FI schedule because it has a deadline, it uses clocks and calendars, and one response is not enough for reinforcement to occur. The response class is unclear, and the reinforcer is too delayed.

43. Fixed-interval schedules in the Skinner box vs. the congressional schedule of passing legislation.
a. What's the common confusion?
That the congressional schedule is an example of an FI schedule.
b. Compare and contrast.
Congress does not respond after a fixed amount of time.

44. Limited hold vs. deadline.
a. Compare and contrast.
A limited hold is the time during which a response must be made.
b. What's the common confusion?
The standard one: that the two procedures or concepts are the same.

45. Why does intermittent reinforcement increase resistance to extinction?
Because there is greater stimulus generalization between intermittent reinforcement and extinction than between continuous reinforcement and extinction. Be able to explain this in more detail.
Chapter 20
Rate Contingencies
47. What’s the problem with differential
punishment of high rates?
46. Differential reinforcement of low
rates (DRL) vs. differential punishment
of high rates by the prevention of the
presentation of a reinforcer (differential
penalizing) (DPL).
a. Compare and contrast the
contingencies themselves.
b. Diagram the contingencies in a
human applied behavior analysis
example to show the crucial
differences.
c. What’s the common confusion?
d. Give an example of the common
confusion.
48. Compare and contrast DRO and
DRL by diagramming them.
Chapter 20
Rate Contingencies
ANSWERS
46. Differential reinforcement of low
rates (DRL) vs. differential
punishment of high rates by the
prevention of the presentation of
a reinforcer (differential
penalizing) (DPL).
a. Compare and contrast the
contingencies themselves.
Answer: Both procedures result
in a low rate of behavior.
However, in DRL, the response
must occur for the reinforcer to
be delivered. In DPL no
response is needed for the
reinforcer to be delivered.
b. Diagram the contingencies in a
human applied behavior analysis
example to show the crucial
differences.
In DRL, you want the behavior to
occur, just not as often. In DPL, the
behavior need not occur at all for the
reinforcer to be delivered.
c. What’s the common confusion?
Answer: Behavior analysts
confuse differential
reinforcement of low rate with
differential punishment of high
rate when they really want a zero
rate.
d. Give an example of the common
confusion.
47. What’s the problem with differential
punishment of high rates?
You won’t eliminate the behavior and
other procedures are better.
48. Compare and contrast DRO and
DRL by diagramming them.
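Item 48's DRL/DRO contrast can also be stated as two small decision rules. The following is a hypothetical sketch using a simple inter-response-time formulation; the names and numbers are invented for illustration, and POB itself diagrams these contingencies with Before/Behavior/After boxes, not code.

```python
# Hypothetical sketch of DRL vs. DRO decision rules.

def drl_reinforced(inter_response_time, min_irt):
    """DRL (differential reinforcement of low rates): a response IS
    required, and it is reinforced only when at least `min_irt` seconds
    have passed since the previous response. The result is a low, but
    nonzero, response rate."""
    return inter_response_time >= min_irt

def dro_reinforced(seconds_without_response, interval):
    """DRO (differential reinforcement of other behavior): no target
    response is required; the reinforcer is delivered whenever `interval`
    seconds pass WITHOUT the target response. The rate can drop to zero."""
    return seconds_without_response >= interval

print(drl_reinforced(12, 10))  # True: the response occurred, widely spaced
print(drl_reinforced(5, 10))   # False: responding too fast for DRL
print(dro_reinforced(12, 10))  # True: a full interval with no response
```

The crucial difference shows up in the signatures: `drl_reinforced` takes the time between two responses, so a response must occur, while `dro_reinforced` needs only the time since the last response, which can grow without limit.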
Chapter 21
Respondent Conditioning
49. Operant conditioning vs. respondent
conditioning.
a. Compare and contrast.
b. Diagram the procedures, using
the same stimuli.
c. Explain this alternative way of
diagramming and show how it is
like the POB way.
50. Operant extinction vs. respondent
extinction.
a. What's the common confusion?
b. Compare and contrast.
c. Diagram the procedures. In other
words, diagram a Skinner-box
example of extinction and then a
similar respondent extinction
procedure.
d. Now, on second thought, maybe
they are the same. What do you
think?
51. SD vs. CS11
a. Compare and contrast.
b. Diagram the procedures.
52. How can you experimentally tell if
the bell is an SD or a CS?
11 It may not be of epidemic proportions, but there is some
professional confusion between the SD and the CS and also
the motivating operation. I think this confusion is increased
by the common professional practice of referring to all three
concepts as antecedents. So behavior analysts then speak of
the bell as an antecedent, but it’s not clear whether they mean
as an SD or a CS. I think it is often unclear to them as well.
Sometimes it’s useful to have generic terms, like behavioral
consequence, when the subsumed terms have some important
similarity; but sometimes it’s confusing, like antecedents,
when the subsumed terms have only a superficial, structural
similarity, like coming before the response.
53. Escape/avoidance with an aversive
stimulus such as lemon juice vs.
respondent conditioning with lemon
juice.
a. What's the common confusion?
b. Compare and contrast.
c. Diagram the operant
interpretation.
54. Consider respondent conditioning
with food. Provide an operant
interpretation of that procedure in terms
of reinforcement with a reinforcer such
as food.
55. Operant conditioning vs. the
so-called unconditioned reflex.
a. What's the common confusion?
b. Diagram the procedures: the
presumed unconditioned reflex
and the alternative operant
conditioning interpretation.
56. The operant pairing procedure along
with the value-altering principle vs.
respondent conditioning.
a. What's the common confusion?
b. Compare and contrast the
procedures and results.
Chapter 21
Respondent Conditioning
ANSWERS
49. Operant conditioning vs. respondent
conditioning.
a. Compare and contrast.
Operant conditioning develops learned
reinforcers and aversive stimuli;
respondent conditioning develops
conditioned eliciting stimuli.
b. Diagram the procedures, using the
same stimuli.
c. Explain this alternative way of
diagramming and show how it is like
the POB way.
Answer:
Generic Respondent Diagram
Unconditioned Stimulus → Unconditioned and Conditioned Response (the unconditioned reflex)
Conditioned Stimulus → Unconditioned and Conditioned Response (the conditioned reflex)
Example of Respondent Conditioning
US: Food → UR & CR: Salivate (unconditioned reflex)
CS: Bell → UR & CR: Salivate (conditioned reflex)
50. Operant extinction vs. respondent
extinction.
a. What's the common confusion?
Behavior analysts often say they are the
same thing.
b. Compare and contrast.
In operant extinction, the response must
occur, and you stop presenting the
reinforcer or aversive stimulus. In
respondent extinction, the response
doesn't have to occur; you stop pairing
the CS with the US.
c. Diagram the procedures. In other
words, diagram a Skinner-box example
of extinction and then a similar
respondent extinction procedure.
Operant conditioning (Skinner box):
SD: Bell
Before: No food → Behavior: Salivate (lever press) → After: Food
S∆: No bell → After: No food
In operant extinction, that response no
longer produces the food, even in the
presence of the SD (the bell).
Operant interpretation of the salivary
contingency:
SD: Bell
Before: Tasteless food in .5 sec → Behavior: Salivate → After: Tasty food in .5 sec
S∆: No bell → After: Tasteless food in .5 sec
Respondent extinction:
US: No food
CS: Bell → UR & CR: Salivate
(The bell is still presented, but it is no
longer paired with the food.)
d. Now, on second thought, maybe they
are the same. What do you think?
51. SD vs. CS
a. Compare and contrast.
b. Diagram the procedures.
The previous diagrams contrasting
operant and respondent conditioning
should also do the trick here. Just note
the bell's relative place in the diagram
and its function in the two procedures.
52. How can you experimentally tell if
the bell is an SD or a CS?
Find out if the outcome of the response
influences the future frequency of that
response. If it does, then you've got an
operant procedure and the bell is an SD.
And the most straightforward way to do
this is to use operant extinction. For
example, you might run the salivary
ducts outside or tie them off. Then the
salivary response would presumably
have no reinforcing effect, such as
making it easier for the dog to taste the
food powder that's coming right after
the bell. Be able to diagram that operant
extinction procedure.
53. Escape/avoidance with an aversive
stimulus such as lemon juice vs.
respondent conditioning with lemon
juice.
a. What's the common confusion?
The assumption that respondent
conditioning has been demonstrated
when the demonstration may really
involve a confounding with operant
conditioning. In other words, failing to
discriminate between respondent
conditioning and operant contingencies.
The common error is that psychologists
often fail to control for or rule out a
possible operant contingency when
looking at presumed examples of
respondent conditioning.
b. Compare and contrast.
Confounding between escape/avoidance
contingencies and respondent
conditioning. In other words, it may be
possible to interpret most so-called
respondent conditioning as operant
conditioning, often escape/avoidance.
For example, perhaps the dog's
salivation is reinforced by a reduction in
the aversive acidity when lemon juice is
squirted into its mouth. The bell
combined with the absence of salivation
would be the warning stimulus, which is
escaped by salivation. This does not
mean respondent conditioning has not
occurred; it just means an unconfounded
and unambiguous demonstration is
much rarer than generally thought, if not
totally absent.
c. Diagram the operant interpretation.
Escape:
Before: Spot hears bell with no saliva. → Behavior: Spot salivates. → After: Spot hears bell with saliva.
Avoidance:
Before: Spot will have very aversive acid in 0.5". → Behavior: Spot salivates. → After: Spot will not have very aversive acid in 0.5".
54. Consider respondent conditioning
with food. Provide an operant
interpretation of that procedure in terms
of reinforcement with a reinforcer such
as food.
In the case of meat powder put in the
mouth, the bell would be an SD in the
presence of which salivation will more
promptly disseminate the taste of the
food powder and perhaps facilitate the
consumption of the food powder.
Salivation at the sound of the bell (0.5
seconds before the food is placed in the
mouth) is reinforced by getting the taste
of the food 0.5 seconds earlier than
would be the case if Spot waited for the
food before starting to salivate.
Diagram of the operant interpretation:
SD: Spot hears bell.
Before: Spot will have no taste in 0.5". → Behavior: Spot salivates. → After: Spot will have a taste in 0.5".
S∆: Spot hears no bell. → After: Spot will have no taste in 0.5".
55. Operant conditioning vs. the
so-called unconditioned reflex.
a. What's the common confusion?
The assumption that an unconditioned
reflex has been demonstrated when the
demonstration may really involve a
confounding with operant conditioning.
In other words, failing to discriminate
between an unconditioned reflex and
operant contingencies. For example,
consider the classic case of salivation in
the presence of the acid: The salivation
might simply be reinforced by the
dilution of the aversive acid; in other
words, an escape contingency. Another
example: The pupil contracts when a
light is shined into the eye. But we
might do an operant analysis of this in
terms of an escape contingency.
b. Diagram the procedures: the
presumed unconditioned reflex and the
alternative operant conditioning
interpretation.
Unconditioned Reflex:
Bright light → Pupil contracts
Operant (escape) interpretation:
Before: Aversive bright light → Behavior: Pupil contracts → After: Less aversive bright light
56. The operant pairing procedure along
with the value-altering principle vs.
respondent conditioning.
a. What's the common confusion?
That the operant value-altering process
is a result of, or an example of,
respondent conditioning: a failure to
discriminate between two processes.
b. Compare and contrast the procedures
and results.
In the operant pairing procedure, an
actual reinforcer is paired with a
stimulus. In respondent conditioning, a
stimulus is paired with a response.
Chapter 22
Analogs to Reinforcement, Part I
57. Direct-acting vs. indirect-acting
contingencies.
a. What’s the common confusion?
b. Compare and contrast.
c. Diagram a pair of contrasting
everyday contingencies and a
pair of contrasting performance-management contingencies,
showing how they compare and
contrast.
Chapter 22
Analogs to Reinforcement, Part I
ANSWERS
57. Direct-acting vs. indirect-acting
contingencies.
a. What’s the common confusion?
Answer: Another failure to
discriminate. The confused treat
all contingencies as if they were
direct-acting.
b. Compare and contrast.
Direct: the response is controlled by the
outcome
Indirect: the response is controlled by
the statement of the rule describing the
contingency
c. Diagram a pair of contrasting
everyday contingencies and a pair of
contrasting performance-management contingencies, showing
how they compare and contrast.
Chapter 24
A Theory of Rule-Governed Behavior
58. The cause of poor self-management:
the myth vs. the truth (This myth applies
to the ineffective natural contingency.)
a. What’s the common confusion?
c. Give a hypothetical example to
show that delayed consequences
can control behavior and thus
that the myth is wrong. Then
give a related example showing
why true real-world
contingencies fail to control
behavior.
Chapter 24
A Theory of Rule-Governed Behavior
Answers
58. The cause of poor self-management:
the myth vs. the truth (This myth applies
to the ineffective natural contingency.)
a. What’s the common confusion?
Answer: The myth itself. In
other words, behavior analysts
erroneously think that poor selfcontrol results from greater
control by immediate, rather
than delayed, consequences.
b. Compare and contrast.
outcome is sizable and probable, it
will control behavior. If the outcome
is small (although of cumulative
significance) and/or improbable it
will not control behavior.
c. Give a hypothetical example to
show that delayed consequences
can control behavior and thus
that the myth is wrong. Then
give a related example showing
why true real-world
contingencies fail to control
behavior.
The delay is not crucial, the size and
probability of the outcome make is
hard or easy to follow. If the
Chapter 26
Moral and Legal Control
59. Effective moral and legal control
must be basically aversive vs. we can
build a world without aversive
control.
Chapter 26
Moral and Legal Control
ANSWERS
59. Effective moral and legal control
must be basically aversive vs. we can
build a world without aversive
control.
Answer: In the case of moral and
legal control we are usually dealing
with indirect-acting contingencies.
And if we want to increase behavior,
we need deadlines to prevent
procrastination. But as soon as we
impose deadlines, we have an analog
to avoidance of the loss of an
opportunity (heaven) or a
straightforward analog to avoidance
of an aversive condition (hell). Of
course, if we want to decrease
behavior, we are almost always
talking punishment, penalty, or an
analog.
Chapters 27 and 28
Maintenance and Transfer
60. State and discuss the two myths of
performance maintenance.
61. Transfer of training vs. stimulus
generalization.
a. What's the common confusion?
b. Compare and contrast.
c. Give an example illustrating this
confusion.
Chapters 27 and 28
Maintenance and Transfer
ANSWERS
60. State and discuss the two myths of
performance maintenance.
61. Transfer of training vs. stimulus
generalization.
a. What's the common confusion?
The transfer of training myth: You can
explain all transfer of training in terms
of stimulus generalization. But that's a
myth; the truth is that much transfer of
training cannot be explained in terms of
stimulus generalization.
b. Compare and contrast.
Comparable: Both deal with the
acquisition of behavior in one setting
and the occurrence of that behavior in a
different setting.
Contrast: Transfer of training simply
states the fact that performance acquired
in one setting may occur in a different
setting. However, stimulus
generalization is a proposed mechanism
or underlying process for explaining
transfer of training, namely that
performance of the acquired behavior in
a different setting results from the
physical similarity between the two
settings, between the training setting and
the transfer setting. This transfer occurs
because of a failure of the performer to
discriminate between the training and
the transfer settings; this is another way
of stating the transfer of training myth.
c. Give an example illustrating this
confusion.
Transfer of training of street-crossing
skills from the model to the street is an
example. Note that this is an example of
clear transfer of training even though it
involves clear discrimination. (You
should be able to detail and explain your
example.)
Alllllll most there.
You’ve been through the whole trip, now. You know what the big questions are and probably
most of the big answers. You know what the common confusions are and how to avoid them.
Now, all that remains is to get fluent.