Behaviorism is a perspective on learning that focuses on changes in individuals’ observable behaviors—changes in what people say or do. At some point we all use this perspective, whether we call it “behaviorism” or something else. The first time that I drove a car, for example, I was concerned primarily with whether I could actually do the driving, not with whether I could describe or explain how to drive. And for another example: when I reached the point in life where I began cooking meals for myself, I was more focused on whether I could actually produce edible food in a kitchen than with whether I could explain my recipes and cooking procedures to others. And still another example—one often relevant to new teachers: when I began my first year of teaching, I was more focused on doing the job of teaching—on day-to-day survival—than on pausing to reflect on what I was doing.
Note that in all of these examples, focusing attention on behavior instead of on “thoughts” may have been desirable at that moment, but not necessarily desirable indefinitely or all of the time. Even as a beginner, there are times when it is more important to be able to describe how to drive or to cook than to actually do these things. And there definitely are many times when reflecting on and thinking about teaching can improve teaching itself. (As a teacher-friend once said to me, “Don’t just do something; stand there!”) But neither is focusing on behavior necessarily less desirable than focusing on students’ “inner” changes, such as gains in their knowledge or their personal attitudes. If you are teaching, you will need to attend to all forms of learning in students, whether inner or outward.
In classrooms, behaviorism is most useful for identifying relationships between specific actions by a student and the immediate precursors and consequences of the actions. It is less useful for understanding changes in students’ thinking; for this purpose we need a more cognitive (or thinking-oriented) theory, like the ones described later in this chapter. This fact is not really a criticism of behaviorism as a perspective, but just a clarification of its particular strength or source of usefulness, which is to highlight observable relationships among actions, precursors and consequences. Behaviorists use particular terms (or “lingo,” some might say) for these relationships. They also rely primarily on two basic images or models of behavioral learning, called respondent (or “classical”) conditioning and operant conditioning. The names are derived partly from the major learning mechanisms highlighted by each type, which I describe next.
Respondent Conditioning: Learning New Associations with Prior Behaviors
As originally conceived, respondent conditioning (sometimes also called classical conditioning) begins with the involuntary responses to particular sights, sounds, or other sensations (Lavond & Steinmetz, 2003). When I receive an injection from a nurse or doctor, for example, I cringe, tighten my muscles, and even perspire a bit. Whenever a contented, happy baby looks at me, on the other hand, I invariably smile in response. I cannot help myself in either case; both of the responses are automatic. In humans as well as other animals, there is a repertoire or variety of such specific, involuntary behaviors. At the sound of a sudden loud noise, for example, most of us show a “startle” response—we drop what we are doing (sometimes literally!), our heart rate shoots up temporarily, and we look for the source of the sound. Cats, dogs, and many other animals (even fish in an aquarium) show similar or equivalent responses.
Involuntary stimuli and responses were first studied systematically early in the twentieth century by the Russian scientist Ivan Pavlov (1927). Pavlov’s most well-known work did not involve humans, but dogs, and specifically their involuntary tendency to salivate when eating. He attached a small tube to the side of dogs’ mouths that allowed him to measure how much the dogs salivated when fed. But he soon noticed a “problem” with the procedure: as the dogs gained experience with the experiment, they often salivated before they began eating. In fact the most experienced dogs sometimes began salivating before they even saw any food, simply when Pavlov himself entered the room! The sight of the experimenter, which had originally been a neutral experience for the dogs, became associated with the dogs’ original salivation response. Eventually, in fact, the dogs would salivate at the sight of Pavlov even if he did not feed them.
This change in the dogs’ involuntary response, and especially its growing independence from the food as stimulus, eventually became the focus of Pavlov’s research. Psychologists named the process respondent conditioning because it describes changes in responses to stimuli (though some have also called it “classical conditioning” because it was historically the first form of behavioral learning to be studied systematically). Respondent conditioning has several elements, each with a special name. To understand these, look at Figure 1, and imagine a dog (such as my own, named Ginger) prior to any conditioning. At the beginning Ginger salivates (an unconditioned response (UR)) only when she actually tastes her dinner (an unconditioned stimulus (US)). As time goes by, however, a neutral stimulus—such as the sound of opening a bag containing fresh dog food—is continually paired with the eating/tasting experience. Eventually the neutral stimulus becomes able to elicit salivation even before any dog food is offered to Ginger, or even if the bag of food is empty! At this point the neutral stimulus is called a conditioned stimulus (CS) and the original response is renamed as a conditioned response (CR). Now, after conditioning, Ginger salivates merely at the sound of opening any large bag, regardless of its contents. (I might add that Ginger also engages in other conditioned responses, such as looking hopeful and following me around the house at dinner time.)
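The gradual strengthening that Pavlov observed can also be sketched as a toy simulation. The sketch below uses the Rescorla–Wagner learning rule, a standard formal model of conditioning that the chapter itself does not discuss; the function name and parameter values are illustrative assumptions, not part of the original text.

```python
# Toy model of respondent conditioning using the Rescorla-Wagner
# learning rule (a standard formalization, NOT described in this
# chapter). The associative strength V of a neutral stimulus grows
# each time the stimulus is paired with the unconditioned stimulus.

def pair_trials(n_trials, learning_rate=0.3, us_strength=1.0):
    """Return the associative strength after each of n CS-US pairings."""
    v = 0.0
    history = []
    for _ in range(n_trials):
        # Update is proportional to the prediction error:
        # how "surprising" the US still is given the CS.
        v += learning_rate * (us_strength - v)
        history.append(round(v, 3))
    return history

strengths = pair_trials(5)
# Strength rises toward its maximum with diminishing gains, so the
# earliest pairings matter most -- much like Ginger learning that
# the sound of the food bag predicts dinner.
print(strengths)
```

The exact numbers depend on the assumed learning rate; the qualitative shape (rapid early growth that levels off) is the point of the sketch.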
Respondent Conditioning and Students
“OK,” you may be thinking, “Respondent conditioning may happen to animals. But does anything like it happen in classrooms?” It might seem like not much would, since teaching is usually about influencing students’ conscious words and thoughts, and not their involuntary behaviors. But remember that schooling is not just about encouraging thinking and talking. Teachers, like parents and the public, also seek positive changes in students’ attitudes and feelings—attitudes like a love for learning, for example, and feelings like self-confidence. It turns out that respondent conditioning describes these kinds of changes relatively well.
Consider, for example, a child who responds happily whenever meeting a new person who is warm and friendly, but who also responds cautiously or at least neutrally in any new situation. Suppose further that the “new, friendly person” in question is you, his teacher. Initially your smile functions as an unconditioned stimulus: you smile (the unconditioned stimulus) and in response he perks up, breathes easier, and smiles (the unconditioned response). This exchange is not the whole story, however, but merely the setting for an important bit of behavior change: suppose you smile at him while standing in your classroom, a “new situation” and therefore one to which he normally responds cautiously. Now respondent learning can occur. The initially neutral stimulus (your classroom) becomes associated repeatedly with the original unconditioned stimulus (your smile) and the child’s unconditioned response (his smile). Eventually, if all goes well, the classroom becomes a conditioned stimulus in its own right: it can elicit the child’s smiles and other “happy behaviors” even without your immediate presence. Figure 2a diagrams the situation graphically. When the change in behavior happens, you might say that the child has “learned” to like being in your classroom. Truly a pleasing outcome for both of you!
But less positive or desirable examples of respondent conditioning also can happen. Consider a modification of the example that I just gave. Suppose the child that I just mentioned did not have the good fortune of being placed in your classroom. Instead he found himself with a less likeable teacher, whom we could simply call Mr. Horrible. Instead of smiling a lot and eliciting the child’s unconditioned “happy response,” Mr. Horrible often frowns and scowls at the child. In this case, therefore, the child’s initial unconditioned response is negative: whenever Mr. Horrible directs a frown or scowl at the child, the child automatically cringes a little, his eyes widen in fear, and his heart beat races. If the child sees Mr. Horrible doing most of his frowning and scowling in the classroom, eventually the classroom itself will acquire power as a negative conditioned stimulus. Eventually, that is, the child will not need Mr. Horrible to be present in order to feel apprehensive; simply being in the classroom will be enough. Figure 2b diagrams this unfortunate situation. Obviously it is an outcome to be avoided, and in fact does not usually happen in such an extreme way. But hopefully it makes the point: any stimulus that is initially neutral, but that gets associated with an unconditioned stimulus and response, can eventually acquire the ability to elicit the response by itself. Anything—whether it is desirable or not.
The changes described in these two examples are important because they affect students’ attitudes about school, and therefore also their motivation to learn. In the positive case, the child becomes more inclined to please the teacher and to attend to what he or she has to offer; in the negative case, the opposite occurs. Note that even though the changes are elicited by events external to the child, the child’s newly learned attitude is best thought of as “inside” or belonging to the child—or intrinsic. The new responses suggest that the child has acquired one type of intrinsic motivation, meaning a desire or tendency to direct attention and energy in a particular way that originates from the child himself or herself. It is sometimes contrasted with extrinsic motivation, which is a tendency to direct attention and energy that originates from outside the child. As we will see, classical conditioning is one way to influence students’ intrinsic motivation, but not the only way to develop it. Many strategies for influencing students’ motivations focus less on their behavior and more on their thoughts and beliefs about themselves and about what they are learning. I describe these in detail in Chapter 6 (the chapter called “Student Motivation”). First, though, let us look at three other features of classical conditioning that complicate the picture a bit, but also render conditioning a more accurate description of students’ learning.
Three Key Ideas about Respondent Conditioning
Extinction

This term does not refer to the fate of dinosaurs, but to the disappearance of a link between the conditioned stimulus and the conditioned response. Imagine a third variation on the conditioning “story” described above. Suppose, as I suggested above, that the child begins by associating your happy behaviors—your smiles—with his being present in the classroom, so that the classroom itself becomes enough to elicit his own smiles. But now suppose there is a sad turn of events: you become sick and must therefore leave the classroom in the middle of the school year. A substitute is called in who is not Mr. Horrible, but simply someone who is not very expressive, someone we can call Ms. Neutral. At first the child continues to feel good (that is, to smile) whenever present in the classroom. But because the link between the classroom and your particular smiles is no longer repeated or associated, the child’s response gradually extinguishes, or fades until it has disappeared entirely. In a sense the child’s initial learning is “unlearned.”
Extinction can also happen with negative examples of classical conditioning. If Mr. Horrible leaves mid-year (perhaps because no one could stand working with him any longer!), then the child’s negative responses (cringing, eyes widening, heart beat racing, and so on) will also extinguish eventually. Note, though, that whether the conditioned stimulus is positive or negative, extinction does not happen suddenly or immediately, but unfolds over time. This fact can sometimes obscure the process if you are a busy teacher attending to many students.
Generalization

When Pavlov studied conditioning in dogs, he noticed that the original conditioned stimulus was not the only neutral stimulus that elicited the conditioned response. If he paired a particular bell with the sight of food, for example, so that the bell became a conditioned stimulus for salivation, then it turned out that other bells, perhaps with a different pitch or type of sound, also acquired some ability to trigger salivation—though not as much as the original bell. Psychologists call this process generalization, or the tendency for similar stimuli to elicit a conditioned response. The child being conditioned to your smile, for example, might learn to associate your smile not only with being present in your own classroom, but also with being present in other, similar classrooms. His conditioned smiles may be strongest where he learned them initially (that is, in your own room), but nonetheless visible to a significant extent in other teachers’ classrooms. To the extent that this happens, he has generalized his learning. This is of course good news; it means that we can say that the child is beginning to “learn to like school” in general, and not just your particular room. Unfortunately, the opposite can also happen: if a child learns negative associations from Mr. Horrible, the child’s fear, caution, and stress might generalize to other classrooms as well. The lesson for teachers is therefore clear: we have a responsibility, wherever possible, to make classrooms pleasant places to be.
Discrimination

Generalization among similar stimuli can be reduced if only one of the similar stimuli is associated consistently with the unconditioned response, while the others are not. When this happens, psychologists say that discrimination learning has occurred, meaning that the individual has learned to distinguish or respond differently to one stimulus than to another. From an educational point of view, discrimination learning can be either desirable or not, depending on the particulars of the situation. Imagine again (for the fourth time!) the child who learns to associate your classroom with your smiles, so that he eventually produces smiles of his own whenever present in your room. But now imagine yet another variation on his story: the child is old enough to attend middle school, and therefore has several teachers across the day. You—with your smiles—are one, but so are Mr. Horrible and Ms. Neutral. At first the child may generalize his classically conditioned smiles to the other teachers’ classrooms. But the other teachers do not smile like you do, and this fact causes the child’s smiling to extinguish somewhat in their rooms. Meanwhile, you keep smiling in your room. Eventually the child is smiling only in your room and not in the other rooms. When this happens, we say that discrimination has occurred, meaning that the conditioned associations happen only to a single version of the unconditioned stimuli—in this case, only to your smiles, and not to the (rather rare) occurrences of smiles in the other classrooms. Judging by his behavior, the child is making a distinction between your room and others’.
In one sense the discrimination in this story is unfortunate in that it prevents the child from acquiring a generalized liking for school. But notice that an opposing, more desirable process is happening at the same time: the child is also prevented from acquiring a generalized dislike of school. The fear-producing stimuli from Mr. Horrible, in particular, become discriminated from the happiness-producing smiles from you, so the child learns to confine his fearful responses to that particular classroom, and does not generalize them to other “innocent” classrooms, including your own. This is still not an ideal situation for the student, but maybe it is more desirable than disliking school altogether.
Operant Conditioning: New Behaviors Because of New Consequences
Instead of focusing on associations between stimuli and responses, operant conditioning focuses on the effects of consequences on behaviors. The operant model of learning begins with the idea that certain consequences tend to make certain behaviors happen more frequently. If I compliment a student for a good comment during a discussion, there is a greater chance that I will hear comments from the student in the future (and hopefully they will also be good ones!). If a student tells a joke to several classmates and they laugh at it, then the student is more likely to tell additional jokes in the future. And so on.
As with respondent conditioning, the original research about this model of learning was not done with people, but with animals. One of the pioneers in the field was a Harvard professor named B. F. Skinner, who published numerous books and articles about the details of the process and who pointed out many parallels between operant conditioning in animals and operant conditioning in humans (1938, 1948, 1988). Skinner observed the behavior of rather tame laboratory rats (not the unpleasant kind that sometimes live in garbage dumps). He or his assistants would put them in a cage that contained little except a lever and a small tray just big enough to hold a small amount of food. (Figure 3 shows the basic set-up, which is sometimes nicknamed a “Skinner box.”) At first the rat would sniff and “putter around” the cage at random, but sooner or later it would happen upon the lever and eventually happen to press it. Presto! The lever released a small pellet of food, which the rat would promptly eat. Gradually the rat would spend more time near the lever and press the lever more frequently, getting food more frequently. Eventually it would spend most of its time at the lever and eating its fill of food. The rat had “discovered” that the consequence of pressing the lever was to receive food. Skinner called the changes in the rat’s behavior an example of operant conditioning, and gave special names to the different parts of the process. He called the food pellets the reinforcement and the lever-pressing the operant (because it “operated” on the rat’s environment).
Skinner and other behavioral psychologists experimented with using various reinforcers and operants. They also experimented with various patterns of reinforcement (or schedules of reinforcement), as well as with various cues or signals to the animal about when reinforcement was available. It turned out that all of these factors—the operant, the reinforcement, the schedule, and the cues—affected how easily and thoroughly operant conditioning occurred. For example, reinforcement was more effective if it came immediately after the crucial operant behavior, rather than being delayed, and reinforcements that happened intermittently (only part of the time) caused learning to take longer, but also caused it to last longer.
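A schedule of reinforcement can be made concrete as a decision rule that determines, for each response, whether reinforcement is delivered. The sketch below is purely illustrative; the function names and parameter values are my own assumptions, not terminology from the text.

```python
import random

# Three reinforcement schedules expressed as simple decision rules.
# (Illustrative sketch only; names and numbers are hypothetical.)

def continuous_schedule(response_count):
    """Reinforce every single response (a 'continuous' schedule)."""
    return True

def fixed_ratio_schedule(response_count, ratio=5):
    """Reinforce every `ratio`-th response, e.g. every 5th lever press."""
    return response_count % ratio == 0

def variable_ratio_schedule(response_count, p=0.2):
    """Reinforce each response with probability p: intermittent and
    unpredictable, the kind of partial schedule that makes learned
    behavior slower to extinguish."""
    return random.random() < p

# Count reinforcements earned over 20 responses under each schedule.
presses = range(1, 21)
print(sum(continuous_schedule(i) for i in presses))     # 20
print(sum(fixed_ratio_schedule(i) for i in presses))    # 4 (presses 5, 10, 15, 20)
print(sum(variable_ratio_schedule(i) for i in presses)) # varies from run to run
```

The contrast between the first two counts and the third mirrors the finding described above: intermittent schedules deliver reinforcement less often and less predictably, which slows both learning and extinction.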
Operant Conditioning and Students’ Learning
As with respondent conditioning, it is important to ask whether operant conditioning also describes learning in human beings, and especially in students in classrooms. On this point the answer seems to be clearly “yes.” There are countless classroom examples of consequences affecting students’ behavior in ways that resemble operant conditioning, although the process certainly does not account for all forms of student learning (Alberto & Troutman, 2005). Consider the following examples. In most of them the operant behavior tends to become more frequent on repeated occasions:
- A seventh-grade boy makes a silly face (the operant) at the girl sitting next to him. Classmates sitting around them giggle in response (the reinforcement).
- A kindergarten child raises her hand in response to the teacher’s question about a story (the operant). The teacher calls on her and she makes her comment (the reinforcement).
- Another kindergarten child blurts out her comment without being called on (the operant). The teacher frowns and ignores the behavior, but before the teacher calls on a different student, classmates listen attentively (the reinforcement) to the child even though she did not raise her hand as she should have.
- A twelfth-grade student—a member of the track team—runs one mile during practice (the operant). He notes the time it takes him as well as his increase in speed since joining the team (the reinforcement).
- A child who is usually very restless sits for five minutes doing an assignment (the operant). The teaching assistant compliments him for working hard (the reinforcement).
- A sixth-grader takes home a book from the classroom library to read overnight (the operant). When she returns the book the next morning, her teacher puts a gold star by her name on a chart posted in the room (the reinforcement).
Hopefully these examples are enough to make four points about operant conditioning. First, the process is widespread in classrooms—probably more widespread than respondent conditioning. This fact makes sense, given the nature of public education: to a large extent, teaching is about making certain consequences for students (like praise or marks) depend on students’ engaging in certain activities (like reading certain material or doing assignments). Second, learning by operant conditioning is not confined to any particular grade, subject area, or style of teaching, but by nature happens in nearly every imaginable classroom. Third, teachers are not the only persons controlling reinforcements. Sometimes they are controlled by the activity itself (as in the track team example), or by classmates (as in the “giggling” example). A result of all of the above points is a fourth: that multiple examples of operant conditioning often happen at the same time. The Case Study for this chapter (The Decline and Fall of Jane Gladstone) suggests how this happened to someone completing student teaching.
Because operant conditioning happens so widely, its effects on motivation are a bit more complex than the effects of respondent conditioning. As in respondent conditioning, operant conditioning can encourage intrinsic motivation to the extent that the reinforcement for an activity can sometimes be the activity itself. When a student reads a book for the sheer enjoyment of reading, for example, he is reinforced by the reading itself; then we often say that his reading is “intrinsically motivated.” More often, however, operant conditioning stimulates both intrinsic and extrinsic motivation at the same time. The combination of the two is noticeable in the examples listed above. In each example, it is reasonable to assume that the student felt intrinsically motivated to some partial extent, even when reward came from outside the student as well. This was because part of what reinforced the student’s behavior was the behavior itself—whether it was making faces, running a mile, or contributing to a discussion. At the same time, though, note that each student probably was also extrinsically motivated, meaning that another part of the reinforcement came from consequences or experiences not inherently part of the activity or behavior itself. The boy who made a face was reinforced not only by the pleasure of making a face, for example, but also by the giggles of classmates. The track student was reinforced not only by the pleasure of running itself, but also by knowledge of his improved times and speeds. Even the usually restless child sitting still for five minutes may have been reinforced partly by this brief experience of unusually focused activity, even if he was also reinforced by the teaching assistant’s compliment. Note that the extrinsic part of the reinforcement may sometimes be more easily observed or noticed than the intrinsic part, which by definition may sometimes only be experienced within the individual and not also displayed outwardly.
This latter fact may contribute to an impression that sometimes occurs, that operant conditioning is really just “bribery in disguise”—that only the external reinforcements operate on students’ behavior. It is true that external reinforcement may sometimes alter the nature or strength of internal (or intrinsic) reinforcement, but this is not the same as saying that it destroys or replaces intrinsic reinforcement. But more about this issue later! (See especially Chapter 6: Student Motivation.)
Comparing Operant Conditioning and Respondent Conditioning
Operant conditioning is made more complicated, but also more realistic, by many of the same concepts as used in respondent conditioning. In most cases, however, the additional concepts have slightly different meanings in each model of learning. Since this circumstance can make the terms confusing, let me explain the differences for three major concepts used in both models—extinction, generalization, and discrimination. Then I will comment on two additional concepts—schedules of reinforcement and cues—that are sometimes also used in talking about both forms of conditioning, but that are important primarily for understanding operant conditioning. The explanations and comments are also summarized in Table 2.
In both respondent and operant conditioning, extinction refers to the disappearance of “something.” In operant conditioning, what disappears is the operant behavior because of a lack of reinforcement. A student who stops receiving gold stars or compliments for prolific reading of library books, for example, may extinguish (i.e., decrease or stop) book-reading behavior. In respondent conditioning, on the other hand, what disappears is the association between the conditioned stimulus (the CS) and the conditioned response (CR). If you stop smiling at a student, then the student may extinguish her association between you and her pleasurable response to your smile, or between your classroom and the student’s pleasurable response to your smile.
In both forms of conditioning, generalization means that something “extra” gets conditioned if it is somehow similar to “something.” In operant conditioning, the extra conditioning is to behaviors similar to the original operant. If getting gold stars results in my reading more library books, then I may generalize this behavior to other similar activities, such as reading the newspaper, even if the activity is not reinforced directly. In respondent conditioning, however, the extra conditioning refers to stimuli similar to the original conditioned stimulus. If I am a student and I respond happily to my teacher’s smiles, then I may find myself responding happily to other people (like my other teachers) to some extent, even if they do not smile at me. Generalization is a lot like the concept of transfer that I discussed early in this chapter, in that it is about extending prior learning to new situations or contexts. From the perspective of operant conditioning, though, what is being extended (or “transferred” or generalized) is a behavior, not knowledge or skill.
In both forms of conditioning, discrimination means learning not to generalize. In operant conditioning, though, what is not being overgeneralized is the operant behavior. If I am a student who is being complimented (reinforced) for contributing to discussions, I must also learn to discriminate when to make verbal contributions from when not to make verbal contributions—such as when classmates or the teacher are busy with other tasks. In respondent conditioning, what are not being overgeneralized are the conditioned stimuli that elicit the conditioned response. If I, as a student, learn to associate the mere sight of a smiling teacher with my own happy, contented behavior, then I also have to learn not to associate this same happy response with similar, but slightly different sights, such as a teacher looking annoyed.
In both forms of conditioning, the schedule of reinforcement refers to the pattern or frequency by which “something” is paired with “something else.” In operant conditioning, the schedule is the pattern by which reinforcement is linked with the operant. If a teacher praises me for my work, does she do it every time, or only sometimes? Frequently or only once in a while? In respondent conditioning, however, the schedule in question is the pattern by which the conditioned stimulus is paired with the unconditioned stimulus. If I am a student with Mr. Horrible as my teacher, does he scowl every time he is in the classroom, or only sometimes? Frequently or rarely?
Behavioral psychologists have studied schedules of reinforcement extensively (for example, Ferster, et al., 1997; Mazur, 2005), and found a number of interesting effects of different schedules. For teachers, however, the most important finding may be this: partial or intermittent schedules of reinforcement generally cause learning to take longer, but also cause extinction of learning to take longer. This dual principle is important for teachers because so much of the reinforcement we give is partial or intermittent. Typically, if I am teaching, I can compliment a student a lot of the time, for example, but there will inevitably be occasions when I cannot do so because I am busy elsewhere in the classroom.
For teachers concerned both about motivating students and about minimizing inappropriate behaviors, this is both good news and bad. The good news is that the benefits of my praising students’ constructive behavior will be more lasting, because students will not extinguish their constructive behaviors immediately if I fail to support them every single time they happen. The bad news is that students’ negative behaviors may take longer to extinguish as well, because those too may have developed through partial reinforcement. A student who clowns around inappropriately in class, for example, may not be “supported” by classmates’ laughter every time it happens, but only some of the time. Once the inappropriate behavior is learned, though, it will take somewhat longer to disappear even if everyone—both teacher and classmates—makes a concerted effort to ignore (or extinguish) it.
Finally, behavioral psychologists have studied the effects of cues. In operant conditioning, a cue is a stimulus that happens just prior to the operant behavior and that signals that performing the behavior may lead to reinforcement. Its effect is much like discrimination learning in respondent conditioning, except that what is “discriminated” in this case is not a conditioned behavior that is reflex-like, but a voluntary action, the operant. In the original conditioning experiments, Skinner’s rats were sometimes cued by the presence or absence of a small electric light in their cage. Reinforcement was associated with pressing a lever when, and only when, the light was on. In classrooms, cues are sometimes provided by the teacher or simply by the established routines of the class. Calling on a student to speak, for example, can be a cue that if the student does say something at that moment, then he or she may be reinforced with praise or acknowledgement. But if that cue does not occur—if the student is not called on—speaking may not be rewarded. In more everyday, non-behaviorist terms, the cue allows the student to learn when it is acceptable to speak, and when it is not.
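The cue in Skinner's light-signaled experiment can be sketched as a simple gating rule: reinforcement is available only when the cue is present. This minimal illustration uses hypothetical names; it is not an implementation from the text.

```python
# Toy sketch of a cue (a signal for available reinforcement) gating
# the delivery of reinforcement, in the spirit of Skinner's
# light-signaled lever pressing. Names and trials are hypothetical.

def reinforce(lever_pressed, light_on):
    """Food is delivered only when the lever is pressed AND the cue
    (the light) is on; pressing in the dark goes unreinforced."""
    return lever_pressed and light_on

# A rat that has learned the discrimination presses mostly when the
# light is on, because only those presses ever produce food.
trials = [(True, True), (True, False), (False, True), (True, True)]
rewards = [reinforce(press, light) for press, light in trials]
print(rewards)  # [True, False, False, True]
```

The classroom parallel is the same rule with different labels: speaking when called on (cue present) may earn praise, while speaking without the cue does not.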
Constructivism: Changes in How Students Think
Behaviorist models of learning may be helpful in understanding and influencing what students do, but teachers usually also want to know what students are thinking and how to enrich what students are thinking.
- Learning Theories/Behavioralist Theories--a developing Wikibook on this topic, but with less focus on classroom applications. (The misspelling “Behavioralist” is part of the linked title.)
- Learning theory (education)--a brief explanation of types of learning theories, but without many examples.
- Lavond, D. & Steinmetz, J. (2003). Handbook of classical conditioning. Boston: Kluwer Academic Publishing.
- Pavlov, I. (1927). Conditioned reflexes. London, UK: Oxford University Press.
- Skinner, B. F. (1938). The behavior of organisms. New York: Appleton-Century-Crofts.
- Skinner, B. F. (1948). Walden Two. New York: Macmillan.
- Skinner, B. F. (1988). The selection of behavior: The operant behaviorism of B. F. Skinner. New York: Cambridge University Press.
- Alberto, P. & Troutman, A. (2005). Applied behavior analysis for teachers, 7th edition. Upper Saddle River, NJ: Prentice Hall.
- Ferster, C., Skinner, B. F., Cheney, C., Morse, W., & Dews, P. (1997). Schedules of reinforcement. New York: Copley Publishing Group.
- Mazur, J. (2005). Learning and behavior, 6th edition. Upper Saddle River, NJ: Prentice Hall.