
Operant Conditioning and its Application to Instructional Design

The following is an explanation of the relevance of operant conditioning to the instructional design process, including its history and application in instructional strategies.

Operant conditioning is the foundation on which B.F. Skinner explored human behavior. A
branch of traditional behavioral science, operant conditioning came to the forefront of research in
the 1930s through the work of Skinner. Learning in operant conditioning occurs when "a proper
response is demonstrated following the presentation of a stimulus" (Ertmer & Newby, 1993, p.
55). This means that learning has taken place when there is an observable change in the behavior
of the learner after the instruction has been delivered. Skinner was preceded by theorists such as J.B. Watson, who studied the objective data of behavior, and Ivan Pavlov, often referred to as the Father of Classical Conditioning (Burton, 1981; Driscoll, 1994). Classical conditioning focuses
on the involuntary response of the learner following a stimulus.

Similar to classical conditioning, operant conditioning studies the response of the learner
following a stimulus; however, the response is voluntary and the concept of reinforcement is
emphasized. The relationship in operant conditioning includes three component parts: the
stimulus, a response, and the reinforcement following the response. According to Burton, operant
conditioning is based on "a functional and interconnected relationship between the stimuli that
preceded a response (antecedents), the stimuli that follow a response (consequences), and the
response (operant) itself" (1981, p. 50). Skinner determined that reinforcement following a
response would alter the operant, or response, by encouraging correct behavior or discouraging
incorrect behavior. Skinner referred to the operant as "any behavior that produced the same
effect on the environment" and the relationship between the operant and its consequences was
termed "contingency" (Cook, 1993, p. 63). Environmental factors influence learning, but most
important is the arrangement between the stimuli and the consequence, or reaction, of the learner
in his environment (Ertmer & Newby, 1993). Contingency, according to Cook, is a "kind of 'if-
then' relationship: if the response is made, then the reinforcement is delivered" (1993, p. 63). For
example, a stimulus is presented (the teacher asks a question), the learner responds (a child raises a hand), and reinforcement is delivered (the teacher calls on the student with the raised hand).
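
To make the "if-then" structure of a contingency concrete, the following minimal Python sketch encodes the classroom example; the function name and the stimulus/response strings are hypothetical illustrations, not drawn from the cited sources.

```python
from typing import Optional

# Minimal sketch of the three-term contingency: antecedent stimulus,
# operant response, reinforcing consequence. Names are illustrative only.
def classroom_contingency(stimulus: str, response: str) -> Optional[str]:
    expected = {"teacher asks a question": "child raises hand"}
    if expected.get(stimulus) == response:     # if the response is made...
        return "teacher calls on the student"  # ...then reinforcement is delivered
    return None                                # otherwise, no reinforcement

print(classroom_contingency("teacher asks a question", "child raises hand"))
```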

Reinforcement serves one of two purposes: strengthening a response or weakening a response. Types of reinforcement include positive and negative to strengthen a response, and punishment, extinction, response cost, and timeout to weaken a response. Positive reinforcement is the
"presentation of a reinforcer (satisfying stimulus) contingent upon a response that results in the
strengthening of that response" (Driscoll, 1994, p. 32). An example of positive reinforcement
would be praise, a reward, or a gift after the learner displays appropriate behavior. A negative reinforcer also strengthens a response, but by removing an aversive stimulus contingent upon that response (Driscoll, 1994). An example of negative reinforcement would be a child finally doing his homework just to stop his parents from nagging. Punishment is used to weaken a response, or decrease an inappropriate behavior, and it is the consequence most of us are familiar with. Examples include
taking away a favorite toy when a child is acting up or grounding a teenager for coming home
past curfew. Other methods of weakening an undesired response include extinction, removal of
the reinforcement maintaining a response; response cost, removal of reinforcement contingent
upon behavior by imposing a fine; and timeout, removing the learner from the environment that
reinforces the incorrect behavior (Driscoll, 1994).

Maintenance of the newly acquired behavior is an important part of the operant conditioning
theory. Methods of maintenance include a ratio schedule of reinforcement and an interval
schedule of reinforcement. A ratio schedule relies on the number of times the appropriate
response is made after the stimulus is delivered. After a set number of correct responses, the
reinforcement is delivered by the instructor (Driscoll, 1994). Interval scheduling depends on a set amount of time that must elapse before a correct response is reinforced. Both ratio and interval schedules can be fixed, using a set number of responses or a set time period, or variable, using a number of responses or a time period that varies around an average (Driscoll, 1994).
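
As a rough illustration of the difference between the two schedule families, the sketch below models a fixed-ratio and a fixed-interval schedule in Python; the class names and return values are assumptions made for this example, not taken from Driscoll (1994).

```python
import time

class FixedRatioSchedule:
    """Reinforce after every n-th correct response (response count drives reinforcement)."""
    def __init__(self, n: int):
        self.n = n
        self.correct_responses = 0

    def record_correct_response(self) -> bool:
        self.correct_responses += 1
        return self.correct_responses % self.n == 0  # True means "deliver reinforcement"


class FixedIntervalSchedule:
    """Reinforce the first correct response after a set amount of time has elapsed."""
    def __init__(self, seconds: float):
        self.seconds = seconds
        self.last_reinforced = time.monotonic()

    def record_correct_response(self) -> bool:
        if time.monotonic() - self.last_reinforced >= self.seconds:
            self.last_reinforced = time.monotonic()
            return True
        return False
```

A variable schedule would simply replace the fixed n or seconds with a value drawn at random around an average after each reinforcement.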

Because the learner is reacting to the stimulus in the environment, behaviorism in general is
widely criticized for promoting a passive role of the learner in receiving information. According
to Ertmer and Newby, "the learner is characterized as reactive to conditions in the environment
as opposed to taking an active role in discovering the environment" (1993, p. 50). This is a
misinterpretation of what Skinner believed the role of the learner to be. He emphasized the active
role of the learner. According to Skinner, the learner "does not passively absorb knowledge from
the world around him but must play an active role" (Burton, 1981, p. 49). Skinner's statement is
reinforced by the central premise of behaviorism: the learner's change in observable behavior
indicates that learning has occurred. Skinner identified three components necessary for learning:
doing, experiencing, and practice (Burton, 1981). These three components work together to
determine what has been learned, under what conditions, and the consequences that will support
the learned behavior. The types of learning that are achieved in an operant conditioning
environment are discrimination (recalling facts), generalization (defining and illustrating concepts), association (applying explanations), and chaining (automatically performing a procedure) (Ertmer &
Newby, 1993). Instructional strategies for teaching these learning outcomes include shaping,
fading, and chaining. Shaping is used to teach relatively simple tasks by breaking the task down
into small components (Driscoll, 1994). Chaining is similar to shaping but used to break down
complex tasks; however, there is a difference regarding the reinforcement schedule. In shaping,
reinforcement is delivered all throughout the steps, whereas with chaining the reinforcement is
not delivered until the end and the learner can demonstrate the task in its entirety (Driscoll,
1994). Discrimination, according to Driscoll (1994), is best taught using fading techniques, which involve the gradual withdrawal of prompts or cues as the desired behavior is reliably elicited.
These prescriptive strategies aid the instructor in reaching the desired learning outcome.
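
To show how the reinforcement schedule distinguishes shaping from chaining as described above, here is a brief Python sketch; the step callables and the reinforce() hook are hypothetical placeholders for real instructional events.

```python
from typing import Callable, Sequence

def shape(steps: Sequence[Callable[[], bool]], reinforce: Callable[[str], None]) -> None:
    """Shaping: reinforce each small component as soon as it is performed."""
    for i, attempt_step in enumerate(steps):
        if attempt_step():            # learner attempts one small component
            reinforce(f"step {i}")    # reinforcement delivered throughout the steps

def chain(steps: Sequence[Callable[[], bool]], reinforce: Callable[[str], None]) -> None:
    """Chaining: reinforce only when the whole task is demonstrated end to end."""
    if all(attempt_step() for attempt_step in steps):
        reinforce("entire task")      # reinforcement withheld until the final step
```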

In the 1960s, Skinner used Sidney Pressey's teaching machines as a basis for creating programmed instruction. Pressey's teaching machines were developed in the mid-1920s, first as a
self-scoring testing device and then evolved to include immediate reinforcement for the correct
answer (Burton, 1981). Research conducted on his teaching machines concluded that "errors
were eliminated more rapidly with meaningful material and found that students learned more
efficiently when they could correct errors immediately" (Burton, 1981, p. 23). Pressey's teaching
machines were popular with the U.S. Air Force after World War II. They were "variations of an
automatic self-checking technique" and "essentially allowed students to get immediate
information concerning accuracy of response" (Burton, 1981, p. 53). Skinner later applied behaviorist theory to the design of teaching machines and created programmed instruction, which he popularized in the 1960s. The technique was similar
to Pressey's teaching machines in the use of immediate feedback after the response and student-
controlled rate of instruction, but Skinner applied operant conditioning principles to programmed
instruction. Because learning is measured by the change in behavior and the maintenance of the changed behavior, Skinner "required students to 'overtly' compose responses" (Burton, 1981, p. 54). Pressey had used multiple choice as the method of assessment, a method that Skinner thought left room for error. Skinner required the student to write out the response because this
behavior could be observed (Burton, 1981). The content in programmed instruction is arranged
in small chunks and organized in a simple to complex sequence. The learner progresses by
responding correctly, receiving feedback, and moving forward. If the response is incorrect, the
learner repeats instruction until there are no mistakes. This allows the learner to set his own pace.
The instruction is linear with no paths diverging from the directed instruction.
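
The flow just described, small frames, an overt response, immediate feedback, and repetition until mastery, can be sketched as a short Python loop; the frame contents and the use of input()/print() are illustrative assumptions, not a reconstruction of Skinner's actual materials.

```python
# Minimal sketch of a linear programmed-instruction loop.
frames = [
    {"prompt": "2 + 2 = ?", "answer": "4"},   # small chunks, ordered simple to complex
    {"prompt": "2 x 3 = ?", "answer": "6"},
]

def run_linear_program(frames):
    for frame in frames:                            # strictly linear: no diverging paths
        while True:                                 # repeat until there are no mistakes
            response = input(frame["prompt"] + " ") # overt, composed response
            if response.strip() == frame["answer"]:
                print("Correct.")                   # immediate feedback / reinforcement
                break                               # move forward only after a correct response
            print("Incorrect, try again.")          # repeat the same frame

if __name__ == "__main__":
    run_linear_program(frames)
```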

Although programmed instruction is effective in achieving certain learning outcomes, it is sometimes characterized as boring because of the monotony, repetition, and small steps towards mastery. Crowder attempted to alleviate this problem by introducing branching to programmed
instruction. In branching, there are several possible answers and larger units of instruction. This
format also allows students to skip over what they already know and to be branched into
appropriate advanced or remedial sections (Driscoll, 1994). Whereas Skinner's programmed
instruction encouraged the overt response of the learner, Crowder reverted to Pressey's approach
and gave the learner multiple choice questions at the end of instruction. Because it does not require an overt response, this approach departs from operant conditioning principles, although it still provides the immediate feedback and reinforcement those principles call for. According to Burton (1996), several studies comparing the two response types, overt composition and multiple choice, found no difference in learner performance.
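
A rough sketch of Crowder-style branching, in the same spirit as the linear example above, might look like the following; the frame graph, question text, and routing rules are purely illustrative assumptions.

```python
# Each frame poses a larger unit of instruction and routes the learner by answer.
frames = {
    "intro": {
        "prompt": "Which schedule reinforces after a set number of responses?",
        "choices": {"a": "fixed ratio", "b": "fixed interval"},
        "branch": {"a": "advanced", "b": "remedial"},   # correct answer skips ahead
    },
    "remedial": {
        "prompt": "Review: a ratio schedule counts responses; reread the unit.",
        "choices": {}, "branch": {},
    },
    "advanced": {
        "prompt": "Next unit: variable schedules of reinforcement.",
        "choices": {}, "branch": {},
    },
}

def run_branching_program(frames, start="intro"):
    frame = frames[start]
    while frame["choices"]:
        print(frame["prompt"])
        for key, text in frame["choices"].items():
            print(f"  {key}) {text}")
        choice = input("> ").strip().lower()
        if choice in frame["branch"]:
            frame = frames[frame["branch"][choice]]   # immediate routing acts as feedback
        else:
            print("Please choose one of the listed options.")
    print(frame["prompt"])                            # terminal frame: no further branching

if __name__ == "__main__":
    run_branching_program(frames)
```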

Computer-based instruction originates from Skinner's programmed instruction. These computer-based instructional strategies closely follow Skinner's operant conditioning by presenting a stimulus, eliciting a response, and providing immediate feedback. Computers added more options and variety to the instruction, which addressed some of the criticism of monotonous and
boring instruction. Computers changed the instruction by allowing for complex branching of
content, record of student response, graphics and speech, drill and practice, problem solving, and
tutorials (Driscoll, 1994). Computers also provide cueing and shaping techniques to guide the learner toward achievement. Computer-based instruction is currently used in training and education models such as CBT (computer-based training) and CAI (computer-assisted instruction).
Although the technology has allowed for a more sophisticated presentation, the basis of the
instruction is primarily behaviorist in nature and based on Skinner's programmed instruction.

Behaviorism has strongly influenced the standard instructional design process. Creators of programmed instruction needed to determine when to begin instruction, and they did so by analyzing the learner's prerequisite knowledge. Learner analysis and the identification of prerequisite skills in the instructional design process originated with the behaviorists as they developed their instruction, namely through teaching machines and programmed instruction. The Needs Analysis phase of the ID process includes both the learner analysis and prerequisite skills.

One of the most important contributions of behaviorism to the instructional design process is the
identification and measurement of learning. Behaviorists agree that "learning has occurred when
learners evidence the appropriate response to a particular stimulus" (Smith and Ragan, 1999, p.
19). The emphasis on producing observable and measurable outcomes led to the creation of
performance objectives (Driscoll, 1994, p. 55). In the instructional design process, performance
objectives describe what the learner will accomplish, under what conditions, and how the learner
will be measured. These components are included in the Task Analysis phase of the ID process
and the assessment of the learner at the end of instruction.

In programmed instruction, the learner is required to pass each section before continuing to the
next segment of instruction. This technique encouraged mastery learning. In order to achieve
mastery, it is necessary that the content be organized from simple to complex. The learner needs
to grasp the basic information prior to moving on to more difficult tasks. Instructional designers
take this sequence into consideration when developing material. They must first determine the
prerequisite knowledge and then lay out the steps of the new content in a format conducive to
achieving mastery. Instructional designers also use instructional strategies of cueing, shaping,
and fading to guide the learner through the instruction. This process takes place in the Task
Analysis phase of ID. Before moving ahead with instruction, the learner is given feedback on
each answer. This is based on the reinforcement Skinner believed essential to learning, since reinforcement shapes the learner's performance. By encouraging correct responses and discouraging incorrect ones, programmed instruction uses the instructional strategies based on operant conditioning: reinforcement and feedback (Ertmer & Newby, 1993). Finally, the use of practice and shaping in instruction has its roots in behaviorism.
The sequencing of practice from simple to complex and the use of prompts are strategies
Skinner applied in his research of operant conditioning. Successive approximations are
reinforced until the goal has been reached (Driscoll, 1994).

Operant conditioning has influenced education and continues to guide the development of instruction. Although some techniques have changed and technology has evolved, programmed
instruction is widely used and modified to suit individual needs. The cognitive perspective has
added to the instructional strategies and finds itself combined with behaviorism when
technology-based instruction is delivered. The influence of behaviorism on the instructional design process is significant and still apparent in current design.

References

Burton, J.K., Moore, D.M., & Magliaro, S.G. (1996). Behaviorism and instructional technology.
In D.H. Jonassen (Ed.), Handbook of research for educational communications and
technology (pp. 46-67). New York, NY: Simon and Schuster.

Cook, D.A. (1993, October). Behaviorism evolves. Educational Technology, pp. 62-77.

Cooper, P.A. (1993, May). Paradigm shifts in designed instruction: From behaviorism to
cognitivism to constructivism. Educational Technology, 33(5), 12-19.

Driscoll, M.P. (1994). Psychology of learning for instruction. Boston: Allyn and Bacon.

Ertmer, P.A., & Newby, T.J. (1993). Behaviorism, cognitivism, constructivism: Comparing
critical features from an instructional design perspective. Performance Improvement
Quarterly, 6(4), 50-72.

Kunkel, J.H. (1996). What have behaviorists accomplished--and what more can they do?
Psychological Record, 46(1), 21-38.

Smith, P.L., & Ragan, T.J. (1999). Instructional design. New Jersey: Prentice-Hall.
