Theories of learning have been formulated to account for the ways in which learning takes place. A brief description of the key theories of learning is presented here. These theories have been designated by the terms punishment, two-process, one-process, reinforcement, drive reduction, relative value, response deprivation, avoidance, conditioning, vicarious learning, generalization, and discrimination.
It should be noted that there are variations within each theory and that it is usually impossible to classify most educational psychologists strictly as adherents of one or another of these theories.
It is noteworthy also that, although each of these theories has sought to explain learning wholly on a materialistic basis, some aspects of each are considered to be of value.

Theories of Punishment

Punishment is an aversive control technique.
As a control technique, punishment tends to polarize the attitudes of learners. Proponents of the theories of punishment cite all the research showing that consistent application of intense and immediate punishment can effectively reduce the strength of undesirable responses (Schwartz, 1989).
The successful use of theories of punishment is illustrated by the accomplishments of Ivar Lovaas of the University of California at Los Angeles. A girl named Beth, suffering from a severe childhood disorder called autism, repeatedly engaged in self-injurious behavior.
She often beat her head on the sharp corners of furniture, tore her skin with her teeth or nails, or burned herself. Reasoning that such behavior was being maintained by the attention it produced, Lovaas began to punish acts of self-mutilation with a slap or a painful, but harmless, electric shock. The result was a rapid decline in the frequency of self-injurious behaviors (Carr & Lovaas, 1983).
Many psychologists, however, are concerned about the negative side effects common to the use of punishment.
Harsh environments that contain a heavy dose of aversive stimulation are known to foster hostility, aggression, and fear in animals and humans alike. Accordingly, any system that makes use of punishment as a control agent must be willing to accept an increase in these potentially destructive forces. It is not an issue of whether or not punishment works; it does. But along with the gains in control come the disturbing risks inherent in dealing with a volatile organism.
Two-Process Theory

One of the major theories of the memory process is the two-process theory. The two processes are known as short-term memory and long-term memory. Short-term memory lasts for only 18 to 30 seconds.
If we do something with the information (e.g., think about it, analyze it, criticize it), it passes into long-term memory. There it remains, as some evidence suggests, forever. Short-term memory can be enhanced by chunking and long-term memory by association (Brown, 1988).

One-Process Theory

The level-of-processing model suggests that there is only one memory facility. It is the degree of processing you do that determines whether you retain the material. If you do nothing with the information, the trace simply decays and you lose it. If you rhyme the information with something, you fix it to some degree. If you compare it with something, you fix it a bit deeper. If you analyze it or criticize it significantly, you fix it in your head deeper still (Brown, 1988).

Theories of Reinforcement

B. F. Skinner popularized this theory of reinforcement. Anything that increases the likelihood that a behavior will be repeated is referred to as reinforcement.
It is very important to note that the definition says nothing about whether the child "likes" or "dislikes" the reinforcement, whether it is deliberately given or not, or whether the behavior in question is the one desired. There are two kinds of reinforcement: positive and negative. In positive reinforcement, also known as reward training, the emission of an operant response is followed by stimuli, called positive reinforcers, that make the actions that produce them more probable. Hence, positive reinforcers motivate the learner to repeat the behavior with increased frequency, duration, and intensity. A reinforcer can be anything that the learner may perceive as reinforcing (e.g., grades, oral or written commendation).
Negative reinforcement, on the other hand, is the process by which a response that leads to the removal of an aversive event becomes more likely. It is a training procedure wherein operant behaviors terminate or postpone the delivery of aversive stimuli. As is true for any type of reinforcement manipulation, response probability increases. As a general rule, more aversive stimuli produce greater performance than less aversive stimuli. Examples of negative reinforcement abound in everyday life.
Sometimes the responses involved terminate aversive events that are already present, as when a window is closed to shut out traffic noise, or when a person takes aspirin to decrease headache pain. In other instances, negative reinforcement involves the removal of threatened aversive stimuli. Indeed, anyone who has ever studied to avoid failing an academic course has been controlled in this way (Domjan, 1998).

Conditioned Reinforcement
No one is likely to dispute that there are certain kinds of outcomes or reinforcers that are meaningful to every person on earth. Food, water, and a place to lie down at the close of an exhausting day are things we all need just to survive, and people understandably seek to obtain them. Because these reinforcers are natural and have biological relevance, they are called primary reinforcers. Primary reinforcers tend to be few in number, and they generally operate uniformly across species.
When an event acquires reinforcing properties because of an association with a primary reinforcer, that event is labeled a conditioned reinforcer. Conditioned reinforcers, also known as secondary reinforcers, constitute a distinctive category of stimuli that determine much of what we do. They include books, music, art, report cards, attention, and even other people. Essentially all the elements having membership in this class of events are fundamentally neutral, but due to their conditioning history they have taken on added meaning and power (Bolles, 1982).
Hull's Drive Reduction Theory

Hull's drive reduction theory states that learning occurs as a result of an organism seeking to fulfill a physical need. Clark Leonard Hull's drive reduction theory (i.e., organisms, including humans, learn to perform behaviors that have the effect of reducing their biological drives) is based upon Hull's mathematical formulation known as Hull's law. The equation of Hull's law reads as follows:

E = H x D, where

D = Drive: the strength of a biologically based homeostatic need.
H = Habit: the strength of a particular stimulus-response association.
E = Energy, or response potential: the energy for performing the behavior, which is directly related to the probability that the behavior will be performed.
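Because the law is multiplicative, a behavior should not occur when either habit strength or drive is zero, however large the other term is. A minimal sketch of this relation (the function name and numeric values are illustrative assumptions, not from Hull):

```python
# Toy sketch of Hull's law: response potential is the product of
# habit strength and drive. All values here are invented for illustration.
def response_potential(habit: float, drive: float) -> float:
    """E = H x D: both habit and drive must be non-zero for behavior."""
    return habit * drive

# A well-practiced response (H = 0.9) with no drive (D = 0) yields E = 0:
assert response_potential(0.9, 0.0) == 0.0
# With equal drive, the stronger habit yields the greater response potential:
assert response_potential(0.9, 0.8) > response_potential(0.3, 0.8)
```

Because E is a product rather than a sum, even a strong habit produces no responding in the absence of drive, which is the homeostatic claim at the heart of the theory.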
Hull's law helps to explain behaviorism in humans better than the earlier theories of Skinner and Thorndike, which looked fairly accurately at the stimulus-response associations of animals. By means of the equation of Hull's law, Hull was able to consider the drive and motivation of humans more explicitly by taking into account such factors as environment, emotion, and prior training that would affect the stimulus-response association.
This perspective focuses on the concept of homeostasis, which is the tendency toward the maintenance of a relatively stable internal environment. According to Hull's system of motivation, responding occurs in an effort to restore homeostatic equilibrium that has been disturbed by internal or external events. Basic to this restoration is the interaction between drives and needs. Specifically, Hull's primary motivational principle asserts that behavior is activated by an internal drive state arising out of a physiological need.
Numerous findings have tarnished the once sterling image of the drive reduction model. For instance, many investigators have shown that animals will learn a new response when that response leads to an increase in the opportunity to explore. And how would Hull explain someone's learning to skydive? In these cases the motivational impetus for responding is associated with increasing, not decreasing, arousal (Hull, 1952).

Relative Value Theory and the Premack Principle

The Premack principle, named for psychologist David Premack, states that a high-probability behavior can serve as a reinforcer for a low-probability behavior. Premack stressed that reinforcers may be better conceptualized as responses.
The argument is that food itself is not a reinforcer; rather, the activity of eating the food is what is reinforcing. Possibly even more important than questioning the identity of reinforcers was Premack's almost heretical claim that reinforcers are relative, being effective with some responses and not others. Premack set out to show that the reinforcer is any response that is independently more probable than the response that you are trying to condition. This expression is referred to as the Premack principle.
Implicit in this statement is the notion that reinforcers can be moved up and down a hierarchy by environmental manipulation; that is, almost any behavior can be made highly probable (a high-valued reinforcer) or improbable (a low-valued reinforcer) if the external conditions are structured properly.
Experimental confirmation of this hypothesis was first provided in a basic animal learning study (Premack, 1982). When environmental restrictions made rats thirsty but not idle, the opportunity to drink water could be used to strengthen the activity of running in a wheel mounted on the side of the test cage. Alternatively, when the rats were restricted in terms of movement but had free access to water, the opportunity to run in the wheel became a reinforcer for the behavior of drinking water.
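The rat experiment reduces to a simple comparison of baseline response probabilities. A toy sketch of this relative-value test (the function name and the probability values are assumptions chosen only to mirror the two restriction conditions):

```python
# Toy sketch of the Premack principle: a behavior can reinforce another
# only if its independent baseline probability is higher.
def can_reinforce(reinforcer_prob: float, target_prob: float) -> bool:
    """A more probable response can serve as a reinforcer for a less probable one."""
    return reinforcer_prob > target_prob

# Thirsty but not movement-restricted rats: drinking is the more probable
# response, so it can reinforce wheel-running.
assert can_reinforce(reinforcer_prob=0.7, target_prob=0.2)
# Movement-restricted rats with free water: the relation reverses, and
# wheel-running now reinforces drinking instead.
assert not can_reinforce(reinforcer_prob=0.2, target_prob=0.7)
```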
So it seems that different events can be accorded different reinforcement status, depending on the conditions under which they are introduced.
The Premack principle circumvents many of the problems inherent in reinforcement applications by focusing on the need for individual training. When the Premack approach is used, what constitutes a reinforcer is individually determined, so you can feel relatively secure that what you are using as a reward is indeed valued.
Despite its appeal, the Premack technique is not without shortcomings. For instance, what is considered a reinforcing activity one day may not be the next, even for the same person. But such vagaries in the status of reinforcing events do not undermine the entire Premack program, of course; they simply try the patience of behavior controllers. Actually, most of the time the system works, because high-frequency behaviors tend to remain high-frequency behaviors and therefore are reliable rewards.

Theories of Avoidance
Theories of avoidance account for avoidance behavior through Miller's (1948) classic study, which demonstrated that a previously conditioned aversive CS can provide reinforcement for the learning of a new response once the US has been discontinued. Miller not only achieved this objective, but the study was also replicated by Brown and Jacobs (1949) to rule out potential effects of frustration or conflict to which the results might be attributed instead of fear. We know of no other construct offered as a substitute for fear (e.g., a cognitive construct) that is capable of empirically producing this effect. Even Seligman and Johnston's (1973) cognitive theory of avoidance required the addition of a fear construct to account for the acquisition of the avoidance response. Although a number of competing theories have challenged two-factor and related fear theories' account of avoidance learning (e.g., Bolles, 1989; Seligman, 1983; Klein & Mowrer, 1989), fear theory has withstood these challenges and still remains the dominant theory.

Theories of Conditioning

Traditionally, theories of conditioning hold that learning takes place when two or more events are associated because they occur together.

Classical Conditioning
Scientific references to classical conditioning are commonly associated with Ivan P. Pavlov (1849-1936) as he was the first person to discuss issues related to classical conditioning with others in the scientific community. Classical conditioning is a form of learning in which two stimulus events are associated. Typically, a conditioned stimulus (CS) is paired with an unconditioned stimulus (US) that naturally produces an unconditioned response (UR).
The result is that the conditioned stimulus acquires the capacity to elicit a new response (the conditioned response, or CR) that is similar in form to the unconditioned response.
The basic theory underlying conditioning is behaviorism, which was formulated by the American behaviorist John B. Watson (1930). This theory has been described as an evolutionary psychological doctrine developed to support evolutionistic theories of knowledge. It holds that all of man's behavior, mental states, and processes have a purely physiological origin and function, consisting of neurological, glandular, and other bodily responses to sensory stimuli, and that under proper stimulation a person can be conditioned to produce any desired response.

Operant Conditioning

Skinner (1953) developed the method of conditioning through what has been termed operant or instrumental conditioning. Skinner's version of instrumental conditioning, called operant conditioning, is a technologically based model that has generated a great deal of research.
Operant conditioning involves voluntary behavior emitted by the learner, which may be reinforced by its consequences. In operant conditioning, whether a response occurs in the future depends upon the nature of the contingency. If a response makes life better for the individual, it will likely occur again in the future. If it makes life worse, it will likely not occur again.
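The contingency described above can be sketched as a toy update rule in which each consequence nudges the probability of the response. The function name, step size, and starting probability are assumptions for illustration, not Skinner's own formulation:

```python
# Toy sketch of an operant contingency: a response followed by a
# reinforcer becomes more probable; one followed by an aversive
# consequence becomes less so. The step size is arbitrary.
def update_probability(prob: float, reinforced: bool, step: float = 0.1) -> float:
    """Nudge response probability up or down based on its consequence."""
    if reinforced:
        return min(1.0, prob + step)
    return max(0.0, prob - step)

p = 0.5
for _ in range(3):                  # three reinforced trials
    p = update_probability(p, reinforced=True)
assert p > 0.5                      # the behavior is now more likely
```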
Thus, operant conditioning makes use of reinforcement. As noted earlier, anything that increases the likelihood that a behavior will be repeated is referred to by behaviorists as reinforcement, regardless of whether the student "likes" or "dislikes" it, whether it is deliberately given or not, or whether the behavior in question is the one desired. In operant conditioning, when a student responds with a behavior that is close to what is expected by the teacher, the teacher delivers a positive reinforcer. Positive reinforcers motivate the learner to repeat the behavior with increased frequency, duration, and intensity; a reinforcer can be anything that the learner perceives as reinforcing (e.g., grades, oral or written commendation). Negative reinforcement, on the other hand, is the process by which a response that leads to the removal of an aversive event becomes more likely.
Further, behavior modification is a process of shaping a person's behavior through the acquisition of new operants by means of a series of reinforcements and the sequencing of desired responses. It involves changing behavior in a deliberate and predetermined way by reinforcing those responses that are in the desired direction.

Theories of Generalization and Discrimination
Generalization is the process whereby conditioned responses are transferred to novel stimuli on the basis of the similarity between stimuli. Generalization is restricted through discrimination training, which teaches the subject to respond to a select class of stimuli.
Much of the early experimental work on the theory of generalization was performed by Carl Hovland. In Hovland's 1937 investigation of generalization, a mild electrical shock was used as the US (unconditioned stimulus) to elicit a galvanic skin response (GSR). The GSR is a change in the electrical conductivity of the skin that is thought to fluctuate with changing emotions. A tone was used as the CS (conditioned stimulus) and was paired with the shock. Following 16 such pairings, each subject was presented with the original tone CS (Test Stimulus 0) as well as three additional test tones (Test Stimuli 1, 2, and 3) selected for their increasing distance from the original CS along the dimension of tonal frequency. Hovland found that GSR strength (CR strength) diminished with increasing differences between the original CS and the test tones. Note that none of the added test tones had been paired with the US.
Generalization is at work in a variety of other settings. It is usually beneficial that social behavior learned in one context transfers to other situations, just as it is beneficial that a healthy respect for the potential dangers of hot stoves extends to many different types of stoves. Through the process of generalization we associate responses with an array of stimulus elements without having actually to experience every stimulus member in the broader class of sensory events.
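Hovland's finding, a gradient of CR strength that falls off with distance from the trained CS, can be sketched numerically. The exponential decay function and its rate are assumptions chosen only to reproduce the qualitative shape of the gradient, not Hovland's actual measurements:

```python
# Toy generalization gradient in the spirit of Hovland's design:
# CR strength is maximal at the trained CS (Test Stimulus 0) and
# diminishes as the test stimulus moves away along the dimension.
import math

def cr_strength(test_stimulus: int, trained_cs: int = 0, decay: float = 0.5) -> float:
    """Exponential fall-off of conditioned-response strength with distance."""
    return math.exp(-decay * abs(test_stimulus - trained_cs))

gradient = [cr_strength(s) for s in range(4)]   # Test Stimuli 0 through 3
# Strength is highest at the trained CS and decreases monotonically:
assert all(a > b for a, b in zip(gradient, gradient[1:]))
```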
A theory related to generalization is discrimination. Whereas generalization theory is concerned with the extent to which newly acquired behaviors transfer across similar stimuli, discrimination involves the organism's ability to detect differences among stimuli and respond to only one or a few such stimuli, to the exclusion of all others. In effect, you teach the animal or person not to generalize. In a discrimination training experiment in a salivation study, a dog is placed in a harness, and the amount of salivation (in drops) is continually noted.
The animal is then given 50 training trials consisting of 25 trials with a 1000-Hz tone and 25 trials with an 800-Hz tone, administered in random order. On trials where the 1000-Hz CS is presented, food is delivered following the termination of the tone. No food deliveries follow the presentation of the 800-Hz tone. In the early part of training, the animal is likely to experience some difficulty in distinguishing between the two sounds, and therefore conditioned salivation is expected to occur after both cues.
But with continued training, the dog eventually learns that the 1000-Hz tone is always followed by food and that the 800-Hz tone is never followed by food (Bower & Hilgard, 1981).

References

Bolles, R. C. (1982). Reinforcement, expectancy, and learning. Psychological Review, 79, 394-409.
Bolles, R. C. (1989). Learning theory (2nd ed.). New York: Holt, Rinehart & Winston.
Bower, G. H., & Hilgard, E. R. (1981). Theories of learning (5th ed.). Englewood Cliffs, NJ: Prentice Hall.
Brown, J. A. (1988). Some tests of the process theory of memory. Quarterly Journal of Experimental Psychology, 10, 12-21.
Carr, E. G., & Lovaas, O. I. (1983). Contingent electric shock treatment for severe behavior problems. In S. Axelrod & J. Apsche (Eds.), The effects of punishment on human behavior. New York: Academic Press.
Domjan, M. (1998). The principles of learning and behavior (4th ed.). Pacific Grove, CA: Brooks/Cole.
Hull, C. L. (1952). A behavior system. New Haven, CT: Yale University Press.
Klein, S. B., & Mowrer, R. R. (Eds.). (1989). Contemporary learning theories: Instrumental conditioning and the impact of biological constraints on learning. Hillsdale, NJ: Lawrence Erlbaum Associates.
Premack, D. (1982). Reversibility of the reinforcement relation. Science, 136, 235-237.
Schwartz, B. (1989). Psychology of learning and behavior. New York: Norton.
Seligman, M. E. P. (1983). On the generality of the laws of learning. Psychological Review, 44, 406-418.
Skinner, B. F. (1957). Verbal behavior. New York: Appleton.
Watson, J. B. (1930). Behaviorism (2nd ed.). Chicago: University of Chicago Press.
Learning and Behavior. (2017, Mar 14). Retrieved from https://graduateway.com/learning-and-behavior/