Learning: Principles and Applications, 4/e
Stephen B Klein, Mississippi State University

Traditional Learning Theories

Chapter Outline

  1. TRADITIONAL LEARNING THEORIES

    Two types of theories explain the learning process. The historically older S-R (stimulus-response) theories assume that learning involves more or less automatic, "mechanistic" processes that permit adaptive changes to the environment. More recently, cognitive theorists have suggested that higher-order (mentalistic) processes are involved in learning.

  2. S-R ASSOCIATIVE THEORIES

    S-R theories fall into two major approaches to how learning occurs. One class assumes that learning occurs only when reward is provided. The other assumes that S-R contiguity is the only requirement for learning to occur.

    1. Hull's Drive Theory: Building on Woodworth's earlier assumption that drive motivates behavior, Hull proposed a theory of learning intended to yield a mathematical equation for predicting behavior. The equation predicted behavioral potential from drive, incentive motivation, habit strength, and inhibition; predicting behavior therefore depends upon knowing the values of these variables (a symbolic sketch of the equation follows the subsections below).

      1. Unconditioned Sources of Drive

        Some drives are triggered by internal deprivation conditions such as a lack of food or water for a period of time. This type of drive is unlearned and is termed SUR.

      2. Acquired Drives

        Other drives can develop through Pavlovian conditioning. Environmental cues that reliably predict a deprivation condition can come to elicit an acquired, or conditioned, drive state. This type of drive is learned, is called habit strength, and is termed SHR.

      3. The Reinforcing Function of Drive Reduction

        Habit strength develops through learning, and depends upon how often a response decreases drive. Thus, learning depends upon the reduction of the drive state.

      4. The Elimination of Unsuccessful Behavior

        If drive persists, all behaviors are inhibited for a time, a process called reactive inhibition. After a time, the habitual behavior recurs; if it again fails to reduce drive (is not reinforced), conditioned inhibition of that particular response develops. Because drive remains high, other behaviors in the habit hierarchy are activated until one of them reduces drive and is thereby reinforced.
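
      A common textbook rendering of Hull's equation, offered here as a simplified sketch (exact symbols and additional terms vary by presentation), is:

        SER = SHR × D × K - I

      where SER is excitatory (behavioral) potential, SHR is habit strength, D is drive, K is incentive motivation, and I is inhibition (reactive plus conditioned).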

    2. Incentive Motivation: Hull recognized that different reinforcers have different motivational value; large or rich reinforcers have greater incentive motivation than smaller or poorer ones. Further, animals can learn about cues that predict different reinforcers, and these cues thereby acquire conditioned properties that can motivate behavior.

    3. Spence's Acquired Motive Approach: Hull assumed that drive reduction and reward were synonymous. However, studies of the rewarding properties of electrical stimulation of the brain (Olds & Milner, 1954) and of drive induction (Sheffield, 1966) posed problems for Hull's theory. Spence refined Hull's S-R approach to account for these findings.

      1. The Anticipation of Reward

        Spence suggested that a reward elicits an unconditioned goal response (RG), an internal response that in turn produces an internal stimulus state (SG), which increases motivation and is similar to drive. Early in learning, the cues present at the time of reward become associated with the reward and come to elicit a conditioned, or anticipatory, goal response (rG); its internal stimulus consequences (sG) motivate approach behavior and increase arousal. By assuming that more intense rewards produce a more intense rG than weaker rewards do, and that animals associate various stimuli with responses that reduce drive, Spence extended Hull's S-R theory to explain how behavior changes in predictable ways with different rewards.
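
        Schematically, in the notation used here: reward → RG → SG (the unconditioned goal response and its stimulus state); cues paired with reward → rG → sG → approach behavior (the conditioned, anticipatory mechanism).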

      2. Avoidance of Frustrating Events

        Amsel suggested that the absence of an expected reward generates frustration, which motivates avoidance behavior and suppresses approach behavior. The internal, unconditioned frustration response (RF) motivates the animal and produces an internal stimulus state (SF). Cues present during frustration become conditioned to elicit an anticipatory frustration response (rF), whose internal frustration stimuli (sF) motivate avoidance of frustrating situations.
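
        Schematically, and in parallel with Spence's goal mechanism: nonreward (absence of an expected reward) → RF → SF; cues present during frustration → rF → sF → avoidance of the frustrating situation.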

    4. The Avoidance of Painful Events: Hull's theory ran into difficulty in the 1930s because of research showing that animals learn to avoid aversive events, perhaps because of cognitive processes. Mowrer extended drive-based theory to account for this research.

      1. Two-Factor Theory of Avoidance Learning

        Mowrer proposed a two-factor theory of avoidance learning. The theory assumes that subjects are motivated to escape fear and are not performing on the basis of the future expectation of an aversive event. Thus, the first factor of the theory assumes that fear is conditioned to environmental cues that precede the occurrence of the aversive event. The conditioned fear motivates the occurrence of an escape response that serves to terminate the CS. The second factor of the theory holds that the removal of the cue eliciting fear serves to reinforce the behavior. Thus, escaping from a CS that elicits fear serves as the means to avoid the aversive event.

      2. Criticisms of Two-Factor Theory

        Several problems have been noted with Mowrer's two-factor theory of avoidance conditioning. First, avoidance responding can be extremely resistant to extinction. Second, fear is apparently absent when the avoidance response is well practiced. Finally, avoidance behavior can be learned in situations such as the Sidman avoidance task, where there is no external CS preceding the delivery of the aversive event.

      3. D'Amato's View of Avoidance Learning

        D'Amato's theory assumes that the prevention of the aversive event is important in avoidance conditioning. According to D'Amato, the aversive event elicits an unconditioned pain response (RP) that has a stimulus consequence (SP). The painful stimulus consequence motivates escape from the aversive event. When the aversive event terminates, the subject experiences an unconditioned relief response (RR), which also has a stimulus consequence (SR). Through conditioning, environmental CSs come to elicit an anticipatory pain response (rP) with a stimulus aftereffect (sP). The rP-sP mechanism motivates an escape response from the environmental CSs. Other environmental CSs become associated with the termination of the aversive event, leading to the conditioning of an anticipatory relief response (rR) with its rewarding stimulus consequence (sR). The cues associated with conditioned relief provide a second motivational base for avoidance learning.
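
        The two chains can be summarized using the notation above: aversive event → RP → SP → escape from the event, with preceding CSs → rP → sP → escape from those CSs; and termination of the event → RR → SR, with CSs paired with termination → rR → sR, whose rewarding properties provide the second motivational base for avoidance.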

    5. Nature of Anticipatory Behavior: Spence suggested that conditioned anticipatory responses (rG) were peripheral nervous system events, but Rescorla and Solomon suggested an important role for the central nervous system.

    6. Guthrie's Contiguity View: Guthrie suggested that animals learn to associate stimuli and responses merely through their contiguity. Learning, in other words, depended on a response occurring close in time to particular stimuli. Reward was not necessary.

      1. Impact of Reward

        The function of reward, according to Guthrie, is not to strengthen the S-R connection, as Hull proposed, but simply to change the stimulus (S) situation. When an animal is rewarded, the stimulus situation changes, thereby preserving the previous S-R connection.

      2. The Function of Punishment

        According to Guthrie, punishing stimuli elicit other responses, any one of which may become associated with the preceding stimuli.

      3. The Importance of Practice

        Guthrie thought that learning of S-R connections occurs in one trial only. The fact that learned behavior changes with practice is due, he said, to animals attending to different aspects of the stimulus environment on each trial, and/or to associating different components of a complicated response with the stimuli on each trial.

      4. An Evaluation of Contiguity Theory

        Guthrie did not provide much empirical evidence for his theory. Later research confirmed the importance of contiguity and supported some aspects of the theory, but other predictions have not been confirmed.

  3. COGNITIVE APPROACHES TO LEARNING

    1. Tolman's Purposive Behaviorism: Tolman proposed a more cognitive theory of learning that was not well received in the 1930s and 1940s, when Hull's theory was predominant. Tolman's approach is a forerunner of current ideas about learning found in cognitive psychology.

      1. Flexibility of Behavior

        In contrast to Hull, who believed that learning was automatic and involved S-R associations, Tolman believed that organisms' behavior is purposive and goal-directed. He emphasized the role of expectancies in guiding behavior and the ability of cues to convey information about where goals are located.

      2. Motivation Processes

        Tolman proposed two motivational principles that parallel processes in Hull's theory. First, deprivation produces an internal drive state that increases demand for the goal object. Second, environmental events can become associated with this demand, a process called cathexis, which can motivate either approach behavior (positive cathexis) or avoidance (negative cathexis). Tolman's equivalence belief principle, analogous to Spence's anticipatory goal concept, explains how animals learn about and respond to secondary reinforcers (e.g., money).

      3. Is Reward Necessary for Learning?

        According to Tolman, reward is not necessary for learning to occur; however, reward is necessary as a motivating condition for the occurrence of learned behavior.

      4. An Evaluation of Purposive Behaviorism

        Hull and Tolman were contemporaries, and Hull's theory enjoyed greater popularity. However, Tolman forced Hull to make theoretical modifications that attempted to account for the purposive nature of behavior in mechanistic terms. More recently, the cognitive approach inspired by Tolman has gained wide appeal in the study of learning.

    2. Expectancy-Value Theory: Tolman's theory was the basis for Rotter's expectancy-value theory.

      1. Basic Tenets

        There are three main ideas in Rotter's theory. First, preference for a particular event is determined by its reward value, which is itself established by comparison with other rewards one has experienced. Second, each individual has a subjective expectancy concerning the probability of obtaining a particular reward. Third, previous experiences with the reward in various situations govern the expectancy that it can be obtained in a particular environment.
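
        These ideas are often condensed into a single relation, sketched here in one common shorthand (exact notation varies across presentations of the theory):

          BP = f(E, RV)

        where BP is the potential for a given behavior in a situation, E is the subjective expectancy that the behavior will produce a particular reinforcer, and RV is the value of that reinforcer.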

      2. Locus of Control

        Locus of control refers to an individual's beliefs about how rewards are obtained. An internal expectancy refers to beliefs that one's own actions/abilities are important, whereas an external expectancy refers to beliefs that outside forces/situations determine when rewards will be delivered.

      3. An Evaluation of Rotter's Expectancy-Value Theory

        Building on Tolman's earlier theorizing, Rotter emphasized that learning and behavior are often determined more by cognitive processes than by the external and biological events emphasized by drive theory.

  4. SKINNER'S BEHAVIORISTIC METHODOLOGY

    Skinner has been a major influence on learning theory and on our understanding of how to predict and control behavior. He believed that the best way to study behavior is to understand how the environment, including reinforcers, controls responding. His approach to behavior is sometimes known as behavior modification.

    1. The Importance of the Environment: Skinner developed a methodology, called operant conditioning, to study how environmental conditions control behavior. Skinner was most concerned with determining how reinforcement controls behavior. He defined a reinforcer as an environmental event that increases the probability of a response. The relationship between an operant response and a reinforcer is termed a contingency. Skinner suggested that even very complex behavior, such as language, is controlled by various contingencies.
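
    One common formal way to express a response-reinforcer contingency (an illustrative sketch, not notation from the text) is in probability terms:

      P(reinforcer | response) > P(reinforcer | no response)

    That is, the reinforcer is more likely after the operant response than in its absence; if the two probabilities are equal, reinforcer delivery is independent of behavior and no contingency exists.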

    2. The Role of Theory?: Skinner eschewed theory. He believed that research should be devoted to identifying the relationships between changes in the environment and changes in behavior. Many others do not agree that theory interferes with the progress of behavior analysis.