B.F. Skinner
Where to begin when discussing B.F. Skinner? Let’s first note that his version of behaviorism was, admittedly, not the standard thinking of his fellow behaviorists. In fact, he referred to his program for how psychology should be conducted as Radical Behaviorism. His insistence that a scientific approach to the study of behavior restrict itself to observable events and behaviors was not shared by figures such as Guthrie, Hull, and Spence, who frequently referred to stimuli internal to the learner. Related to this is his insistence, in his 1950 paper “Are Theories of Learning Necessary?”, that the study of learning proceed without theories of learning. Psychologists, he argued, should stick to mapping out the relationships among stimuli, behavior, and reinforcement schedules that they discover in their experiments. Other neo-behaviorists, such as Guthrie, Tolman, Hull, and Spence, proposed theories of learning that they believed accurately described the processes accounting for the observed behavior.
Skinner was among the first (in the 1930s) to insist that classical conditioning as studied by Pavlov and instrumental conditioning as studied by Thorndike were two different types of learning. He noted that in classical conditioning a stimulus (unconditioned or conditioned) elicits the response, while in instrumental conditioning the behavior is emitted by the subject. Skinner called the two types of behavior respondent behavior and operant behavior, the latter because the organism is operating on its environment in search of rewarding consequences (reinforcement). On the basis of this distinction, Skinner renamed instrumental conditioning operant conditioning. The vast majority of his research was in the area of operant conditioning.
Skinner also made important distinctions among types of reinforcement. In general, a stimulus is reinforcing if it increases the probability of a response. A stimulus is a positive reinforcer if its introduction after the occurrence of a behavior increases the probability of that behavior; it is a negative reinforcer if its removal upon occurrence of the behavior does so. So food introduced to a hungry subject would be a positive reinforcer, while a shock terminated upon performance of a behavior would be a negative reinforcer. Reinforcement is contrasted with punishment: a punisher is any stimulus that decreases the probability of a behavior. Skinner, consistent with his atheoretical position noted above, offers no theory as to why a reinforcer is reinforcing.
Skinner also noted the difference between continuous reinforcement and partial reinforcement. The former consists of providing reinforcement (positive or negative) upon every occurrence of the behavior; the latter occurs when the experimenter reinforces the behavior only occasionally. Partial reinforcement itself can be divided into different schedules (fixed ratio, variable ratio, fixed interval, variable interval), each producing its own characteristic pattern of behavior. A rather startling discovery was made by Humphreys in the 1930s, when he showed that if you try to extinguish (stop reinforcing) a previously reinforced operant response, the response persists longer when it had been reinforced partially than when it had been reinforced continuously. One can think of all sorts of explanations for why this might be true, but Skinner remained silent here, as elsewhere, on why this result occurs.
One of Skinner’s most famous studies introduced a further reinforcement distinction. The discussion above assumes a relationship between a behavior and a reinforcer: namely, that the reinforcement is a consequence of the behavior (contingent reinforcement). But what if there is no relationship between a behavior and a reinforcer? That is, what if a behavior is non-contingently reinforced, with reinforcers presented to the subject at random? The result is what Skinner labelled superstitious behavior: the subject repeatedly exhibits behavior patterns that happened to occur just before a reinforcer was presented. We might say that the organism saw a pattern in the reinforcement where there was, in fact, none. Such an explanation is not available to Skinner, who certainly does not want to reference cognitive events in his psychology, and who has no interest in offering an explanation anyway.
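The superstition result can be illustrated with a toy model. This is a hedged sketch rather than Skinner’s procedure: the list of behaviors, the 5% reinforcement rate, and the “strengthen whatever just happened” rule are all invented for illustration. But it shows how purely random, non-contingent reinforcement can still make one accidental behavior come to dominate:

```python
import random

random.seed(42)  # for a repeatable run

behaviors = ["turn", "peck", "bow", "flap"]
strength = {b: 1.0 for b in behaviors}  # all behaviors start equally likely

for second in range(1000):
    # The subject emits a behavior in proportion to its current strength.
    emitted = random.choices(behaviors, weights=[strength[b] for b in behaviors])[0]
    # A reinforcer arrives at random (5% of seconds), regardless of behavior...
    if random.random() < 0.05:
        # ...but whatever the subject happened to be doing gets the credit.
        strength[emitted] += 1.0

dominant = max(strength, key=strength.get)
```

The dynamic is rich-get-richer: each accidental pairing makes the reinforced behavior more likely to be emitted next time, and therefore more likely to be paired with the next random reinforcer as well.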
Although Skinner was more interested in the relationship between a behavior and the stimulus events that follow it, he was not totally uninterested in stimuli that occur before a behavior. Suppose we present a green light to a subject before every behavior that will be reinforced. The green light then becomes what Skinner called a discriminative stimulus: it signals to the organism that reinforcement will follow completion of the behavior. Because the green light becomes associated with a reinforcer, it too becomes reinforcing. It does not satisfy a biological need, so it is not a primary reinforcer; rather, it is a secondary reinforcer, reinforcing because of its association with a primary reinforcer. Much of Skinner’s research involved studying the three-way relationship among discriminative stimulus, behavior, and reinforcement.
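That three-way relationship can be sketched as a simple simulation in which responding pays off only under the green light. The learning rule below (a small probability update toward the outcome just obtained) is an illustrative stand-in of my own, not Skinner’s analysis, but it shows a subject coming to respond under the discriminative stimulus and to withhold responding otherwise:

```python
import random

random.seed(1)

# Probability that the subject responds under each light; both start at chance.
p_respond = {"green": 0.5, "red": 0.5}
ALPHA = 0.1  # illustrative learning rate

for trial in range(2000):
    light = random.choice(["green", "red"])
    if random.random() < p_respond[light]:          # the subject responds
        reward = 1.0 if light == "green" else 0.0   # reinforced only under green
        # Nudge the response probability toward the outcome just obtained.
        p_respond[light] += ALPHA * (reward - p_respond[light])
```

After enough trials, responding is nearly certain when the light is green and nearly absent when it is red: the discriminative stimulus has come to control the operant.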