Methods of Wider Applicability

Operant behavior is defined in Webster's New Collegiate Dictionary (1974) as "behavior or responses (as bar pressing by a rat to obtain food) that operate on the environment to produce rewarding and reinforcing effects." Most people have some familiarity with the basic situation but not with its ramifications. It is the type of behavior described in Section I as commonly used in behavioral pharmacology.

In the present context, the word "response" is not used in its dictionary sense but to mean an operant response. According to common usage, a response is a reaction "to" something. An operant response, however, is simply an elementary unit of behavior that (i) has a selected detectable effect on the environment (e.g., operation of a key) and (ii) can have its frequency of occurrence in similar circumstances in the future increased or maintained by an event called a reinforcer. A reinforcer is recognized because, when it occurs in relation to an operant response, the frequency of occurrence of the response in similar circumstances is first increased. Then, with repetition of a consistent relationship between responding and reinforcer, a consistent pattern of rates of responding develops and is maintained. Because rates of responding are commonly the focus of interest, a response usually calls for only brief operation of the key, but this is not necessarily so. For example, a response requirement could call for a minimum force to be exerted for many minutes or longer. Obviously, the definitions of operant response and reinforcer are circular, like the definitions of mass and force in Newtonian physics. This means that they have to be dealt with together. The implication in Webster's definition that a reinforcer must be rewarding is now known not to be true. There are reinforcers that no one would consider rewarding but that will maintain responding, as will be described later.

Of great importance was the discovery that responding can be maintained with only rare occurrences of the reinforcer, for example, after the elapse of hours with hundreds or even thousands of responses between occurrences. Indeed, the ability to maintain certain types of behavior over long periods depends critically on infrequent occurrences of reinforcers. For many reinforcers, e.g., food or water, frequent presentations lead to temporary loss of efficacy as reinforcers.

The program that specifies when the reinforcer will occur is called the schedule of reinforcement. Two basic requirements that can be imposed by schedules are numbers of responses and elapsed time. In its simplest form, the former could specify that the reinforcer will occur when some number of responses has been made since a starting event, usually the last occurrence of the reinforcer (e.g., 30, 100, or 300 responses; so-called fixed-ratio, or FR, schedules). In the second type, the reinforcer occurs in relation to a response when a certain amount of time has elapsed (e.g., 100, 1000, or 10,000 sec; so-called fixed-interval, or FI, schedules). Under an FI 1000-sec schedule, the reinforcer occurs when a response occurs after the lapse of 1000 sec since the start of timing of the interval. Schedules can have both number and time requirements (e.g., 10 responses with at least 10 sec elapsed between responses) and sequential requirements. More than one schedule can be programmed in a session, either with a unique signal (usually a light or tone) paired with each schedule or with no distinctive signals at all. More than one schedule can also operate simultaneously. It is evident that by combining number, time, and sequence requirements, adding second and third keys, adding concurrent schedules, and so on, an almost limitless variety of schedules can be devised; in fact, a great many have been studied.
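The decision rules for the two basic schedule types can be made concrete with a minimal sketch. The class names and interface below are illustrative assumptions, not taken from the text; the logic follows the FR and FI definitions above.

```python
# Illustrative sketch of FR and FI schedule logic (names are assumptions,
# not from the original text).

class FixedRatio:
    """FR n: the reinforcer follows every n-th response since the last reinforcer."""
    def __init__(self, n):
        self.n = n
        self.count = 0

    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0      # ratio requirement met; reset for the next ratio
            return True         # reinforcer delivered
        return False


class FixedInterval:
    """FI t: the reinforcer follows the first response after t seconds elapse."""
    def __init__(self, t):
        self.t = t
        self.start = 0.0        # time at which the current interval began

    def respond(self, now):
        if now - self.start >= self.t:
            self.start = now    # a new interval begins at reinforcement
            return True
        return False
```

For example, under FR 3 only every third response is reinforced; under FI 1000 sec, responses before 1000 sec have elapsed go unreinforced, and the first response afterward starts a new interval.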
Typically, a subject is first exposed to an easy, undemanding preliminary schedule such as the reinforcer occurring immediately when a response occurs. Then parameters are changed incrementally toward the desired schedule parameters. The final schedule is then presented consistently until a similar performance is seen session after session. How long it takes to reach steady state depends on many factors, one being, not surprisingly, the complexity of the schedule. With standard, relatively simple schedules and signals, steady state may be reached in 10-30 sessions and pharmacological interventions can start. The computer programming the schedule can also perform analyses in real time that can be used to modulate schedule parameters within single sessions. The long and detailed description of schedules of intermittent reinforcement has been given because studies on such schedule-controlled patterns of responding have been prominent, even dominant, in the development of behavioral pharmacology. The following example is from work published in the mid-1950s.
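The incremental progression from an undemanding preliminary schedule toward the final parameters can be sketched as a simple parameter ramp. The doubling rule below is purely an illustrative assumption; in practice the step sizes are chosen by the experimenter for each subject and schedule.

```python
# Hedged sketch of stepping a schedule parameter (e.g., an FR requirement)
# from an easy starting value to its final value across sessions.
# The doubling step is an assumption for illustration only.

def ramp(start, target, factor=2):
    """Yield successive per-session requirements until the target is reached."""
    n = start
    while n < target:
        yield n
        n = min(n * factor, target)
    yield target
```

For instance, ramping from FR 1 to FR 30 with doubling would present requirements of 1, 2, 4, 8, 16, and finally 30, after which the final schedule is held constant until performance is stable.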
