Scalar Timing Theory

Scalar timing theory (see Church, this volume; Gibbon et al., 1984) is an information-processing account of interval timing that grew out of scalar expectancy theory (SET) (Gibbon, 1977). All models of interval timing require three basic functions: a clock function that measures elapsed time by converting it to some physical representation, a memory function in which a recorded time interval can be represented and stored, and a decision function that uses output from the clock and memory components to control behavior (Church, 1997). In scalar timing theory these different functions are embodied in discrete components described as follows.
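Before turning to the components, the decision function deserves a brief illustration. In scalar timing theory the standard decision rule (Gibbon et al., 1984) is a ratio comparison: respond when the current clock reading is close enough, in relative terms, to a criterion sampled from memory. A minimal sketch, with an illustrative threshold value:

```python
def respond(m_t, m_star, threshold=0.2):
    """Ratio decision rule of scalar timing theory: respond when the
    current clock reading m_t is within a proportional threshold of the
    remembered criterion m_star. The threshold of 0.2 is illustrative,
    not a value from the text."""
    return abs(m_star - m_t) / m_star < threshold
```

Because the comparison is a ratio rather than an absolute difference, the rule inherits the scalar property: the window of acceptable clock readings widens in proportion to the criterion itself.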

The clock subsystem consists of a pacemaker, a switch, and an accumulator. Time measurement is achieved by collecting pulses from the pacemaker in the accumulator while the switch is closed. The pacemaker is assumed to emit pulses continuously at a rate A. When the switch is closed in response to an external signal to begin timing, pulses are transmitted to the accumulator, where they accumulate until the switch is opened again. The value in the accumulator, m_t, thus represents the amount of time that has elapsed since the switch was closed and timing began. This estimate of real time grows linearly with real time, t, such that

m_t = A(t - T0),

where T0 is the mean latency between the external start signal and the beginning of time accumulation.
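The clock subsystem can be sketched as a simulation, assuming (as the theory does not strictly require) that the pacemaker is a Poisson process; the rate and latency values here are illustrative:

```python
import random

def accumulate(t, rate=5.0, t0=0.1, seed=0):
    """Sketch of the pacemaker-switch-accumulator clock: pulses are
    emitted at mean rate A (here `rate`) and are counted into the
    accumulator only after the switch-closure latency T0 (here `t0`).
    Returns the accumulator value m_t after t seconds of real time."""
    rng = random.Random(seed)
    elapsed = 0.0
    pulses = 0
    while True:
        elapsed += rng.expovariate(rate)  # next inter-pulse interval
        if elapsed > t:
            break
        if elapsed >= t0:                 # switch has closed; count pulse
            pulses += 1
    return pulses
```

Averaged over many trials, the accumulator value approximates A(t - T0), the linear growth described above, while individual trials vary around that mean.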

The current estimate of elapsed time, m_t, can be transferred either to working memory for immediate use or to reference memory, where it can be stored for future reference. On a fixed-interval schedule the usual cue for transferring a value to reference memory is the delivery of food. When food is delivered (i.e., t = FI, the value of the fixed interval), scalar timing theory assumes that the value of m_t, which we will now refer to as m_FI, is transferred to reference memory as

m*_FI = k* m_FI,

where k* is a translation constant that is assumed to vary between trials. An important assumption of scalar timing theory is that the interval is represented in reference memory as a distribution of the various values of m*_FI transferred to it on different trials. The form of this distribution is determined by the way in which k* varies between trials. For the purposes of modeling timing performance on a fixed-interval procedure, the mean of k* is usually assumed to be unity, so that the mean estimate in memory equals the mean estimate of the current time at which reinforcement occurs.

Once an animal is fully trained on a fixed-interval schedule, it is assumed to have built up a reference memory representation of the interval that has a mean of m_FI and a standard deviation proportional to this mean. If there is variance in the real time between reinforcements, as is the case, for instance, on a variable-interval schedule, then the memory representation formed will be equivalent to the sum of the distributions that would be formed for each of the component intervals in the variable mixture. Because the standard deviation of the representation of an interval grows with the value of the interval, the memory distributions of variable intervals will be asymmetrical and skewed to the right. The different memory representations resulting from a range of different mixtures of intervals are illustrated in Figure 5.2.
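The memory-storage step can be sketched as follows, assuming for illustration that k* is normally distributed with mean 1; the rate and spread values are illustrative, not taken from the text:

```python
import random

def reference_memory(intervals, rate=5.0, k_mean=1.0, k_sd=0.15,
                     n_trials=10000, seed=1):
    """Sketch of reference-memory formation: on each trial one interval
    from the schedule (a single value for FI, a list for VI) is timed,
    scaled by a trial-specific translation constant k*, and stored.
    Returns the list of stored values m*_FI = k* * m_FI."""
    rng = random.Random(seed)
    stored = []
    for _ in range(n_trials):
        fi = rng.choice(intervals)    # the interval reinforced this trial
        m_fi = rate * fi              # mean accumulator value at food
        k = rng.gauss(k_mean, k_sd)   # translation constant k*
        stored.append(k * m_fi)       # value stored in reference memory
    return stored
```

Because the spread of k* multiplies m_FI, the standard deviation of the stored distribution is proportional to its mean, which is the scalar property. Passing a list of several intervals mixes the component distributions, and since longer components are more spread out, the mixture comes out skewed to the right, as the text describes.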

Figure 5.2. Panels pair a temporal stimulus (left) with the memory representation it produces (right).
