Boysen, S. T., Berntson, G. G., Hannan, M. B., & Cacioppo, J. T. (1996). Quantity-based interference and symbolic representations in chimpanzees (Pan troglodytes). J Exp Psychol Anim Behav Process, 22(1), 76–86.
Abstract: Five chimpanzees with training in counting and numerical skills selected between 2 arrays of different amounts of candy or 2 Arabic numerals. A reversed reinforcement contingency was in effect, in which the selected array was removed and the subject received the nonselected candies (or the number of candies represented by the nonselected Arabic numeral). Animals were unable to maximize reward by selecting the smaller array when candies were used as array elements. When Arabic numerals were substituted for the candy arrays, all animals showed an immediate shift to a more optimal response strategy of selecting the smaller numeral, thereby receiving the larger reward. Results suggest that a response disposition to the high-incentive candy stimuli introduced a powerful interference effect on performance, which was effectively overridden by the use of symbolic representations.
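The reversed reinforcement contingency Boysen et al. describe can be sketched as a toy payoff rule (an illustrative sketch only; the array sizes and function name below are my assumptions, not from the paper):

```python
# Reversed contingency: the subject receives the candies in the array
# it did NOT select, so picking the larger array yields the smaller reward.
def payoff(selected: int, other: int) -> int:
    """Return the number of candies earned after choosing `selected`."""
    return other  # the non-selected array is delivered

# Two hypothetical arrays of 2 vs. 6 candies.
small, large = 2, 6

# Prepotent response to the larger candy array -> smaller reward.
print(payoff(selected=large, other=small))  # 2 candies earned

# Optimal strategy (which emerged only with Arabic numerals): pick the smaller.
print(payoff(selected=small, other=large))  # 6 candies earned
```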

Urcuioli, P. J., DeMarse, T. B., & Zentall, T. R. (1998). Transfer across delayed discriminations: II. Differences in the substitutability of initial versus test stimuli. J Exp Psychol Anim Behav Process, 24(1), 47–59.
Abstract: In 2 experiments, pigeons were trained on, and then transferred to, delayed simple discriminations in which the initial stimuli signalled reinforcement versus extinction following a retention interval. Experiment 1 showed that discriminative responding on the retention test transferred to novel test stimuli that had appeared in another delayed simple discrimination but not to stimuli having the same reinforcement history off-baseline. By contrast, Experiment 2 showed that performances transferred to novel initial stimuli whether they had been trained on-baseline or off-baseline. These results suggest that the test stimuli in delayed simple discriminations acquire control over responding only in the memory task itself. On the other hand, control by the initial stimuli, if coded as outcome expectancies, does not require such task-specific training.

Mills, D. S. (1998). Applying learning theory to the management of the horse: the difference between getting it right and getting it wrong. Equine Vet J Suppl, (27), 44–48.
Abstract: Horses constantly modify their behaviour as a result of experience. This involves the creation of an association between events or stimuli. The influence of people on the modification and generation of certain behaviour patterns extends beyond the intentional training of the horse. The impact of any action depends on how it is perceived by the horse, rather than the motive of the handler. Negative and positive reinforcement increase the probability of specific behaviours recurring, i.e. strengthen the association between events, whereas punishment reduces the probable recurrence of a behaviour without providing specific information about the desired alternative. In this paper the term 'punishers' is used to refer to the physical aids, such as a whip or crop, which may be used to bring about the process of punishment. However, if their application ceases when a specific behaviour occurs, they may negatively reinforce that action. Intended 'punishers' may also be rewarding (e.g. for attention-seeking behaviour). Therefore, contingency factors (which define the relationship between stimuli, such as the level of reinforcement), contiguity factors (which describe the proximity of events in space or time) and choice of reinforcing stimuli are critical in determining the rate of learning. The many problems associated with the application of punishment in practice lead to confusion in both horse and handler and, possibly, abuse of the former. Most behaviour problems relate to handling and management of the horse and can be avoided or treated with a proper analysis of the factors influencing the behaviour.
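The contingencies Mills distinguishes follow the standard operant-conditioning taxonomy, which can be summarized as a small lookup (an illustrative sketch; the function and labels are mine, not the paper's):

```python
def classify(stimulus: str, change: str) -> str:
    """Classify an operant contingency.

    stimulus: 'pleasant' or 'aversive' (as perceived by the horse,
              not as intended by the handler).
    change:   'added' or 'removed' following the behaviour.
    Reinforcement makes the behaviour more likely; punishment less likely.
    """
    table = {
        ("pleasant", "added"):   "positive reinforcement",  # e.g. food reward
        ("aversive", "removed"): "negative reinforcement",  # e.g. whip pressure ceases
        ("aversive", "added"):   "positive punishment",     # e.g. whip applied
        ("pleasant", "removed"): "negative punishment",     # e.g. attention withdrawn
    }
    return table[(stimulus, change)]

# Mills's point: a 'punisher' whose application CEASES when the horse
# responds actually negatively reinforces that response.
print(classify("aversive", "removed"))  # negative reinforcement
```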

Cooper, J. J. (1998). Comparative learning theory and its application in the training of horses. Equine Vet J Suppl, (27), 39–43.
Abstract: Training can best be explained as a process that occurs through stimulus-response-reinforcement chains, whereby animals are conditioned to associate cues in their environment with specific behavioural responses and their rewarding consequences. Research into learning in horses has concentrated on their powers of discrimination and on primary positive reinforcement schedules, where the correct response is paired with a desirable consequence such as food. In contrast, a number of other learning processes that are used in training have been widely studied in other species, but have received little scientific investigation in the horse. These include: negative reinforcement, where performance of the correct response is followed by removal of, or decrease in, intensity of an unpleasant stimulus; punishment, where an incorrect response is paired with an undesirable consequence, but without consistent prior warning; secondary conditioning, where a natural primary reinforcer such as food is closely associated with an arbitrary secondary reinforcer such as vocal praise; and variable or partial reinforcement, where once the correct response has been learnt, reinforcement is presented according to an intermittent schedule to increase resistance to extinction outside of training.
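The continuous versus intermittent schedules Cooper contrasts can be sketched as reinforcement-delivery rules (a toy simulation under my own assumptions; the ratios and function names are illustrative, not from the paper):

```python
import random

def fixed_ratio(n):
    """Reinforce every n-th response (FR1 = continuous reinforcement)."""
    count = 0
    def schedule(response_made: bool) -> bool:
        nonlocal count
        if response_made:
            count += 1
            if count == n:
                count = 0
                return True
        return False
    return schedule

def variable_ratio(mean_n, rng):
    """Reinforce on average every mean_n responses (intermittent schedule)."""
    def schedule(response_made: bool) -> bool:
        return response_made and rng.random() < 1.0 / mean_n
    return schedule

rng = random.Random(0)
fr1 = fixed_ratio(1)   # every correct response rewarded
vr5 = variable_ratio(5, rng)  # on average 1 reward per 5 responses

responses = 100
print(sum(fr1(True) for _ in range(responses)))  # 100 reinforcers
print(sum(vr5(True) for _ in range(responses)))  # roughly 20 reinforcers
```

The intermittent schedule delivers far fewer reinforcers for the same behaviour, which is what makes the learnt response harder to extinguish once explicit rewards stop.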

Clement, T. S., Feltus, J. R., Kaiser, D. H., & Zentall, T. R. (2000). “Work ethic” in pigeons: reward value is directly related to the effort or time required to obtain the reward. Psychon Bull Rev, 7(1), 100–106.
Abstract: Stimuli associated with less effort or with shorter delays to reinforcement are generally preferred over those associated with greater effort or longer delays to reinforcement. However, the opposite appears to be true of stimuli that follow greater effort or longer delays. In training, a simple simultaneous discrimination followed a single peck to an initial stimulus (S+FR1 S-FR1) and a different simple simultaneous discrimination followed 20 pecks to the initial stimulus (S+FR20 S-FR20). On test trials, pigeons preferred S+FR20 over S+FR1 and S-FR20 over S-FR1. These data support the view that the state of the animal immediately prior to presentation of the discrimination affects the value of the reinforcement that follows it. This contrast effect is analogous to effects that, when they occur in humans, have been attributed to more complex cognitive and social factors.

Dorrance, B. R., & Zentall, T. R. (2001). Imitative learning in Japanese quail (Coturnix japonica) depends on the motivational state of the observer quail at the time of observation. J Comp Psychol, 115(1), 62–67.
Abstract: The 2-action method was used to examine whether imitative learning in Japanese quail (Coturnix japonica) depends on the motivational state of the observer quail at the time of observation of the demonstrated behavior. Two groups of observers were fed before observation (satiated groups), whereas 2 other groups of observers were deprived of food before observation (hungry groups). Quail were tested either immediately following observation or after a 30-min delay. Results indicated that quail in the hungry groups imitated, whereas those in the satiated groups did not, regardless of whether their test was immediate or delayed. The results suggest that observer quail may not learn (through observation) behavior that leads to a reinforcer for which they are unmotivated at the time of test. In addition, the results show that quail are able to delay the performance of a response acquired through observation (i.e., they show deferred imitation).

Ferguson, D. L., & Rosales-Ruiz, J. (2001). Loading the problem loader: the effects of target training and shaping on trailer-loading behavior of horses. J Appl Behav Anal, 34(4), 409–423.
Abstract: The purpose of this study was to develop an effective method for trailer loading horses based on principles of positive reinforcement. Target training and shaping were used to teach trailer-loading behavior to 5 quarter horse mares in a natural setting. All 5 had been trailer loaded before through the use of aversive stimulation. Successive approximations to loading and inappropriate behaviors were the dependent variables. After training a horse to approach a target, the target was moved to various locations inside the trailer. Horses started training on the left side of a two-horse trailer. After a horse was loading on the left side, she was moved to the right side, then to loading half on the right and half on the left. A limited-hold procedure and the presence of a companion horse seemed to facilitate training for 1 horse. Inappropriate behaviors fell to zero immediately after target training, and all the horses successfully completed the shaping sequence. Finally, these effects were observed to generalize to novel conditions (a different trainer and a different trailer).

Kaiser, D. H., Zentall, T. R., & Neiman, E. (2002). Timing in pigeons: effects of the similarity between intertrial interval and gap in a timing signal. J Exp Psychol Anim Behav Process, 28(4), 416–422.
Abstract: Previous research suggests that when a fixed interval is interrupted (known as the gap procedure), pigeons tend to reset memory and start timing from 0 after the gap. However, because the ambient conditions of the gap typically have been the same as during the intertrial interval (ITI), ambiguity may have resulted. In the present experiment, the authors found that when ambient conditions during the gap were similar to the ITI, pigeons tended to reset memory, but when ambient conditions during the gap were different from the ITI, pigeons tended to stop timing, retain the duration of the stimulus in memory, and add to that time when the stimulus reappeared. Thus, when the gap was unambiguous, pigeons timed accurately.
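The two timing strategies Kaiser et al. contrast, resetting memory after the gap versus stopping the clock and resuming, can be written out as simple accumulation rules (an illustrative sketch; the segment durations are made up):

```python
def reset_timer(segments):
    """'Reset' mode: the gap clears memory, so only the post-gap
    portion of the signal is timed."""
    return segments[-1]

def stop_timer(segments):
    """'Stop' mode: the pre-gap duration is retained in memory and
    the clock resumes when the signal reappears."""
    return sum(segments)

# A 10-s signal interrupted by a gap after 4 s: pre-gap 4 s, post-gap 6 s.
signal = [4.0, 6.0]
print(reset_timer(signal))  # 6.0  -> underestimates the full signal
print(stop_timer(signal))   # 10.0 -> times the full signal accurately
```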

Dorrance, B. R., & Zentall, T. R. (2002). Imitation of conditional discriminations in pigeons (Columba livia). J Comp Psychol, 116(3), 277–285.
Abstract: In the present experiments, the 2-action method was used to determine whether pigeons could learn to imitate a conditional discrimination. Demonstrator pigeons (Columba livia) stepped on a treadle in the presence of 1 light and pecked at the treadle in the presence of another light. Demonstration did not seem to affect acquisition of the conditional discrimination (Experiment 1) but did facilitate reversal of the conditional discrimination (Experiments 2 and 3). The results suggest that pigeons are not only able to learn a specific behavior by observing another pigeon, but they can also learn under which circumstances to perform that behavior. The results have implications for proposed mechanisms of imitation in animals.

Clement, T. S., & Zentall, T. R. (2002). Second-order contrast based on the expectation of effort and reinforcement. J Exp Psychol Anim Behav Process, 28(1), 64–74.
Abstract: Pigeons prefer signals for reinforcement that require greater effort (or time) to obtain over those that require less effort to obtain (T. S. Clement, J. Feltus, D. H. Kaiser, & T. R. Zentall, 2000). Preference was attributed to contrast (or to the relatively greater improvement in conditions) produced by the appearance of the signal when it was preceded by greater effort. In Experiment 1, the authors of the present study demonstrated that the expectation of greater effort was sufficient to produce such a preference (a second-order contrast effect). In Experiments 2 and 3, low versus high probability of reinforcement was substituted for high versus low effort, respectively, with similar results. In Experiment 3, the authors found that the stimulus preference could be attributed to positive contrast (when the discriminative stimuli represented an improvement in the probability of reinforcement) and perhaps also negative contrast (when the discriminative stimuli represented reduction in the probability of reinforcement).