Boysen, S. T., Berntson, G. G., Hannan, M. B., & Cacioppo, J. T. (1996). Quantity-based interference and symbolic representations in chimpanzees (Pan troglodytes). J Exp Psychol Anim Behav Process, 22(1), 76–86.
Abstract: Five chimpanzees with training in counting and numerical skills selected between 2 arrays of different amounts of candy or 2 Arabic numerals. A reversed reinforcement contingency was in effect, in which the selected array was removed and the subject received the nonselected candies (or the number of candies represented by the nonselected Arabic numeral). Animals were unable to maximize reward by selecting the smaller array when candies were used as array elements. When Arabic numerals were substituted for the candy arrays, all animals showed an immediate shift to a more optimal response strategy of selecting the smaller numeral, thereby receiving the larger reward. Results suggest that a response disposition to the high-incentive candy stimuli introduced a powerful interference effect on performance, which was effectively overridden by the use of symbolic representations.
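
Note: A minimal Python sketch of the reversed reinforcement contingency described above, with hypothetical array sizes and trial counts (the function names and parameters are illustrative, not taken from the study). It shows why pointing at the larger array is the losing strategy when the subject receives the non-selected amount.

```python
# Minimal sketch (not the authors' procedure) of a reversed reinforcement
# contingency: the chooser receives the NON-selected quantity, so the
# reward-maximizing policy is to point at the smaller array.
import random

def reversed_contingency_trial(choose_smaller: bool) -> int:
    """Run one trial and return the number of candies actually received."""
    small, large = sorted(random.sample(range(1, 7), 2))  # hypothetical array sizes
    chosen = small if choose_smaller else large           # the animal's selection
    return large if chosen == small else small            # it receives the other array

def mean_payoff(choose_smaller: bool, trials: int = 1000) -> float:
    return sum(reversed_contingency_trial(choose_smaller) for _ in range(trials)) / trials

if __name__ == "__main__":
    # Pointing at the larger array (the candy-driven bias) yields the smaller payoff.
    print("point at larger array:", mean_payoff(choose_smaller=False))
    print("point at smaller array:", mean_payoff(choose_smaller=True))
```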
Cerutti, D. T., & Staddon, J. E. R. (2004). Immediacy versus anticipated delay in the time-left experiment: a test of the cognitive hypothesis. J Exp Psychol Anim Behav Process, 30(1), 45–57.
Abstract: In the time-left experiment (J. Gibbon & R. M. Church, 1981), animals are said to compare an expectation of a fixed delay to food, for one choice, with a decreasing delay expectation for the other, mentally representing both upcoming time to food and the difference between current time and upcoming time (the cognitive hypothesis). The results of 2 experiments support a simpler view: that animals choose according to the immediacies of reinforcement for each response at a time signaled by available time markers (the temporal control hypothesis). It is not necessary to assume that animals can either represent or subtract representations of times to food to explain the results of the time-left experiment.
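
Note: The sketch below contrasts the two decision rules at issue, using illustrative parameter values that are not taken from the experiments. Under the cognitive hypothesis the animal subtracts elapsed time from the total trial time and compares the remainder with the fixed standard delay; under the temporal control hypothesis choice is governed by the immediacies (reciprocal delays) signaled at the moment of choice, with no subtraction of represented times.

```python
# Illustrative sketch (hypothetical values, not the experimental parameters) of the
# two decision rules contrasted in the time-left experiment.

def cognitive_choice(total_trial_s: float, elapsed_s: float, standard_delay_s: float) -> str:
    """Cognitive hypothesis: subtract elapsed time from total trial time and
    compare the resulting 'time left' with the fixed standard delay."""
    time_left = total_trial_s - elapsed_s
    return "time-left side" if time_left < standard_delay_s else "standard side"

def temporal_control_choice(delay_time_left_s: float, delay_standard_s: float) -> str:
    """Temporal control hypothesis: choose by the immediacy (reciprocal delay)
    signaled for each response at the moment of choice; no subtraction needed."""
    immediacy_tl, immediacy_std = 1.0 / delay_time_left_s, 1.0 / delay_standard_s
    return "time-left side" if immediacy_tl > immediacy_std else "standard side"

# Example: a 60-s trial, choice offered 45 s in, 30-s standard delay.
print(cognitive_choice(60, 45, 30))     # compares 15 s 'time left' with 30 s
print(temporal_control_choice(15, 30))  # compares immediacies 1/15 vs 1/30
```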
Christensen, J. W., Rundgren, M., & Olsson, K. (2006). Training methods for horses: habituation to a frightening stimulus. Equine Vet J, 38(5), 439–443.
Abstract: REASONS FOR PERFORMING STUDY: Responses of horses in frightening situations are important for both equine and human safety. Considerable scientific interest has been shown in development of reactivity tests, but little effort has been dedicated to the development of appropriate training methods for reducing fearfulness. OBJECTIVES: To investigate which of 3 different training methods (habituation, desensitisation and counter-conditioning) was most effective in teaching horses to react calmly in a potentially frightening situation. HYPOTHESES: 1) Horses are able to generalise about the test stimulus such that, once familiar with the test stimulus in one situation, it appears less frightening and elicits a reduced response even when the stimulus intensity is increased or the stimulus is presented differently; and 2) alternative methods such as desensitisation and counter-conditioning would be more efficient than a classic habituation approach. METHODS: Twenty-seven naive 2-year-old Danish Warmblood stallions were trained according to 3 different methods, based on classical learning theory: 1) horses (n = 9) were exposed to the full stimulus (a moving, white nylon bag, 1.2 x 0.75 m) in 5 daily training sessions until they met a predefined habituation criterion (habituation); 2) horses (n = 9) were introduced gradually to the stimulus and habituated to each step before the full stimulus was applied (desensitisation); 3) horses (n = 9) were trained to associate the stimulus with a positive reward before being exposed to the full stimulus (counter-conditioning). Each horse received 5 training sessions of 3 min per day. Heart rate and behavioural responses were recorded. RESULTS: Horses trained with the desensitisation method showed fewer flight responses in total and needed fewer training sessions to learn to react calmly to test stimuli. Variations in heart rate persisted even when behavioural responses had ceased. In addition, all horses on the desensitisation method eventually habituated to the test stimulus whereas some horses on the other methods did not. CONCLUSIONS AND POTENTIAL RELEVANCE: Desensitisation appeared to be the most effective training method for horses in frightening situations. Further research is needed in order to investigate the role of positive reinforcement, such as offering food, in the training of horses.
Clement, T. S., & Zentall, T. R. (2002). Second-order contrast based on the expectation of effort and reinforcement. J Exp Psychol Anim Behav Process, 28(1), 64–74.
Abstract: Pigeons prefer signals for reinforcement that require greater effort (or time) to obtain over those that require less effort to obtain (T. S. Clement, J. Feltus, D. H. Kaiser, & T. R. Zentall, 2000). Preference was attributed to contrast (or to the relatively greater improvement in conditions) produced by the appearance of the signal when it was preceded by greater effort. In Experiment 1, the authors of the present study demonstrated that the expectation of greater effort was sufficient to produce such a preference (a second-order contrast effect). In Experiments 2 and 3, low versus high probability of reinforcement was substituted for high versus low effort, respectively, with similar results. In Experiment 3, the authors found that the stimulus preference could be attributed to positive contrast (when the discriminative stimuli represented an improvement in the probability of reinforcement) and perhaps also negative contrast (when the discriminative stimuli represented reduction in the probability of reinforcement).
Coleman, K., Tully, L. A., & McMillan, J. L. (2005). Temperament correlates with training success in adult rhesus macaques. Am J Primatol, 65(1), 63–71.
Abstract: In recent years there has been a marked increase in awareness of issues involving the psychological well-being of nonhuman primates (NHPs) used in biomedical research. As a result, many facilities are starting to train primates to voluntarily cooperate with veterinary, husbandry, and research procedures, such as remaining still for blood draws or injections. Such training generally reduces the stress associated with these procedures, resulting in calmer animals and, ultimately, better research models. However, such training requires great investments in time, and there can be vast individual differences in training success. Some animals learn tasks quickly, while others make slower progress in training. In this study, we examined whether temperament, as measured by response to a novel food object, correlated with the amount of time it took to train 20 adult female rhesus macaques to perform a simple task. The monkeys were categorized as “exploratory” (i.e., inspected a novel object placed in the home cage within 10 sec), “moderate” (i.e., inspected the object within 10–180 sec), or “inhibited” (i.e., did not inspect the object within 3 min). We utilized positive reinforcement techniques to train the monkeys to touch a target (PVC pipe shaped like an elbow) hung on their cage. Temperament correlated with training success in this study (Pearson χ² = 7.22, df = 2, P = 0.03). We easily trained over 75% of the animals that inspected the novel food (i.e., exploratory or moderate individuals) to touch the target. However, only 22% of the inhibited monkeys performed the task. By knowing which animals may not respond to conventional training methods, we may be able to develop alternate training techniques to address their specific needs. In addition, these results will allow us to screen monkeys to be assigned to research projects in which they will be trained, with the goal of obtaining the best candidates for those studies.
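
Note: The association reported above is a Pearson chi-square test on a temperament-by-outcome contingency table. The sketch below shows how such a test is computed; the counts are hypothetical placeholders (the abstract reports only the test statistic, not the full table), so the output is not expected to reproduce χ² = 7.22.

```python
# Sketch of the kind of chi-square test reported (χ² = 7.22, df = 2); the counts
# below are hypothetical placeholders, NOT the study's actual contingency table.
from scipy.stats import chi2_contingency

# Rows: temperament category; columns: [trained successfully, not trained].
table = [
    [6, 1],  # exploratory (hypothetical counts)
    [5, 2],  # moderate    (hypothetical counts)
    [2, 4],  # inhibited   (hypothetical counts)
]

chi2, p, df, expected = chi2_contingency(table, correction=False)
print(f"Pearson chi-square = {chi2:.2f}, df = {df}, P = {p:.3f}")
```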
Kaiser, D. H., Zentall, T. R., & Neiman, E. (2002). Timing in pigeons: effects of the similarity between intertrial interval and gap in a timing signal. J Exp Psychol Anim Behav Process, 28(4), 416–422.
Abstract: Previous research suggests that when a fixed interval is interrupted (known as the gap procedure), pigeons tend to reset memory and start timing from 0 after the gap. However, because the ambient conditions of the gap typically have been the same as during the intertrial interval (ITI), ambiguity may have resulted. In the present experiment, the authors found that when ambient conditions during the gap were similar to the ITI, pigeons tended to reset memory, but when ambient conditions during the gap were different from the ITI, pigeons tended to stop timing, retain the duration of the stimulus in memory, and add to that time when the stimulus reappeared. Thus, when the gap was unambiguous, pigeons timed accurately.
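
Note: A small sketch, assuming hypothetical gap placement and durations, of the two memory rules the abstract distinguishes: a "reset" rule that discards time accumulated before the gap, and a "stop" rule that retains it and adds to it when the signal reappears.

```python
# Illustration (not the authors' model) of the two memory rules for the gap procedure:
# 'reset' discards pre-gap time, 'stop' retains it and resumes accumulation.

def timed_duration(segments_s, rule):
    """segments_s: durations (s) of successive signal presentations separated by gaps.
    rule: 'reset' or 'stop'. Returns the subjective time credited toward the interval."""
    if rule == "reset":
        # Start timing from zero after each gap: only the last segment counts.
        return segments_s[-1]
    if rule == "stop":
        # Stop during the gap, retain the accumulated duration, and add to it
        # when the signal reappears: all segments sum.
        return sum(segments_s)
    raise ValueError("rule must be 'reset' or 'stop'")

# Example: a 30-s timing signal split by a gap into 10 s + 20 s.
print(timed_duration([10, 20], rule="reset"))  # 20 -> behaves as if less time has passed
print(timed_duration([10, 20], rule="stop"))   # 30 -> times the interval accurately
```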
Neuringer, A. (2004). Reinforced variability in animals and people: implications for adaptive action. Am Psychol, 59(9), 891–906.
Abstract: Although reinforcement often leads to repetitive, even stereotyped responding, that is not a necessary outcome. When it depends on variations, reinforcement results in responding that is diverse, novel, indeed unpredictable, with distributions sometimes approaching those of a random process. This article reviews evidence for the powerful and precise control by reinforcement over behavioral variability, evidence obtained from human and animal-model studies, and implications of such control. For example, reinforcement of variability facilitates learning of complex new responses, aids problem solving, and may contribute to creativity. Depression and autism are characterized by abnormally repetitive behaviors, but individuals afflicted with such psychopathologies can learn to vary their behaviors when reinforced for so doing. And reinforced variability may help to solve a basic puzzle concerning the nature of voluntary action.
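
Note: One standard way to make reinforcement "depend on variations" is a lag-N contingency, in which a response sequence is reinforced only if it differs from the previous N sequences; the schedule and parameters in the sketch below are illustrative and are not specified in the article.

```python
# Sketch of a lag-N variability contingency (illustrative, not taken from the article):
# a response sequence earns reinforcement only if it differs from the last n sequences.
from collections import deque

class LagNSchedule:
    """Reinforce a response sequence only if it differs from the last n sequences."""
    def __init__(self, n: int):
        self.recent = deque(maxlen=n)  # the n most recent sequences

    def emit(self, sequence) -> bool:
        reinforced = sequence not in self.recent
        self.recent.append(sequence)
        return reinforced

schedule = LagNSchedule(n=3)
print(schedule.emit(("L", "L", "R", "R")))  # True: nothing yet to repeat
print(schedule.emit(("L", "L", "R", "R")))  # False: repeats the previous sequence
print(schedule.emit(("L", "R", "L", "R")))  # True: differs from the last 3
```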
Nevin, J. A., & Shettleworth, S. J. (1966). An analysis of contrast effects in multiple schedules. J Exp Anal Behav, 9(4), 305–315.
Shettleworth, S. J. (1978). Reinforcement and the organization of behavior in golden hamsters: Pavlovian conditioning with food and shock unconditioned stimuli. J Exp Psychol Anim Behav Process, 4(2), 152–169.
Abstract: The effects of Pavlovian conditioned stimuli (CSs) for food or shock on a variety of behaviors of golden hamsters were observed in three experiments. The aim was to see whether previously reported differences among the behaviors produced by food reinforcement and punishment procedures could be accounted for by differential effects of Pavlovian conditioning on the behaviors. There was some correspondence between the behaviors observed to the CSs and the previously reported effects of instrumental training. However, the Pavlovian conditioned responses (CRs) alone would not have predicted the effects of instrumental training. Moreover, CRs depended to some extent on the context in which training and testing occurred. These findings, together with others in the literature, suggest that the results of Pavlovian conditioning procedures may not unambiguously predict what system of behaviors will be most readily modified by instrumental training with a given reinforcer.
Shettleworth, S. J., & Juergensen, M. R. (1980). Reinforcement and the organization of behavior in golden hamsters: brain stimulation reinforcement for seven action patterns. J Exp Psychol Anim Behav Process, 6(4), 352–375.
Abstract: Golden hamsters were reinforced with intracranial electrical stimulation of the lateral hypothalamus (ICS) for spending time engaging in one of seven topographically defined action patterns (APs). The stimulation used as reinforcer elicited hoarding and/or feeding and supported high rates of bar pressing. In Experiment 1, hamsters were reinforced successively for digging, open rearing, and face washing. Digging increased most in time spent, and face washing increased least. Experiments 2–5 examined these effects further and also showed that “scrabbling,” like digging, was performed a large proportion of the time, almost without interruption, for contingent ICS but that scratching the body with a hindleg and scent-marking showed relatively little effect of contingent ICS, the latter even in an environment that facilitated marking. In Experiment 6, naive hamsters received ICS not contingent on behavior every 30 sec (fixed-time 30-sec schedule). Terminal behaviors that developed on this schedule were APs that were easy to reinforce in the other experiments, but a facultative behavior, face washing, was one not so readily reinforced. Experiment 7 confirmed a novel prediction from Experiment 6: that wall rearing, a terminal AP, would be performed at a high level for contingent ICS. Taken together, the results point to both motivational factors and associative factors being involved in the considerable differences in performance among different reinforced activities.