Cerutti, D. T., & Staddon, J. E. R. (2004). Immediacy versus anticipated delay in the time-left experiment: a test of the cognitive hypothesis. J Exp Psychol Anim Behav Process, 30(1), 45–57.
Abstract: In the time-left experiment (J. Gibbon & R. M. Church, 1981), animals are said to compare an expectation of a fixed delay to food, for one choice, with a decreasing delay expectation for the other, mentally representing both upcoming time to food and the difference between current time and upcoming time (the cognitive hypothesis). The results of 2 experiments support a simpler view: that animals choose according to the immediacies of reinforcement for each response at a time signaled by available time markers (the temporal control hypothesis). It is not necessary to assume that animals can either represent or subtract representations of times to food to explain the results of the time-left experiment.
de Waal, F. B. M., & Davis, J. M. (2003). Capuchin cognitive ecology: cooperation based on projected returns. Neuropsychologia, 41(2), 221–228.
Abstract: Stable cooperation requires that each party's pay-offs exceed those available through individual action. The present experimental study on brown capuchin monkeys (Cebus apella) investigated whether decisions about cooperation are (a) guided by the amount of competition expected to follow the cooperation, and (b) made instantaneously or only after a period of familiarization. Pairs of adult monkeys were presented with a mutualistic cooperative task with variable opportunities for resource monopolization (clumped versus dispersed rewards) and partner relationships (kin versus nonkin). After pre-training, each pair of monkeys (N=11) was subjected to six tests of fifteen 2-min trials each, with rewards available to both parties. Clumped reward distribution had an immediate negative effect on cooperation: the effect was visible right from the start and remained visible even when clumped trials alternated with dispersed trials. The drop in cooperation was far more dramatic for nonkin than for kin, which was explained by the tendency of dominant nonkin to claim more than half of the rewards under the clumped condition. The immediacy of responses suggests a decision-making process based on the predicted outcome of cooperation. Decisions about cooperation thus take into account both the opportunity for and the likelihood of subsequent competition over the spoils.
Hogan, D. E., Zentall, T. R., & Pace, G. (1983). Control of pigeons' matching-to-sample performance by differential sample response requirements. Am J Psychol, 96(1), 37–49.
Abstract: Pigeons were trained on a matching-to-sample task in which sample hue and required sample-specific observing behavior provided redundant, relevant cues for correct choices. On trials that involved red and yellow hues as comparison stimuli, a fixed-ratio 16 schedule (FR 16) was required to illuminate the comparisons when the sample was red, and a differential-reinforcement-of-low-rates 3-sec schedule (DRL 3-sec) was required when the sample was yellow. On trials involving blue and green hues as comparison stimuli, an FR 16 schedule was required when the sample was blue and a DRL 3-sec schedule was required when the sample was green. For some pigeons, a 0-sec delay intervened between sample offset and comparison onset, whereas other pigeons experienced a random mixture of 0-sec and 2-sec delay trials. Test trial performance at 0-sec delay indicated that sample-specific behavior controlled choice performance considerably more than sample hue did. Test performance was independent of whether original training involved all 0-sec delay trials or a mixture of 0-sec and 2-sec delays. Sample-specific observing response requirements appear to facilitate pigeons' matching-to-sample performance by strengthening associations between the observing response and correct choice.
Kaiser, D. H., Zentall, T. R., & Neiman, E. (2002). Timing in pigeons: effects of the similarity between intertrial interval and gap in a timing signal. J Exp Psychol Anim Behav Process, 28(4), 416–422.
Abstract: Previous research suggests that when a fixed interval is interrupted (known as the gap procedure), pigeons tend to reset memory and start timing from 0 after the gap. However, because the ambient conditions of the gap typically have been the same as during the intertrial interval (ITI), ambiguity may have resulted. In the present experiment, the authors found that when ambient conditions during the gap were similar to the ITI, pigeons tended to reset memory, but when ambient conditions during the gap were different from the ITI, pigeons tended to stop timing, retain the duration of the stimulus in memory, and add to that time when the stimulus reappeared. Thus, when the gap was unambiguous, pigeons timed accurately.
Schwartz, B. L., Colon, M. R., Sanchez, I. C., Rodriguez, I. A., & Evans, S. (2002). Single-trial learning of “what” and “who” information in a gorilla (Gorilla gorilla gorilla): implications for episodic memory. Anim Cogn, 5(2), 85–90.
Abstract: Single-trial learning and long-term memory of “what” and “who” information were examined in an adult gorilla (Gorilla gorilla gorilla). We presented the gorilla with a to-be-remembered food item at the time of study. In Experiment 1, following a retention interval of either approximately 7 min or 24 h, the gorilla responded with one of five cards, each corresponding to a particular food. The gorilla was accurate on 70% of the short retention-interval trials and on 82% of the long retention-interval trials. In Experiment 2, the food stimulus was provided by one of two experimenters, each of whom was represented by a card. The gorilla identified the food (55% of the time) and the experimenter (82% of the time) on the short retention-interval trials. On the long retention-interval trials, the gorilla was accurate for the food (73%) and for the person (87%). The results are interpreted in light of theories of episodic memory.
Shettleworth, S. J. (1978). Reinforcement and the organization of behavior in golden hamsters: Pavlovian conditioning with food and shock unconditioned stimuli. J Exp Psychol Anim Behav Process, 4(2), 152–169.
Abstract: The effects of Pavlovian conditioned stimuli (CSs) for food or shock on a variety of behaviors of golden hamsters were observed in three experiments. The aim was to see whether previously reported differences among the behaviors produced by food reinforcement and punishment procedures could be accounted for by differential effects of Pavlovian conditioning on the behaviors. There was some correspondence between the behaviors observed to the CSs and the previously reported effects of instrumental training. However, the Pavlovian conditioned responses (CRs) alone would not have predicted the effects of instrumental training. Moreover, CRs depended to some extent on the context in which training and testing occurred. These findings, together with others in the literature, suggest that the results of Pavlovian conditioning procedures may not unambiguously predict what system of behaviors will be most readily modified by instrumental training with a given reinforcer.
Shettleworth, S. J., & Plowright, C. M. (1992). How pigeons estimate rates of prey encounter. J Exp Psychol Anim Behav Process, 18(3), 219–235.
Abstract: Pigeons were trained on operant schedules simulating successive encounters with prey items. When items were encountered on variable-interval schedules, birds were more likely to accept a poor item (long delay to food) the longer they had just searched, as if they were averaging prey density over a short memory window (Experiment 1). Responding as if the immediate future would be like the immediate past was reversed when a short search predicted a long search next time (Experiment 2). Experience with different degrees of environmental predictability appeared to change the length of the memory window (Experiment 3). The results may reflect linear waiting (Higa, Wynne, & Staddon, 1991), although they depart from it in some respects. The findings have implications for possible mechanisms of adjusting behavior to current reinforcement conditions.
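The "linear waiting" rule cited in this abstract can be sketched numerically. The sketch below is purely illustrative: the coefficients a and b are assumptions for demonstration, not values fitted by Higa, Wynne, and Staddon, and the one-trial-back memory window is a simplification of the short memory window the abstract describes.

```python
# Illustrative sketch of linear waiting (Higa, Wynne, & Staddon, 1991):
# the wait time before responding on a trial is assumed to be a linear
# function of the most recent interfood interval. The coefficients a and b
# are illustrative assumptions, not parameters from the paper.

def linear_wait(prev_interval, a=0.3, b=1.0):
    """Predicted wait time (s) given the previous interfood interval (s)."""
    return a * prev_interval + b

# A one-back rule: each wait depends only on the immediately preceding
# interval, i.e. a memory window of length one. A single long interval
# lengthens only the following trial's wait, then waiting readjusts.
intervals = [10, 10, 40, 10]  # interfood intervals on successive trials (s)
waits = [linear_wait(t) for t in intervals]
```

Under this rule, "the immediate future will be like the immediate past": the predicted wait tracks whatever interval just occurred, which is the pattern Experiment 1 observed and Experiment 2 reversed.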
Waite, T. A. (2002). Interruptions improve choice performance in gray jays: prolonged information processing versus minimization of costly errors. Anim Cogn, 5(4), 209–214.
Abstract: Under the assumption that selection favors minimization of costly errors, erroneous choice may be common when its fitness cost is low. According to an adaptive-choice model, this cost depends on the rate at which an animal encounters the choice: the higher this rate, the smaller the cost of choosing a less valuable option. Errors should thus be more common when interruptions to foraging are shorter. A previous experiment supported this prediction: gray jays, Perisoreus canadensis, were more error prone when subjected to shorter delays to access to food rewards. This pattern, though, is also predicted by an attentional-constraints model. Because the subjects were able to inspect the rewards during delays, their improved performance when subjected to longer delays could have been a byproduct of the experimentally prolonged opportunity for information processing. To evaluate this possibility, a follow-up experiment manipulated both delay to access and whether rewards could be inspected during delays. Depriving jays of the opportunity to inspect rewards (using opaque lids) induced only a small, nonsignificant increase in error rate. This effect was independent of length of delay and so the jays' improved performance when subjected to longer delays was not simply a byproduct of prolonged information processing. More definitively, even when the jays were prevented from inspecting rewards during delays, their performance improved when subjected to longer delays. The findings are thus consistent with the adaptive-choice model.
Zentall, T. R. (2005). Timing, memory for intervals, and memory for untimed stimuli: the role of instructional ambiguity. Behav Processes, 70(3), 209–222.
Abstract: Theories of animal timing have had to account for findings that the memory for the duration of a timed interval appears to be dramatically shortened within a short time of its termination. This finding led to the subjective shortening hypothesis, which has been proposed to account for the poor memory that animals appear to have for the initial portion of a timed interval when a gap is inserted in the to-be-timed signal. It has also been proposed to account for the poor memory for a relatively long interval that has been discriminated from a shorter interval. I suggest here a simpler account in which ambiguity between the gap or retention interval and the intertrial interval results in resetting the clock, rather than forgetting the interval. The ambiguity hypothesis, together with a signal salience mechanism that determines how quickly the clock is reset at the start of the intertrial interval, can account for the results of the reported timing experiments that have used the peak procedure. Furthermore, instructional ambiguity rather than memory loss may account for the results of many animal memory experiments that do not involve memory for time.
Zentall, T. R., & Sherburne, L. M. (1994). Transfer of value from S+ to S- in a simultaneous discrimination. J Exp Psychol Anim Behav Process, 20(2), 176–183.
Abstract: Value transfer theory has been proposed to account for transitive inference effects (L. V. Fersen, C. D. L. Wynne, J. D. Delius, & J. E. R. Staddon, 1991), in which following training on 4 simultaneous discriminations (A+B-, B+C-, C+D-, D+E-) pigeons show a preference for B over D. According to this theory, some of the value of reinforcement acquired by each S+ transfers to the S-. In the transitive inference experiment, C (associated with both reward and nonreward) can transfer less value to D than A (associated only with reward) can transfer to B. Support for value transfer theory was demonstrated in 2 experiments in which an S- presented in the context of a stimulus to which responses were always reinforced (S+) was preferred over an S- presented in the context of a stimulus to which responses were sometimes reinforced (S+/-).
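The value-transfer mechanism described in this abstract lends itself to a toy numerical model. The sketch below is purely illustrative: the direct stimulus values and the transfer coefficient K are assumptions chosen for demonstration, not parameters from the paper; it shows only how transfer from A (always rewarded) versus C (rewarded and nonrewarded) yields the B-over-D preference.

```python
# Hypothetical sketch of value transfer theory, as summarized in the
# abstract above. The numeric values and the transfer coefficient K are
# illustrative assumptions, not parameters estimated in the paper.

# Direct associative value from training on A+B-, B+C-, C+D-, D+E-:
# A is always rewarded; B, C, and D are rewarded as S+ but also appear
# as S-, so their direct value is intermediate; E is never rewarded.
direct = {"A": 1.0, "B": 0.75, "C": 0.75, "D": 0.75, "E": 0.0}

K = 0.2  # assumed fraction of an S+'s value that transfers to its paired S-

# In each training pair, the S- receives transferred value from its S+.
pairs = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]

total = dict(direct)
for s_plus, s_minus in pairs:
    total[s_minus] += K * direct[s_plus]

# B receives transfer from A (direct value 1.0) while D receives transfer
# from C (direct value 0.75), so B accrues more total value than D,
# predicting the B-over-D preference in the transitive inference test.
assert total["B"] > total["D"]
```

Any positive K and any direct-value assignment where A exceeds C produce the same ordering, which is why the theory does not require the pigeon to represent the A > B > C > D > E series itself.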