Santos, L. R., Miller, C. T., & Hauser, M. D. (2003). Representing tools: how two non-human primate species distinguish between the functionally relevant and irrelevant features of a tool. Anim. Cogn., 6(4), 269–281.
Abstract: Few studies have examined whether non-human tool-users understand the properties that are relevant for a tool's function. We tested cotton-top tamarins (Saguinus oedipus) and rhesus macaques (Macaca mulatta) on an expectancy violation procedure designed to assess whether these species make distinctions between the functionally relevant and irrelevant features of a tool. Subjects watched an experimenter use a tool to push a grape down a ramp, and then were presented with different displays in which the features of the original tool (shape, color, orientation) were selectively varied. Results indicated that both species looked longer when a newly shaped stick acted on the grape than when a newly colored stick performed the same action, suggesting that both species perceive shape as a more salient transformation than color. In contrast, tamarins, but not rhesus, attended to changes in the tool's orientation. We propose that some non-human primates begin with a predisposition to attend to a tool's shape and, with sufficient experience, develop a more sophisticated understanding of the features that are functionally relevant to tools.
Washburn, D. A., & Astur, R. S. (2003). Exploration of virtual mazes by rhesus monkeys (Macaca mulatta). Anim. Cogn., 6(3), 161–168.
Abstract: A chasm divides the huge corpus of maze studies found in the literature, with animals tested in mazes on the one side and humans tested with mazes on the other. Advances in technology and software have made possible the production and use of virtual mazes, which allow humans to navigate computerized environments and thus allow humans and nonhuman animals to be tested in comparable spatial domains. In the present experiment, this comparability is extended even further by examining whether rhesus monkeys (Macaca mulatta) can learn to explore virtual mazes. Four male macaques were trained to manipulate a joystick so as to move through a virtual environment and to locate a computer-generated target. The animals succeeded in learning this task, and found the target even when it was located in novel alleys. The search pattern within the maze for these animals resembled the pattern of maze navigation observed for monkeys that were tested on more traditional two-dimensional computerized mazes.
McGonigle, B., Chalmers, M., & Dickinson, A. (2003). Concurrent disjoint and reciprocal classification by Cebus apella in seriation tasks: evidence for hierarchical organization. Anim. Cogn., 6(3), 185–197.
Abstract: We report the results of a 4-year-long study of capuchin monkeys (Cebus apella) on concurrent three-way classification and linear size seriation tasks using explicit ordering procedures, requiring subjects to select icons displayed on touch screens rather than manipulate and sort actual objects into groups. The results indicate that C. apella is competent to classify nine items concurrently, first into three disjoint classes where class exemplars are identical to one another, then into three reciprocal classes which share common exemplar (size) features. In the final phase we compare the relative efficiency of executive control under conditions where both hierarchical and/or linear organization can be utilized. Whilst this shows a superiority of category-based size seriation for a nine-item test set, suggesting an adaptive advantage for hierarchical over linear organization, Cebus nevertheless achieved high levels of principled linear size seriation with sequence lengths not normally achieved by children below the age of six years.
Iversen, I. H., & Matsuzawa, T. (2003). Development of interception of moving targets by chimpanzees (Pan troglodytes) in an automated task. Anim. Cogn., 6(3), 169–183.
Abstract: The experiments investigated how two adult captive chimpanzees learned to navigate in an automated interception task. They had to capture a visual target that moved predictably on a touch monitor. The aim of the study was to determine the learning stages that led to an efficient strategy of intercepting the target. The chimpanzees had prior training in moving a finger on a touch monitor and were exposed to the interception task without any explicit training. With a finger the subject could move a small “ball” at any speed on the screen toward a visual target that moved at a fixed speed either back and forth in a linear path or around the edge of the screen in a rectangular pattern. Initial ball and target locations varied from trial to trial. The subjects received a small fruit reinforcement when they hit the target with the ball. The speed of target movement was increased across training stages up to 38 cm/s. Learning progressed from merely chasing the target to intercepting the target by moving the ball to a point on the screen that coincided with arrival of the target at that point. Performance improvement consisted of reduction in redundancy of the movement path and reduction in the time to target interception. Analysis of the finger's movement path showed that the subjects anticipated the target's movement even before it began to move. Thus, the subjects learned to use the target's initial resting location at trial onset as a predictive signal for where the target would later be when it began moving. During probe trials, where the target unpredictably remained stationary throughout the trial, the subjects first moved the ball in anticipation of expected target movement and then corrected the movement to steer the ball to the resting target. Anticipatory ball movement in probe trials with novel ball and target locations (tested for one subject) showed generalized interception beyond the trained ball and target locations. The experiments illustrate in a laboratory setting the development of a highly complex and adaptive motor performance that resembles navigational skills seen in natural settings where predators intercept the path of moving prey.
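The interception strategy the chimpanzees converged on — moving the ball to a point that coincides with the target's arrival — can be sketched as a standard pursuit calculation. The following Python function is an illustrative toy, not part of the study's apparatus or analysis; it solves for the earliest time at which a ball moving at a fixed speed can meet a target moving at constant velocity.

```python
import math

def intercept_point(ball, ball_speed, target, target_vel):
    """Return the point where a ball moving at ball_speed can meet a
    target moving at constant velocity, or None if it cannot catch up.

    Solves |target + t*target_vel - ball| = t*ball_speed for t >= 0,
    a quadratic in t: (v.v - s^2) t^2 + 2 (r.v) t + r.r = 0.
    """
    rx, ry = target[0] - ball[0], target[1] - ball[1]
    vx, vy = target_vel
    a = vx * vx + vy * vy - ball_speed * ball_speed
    b = 2 * (rx * vx + ry * vy)
    c = rx * rx + ry * ry
    if abs(a) < 1e-12:           # ball and target speeds equal: linear case
        if abs(b) < 1e-12:
            return None
        t = -c / b
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None          # target outruns the ball
        roots = [(-b - math.sqrt(disc)) / (2 * a),
                 (-b + math.sqrt(disc)) / (2 * a)]
        valid = [t for t in roots if t > 0]
        if not valid:
            return None
        t = min(valid)           # earliest feasible interception time
    if t < 0:
        return None
    return (target[0] + vx * t, target[1] + vy * t)
```

For example, a ball at the origin moving at 2 units/s can meet a target at (10, 0) approaching at 1 unit/s at the point (20/3, 0); the probe trials with a stationary target correspond to the degenerate case where the interception point is simply the target's resting location.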
Leighty, K. A., & Fragaszy, D. M. (2003). Primates in cyberspace: using interactive computer tasks to study perception and action in nonhuman animals. Anim. Cogn., 6(3), 137–139.
Fragaszy, D., Johnson-Pynn, J., Hirsh, E., & Brakke, K. (2003). Strategic navigation of two-dimensional alley mazes: comparing capuchin monkeys and chimpanzees. Anim. Cogn., 6(3), 149–160.
Abstract: Planning is an important component of cognition that contributes, for example, to efficient movement through space. In the current study we presented novel two-dimensional alley mazes to four chimpanzees and three capuchin monkeys to identify the nature and efficiency of planning in relation to varying task parameters. All the subjects solved more mazes without error than expected by chance, providing compelling evidence that both species planned their choices in some manner. The probability of making a correct choice on mazes designed to be more demanding and presented later in the testing series was higher than on earlier, simpler mazes (chimpanzees), or unchanged (capuchin monkeys), suggesting microdevelopment of strategic choice. Structural properties of the mazes affected both species' choices. Capuchin monkeys were less likely than chimpanzees to take a correct path that initially led away from the goal but that eventually led to the goal. Chimpanzees were more likely to make an error by passing a correct path than by turning onto a wrong path. Chimpanzees and one capuchin made more errors on choices farther in sequence from the goal. Each species corrected errors before running into the end of an alley in approximately 40% of cases. Together, these findings suggest nascent planning abilities in each species, and the prospect for significant development of strategic planning capabilities on tasks presenting multiple simultaneous or sequential spatial relations. The computerized maze paradigm appears well suited to investigate movement planning and spatial perception in human and nonhuman primates alike.
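The "more mazes without error than expected by chance" comparison can be made concrete with a toy calculation (an illustration, not the authors' actual analysis): under random responding, the probability of an error-free traversal is the product of 1/(number of alternatives) over the maze's choice points, and a binomial tail gives the probability of observing at least as many error-free mazes by guessing.

```python
from math import prod, comb

def p_errorless_by_chance(alternatives_per_junction):
    """Chance probability of traversing one maze without error,
    choosing uniformly at random at each junction."""
    return prod(1.0 / k for k in alternatives_per_junction)

def p_at_least(successes, n_mazes, p):
    """Binomial tail P(X >= successes) for n_mazes independent mazes,
    each solved error-free with chance probability p."""
    return sum(comb(n_mazes, k) * p**k * (1 - p)**(n_mazes - k)
               for k in range(successes, n_mazes + 1))
```

For instance, a maze with three binary junctions has a chance rate of 1/8 per maze, and solving 5 of 5 such mazes without error by guessing has probability (1/8)^5 — the kind of comparison that supports the planning interpretation.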
Imura, T., & Tomonaga, M. (2003). Perception of depth from shading in infant chimpanzees (Pan troglodytes). Anim. Cogn., 6(4), 253–258.
Abstract: We investigated the ability to perceive depth from shading, one of the pictorial depth cues, in three chimpanzee infants aged 4-10 months old, using a preferential reaching task commonly used to study pictorial depth perception in human infants. The chimpanzee infants reached significantly more to three-dimensional toys than to pictures thereof and more to the three-dimensional convex than to the concave. Furthermore, two of the three infants reached significantly more to the photographic convex than to the photographic concave. These infants also looked longer at the photographic convex than the concave. Our results suggest that chimpanzees perceive, at least as early as the latter half of the first year of life, pictorial depth defined by shading information. Photographic convexes contain richer information about pictorial depth (e.g., attached shadow, cast shadow, highlighted area, and global difference in brightness) than simple computer-graphic graded patterns. These cues together might facilitate the infants' perception of depth from shading.
Merchant, H., Fortes, A. F., & Georgopoulos, A. P. (2004). Short-term memory effects on the representation of two-dimensional space in the rhesus monkey. Anim. Cogn., 7(3), 133–143.
Abstract: Human subjects represent the location of a point in 2D space using two independent dimensions (x-y in Euclidean or radius-angle in polar space), and encode location in memory along these dimensions using two levels of representation: a fine-grain value and a category. Here we determined whether monkeys possessed the ability to represent location with these two levels of coding. A rhesus monkey was trained to reproduce the location of a dot in a circle by pointing, after a delay period, to the location where the dot had been presented. Five different delay periods (0.5-5 s) were used. The results showed that the monkey used a polar coordinate system to represent the fine-grain spatial coding, where the radius and angle of the dots were encoded independently. The variability of the spatial response and reaction time increased with longer delays. Furthermore, the animal was able to form a categorical representation of space that was delay-dependent. The responses avoided the circumference and the center of the circle, defining a categorical radial prototype around one third of the total radial length. This radial category was observed only at delay durations of 3-5 s. Finally, the monkey also formed angular categories with prototypes at the obliques of the quadrants of the circle, avoiding the horizontal and vertical axes. However, these prototypes were only observed at the 5-s delay and on dots lying on the circumference. These results indicate that monkeys may possess spatial cognitive abilities similar to humans.
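The two-level coding this abstract describes — a fine-grain polar value blended toward category prototypes — can be illustrated with a toy model in the spirit of category-adjustment accounts. Everything here (the blend weight, the exact prototype placement, the linear blending rule) is an assumption for illustration, not the authors' fitted model.

```python
import math

def recalled_location(x, y, weight=0.3, radius_max=1.0):
    """Toy two-level spatial memory model: remembered location is a
    weighted blend of the fine-grain polar value and category prototypes.

    Prototypes follow the abstract's description: a radial prototype near
    one third of the maximum radius, and angular prototypes at the obliques
    (45, 135, 225, 315 deg). `weight` (how far recall is pulled toward the
    prototypes) is an assumed free parameter.
    """
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    # Radial category prototype: one third of the total radial length
    r_proto = radius_max / 3.0
    # Angular prototype: the oblique nearest to the dot's angle
    obliques = [math.radians(a) for a in (45, 135, 225, 315)]
    def ang_diff(a, b):  # signed angular difference in (-pi, pi]
        return math.atan2(math.sin(a - b), math.cos(a - b))
    th_proto = min(obliques, key=lambda p: abs(ang_diff(theta, p)))
    # Blend fine-grain value toward the prototypes
    r_hat = (1 - weight) * r + weight * r_proto
    th_hat = theta + weight * ang_diff(th_proto, theta)
    return (r_hat * math.cos(th_hat), r_hat * math.sin(th_hat))
```

A dot near the circumference and the horizontal axis, e.g. (0.9, 0.1), is recalled with a smaller radius (pulled toward the 1/3 prototype) and a larger angle (pulled toward the 45-degree oblique), reproducing the qualitative bias pattern the abstract reports at long delays.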
Parr, L. A. (2004). Perceptual biases for multimodal cues in chimpanzee (Pan troglodytes) affect recognition. Anim. Cogn., 7(3), 171–178.
Abstract: The ability of organisms to discriminate social signals, such as affective displays, using different sensory modalities is important for social communication. However, a major problem for understanding the evolution and integration of multimodal signals is determining how humans and animals attend to different sensory modalities, and how these different modalities contribute to the perception and categorization of social signals. Using a matching-to-sample procedure, chimpanzees discriminated videos of conspecifics' facial expressions that contained only auditory or only visual cues by selecting one of two facial expression photographs that matched the expression category represented by the sample. Other videos were edited to contain incongruent sensory cues, i.e., visual features of one expression but auditory features of another. In these cases, subjects were free to select the expression that matched either the auditory or visual modality, whichever was more salient for that expression type. Results showed that chimpanzees were able to discriminate facial expressions using only auditory or visual cues, and when these modalities were mixed. However, in these latter trials, depending on the expression category, clear preferences for either the visual or auditory modality emerged. Pant-hoots and play faces were discriminated preferentially using the auditory modality, while screams were discriminated preferentially using the visual modality. Therefore, depending on the type of expressive display, the auditory and visual modalities were differentially salient in ways that appear consistent with the ethological importance of that display's social function.
Kaminski, J., Call, J., & Tomasello, M. (2004). Body orientation and face orientation: two factors controlling apes' begging behavior from humans. Anim. Cogn., 7(4), 216–223.
Abstract: A number of animal species have evolved the cognitive ability to detect when they are being watched by other individuals. Precisely what kind of information they use to make this determination is unknown. There is particular controversy in the case of the great apes because different studies report conflicting results. In experiment 1, we presented chimpanzees, orangutans, and bonobos with a situation in which they had to request food from a human observer who was in one of various attentional states. She either stared at the ape, faced the ape with her eyes closed, sat with her back towards the ape, or left the room. In experiment 2, we systematically crossed the observer's body and face orientation so that the observer could have her body and/or face oriented either towards or away from the subject. Results indicated that apes produced more behaviors when they were being watched. They did this not only on the basis of whether they could see the experimenter as a whole, but were also sensitive to her body and face orientation separately. These results suggest that body and face orientation encode two different types of information. Whereas face orientation encodes the observer's perceptual access, body orientation encodes the observer's disposition to transfer food. In contrast to the results on body and face orientation, only two of the tested subjects responded to the state of the observer's eyes.