Author Seyfarth, R.M.; Cheney, D.L.
Title Signalers and receivers in animal communication
Type Journal Article
Year 2003
Publication Annual Review of Psychology
Abbreviated Journal Annu Rev Psychol
Volume 54
Pages 145-173
Keywords Affect; *Animal Communication; Animals; Arousal; Auditory Perception; Motivation; *Social Behavior; Social Environment; Species Specificity; *Vocalization, Animal
Abstract In animal communication natural selection favors callers who vocalize to affect the behavior of listeners and listeners who acquire information from vocalizations, using this information to represent their environment. The acquisition of information in the wild is similar to the learning that occurs in laboratory conditioning experiments. It also has some parallels with language. The dichotomous view that animal signals must be either referential or emotional is false, because they can easily be both: The mechanisms that cause a signaler to vocalize do not limit a listener's ability to extract information from the call. The inability of most animals to recognize the mental states of others distinguishes animal communication most clearly from human language. Whereas signalers may vocalize to change a listener's behavior, they do not call to inform others. Listeners acquire information from signalers who do not, in the human sense, intend to provide it.
Address Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA. seyfarth@psych.upenn.edu
Language English
ISSN 0066-4308
Notes PMID:12359915
Call Number refbase @ user @ Serial 690
 

 
Author Parr, L.A.
Title Perceptual biases for multimodal cues in chimpanzee (Pan troglodytes) affect recognition
Type Journal Article
Year 2004
Publication Animal Cognition
Abbreviated Journal Anim. Cogn.
Volume 7
Issue 3
Pages 171-178
Keywords Acoustic Stimulation; *Animal Communication; Animals; Auditory Perception/physiology; Cues; Discrimination Learning/*physiology; Facial Expression; Female; Male; Pan troglodytes/*psychology; Perceptual Masking/*physiology; Photic Stimulation; Recognition (Psychology)/*physiology; Visual Perception/physiology; *Vocalization, Animal
Abstract The ability of organisms to discriminate social signals, such as affective displays, using different sensory modalities is important for social communication. However, a major problem for understanding the evolution and integration of multimodal signals is determining how humans and animals attend to different sensory modalities, and how these different modalities contribute to the perception and categorization of social signals. Using a matching-to-sample procedure, chimpanzees discriminated videos of conspecifics' facial expressions that contained only auditory or only visual cues by selecting one of two facial expression photographs that matched the expression category represented by the sample. Other videos were edited to contain incongruent sensory cues, i.e., visual features of one expression but auditory features of another. In these cases, subjects were free to select the expression that matched either the auditory or visual modality, whichever was more salient for that expression type. Results showed that chimpanzees were able to discriminate facial expressions using only auditory or visual cues, and when these modalities were mixed. However, in these latter trials, depending on the expression category, clear preferences for either the visual or auditory modality emerged. Pant-hoots and play faces were discriminated preferentially using the auditory modality, while screams were discriminated preferentially using the visual modality. Therefore, depending on the type of expressive display, the auditory and visual modalities were differentially salient in ways that appear consistent with the ethological importance of that display's social function.
Address Division of Psychobiology, Yerkes National Primate Research Center, Emory University, 954 Gatewood Road, Atlanta, GA 30329, USA. parr@rmy.emory.edu
Language English
ISSN 1435-9448
Notes PMID:14997361
Call Number Equine Behaviour @ team @ Serial 2544
 

 
Author Friederici, A.D.; Alter, K.
Title Lateralization of auditory language functions: a dynamic dual pathway model
Type Journal Article
Year 2004
Publication Brain and Language
Abbreviated Journal Brain Lang
Volume 89
Issue 2
Pages 267-276
Keywords Auditory Pathways/physiology; Brain Mapping; Comprehension/*physiology; Dominance, Cerebral/*physiology; Frontal Lobe/*physiology; Humans; Nerve Net/physiology; Phonetics; Semantics; Speech Acoustics; Speech Perception/*physiology; Temporal Lobe/*physiology
Abstract Spoken language comprehension requires the coordination of different subprocesses in time. After the initial acoustic analysis, the system has to extract segmental information such as phonemes, syntactic elements, and lexical-semantic elements, as well as suprasegmental information such as accentuation and intonational phrases, i.e., prosody. According to the dynamic dual pathway model of auditory language comprehension, syntactic and semantic information are primarily processed in a left-hemispheric temporo-frontal pathway, including separate circuits for syntactic and semantic information, whereas sentence-level prosody is processed in a right-hemispheric temporo-frontal pathway. The relative lateralization of these functions occurs as a result of stimulus properties and processing demands. The observed interaction between syntactic and prosodic information during auditory sentence comprehension is attributed to dynamic interactions between the two hemispheres.
Address Max Planck Institute of Cognitive Neuroscience, P.O. Box 500 355, 04303 Leipzig, Germany. angelafr@cns.mpg.de
Language English
ISSN 0093-934X
Notes PMID:15068909
Call Number Equine Behaviour @ team @ Serial 4722
 

 
Author Gentner, T.Q.; Fenn, K.M.; Margoliash, D.; Nusbaum, H.C.
Title Recursive syntactic pattern learning by songbirds
Type Journal Article
Year 2006
Publication Nature
Abbreviated Journal Nature
Volume 440
Issue 7088
Pages 1204-1207
Keywords Acoustic Stimulation; *Animal Communication; Animals; Auditory Perception/*physiology; Humans; *Language; Learning/*physiology; Linguistics; Models, Neurological; Semantics; Starlings/*physiology; Stochastic Processes
Abstract Humans regularly produce new utterances that are understood by other members of the same language community. Linguistic theories account for this ability through the use of syntactic rules (or generative grammars) that describe the acceptable structure of utterances. The recursive, hierarchical embedding of language units (for example, words or phrases within shorter sentences) that is part of the ability to construct new utterances minimally requires a 'context-free' grammar that is more complex than the 'finite-state' grammars thought sufficient to specify the structure of all non-human communication signals. Recent hypotheses make the central claim that the capacity for syntactic recursion forms the computational core of a uniquely human language faculty. Here we show that European starlings (Sturnus vulgaris) accurately recognize acoustic patterns defined by a recursive, self-embedding, context-free grammar. They are also able to classify new patterns defined by the grammar and reliably exclude agrammatical patterns. Thus, the capacity to classify sequences from recursive, centre-embedded grammars is not uniquely human. This finding opens a new range of complex syntactic processing mechanisms to physiological investigation.
Address Department of Organismal Biology and Anatomy, University of Chicago, Chicago, Illinois 60637, USA. tgentner@ucsd.edu
Language English
ISSN 1476-4687
Notes PMID:16641998
Call Number refbase @ user @ Serial 353
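The grammar contrast described in this record's abstract can be made concrete with a small sketch. The symbols below are illustrative stand-ins (the study's actual stimuli were starling rattle and warble motifs, here abbreviated 'A' and 'B'), and the function names are ours, not the paper's: the (AB)^n patterns are recognizable by a finite-state grammar, while the centre-embedded A^nB^n patterns require counting, the hallmark of a context-free grammar.

```python
import re

def is_finite_state(s: str) -> bool:
    """(AB)^n, n >= 1: strict alternation, recognizable by a finite-state grammar."""
    return re.fullmatch(r"(AB)+", s) is not None

def is_context_free(s: str) -> bool:
    """A^n B^n, n >= 1: a block of A's followed by an equally long block of B's.

    The regex only checks the A-block/B-block shape; the equal-length
    condition is the part a finite-state machine cannot verify.
    """
    m = re.fullmatch(r"(A+)(B+)", s)
    return m is not None and len(m.group(1)) == len(m.group(2))

print(is_finite_state("ABAB"))   # True
print(is_context_free("ABAB"))   # False
print(is_context_free("AABB"))   # True
```

Splitting the check into a regular pattern plus a length comparison makes the point of the study visible in code: the extra equality test is exactly what lifts A^nB^n out of the finite-state class.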
 

 
Author Harland, M.M.; Stewart, A.J.; Marshall, A.E.; Belknap, E.B.
Title Diagnosis of deafness in a horse by brainstem auditory evoked potential
Type Journal Article
Year 2006
Publication The Canadian Veterinary Journal. La Revue Veterinaire Canadienne
Abbreviated Journal Can Vet J
Volume 47
Issue 2
Pages 151-154
Keywords Acoustic Stimulation/veterinary; Animals; Deafness/congenital/diagnosis/*veterinary; Evoked Potentials, Auditory, Brain Stem/*physiology; Horse Diseases/congenital/*diagnosis; Horses; Male; Pigmentation/physiology; Sensitivity and Specificity
Abstract Deafness was confirmed in a blue-eyed, 3-year-old, overo paint horse by brainstem auditory evoked potential. Congenital inherited deafness associated with lack of facial pigmentation was suspected. Assessment of hearing should be considered, especially in paint horses, at the time of pre-purchase examination. Brainstem auditory evoked potential assessment is well tolerated and accurate.
Address Department of Clinical Sciences, College of Veterinary Medicine, Auburn University, Wire Road, Auburn, Alabama, USA
Language English
ISSN 0008-5286
Notes PMID:16579041
Call Number Equine Behaviour @ team @ Serial 5680
 

 
Author Lemasson, A.; Koda, H.; Kato, A.; Oyakawa, C.; Blois-Heulin, C.; Masataka, N.
Title Influence of sound specificity and familiarity on Japanese macaques' (Macaca fuscata) auditory laterality
Type Journal Article
Year 2010
Publication Behavioural Brain Research
Abbreviated Journal Behav. Brain Res.
Volume 208
Issue 1
Pages 286-289
Keywords Auditory processing; Hemispheric specialisation; Specificity; Familiarity; Head-turn paradigm; Macaque
Abstract Despite attempts to generalise the left-hemisphere speech association of humans to animal communication, the debate remains open. More studies on primates are needed to explore the potential effects of sound specificity and familiarity. Familiar and non-familiar nonhuman primate contact calls, bird calls and non-biological sounds were broadcast to Japanese macaques. Macaques turned their heads preferentially towards the left (right hemisphere) when hearing calls of conspecifics or familiar primates, supporting hemispheric specialisation. Our results support the role of experience in brain organisation and the importance of social factors in understanding the evolution of laterality.
ISSN 0166-4328
Call Number Equine Behaviour @ team @ Serial 5081
 

 
Author Lampe, J.F.; Andre, J.
Title Cross-modal recognition of human individuals in domestic horses (Equus caballus)
Type Journal Article
Year 2012
Publication Animal Cognition
Abbreviated Journal Anim. Cogn.
Volume 15
Issue 4
Pages 623-630
Keywords Cross-modal; Recognition of humans; Horse; Equus caballus; Human–horse interaction; Animal cognition; Visual recognition; Auditory recognition; Voice discrimination; Interspecific
Abstract This study has shown that domestic horses are capable of cross-modal recognition of familiar humans. It was demonstrated that horses are able to discriminate between the voices of a familiar and an unfamiliar human without seeing or smelling them at the same moment. Conversely, they were able to discriminate the same persons when only exposed to their visual and olfactory cues, without being stimulated by their voices. A cross-modal expectancy violation setup was employed; subjects were exposed both to trials with incongruent auditory and visual/olfactory identity cues and trials with congruent cues. It was found that subjects responded more quickly, longer and more often in incongruent trials, exhibiting heightened interest in unmatched cues of identity. This suggests that the equine brain is able to integrate multisensory identity cues from a familiar human into a person representation that allows the brain, when deprived of one or two senses, to maintain recognition of this person.
Publisher Springer-Verlag
Language English
ISSN 1435-9448
Call Number Equine Behaviour @ team @ Serial 5698