Records |
Author |
Friederici, A.D.; Alter, K. |
Title |
Lateralization of auditory language functions: a dynamic dual pathway model |
Type |
Journal Article |
Year |
2004 |
Publication |
Brain and Language |
Abbreviated Journal |
Brain Lang |
Volume |
89 |
Issue |
2 |
Pages |
267-276 |
Keywords |
Auditory Pathways/physiology; Brain Mapping; Comprehension/*physiology; Dominance, Cerebral/*physiology; Frontal Lobe/*physiology; Humans; Nerve Net/physiology; Phonetics; Semantics; Speech Acoustics; Speech Perception/*physiology; Temporal Lobe/*physiology |
Abstract |
Spoken language comprehension requires the coordination of different subprocesses in time. After the initial acoustic analysis, the system has to extract segmental information such as phonemes, syntactic elements, and lexical-semantic elements, as well as suprasegmental information such as accentuation and intonational phrases, i.e., prosody. According to the dynamic dual pathway model of auditory language comprehension, syntactic and semantic information is primarily processed in a left-hemispheric temporo-frontal pathway, including separate circuits for syntactic and semantic information, whereas sentence-level prosody is processed in a right-hemispheric temporo-frontal pathway. The relative lateralization of these functions occurs as a result of stimulus properties and processing demands. The observed interaction between syntactic and prosodic information during auditory sentence comprehension is attributed to dynamic interactions between the two hemispheres. |
Address |
Max Planck Institute of Cognitive Neuroscience, P.O. Box 500 355, 04303 Leipzig, Germany. angelafr@cns.mpg.de |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
English |
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
0093-934X |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
PMID:15068909 |
Approved |
no |
Call Number |
Equine Behaviour @ team @ |
Serial |
4722 |
Permanent link to this record |
|
|
|
Author |
Lemasson, A.; Koda, H.; Kato, A.; Oyakawa, C.; Blois-Heulin, C.; Masataka, N. |
Title |
Influence of sound specificity and familiarity on Japanese macaques' (Macaca fuscata) auditory laterality |
Type |
Journal Article |
Year |
2010 |
Publication |
Behavioural Brain Research |
Abbreviated Journal |
Behav Brain Res |
Volume |
208 |
Issue |
1 |
Pages |
286-289 |
Keywords |
Auditory processing; Hemispheric specialisation; Specificity; Familiarity; Head-turn paradigm; Macaque |
Abstract |
Despite attempts to generalise the left hemisphere-speech association of humans to animal communication, the debate remains open. More studies on primates are needed to explore the potential effects of sound specificity and familiarity. Familiar and non-familiar nonhuman primate contact calls, bird calls and non-biological sounds were broadcast to Japanese macaques. Macaques turned their heads preferentially towards the left (right hemisphere) when hearing conspecific or familiar primate calls, supporting hemispheric specialisation. Our results support the role of experience in brain organisation and the importance of social factors in understanding the evolution of laterality. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
0166-4328 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
|
Approved |
no |
Call Number |
Equine Behaviour @ team @ |
Serial |
5081 |
Permanent link to this record |
|
|
|
Author |
Heffner, R.S.; Heffner, H.E. |
Title |
Hearing in large mammals: Horses (Equus caballus) and cattle (Bos taurus) |
Type |
Journal Article |
Year |
1983 |
Publication |
Behavioral Neuroscience |
Abbreviated Journal |
|
Volume |
97 |
Issue |
2 |
Pages |
299-309 |
Keywords |
auditory range & sensitivity, horses vs cattle |
Abstract |
Determined behavioral audiograms for 3 horses and 2 cows. Horses' hearing ranged from 55 Hz to 33.3 kHz, with a region of best sensitivity from 1 to 16 kHz. Cattle hearing ranged from 23 Hz to 35 kHz, with a well-defined point of best sensitivity at 8 kHz. Of the 2 species, cattle proved to have more acute hearing, with a lowest threshold of −21 dB (re 20 μN/m²) compared with the horses' lowest threshold of 7 dB. Comparative analysis of the hearing abilities of these 2 species with those of other mammals provides further support for the relation between interaural distance and high-frequency hearing and between high- and low-frequency hearing. (39 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved) |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
American Psychological Association |
Place of Publication |
US |
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1939-0084 (Electronic); 0735-7044 (Print) |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
|
Approved |
no |
Call Number |
Equine Behaviour @ team @ 1983-29540-001 |
Serial |
5633 |
Permanent link to this record |
|
|
|
Author |
Lampe, J.F.; Andre, J. |
Title |
Cross-modal recognition of human individuals in domestic horses (Equus caballus) |
Type |
Journal Article |
Year |
2012 |
Publication |
Animal Cognition |
Abbreviated Journal |
Anim Cogn |
Volume |
15 |
Issue |
4 |
Pages |
623-630 |
Keywords |
Cross-modal; Recognition of humans; Horse; Equus caballus; Human–horse interaction; Animal cognition; Visual recognition; Auditory recognition; Voice discrimination; Interspecific |
Abstract |
This study has shown that domestic horses are capable of cross-modal recognition of familiar humans. It was demonstrated that horses are able to discriminate between the voices of a familiar and an unfamiliar human without seeing or smelling them at the same moment. Conversely, they were able to discriminate the same persons when only exposed to their visual and olfactory cues, without being stimulated by their voices. A cross-modal expectancy violation setup was employed; subjects were exposed both to trials with incongruent auditory and visual/olfactory identity cues and trials with congruent cues. It was found that subjects responded more quickly, for longer, and more often in incongruent trials, exhibiting heightened interest in unmatched cues of identity. This suggests that the equine brain is able to integrate multisensory identity cues from a familiar human into a person representation that allows the brain, when deprived of one or two senses, to maintain recognition of this person. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Springer-Verlag |
Place of Publication |
|
Editor |
|
Language |
English |
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1435-9448 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
|
Approved |
no |
Call Number |
Equine Behaviour @ team @ |
Serial |
5698 |
Permanent link to this record |
|
|
|
Author |
Zentall, S.S.; Zentall, T.R. |
Title |
Activity and task performance of hyperactive children as a function of environmental stimulation |
Type |
Journal Article |
Year |
1976 |
Publication |
Journal of Consulting and Clinical Psychology |
Abbreviated Journal |
J Consult Clin Psychol |
Volume |
44 |
Issue |
5 |
Pages |
693-697 |
Keywords |
Achievement; Acoustic Stimulation; *Arousal; Auditory Perception; Child; Humans; Hyperkinesis/*etiology; Photic Stimulation; Visual Perception |
Abstract |
|
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
English |
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
0022-006X |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
PMID:965541 |
Approved |
no |
Call Number |
refbase @ user @ |
Serial |
272 |
Permanent link to this record |
|
|
|
Author |
Gentner, T.Q.; Fenn, K.M.; Margoliash, D.; Nusbaum, H.C. |
Title |
Recursive syntactic pattern learning by songbirds |
Type |
Journal Article |
Year |
2006 |
Publication |
Nature |
Abbreviated Journal |
Nature |
Volume |
440 |
Issue |
7088 |
Pages |
1204-1207 |
Keywords |
Acoustic Stimulation; *Animal Communication; Animals; Auditory Perception/*physiology; Humans; *Language; Learning/*physiology; Linguistics; Models, Neurological; Semantics; Starlings/*physiology; Stochastic Processes |
Abstract |
Humans regularly produce new utterances that are understood by other members of the same language community. Linguistic theories account for this ability through the use of syntactic rules (or generative grammars) that describe the acceptable structure of utterances. The recursive, hierarchical embedding of language units (for example, words or phrases within shorter sentences) that is part of the ability to construct new utterances minimally requires a 'context-free' grammar that is more complex than the 'finite-state' grammars thought sufficient to specify the structure of all non-human communication signals. Recent hypotheses make the central claim that the capacity for syntactic recursion forms the computational core of a uniquely human language faculty. Here we show that European starlings (Sturnus vulgaris) accurately recognize acoustic patterns defined by a recursive, self-embedding, context-free grammar. They are also able to classify new patterns defined by the grammar and reliably exclude agrammatical patterns. Thus, the capacity to classify sequences from recursive, centre-embedded grammars is not uniquely human. This finding opens a new range of complex syntactic processing mechanisms to physiological investigation. |
Address |
Department of Organismal Biology and Anatomy, University of Chicago, Chicago, Illinois 60637, USA. tgentner@ucsd.edu |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
English |
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1476-4687 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
PMID:16641998 |
Approved |
no |
Call Number |
refbase @ user @ |
Serial |
353 |
Permanent link to this record |
|
|
|
Author |
Seyfarth, R.M.; Cheney, D.L. |
Title |
The acoustic features of vervet monkey grunts |
Type |
Journal Article |
Year |
1984 |
Publication |
The Journal of the Acoustical Society of America |
Abbreviated Journal |
J Acoust Soc Am |
Volume |
75 |
Issue |
5 |
Pages |
1623-1628 |
Keywords |
*Acoustics; Animals; Auditory Perception; Cercopithecus/*physiology; Cercopithecus aethiops/*physiology; Cues; Dominance-Subordination; Female; Male; Social Behavior; Sound Spectrography; *Vocalization, Animal |
Abstract |
East African vervet monkeys give short (125 ms), harsh-sounding grunts to each other in a variety of social situations: when approaching a dominant or subordinate member of their group, when moving into a new area of their range, or upon seeing another group. Although all these vocalizations sound similar to humans, field playback experiments have shown that the monkeys distinguish at least four different calls. Acoustic analysis reveals that grunts have an aperiodic F0, at roughly 240 Hz. Most grunts exhibit a spectral peak close to this irregular F0. Grunts may also contain a second, rising or falling frequency peak, between 550 and 900 Hz. The location and changes in these two frequency peaks are the cues most likely to be used by vervets when distinguishing different grunt types. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
English |
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
0001-4966 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
PMID:6736426 |
Approved |
no |
Call Number |
refbase @ user @ |
Serial |
703 |
Permanent link to this record |