Seyfarth, R. M., & Cheney, D. L. (1984). The acoustic features of vervet monkey grunts. J Acoust Soc Am, 75(5), 1623–1628.
Abstract: East African vervet monkeys give short (125 ms), harsh-sounding grunts to each other in a variety of social situations: when approaching a dominant or subordinate member of their group, when moving into a new area of their range, or upon seeing another group. Although all these vocalizations sound similar to humans, field playback experiments have shown that the monkeys distinguish at least four different calls. Acoustic analysis reveals that grunts have an aperiodic F0 at roughly 240 Hz. Most grunts exhibit a spectral peak close to this irregular F0. Grunts may also contain a second, rising or falling frequency peak between 550 and 900 Hz. The location of, and changes in, these two frequency peaks are the cues most likely to be used by vervets when distinguishing different grunt types.
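The kind of spectral-peak measurement this abstract describes can be illustrated with a minimal sketch. The windowing, the 100–1000 Hz search band, and the crude largest-bin peak picking below are illustrative assumptions, not the authors' analysis parameters:

```python
import numpy as np

def spectral_peaks(signal, sr, band=(100, 1000), n_peaks=2):
    """Locate the strongest spectral peaks of a short call in a low band,
    e.g. a peak near the ~240 Hz F0 and a second peak at 550-900 Hz.
    Band edges and peak count are illustrative choices."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    idx = np.where((freqs >= band[0]) & (freqs <= band[1]))[0]
    # crude peak picking: take the n_peaks largest-magnitude bins in the band
    top = idx[np.argsort(spec[idx])[-n_peaks:]]
    return sorted(freqs[top])

# Synthetic 125 ms "grunt" with energy at 240 Hz and 700 Hz
sr = 8000
t = np.arange(int(0.125 * sr)) / sr
call = np.sin(2 * np.pi * 240 * t) + 0.8 * np.sin(2 * np.pi * 700 * t)
print(spectral_peaks(call, sr))  # two frequencies, near 240 and 700 Hz
```

Real grunts are aperiodic and noisy, so a production analysis would average over frames and use proper peak detection rather than single-bin maxima.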
Harland, M. M., Stewart, A. J., Marshall, A. E., & Belknap, E. B. (2006). Diagnosis of deafness in a horse by brainstem auditory evoked potential. Can Vet J, 47(2), 151–154.
Abstract: Deafness was confirmed in a blue-eyed, 3-year-old, overo paint horse by brainstem auditory evoked potential. Congenital inherited deafness associated with lack of facial pigmentation was suspected. Assessment of hearing should be considered, especially in paint horses, at the time of pre-purchase examination. Brainstem auditory evoked potential assessment is well tolerated and accurate.
Gentner, T. Q., Fenn, K. M., Margoliash, D., & Nusbaum, H. C. (2006). Recursive syntactic pattern learning by songbirds. Nature, 440(7088), 1204–1207.
Abstract: Humans regularly produce new utterances that are understood by other members of the same language community. Linguistic theories account for this ability through the use of syntactic rules (or generative grammars) that describe the acceptable structure of utterances. The recursive, hierarchical embedding of language units (for example, words or phrases within larger sentences) that is part of the ability to construct new utterances minimally requires a 'context-free' grammar that is more complex than the 'finite-state' grammars thought sufficient to specify the structure of all non-human communication signals. Recent hypotheses make the central claim that the capacity for syntactic recursion forms the computational core of a uniquely human language faculty. Here we show that European starlings (Sturnus vulgaris) accurately recognize acoustic patterns defined by a recursive, self-embedding, context-free grammar. They are also able to classify new patterns defined by the grammar and reliably exclude agrammatical patterns. Thus, the capacity to classify sequences from recursive, centre-embedded grammars is not uniquely human. This finding opens a new range of complex syntactic processing mechanisms to physiological investigation.
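The two grammar classes contrasted in this study can be made concrete with a small recognizer sketch; here `A` and `B` stand in for the starling 'rattle' and 'warble' motifs, and the fixed pattern lengths used in the actual experiments are abstracted away:

```python
def is_anbn(seq):
    """Recognize A^n B^n (n >= 1), the centre-embedded pattern class.

    Over a two-symbol alphabet this string set is context-free but not
    finite-state: matching the count of As against the count of Bs needs
    an unbounded counter, beyond any fixed finite-state memory.
    """
    if not seq or len(seq) % 2:
        return False
    half = len(seq) // 2
    return all(s == "A" for s in seq[:half]) and all(s == "B" for s in seq[half:])

def is_abn(seq):
    """Recognize (AB)^n (n >= 1), a pattern a finite-state grammar can generate."""
    if not seq or len(seq) % 2:
        return False
    return all(s == ("A" if i % 2 == 0 else "B") for i, s in enumerate(seq))

print(is_anbn(list("AABB")), is_abn(list("AABB")))  # True False
print(is_anbn(list("ABAB")), is_abn(list("ABAB")))  # False True
```

Note that the context-free/finite-state distinction concerns the string sets themselves, not these particular recognizers; the code is only a way to see which sequences each grammar admits.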
Zentall, S. S., & Zentall, T. R. (1976). Activity and task performance of hyperactive children as a function of environmental stimulation. J Consult Clin Psychol, 44(5), 693–697.
Cheney, D. L., Seyfarth, R. M., & Silk, J. B. (1995). The responses of female baboons (Papio cynocephalus ursinus) to anomalous social interactions: evidence for causal reasoning? J Comp Psychol, 109(2), 134–141.
Abstract: Baboons' (Papio cynocephalus ursinus) understanding of cause-effect relations in the context of social interactions was examined through use of a playback experiment. Under natural conditions, dominant female baboons often grunt to more subordinate mothers when interacting with their infants. Mothers occasionally respond to these grunts by uttering submissive fear barks. Subjects were played causally inconsistent call sequences in which a lower ranking female apparently grunted to a higher ranking female, and the higher ranking female apparently responded with fear barks. As a control, subjects heard a sequence made causally consistent by the inclusion of grunts from a 3rd female that was dominant to both of the others. Subjects responded significantly more strongly to the causally inconsistent sequences, suggesting that they recognized the factors that cause 1 individual to give submissive vocalizations to another.
Shettleworth, S. J. (1972). Stimulus relevance in the control of drinking and conditioned fear responses in domestic chicks (Gallus gallus). J Comp Physiol Psychol, 80(2), 175–198.
Friederici, A. D., & Alter, K. (2004). Lateralization of auditory language functions: a dynamic dual pathway model. Brain Lang, 89(2), 267–276.
Abstract: Spoken language comprehension requires the coordination of different subprocesses in time. After the initial acoustic analysis, the system has to extract segmental information such as phonemes, syntactic elements, and lexical-semantic elements, as well as suprasegmental information such as accentuation and intonational phrases, i.e., prosody. According to the dynamic dual pathway model of auditory language comprehension, syntactic and semantic information are primarily processed in a left-hemispheric temporo-frontal pathway, including separate circuits for syntactic and semantic information, whereas sentence-level prosody is processed in a right-hemispheric temporo-frontal pathway. The relative lateralization of these functions occurs as a result of stimulus properties and processing demands. The observed interaction between syntactic and prosodic information during auditory sentence comprehension is attributed to dynamic interactions between the two hemispheres.
Krishnan, A., Gandour, J. T., Ananthakrishnan, S., Bidelman, G. M., & Smalt, C. J. (in press). Functional ear (a)symmetry in brainstem neural activity relevant to encoding of voice pitch: A precursor for hemispheric specialization? Brain Lang.
Abstract: Pitch processing is lateralized to the right hemisphere; linguistic pitch is further mediated by left cortical areas. This experiment investigates whether ear asymmetries vary in brainstem representation of pitch depending on linguistic status. Brainstem frequency-following responses (FFRs) were elicited by monaural stimulation of the left and right ear of 15 native speakers of Mandarin Chinese using two synthetic speech stimuli that differ in linguistic status of tone. One represented a native lexical tone (Tone 2: T2); the other, T2', a nonnative variant in which the pitch contour was a mirror image of T2 with the same starting and ending frequencies. Two 40-ms portions of f0 contours were selected in order to compare two regions (R1, early; R2, late) differing in pitch acceleration rate and perceptual saliency. In R2, linguistic status effects revealed that T2 exhibited a larger degree of FFR rightward ear asymmetry, as reflected in f0 amplitude, relative to T2'. Relative to midline (ear asymmetry = 0), the only ear asymmetry reaching significance was that favoring left ear stimulation elicited by T2'. For left- and right-ear stimulation considered separately, FFRs elicited by T2 were larger than T2' in the right ear only. Within T2', FFRs elicited by the earlier region were larger than the later in both ears. Within T2, no significant differences in FFRs were observed between regions in either ear. Collectively, these findings support the idea that the origins of cortical processing preferences for perceptually salient portions of pitch are rooted in early, preattentive stages of processing in the brainstem.
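The f0-amplitude measure referred to in this abstract can be sketched roughly as a banded spectral magnitude over a 40-ms response segment. The band edges, the single-window FFT, and the toy signal below are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

def f0_amplitude(ffr, sr, f0_lo, f0_hi):
    """Mean spectral magnitude of an FFR segment within an f0 band.

    Simplified stand-in for an FFR f0-amplitude measure: FFT a windowed
    segment and average the magnitude over the frequency range swept by
    the stimulus f0 contour.  A real analysis might instead use sliding
    windows to track the contour over time.
    """
    win = ffr * np.hanning(len(ffr))
    spec = np.abs(np.fft.rfft(win)) / len(ffr)
    freqs = np.fft.rfftfreq(len(ffr), 1.0 / sr)
    band = (freqs >= f0_lo) & (freqs <= f0_hi)
    return spec[band].mean()

# Toy 40-ms "FFR" segment: phase-locked activity near 120 Hz in noise
sr = 10000
t = np.arange(int(0.040 * sr)) / sr
rng = np.random.default_rng(0)
seg = np.sin(2 * np.pi * 120 * t) + 0.3 * rng.standard_normal(t.size)
print(f0_amplitude(seg, sr, 100, 140))
```

An ear-asymmetry index of the kind compared in the paper could then be formed from such amplitudes, e.g. (right − left) / (right + left) for the same stimulus.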
Lemasson, A., Koda, H., Kato, A., Oyakawa, C., Blois-Heulin, C., & Masataka, N. (2010). Influence of sound specificity and familiarity on Japanese macaques' (Macaca fuscata) auditory laterality. Behav Brain Res, 208(1), 286–289.
Abstract: Despite attempts to generalise the left hemisphere-speech association of humans to animal communication, the debate remains open. More studies on primates are needed to explore the potential effects of sound specificity and familiarity. Familiar and non-familiar nonhuman primate contact calls, bird calls and non-biological sounds were broadcast to Japanese macaques. Macaques turned their heads preferentially towards the left (right hemisphere) when hearing calls of conspecifics or of familiar primates, supporting hemispheric specialisation. Our results support the role of experience in brain organisation and the importance of social factors in understanding the evolution of laterality.
Heffner, R. S., & Heffner, H. E. (1983). Hearing in large mammals: Horses (Equus caballus) and cattle (Bos taurus). Behavioral Neuroscience, 97(2), 299–309.
Abstract: Behavioral audiograms were determined for 3 horses and 2 cows. Horses' hearing ranged from 55 Hz to 33.3 kHz, with a region of best sensitivity from 1 to 16 kHz. Cattle hearing ranged from 23 Hz to 35 kHz, with a well-defined point of best sensitivity at 8 kHz. Of the 2 species, cattle proved to have more acute hearing, with a lowest threshold of −21 dB (re 20 μN/m²) compared with the horses' lowest threshold of 7 dB. Comparative analysis of the hearing abilities of these 2 species with those of other mammals provides further support for the relation between interaural distance and high-frequency hearing and between high- and low-frequency hearing.