Abstract:
|
Consonants and vowels may play different roles during language processing, with consonants preferentially involved in lexical processing and vowels tending to mark syntactic constituency through prosodic cues. In support of this view, artificial language learning studies have demonstrated that consonants (C) support statistical computations, whereas vowels (V) allow certain structural generalizations. Nevertheless, these asymmetries could be mere by-products of lower-level acoustic differences between Cs and Vs, in particular the energy they carry, and thus their relative salience. Here we address this issue and show that vowels remain the preferred targets for generalizations, even when consonants are made highly salient or vowels barely audible. Participants listened to speech streams of nonsense CVCVCV words, in which consonants followed a simple ABA structure. Participants failed to generalize this structure over sonorant consonants (Experiment 1), even when vowel duration was reduced to one third of that of consonants (Experiment 2). When vowels were eliminated from the stream, participants showed only marginal evidence of generalization (Experiment 4). In contrast, participants readily generalized the structure over barely audible vowels (Experiment 3). These results show that the different roles of consonants and vowels cannot be readily reduced to acoustic and perceptual differences between these phonetic categories.
Funding:
|
This research was funded by McDonnell Foundation Grant 21002089; by CEE Special Targeted Project CALACEI (Contract 12778, NEST); by the Mind, Brain, and Behavior Interfaculty Initiative at Harvard University; and by PRIN2005 to M.N. |