Author: |
Stilp CE; Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky 40292, USA., Shorey AE; Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky 40292, USA., King CJ; Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky 40292, USA. |
Language: |
English
Source: |
The Journal of the Acoustical Society of America [J Acoust Soc Am] 2022 Sep; Vol. 152 (3), pp. 1842. |
DOI: |
10.1121/10.0014174 |
Abstract: |
Perception of speech sounds has a long history of being compared to perception of nonspeech sounds, with rich and enduring debates regarding how closely they share similar underlying processes. In many instances, perception of nonspeech sounds is directly compared to that of speech sounds without a clear explanation of how related these sounds are to the speech they are selected to mirror (or not mirror). While the extreme acoustic variability of speech sounds is well documented, this variability is bounded by the common source of a human vocal tract. Nonspeech sounds do not share a common source, and as such, exhibit even greater acoustic variability than that observed for speech. This increased variability raises important questions about how well perception of a given nonspeech sound might resemble or model perception of speech sounds. Here, we offer a brief review of extremely diverse nonspeech stimuli that have been used in efforts to better understand perception of speech sounds. The review is organized according to increasing spectrotemporal complexity: random noise, pure tones, multitone complexes, environmental sounds, music, speech excerpts that are not recognized as speech, and sinewave speech. Considerations are offered for stimulus selection in nonspeech perception experiments moving forward.
Database: |
MEDLINE |
External link: |
|