
Released

Journal Article

Modulation spectra capture EEG responses to speech signals and drive distinct temporal response functions

MPS-Authors

Teng, Xiangbin
Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Max Planck Society;


Poeppel, David
Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Max Planck Society;
Max-Planck-NYU Center for Language, Music, and Emotion, New York University;
Department of Psychology, New York University;

External Resource
No external resources are shared
Fulltext (public)

neu-21-ten-01-modulation.pdf
(Publisher version), 3MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Teng, X., Meng, Q., & Poeppel, D. (2021). Modulation spectra capture EEG responses to speech signals and drive distinct temporal response functions. eNeuro, 8(1): ENEURO.0399-20.2020. doi:10.1523/ENEURO.0399-20.2020.


Cite as: https://hdl.handle.net/21.11116/0000-0008-91AD-1
Abstract
Speech signals have a long-term modulation spectrum with a characteristic shape that is distinct from environmental noise, music, and non-speech vocalizations. Does the human auditory system adapt to the long-term modulation spectrum of speech and efficiently extract critical information from speech signals? To answer this question, we tested whether neural responses to speech signals can be captured by non-speech acoustic stimuli with specific modulation spectra. We generated amplitude-modulated (AM) noise with the speech modulation spectrum and with 1/f modulation spectra of different exponents to imitate the temporal dynamics of different natural sounds. We presented these AM stimuli and a 10-min piece of natural speech to 19 human participants undergoing electroencephalography (EEG) recording. We derived temporal response functions (TRFs) to the AM stimuli of different spectrum shapes and found distinct neural dynamics for each type of TRF. We then used the TRFs of the AM stimuli to predict neural responses to the speech signals and found that (1) the TRFs of AM stimuli with modulation-spectrum exponents of 1, 1.5, and 2 preferentially captured EEG responses to speech signals in the δ band, and (2) speech neural responses in the θ band could be captured by the AM stimuli with an exponent of 0.75. Our results suggest that the human auditory system shows specificity to the long-term modulation spectrum and is equipped with characteristic neural algorithms tailored to extract critical acoustic information from speech signals.
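
For readers who want to prototype the stimulus design described in the abstract, the following is a minimal Python/NumPy sketch of how AM noise with a 1/f^alpha modulation spectrum could be generated. It is not the authors' implementation: the function name am_noise_1_over_f, the sampling rate, stimulus duration, modulation band, and the envelope normalization are illustrative assumptions; only the exponents (0.75, 1, 1.5, 2) come from the abstract.

import numpy as np

def am_noise_1_over_f(alpha, duration=10.0, fs=16000, mod_band=(0.5, 32.0), seed=0):
    """Broadband noise carrier multiplied by an envelope whose power
    spectrum falls off as 1/f**alpha within mod_band (assumed range)."""
    rng = np.random.default_rng(seed)
    n = int(duration * fs)

    # 1) Build the envelope in the frequency domain: complex white noise shaped by
    #    f**(-alpha/2), so the envelope *power* spectrum follows 1/f**alpha.
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    gain = np.zeros_like(freqs)
    in_band = (freqs >= mod_band[0]) & (freqs <= mod_band[1])
    gain[in_band] = freqs[in_band] ** (-alpha / 2.0)
    envelope = np.fft.irfft(spectrum * gain, n=n)

    # 2) Shift and scale the envelope to be non-negative (illustrative normalization).
    envelope -= envelope.min()
    envelope /= envelope.max()

    # 3) Modulate a white-noise carrier with the envelope and normalize the output.
    carrier = rng.standard_normal(n)
    stimulus = envelope * carrier
    return stimulus / np.max(np.abs(stimulus))

# Example: stimuli spanning the exponents tested in the paper (0.75, 1, 1.5, 2).
stimuli = {a: am_noise_1_over_f(a) for a in (0.75, 1.0, 1.5, 2.0)}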