DOI

10.17077/etd.l5l1sst3

Document Type

Dissertation

Date of Degree

Spring 2017

Degree Name

PhD (Doctor of Philosophy)

Degree In

Neuroscience

First Advisor

McMurray, Bob

First Committee Member

Wessel, Jan R.

Second Committee Member

Brown, Carolyn J.

Third Committee Member

Hazeltine, Eliot

Fourth Committee Member

Treat, Teresa A.

Abstract

Speech perception is challenging because the acoustic input is extremely variable. This variability partially stems from differences in how talkers pronounce words. For example, Voice Onset Time (VOT) is the primary cue that distinguishes /b/ from /p/. Women tend to use longer VOTs than men. A VOT of 20 msec could thus be a /b/ spoken by a woman or a /p/ spoken by a man. A critical question is how listeners deal with this variability. Previous research shows that listeners use these regularities (e.g., the systematic relationship between gender and VOT) to compensate for variability. For example, listeners adjust their phoneme category boundary based on talker gender. However, the exact mechanisms by which talker gender information influences speech processing remain unclear. Talker gender could influence only later stages of speech processing, like phoneme categorization. Alternatively, talker gender could modulate the earliest stage: acoustic cue encoding. I use event-related potentials, eye-tracking in the visual world paradigm, and electrocorticography to isolate the specific role of talker gender in speech perception. The results show that talker gender influences the earliest stage of speech perception: the auditory system encodes acoustic cues relative to prior expectations about gender. Gender is also integrated with acoustic cues during lexical activation. These experiments give insight into how the brain deals effectively with variability during categorization.

Public Abstract

Anyone who has ever visited a foreign country knows it can be overwhelming and isolating to be surrounded by an unfamiliar language. All the words seem to stream together – it is difficult to tell where one word ends and the next begins. This experience provides a glimpse into the remarkably complex processes the brain goes through to understand speech. In an individual’s native language, understanding speech occurs quickly and effortlessly, so it seems simple. However, one of the main reasons this process is complex is that every person you hear pronounces words slightly differently. The brain can figure out that two people are saying the same word even though the actual sounds they produce differ. For example, scientists know that men tend to pronounce words differently than women. This project asks exactly how the brain uses information about a talker’s gender to help figure out what is being said. It uses methods that can measure speech processing in the brain as it unfolds (over the course of a few hundred milliseconds). The results show that listeners use information about a talker’s gender at the earliest stage of speech processing, which suggests that we actually hear speech differently depending on the gender of the talker.

Keywords

speech perception

Pages

xi, 131 pages

Bibliography

Includes bibliographical references (pages 121–131).

Copyright

Copyright © 2017 Kayleen Elizabeth Schreiber
