Degree
PhD (Doctor of Philosophy)
During spoken language comprehension, listeners transform continuous acoustic cues into categories (e.g., /b/ and /p/). While longstanding research suggests that phoneme categories are activated in a gradient way, there are also clear individual differences, with more gradient categorization being linked to various communication impairments, such as dyslexia and specific language impairment (Joanisse, Manis, Keating, & Seidenberg, 2000; López-Zamora, Luque, Álvarez, & Cobos, 2012; Serniclaes, Van Heghe, Mousty, Carré, & Sprenger-Charolles, 2004; Werker & Tees, 1987).
Crucially, most studies have used two-alternative forced choice (2AFC) tasks to measure the sharpness of between-category boundaries. Here we propose an alternative paradigm that allows us to measure categorization gradiency in a more direct way. We then use this measure in an individual differences paradigm to: (a) examine the nature of categorization gradiency, (b) explore its links to different aspects of speech perception and other cognitive processes, (c) test different hypotheses about its sources, (d) evaluate its (positive/negative) role in spoken language comprehension, and (e) assess whether it can be modified via training.
Our results provide validation for this new method of assessing phoneme categorization gradiency and offer valuable insights into the mechanisms that underlie speech perception.
Understanding spoken language is something we may take for granted. However, it is quite a remarkable skill, especially if we consider how often we deal with noise and ambiguities in everyday interactions (e.g., background noise, unfamiliar accents). Even though listeners typically cope with such difficulties in an effortless manner, we do not yet have a comprehensive understanding of the perceptual and cognitive mechanisms that allow for this.
One core issue that remains unclear is how listeners distinguish between similar speech sounds (e.g., between the words beach and peach). On the one hand, there is robust evidence showing that typical listeners perceive speech in great detail and use this information in a gradient manner. However, according to an alternative account, listeners are better off focusing only on the portion of the speech signal that is relevant for the ultimate categorization decision. Furthermore, divergence from this latter pattern has been considered a marker of atypical or non-optimal language processing.
Interestingly, recent findings suggest that there are substantial differences between listeners in how they process the speech signal. By studying these differences, we can not only achieve a better understanding of how listeners process spoken language, but also identify situations in which maintaining detailed speech information is advantageous or detrimental for language comprehension.
The goal of the present study is to develop a novel way of studying such individual differences in order to address fundamental questions about speech perception processes. Ultimately, the study of such differences will lead to a more comprehensive understanding of both typical and atypical patterns of language processing.
categorical perception, cue encoding, individual differences, phoneme categorization, speech perception, visual analogue scaling
Copyright 2016 Efthymia Evangelia Kapnoula