RESEARCH INTERESTS

My work is primarily concerned with how humans process incoming information: how it is perceived, comprehended, and encoded into memory. Most of the work in my lab focuses on the perception of spoken language: How do humans decode the complex acoustic signal and recognize spoken words?

The investigation of spoken language processing can be approached in many ways and at several levels. The work in our lab has used many different methodologies and has looked at the problem from both a "bottom-up" and a "top-down" perspective. From a bottom-up perspective, we have aimed to clarify the early representations used in speech perception, and have identified at least three qualitatively different levels of representation.

Over the years, our lab has investigated the role of top-down influences: How does activation at the lexical (word) level affect the activation of lower-level, sublexical representations? A number of our studies in this area have used the "phonemic restoration" phenomenon. The restoration work builds on Richard Warren's (1970) discovery that utterances sound intact even when parts of them have been deleted and replaced by an extraneous sound, such as a cough or a burst of noise. We have used this phenomenon to study the knowledge sources that the perceptual process uses to restore missing parts of the signal.
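The basic stimulus manipulation behind these studies is straightforward to sketch. The snippet below is a minimal illustration, assuming numpy, with a synthetic frequency glide standing in for a recorded utterance; it builds the two classic versions of a restoration stimulus, one with a segment excised and left silent, and one with the segment replaced by loudness-matched noise:

```python
import numpy as np

SR = 16000  # sample rate (Hz)

def make_restoration_stimuli(utterance, cut_start_s, cut_end_s, sr=SR):
    """Return two versions of an utterance: one with a segment deleted
    and left silent, one with the same segment replaced by broadband
    noise, as in Warren's (1970) phonemic restoration paradigm."""
    i0, i1 = int(cut_start_s * sr), int(cut_end_s * sr)

    silent = utterance.copy()
    silent[i0:i1] = 0.0  # segment deleted, gap left silent

    noisy = utterance.copy()
    rms = np.sqrt(np.mean(utterance[i0:i1] ** 2))
    noise = np.random.randn(i1 - i0)
    noise *= rms / np.sqrt(np.mean(noise ** 2))  # roughly match loudness
    noisy[i0:i1] = noise  # segment deleted, gap filled with noise
    return silent, noisy

# Placeholder "utterance": a real experiment would load a recorded
# sentence; a frequency glide stands in so the script runs as-is.
t = np.linspace(0, 1.0, SR, endpoint=False)
utterance = 0.5 * np.sin(2 * np.pi * (200 + 300 * t) * t)

silent_version, noisy_version = make_restoration_stimuli(utterance, 0.45, 0.55)
```

With real recordings, listeners typically hear the noise-filled version as intact, while the silent version sounds interrupted.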

Spoken language processing is not static: listeners must adapt to changes in the speech environment. Over the last 15 years, our lab has produced a series of papers exploring how listeners adjust their perceptual categories when confronted with speech that is accented or otherwise non-standard. For example, a native Mandarin speaker tends to produce the English "th" sound as "s" (e.g., "thin" will sound more like "sin"). How do listeners accommodate these mispronunciations, allowing them to better understand the speaker?
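One convenient way to picture this kind of adjustment is as a shift in the listener's category boundary along an acoustic continuum. The toy model below is purely illustrative (the logistic form, slope, and boundary values are assumptions, not parameters from our papers): after exposure to a talker whose "th" is "s"-like, the boundary moves so that more ambiguous tokens are accepted as "th":

```python
import numpy as np

def p_th(x, boundary, slope=1.5):
    """Probability of labeling a sound on an acoustic continuum
    (0 = clear 's', 10 = clear 'th') as 'th', given a category boundary."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

continuum = np.linspace(0, 10, 11)

# Before exposure: the boundary sits mid-continuum.
before = p_th(continuum, boundary=5.0)

# After hearing a talker whose 'th' sounds 's'-like ("thin" produced
# more like "sin"), the boundary can shift toward the 's' end, so more
# ambiguous tokens count as 'th' for that talker.
after = p_th(continuum, boundary=3.5)

for x, b, a in zip(continuum, before, after):
    print(f"step {x:4.1f}: p('th') before = {b:.2f}, after = {a:.2f}")
```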

In the last several years, we have been examining the relationship between speech perception and speech production. The usual assumption in the field is that these are two sides of the same coin, and that developing one will support the development of the other. It turns out that the relationship is much more complicated. Although the two modalities are sometimes mutually supportive, in other situations they are adversarial: when trying to learn new speech sounds or new words, producing them can actually block perceptual learning.

Our research on speech is always conducted within the broader context of attention, perception, and cognition. For this reason, we have consistently tried to determine the generality of the perceptual principles and processes that we study. In most cases, we have found that the same principles and processes operate in nonlinguistic domains (such as music perception): speech is just one type of complex acoustic signal that the system can operate on. We have also repeatedly found that understanding the operation of attention is necessary to understand the complete pattern of results in any study. Finally, it is critical to study the timecourse of processing: how the percept evolves over the course of hundreds of milliseconds.
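As a schematic picture of what such a timecourse analysis tracks, the sketch below shows how the activation of competing lexical candidates might rise and fall as successive phonemes arrive. Everything here is an assumption made for illustration (the pseudo-phonemic spellings, the roughly 100 ms per phoneme pacing, and the gain and decay constants); it is not any specific published model:

```python
# Toy timecourse: candidate words gain activation while each incoming
# phoneme is consistent with them, and lose it once the input diverges.
candidates = {"candle": "k@ndl", "candy": "k@ndi", "tulip": "tulIp"}
input_word = "k@ndi"  # listener hears "candy", one phoneme per ~100 ms

activation = {w: 0.0 for w in candidates}
for t, phoneme in enumerate(input_word):
    for word, form in candidates.items():
        match = t < len(form) and form[t] == phoneme
        # consistent candidates are boosted; inconsistent ones decay
        activation[word] = 0.6 * activation[word] + (1.0 if match else -0.3)
    state = ", ".join(f"{w}: {a:+.2f}" for w, a in activation.items())
    print(f"~{(t + 1) * 100} ms  {state}")
```

In this toy run, "candle" and "candy" both climb until the fifth phoneme arrives, at which point "candle" starts to lose ground: the percept evolves over a few hundred milliseconds.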

Because our lab's focus is on perception and cognition broadly, rather than on speech perception per se, we occasionally have research projects that extend into domains other than speech. For example, we have published a number of papers on attentional effects in the visual domain, exploring the "inhibition of return" phenomenon: once attention has moved away from a given location, processing of stimuli at that location is worse than if attention had never been there.
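A typical inhibition-of-return experiment uses a Posner-style cueing design, sketched below. The function name and parameter values are hypothetical, but the logic is the standard one: a non-predictive peripheral cue, a long cue-target interval, and a comparison of responses at cued versus uncued locations:

```python
import random

def make_ior_trials(n_trials=100, soa_ms=700):
    """Build a trial list for a Posner-style cueing experiment.
    With a long cue-target SOA (several hundred ms), the typical
    inhibition-of-return finding is slower responses when the
    target appears at the previously cued location."""
    trials = []
    for _ in range(n_trials):
        cue_side = random.choice(["left", "right"])
        # the cue is non-predictive: the target is equally likely
        # to appear at either location
        target_side = random.choice(["left", "right"])
        trials.append({
            "cue": cue_side,
            "soa_ms": soa_ms,  # long SOA, where inhibition is expected
            "target": target_side,
            "cued_location": cue_side == target_side,
        })
    return trials

trials = make_ior_trials()
print(trials[0])
```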

In the auditory modality, one line of nonspeech research dealt with "change deafness". A number of studies had previously demonstrated a phenomenon called "change blindness", in which people are surprisingly poor at noticing rather large changes in a visual scene. Our research on "change deafness" tested whether people are similarly poor at noticing when the set of sounds they hear changes. In fact, listeners often fail to notice rather substantial changes in the set of sounds presented to them.
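The structure of a change-deafness trial is easy to sketch: two auditory scenes separated by a brief interruption, with one source swapped between them. In the sketch below (assuming numpy; pure tones stand in for the recorded environmental sounds that actual studies use), the listener's task would be to say whether the second scene is the same as the first:

```python
import numpy as np

SR = 16000  # sample rate (Hz)

def tone(freq, dur_s=1.0, sr=SR):
    """A pure tone standing in for one sound source in the scene."""
    t = np.linspace(0, dur_s, int(sr * dur_s), endpoint=False)
    return np.sin(2 * np.pi * freq * t)

def scene(freqs):
    """Mix several concurrent sources into a single auditory scene."""
    return sum(tone(f) for f in freqs) / len(freqs)

scene1_sources = [300, 520, 840, 1300]
scene2_sources = [300, 520, 840, 1750]  # one of the four sources swapped

gap = 0.1 * np.random.randn(int(0.35 * SR))  # noise burst between scenes
trial = np.concatenate([scene(scene1_sources), gap, scene(scene2_sources)])
```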

The research that our lab has done over the years has helped to explain how listeners decode the extremely complex sounds that make up speech. We have found that to understand the system that accomplishes this remarkable feat, it is important to frame the issues in the context of what has been learned about attention, perception, and cognition.