Our lab investigates the cognitive mechanisms underlying language comprehension and production, in both adults and children. Of particular interest is how we understand and produce language “on-line”, as it occurs in real time. Much of our work focuses on referring expressions like “the tall fast girl”, “the girl”, or “she”.
How do speakers make choices between pronouns (she) and names or descriptions? What determines variation in how they pronounce these words? We examine these questions to understand how language production processes are driven by the linguistic and visual context, the speaker’s attention and memory, and language planning.
How do listeners identify referents in the moments after they hear a referring expression? In particular, how do they integrate information from multiple sources that constrains the likelihood that the speaker is referring to a particular object? This work bears on several questions: how people rapidly integrate linguistic and nonlinguistic information, how they build representations of the situation that focus on some things more than others, and the degree to which speaking and understanding involve maintaining representations of the knowledge, goals, and intentions of one’s interlocutors. The main tool we use for investigating comprehension processes is the monitoring of eye movements, using the Eyelink II head-mounted eyetracker produced by SR Research.