Alice in Wonderland is 150 years old this year, but the ever-young adventurer recently led Cornell researchers to a part of the brain that helps listeners understand her story.
Cornell faculty member John Hale’s study, “Modeling fMRI time courses with linguistic structure at various grain sizes,” published in the Proceedings of the 6th Workshop on Cognitive Modeling and Computational Linguistics, examines how the individual words of Lewis Carroll’s famous tale come together to yield an understanding of each sentence.
Hale and his team found a positive relationship between difficulty levels predicted from grammatical structure and neural signals measured in the fMRI scanner. The results highlight a region of the temporal lobe that supports an unconscious “parsing” process.
Hale points out the interdisciplinary nature of this neurotechnology research, which encompasses computational linguistics and neuroscience: “Elements from all of these areas come together to say what’s going on in the mind during an important cognitive process, language understanding.”
Study participants listened to the first chapter of “Alice in Wonderland” while in an fMRI scanner at the Cornell MRI Facility.
This type of “naturalistic” study has only recently become possible.
“They didn’t have to press any buttons or do anything except listen,” says Hale, associate professor of linguistics in the College of Arts and Sciences. “We verified they understood the story when they came out, but it’s ecologically natural, as though they were driving to work while listening to the radio. We are studying real language comprehension.”
While participants were in the scanner, their blood-oxygen-level-dependent (BOLD) signals were recorded. These signals revealed which areas of the brain were more active as they listened to Alice’s adventures. The researchers showed that grammatical structure can predict the waxing and waning of the BOLD signal as measured in fMRI.
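To make that kind of prediction concrete, here is a minimal sketch of the general analysis pattern used in studies like this one, not the paper’s actual pipeline: a word-by-word difficulty predictor is convolved with a hemodynamic response function (HRF) and then compared with a voxel’s BOLD time course. The toy numbers and the simplified single-gamma HRF below are illustrative assumptions.

```python
import numpy as np

# --- Illustrative toy data; not the study's real pipeline ---

# Word-by-word difficulty predictor resampled to the scanner's grid
# (one value per fMRI volume), e.g. a surprisal value per volume.
rng = np.random.default_rng(0)
predictor = rng.random(200)  # 200 volumes of toy difficulty values

# Simple single-gamma approximation of the hemodynamic response
# (real analyses typically use a canonical double-gamma HRF).
t = np.arange(0, 30, 2.0)        # 30 s of HRF, sampled every 2 s
hrf = (t ** 5) * np.exp(-t)
hrf /= hrf.sum()

# Convolve the predictor with the HRF: neural events produce a
# delayed, smeared BOLD response rather than an instantaneous one.
design = np.convolve(predictor, hrf)[: len(predictor)]

# Toy BOLD time course for one voxel: the design signal plus noise.
bold = 0.8 * design + 0.5 * rng.standard_normal(len(design))

# A positive correlation is the kind of "fit" the study looks for
# between grammar-based difficulty predictions and the BOLD signal.
r = np.corrcoef(design, bold)[0, 1]
print(f"correlation between predicted and observed signal: r = {r:.2f}")
```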
This fit held only in the anterior temporal lobe, however. The researchers did not find a similar fit in other regions traditionally considered language areas, raising questions about the brain’s language network and how its regions work together during comprehension.
“In cognitive science, the mind is viewed as a computer,” says Hale. “If we think of language comprehension as a program that runs in the brain, we can interpret the brain images as snapshots of this program’s execution.”
This study used the idea of “surprisal” to link grammars and neural signals. One way to understand surprisal, says Hale, is as the degree to which a person’s expectations were disconfirmed. “It fits into a trend within psychology and cognitive science more broadly of viewing the brain as a predictive machine.”
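In information-theoretic terms, a word’s surprisal is the negative log probability of that word given the words that precede it: expected words carry little surprisal, unexpected ones carry a lot. A minimal sketch follows; the toy sentence and its made-up conditional probabilities are illustrative assumptions, not values from the study, which derived its predictions from grammars.

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: -log2 of a word's in-context probability."""
    return -math.log2(prob)

# Made-up conditional probabilities P(word | preceding words) for a
# toy sentence; a real model would derive these from a grammar.
word_probs = [
    ("Alice", 0.20),
    ("fell", 0.05),
    ("down", 0.60),         # highly expected here -> low surprisal
    ("the", 0.70),
    ("rabbit-hole", 0.01),  # unexpected word -> high surprisal
]

for word, p in word_probs:
    print(f"{word:>12}: {surprisal(p):5.2f} bits")
```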
Hale describes his ideas about surprisal in his 2014 book, “Automaton Theories of Human Sentence Comprehension,” which also shows how different kinds of grammars can be used in models of perceptual processing.
Hale’s current research expands the scope of his fMRI studies to compare brain responses across different languages.
David E. Lutz, a graduate student in the field of linguistics; Wen-Ming Luh, adjunct professor in the Department of Biomedical Engineering; and Jonathan R. Brennan of the University of Michigan are co-authors on the “Modeling fMRI” paper. The research was supported in part by a grant from the National Institutes of Health and by a National Science Foundation CAREER award.
Linda B. Glaser is a staff writer for the College of Arts & Sciences.