More than a decade ago, neuroscientist Ev Fedorenko asked 48 English speakers to perform tasks such as reading sentences, memorizing information, solving math problems, and listening to music. As they did so, she scanned their brains using functional magnetic resonance imaging to see which circuits were activated. If, as linguists have proposed for decades, language is connected to thought in the human brain, then the language-processing regions should be activated even during nonlinguistic tasks.
Fedorenko’s study, published in 2011 in the Proceedings of the National Academy of Sciences, showed that when it came to arithmetic, musical processing, general working memory, and other nonlinguistic tasks, the language regions of the human brain were unresponsive. Contrary to what many linguists had claimed, complex thought and language are separate things; one does not require the other. “We have this highly specialized place in the brain that doesn’t respond to other activities,” says Fedorenko, an associate professor in the Department of Brain and Cognitive Sciences (BCS) and the McGovern Institute for Brain Research. “It is not true that thought absolutely needs language.”
Designing experiments that use neuroscience to understand how language works, how it evolves, and how it relates to other cognitive functions is at the heart of Fedorenko’s research. She is part of a unique intellectual triad within MIT’s BCS department, alongside colleagues Roger Levy and Ted Gibson. (Gibson and Fedorenko have been married since 2007.) Through a years-long collaboration, the three have built a significant body of research focused on some of the biggest questions in linguistics and human cognition. Though they work in three independent labs – EvLab, TedLab and the Computational Psycholinguistics Lab – the researchers are motivated by a shared fascination with the human mind and how language works in the brain. “We have a lot of interaction and collaboration,” Levy says. “It’s a very collaborative, intellectually rich and diverse landscape.”
By combining computational modeling, psycholinguistic experimentation, behavioral data, brain imaging, and large naturalistic linguistic datasets, the researchers also share an answer to a fundamental question: What is the purpose of language? Of all the possible answers to why we have language, perhaps the simplest and most obvious is communication. “Believe it or not,” says Ted Gibson, “that’s not the standard answer.”
Gibson first came to MIT in 1993 and joined the faculty of the linguistics department in 1997. Recalling the experience today, he describes it as frustrating. The field of linguistics at the time was dominated by the ideas of Noam Chomsky, one of the founders of MIT’s graduate program in linguistics and often called the father of modern linguistics. Chomsky’s “nativist” theories of language posited that the purpose of language is the articulation of thought, and that linguistic capacity is innate, in place before any learning. But Gibson, with his background in mathematics and computer science, felt that researchers had not tested these ideas satisfactorily. He believed that answering many of the outstanding questions about language required quantitative research, a departure from standard linguistic methodology. “There’s no reason to just rely on you and your friends, which is how linguistics works,” Gibson says. “The data you can get can be much larger if you involve many people using experimental methods.” Chomsky’s ascendancy in linguistics presented Gibson with what he saw as both a challenge and an opportunity. “I felt like I needed to understand this in detail and see if there was any truth to these claims,” he says.
Three decades after joining MIT, Gibson believes the collaborative research under way at BCS is compelling and provocative, paving the way for new ways of thinking about human culture and cognition. “Now we’re at a stage where it’s not just arguments against it. We have a lot of positive things to say about what language is,” he explains. Levy adds: “I would say that all three of us agree that communication plays a very important role in language learning and processing, but also in the structure of language itself.”
Levy points out that the three researchers earned doctorates in different fields: Fedorenko in neuroscience, Gibson in computer science, and Levy in linguistics. Yet for years before their paths finally converged at MIT, their shared interest in quantitative linguistic research led them to follow closely, and be influenced by, one another’s work. Their first collaboration, in 2005, focused on the processing of Russian relative clauses. At the time, Gibson recalls, Levy presented what he describes as “beautiful work” that helped him understand the connections between language structure and communication. “Communicative pressures determine structures,” Gibson explains. “Roger was instrumental in this. He was the one who helped me think about these things a long time ago.”
Levy’s lab focuses on the intersection of artificial intelligence, linguistics, and psychology, using natural language processing tools. “I try to use the tools offered by mathematical and computational approaches to language to formalize scientific hypotheses about language and the human mind and test these hypotheses,” he says.
Levy points to ongoing research with Gibson on language understanding as an example of the benefits of collaboration. “One of the big questions is: When language comprehension fails, why does it fail?” Together, the researchers applied the concept of a “noisy channel,” first developed by information theorist Claude Shannon in the late 1940s, in which messages are partially corrupted during transmission. “Language understanding develops over time, involving a continuous integration of the past with the present,” explains Levy. “Memory itself is an imperfect channel, transmitting the past from our brain a moment ago to our brain right now in order to support successful understanding of language.” Indeed, the richness of our linguistic environment, the hundreds of millions of words we have experienced by adulthood, creates a kind of statistical knowledge that guides our expectations, beliefs, predictions, and interpretations of linguistic meaning. “Statistical knowledge of language actually interacts with the constraints of our memory,” Levy explains. “Our experience shapes our memory for language itself.”
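A minimal sketch can make the noisy-channel idea concrete. The example below is illustrative only, not code from either lab: it assumes a tiny hand-built prior over sentences and a simple word-level corruption model, and it infers which intended sentence best explains a garbled input by weighing prior expectations against the cost of positing errors.

```python
# Toy noisy-channel comprehension: posterior over intended sentences given a
# possibly corrupted input. All sentences, probabilities, and the noise model
# are invented for illustration.
from typing import Dict

# Prior: the comprehender's statistical expectations about likely sentences.
PRIOR: Dict[str, float] = {
    "the dog chased the cat": 0.6,
    "the cat chased the dog": 0.3,
    "the dog the cat chased": 0.1,  # grammatical but much rarer word order
}

def word_edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two sentences, counted in words."""
    aw, bw = a.split(), b.split()
    dp = [[0] * (len(bw) + 1) for _ in range(len(aw) + 1)]
    for i in range(len(aw) + 1):
        dp[i][0] = i
    for j in range(len(bw) + 1):
        dp[0][j] = j
    for i in range(1, len(aw) + 1):
        for j in range(1, len(bw) + 1):
            dp[i][j] = min(
                dp[i - 1][j] + 1,                             # word deleted
                dp[i][j - 1] + 1,                             # word inserted
                dp[i - 1][j - 1] + (aw[i - 1] != bw[j - 1]),  # word substituted
            )
    return dp[-1][-1]

def likelihood(perceived: str, intended: str, noise: float = 0.1) -> float:
    """P(perceived | intended): each word-level corruption costs a factor of `noise`."""
    return noise ** word_edit_distance(perceived, intended)

def interpret(perceived: str) -> Dict[str, float]:
    """Posterior over intended sentences, proportional to prior times likelihood."""
    scores = {s: p * likelihood(perceived, s) for s, p in PRIOR.items()}
    total = sum(scores.values())
    return {s: v / total for s, v in scores.items()}

# A slightly garbled input is pulled toward the sentence the comprehender expects.
for sentence, prob in sorted(interpret("the dog chased the the cat").items(),
                             key=lambda kv: -kv[1]):
    print(f"{prob:.2f}  {sentence}")
```

Even in this toy form, the sketch captures the core claim: the interpretation a comprehender settles on is a compromise between the input itself and what their statistical experience leads them to expect.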
All three researchers say they share the belief that by following the evidence, they will eventually uncover an even larger, more complete story about language. “This is how science happens,” says Fedorenko. “Ted trained me, along with Nancy Kanwisher, and Ted and Roger are both very data-driven. If the data doesn’t give you the answer you thought it would, you don’t keep pushing your story forward. You think about new hypotheses. Almost everything I’ve done has been like that.” Sometimes Fedorenko’s research into parts of the brain’s linguistic system has surprised her and forced her to abandon her hypotheses. “In one project, I came up with the idea that there would be some separation between the parts that care about combinatorial processing and the parts that care about word meanings,” she says, “but every little piece of the linguistic system is sensitive to both. At some point, I said to myself: this is what the data are telling us, and we have to go with it.”
By identifying communication as the constitutive purpose of language, the researchers’ work opens up new possibilities for probing and studying non-human language. The standard claim is that human language has a considerably larger lexicon than animal communication systems, which also lack grammar. “But a lot of times we don’t even know what other species are communicating,” says Gibson. “We say they can’t communicate, but we don’t know. We don’t speak their language.” Fedorenko hopes that more opportunities for linguistic comparisons between species will open up. “Understanding where things are similar and where they diverge would be extremely helpful,” she says.
At the same time, the potential applications of this linguistic research are considerable. One of Levy’s current projects focuses on how people read, using machine learning algorithms informed by the psychology of eye movements to develop proficiency tests. By tracking the eye movements of people who speak English as a second language while they read English texts, Levy can predict their English proficiency, an approach that could one day replace the Test of English as a Foreign Language. “It’s an implicit measure of language, rather than a test that is much easier to game,” he says.
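The general recipe can be sketched in a few lines. The code below is a placeholder illustration of the idea rather than Levy’s actual system: the eye-movement features (mean fixation duration, regression rate, word-skip rate), the data, and the regression model are all invented for demonstration.

```python
# Toy sketch: predict a reading-proficiency score from summary eye-movement
# features using a simple regularized regression. Data are fabricated.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Hypothetical per-reader features: mean fixation duration (ms),
# regression rate (share of backward saccades), and word-skip rate.
X = np.array([
    [210.0, 0.12, 0.35],
    [260.0, 0.22, 0.20],
    [230.0, 0.15, 0.30],
    [300.0, 0.30, 0.10],
    [195.0, 0.10, 0.40],
    [280.0, 0.25, 0.15],
])
# Hypothetical proficiency scores for the same readers (e.g., from a standardized test).
y = np.array([95.0, 70.0, 88.0, 55.0, 99.0, 62.0])

model = Ridge(alpha=1.0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=3, scoring="r2").mean())

# Fit on all readers, then score a new reader from their eye movements alone.
model.fit(X, y)
new_reader = np.array([[240.0, 0.18, 0.25]])
print("predicted proficiency:", model.predict(new_reader)[0])
```

A real system would work from far richer word-by-word eye-tracking measures and many more readers, but the structure is the same: learn a mapping from implicit reading behavior to an explicit proficiency score.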
The researchers agree that some of the most exciting opportunities in the neuroscience of language lie in large language models, which provide new opportunities to ask new questions and make new discoveries. “In language neuroscience, the kinds of stories we were able to tell about how the brain interprets language were limited to verbal, descriptive hypotheses,” says Fedorenko. Today’s computational models are remarkably capable with language and show a certain degree of alignment with the brain, she adds. Researchers can now ask questions like: What are the actual computations that brain cells perform to extract meaning from strings of words? “You can now use these models as tools to better understand how humans might process language,” she says. “And you can take apart the models in a way that you can’t take apart the brain.”
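As a rough illustration of what such model-to-brain comparisons look like in practice, here is a toy “encoding model” sketch, not any lab’s actual pipeline: sentence representations are extracted from a pretrained language model (GPT-2, chosen arbitrarily) and mapped linearly onto brain responses. The fMRI data below are random placeholders standing in for recorded responses to the same sentences.

```python
# Toy encoding-model sketch: language-model features -> (placeholder) brain responses.
import numpy as np
import torch
from transformers import GPT2Model, GPT2Tokenizer
from sklearn.linear_model import Ridge

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2Model.from_pretrained("gpt2")

sentences = [
    "The dog chased the cat.",
    "She read the letter slowly.",
    "Rain fell on the quiet street.",
    "The chef tasted the soup twice.",
    "He forgot where he parked the car.",
    "Children laughed at the old joke.",
]

features = []
for s in sentences:
    inputs = tokenizer(s, return_tensors="pt")
    with torch.no_grad():
        hidden = lm(**inputs).last_hidden_state         # shape: (1, n_tokens, 768)
    features.append(hidden.mean(dim=1).squeeze(0).numpy())  # one vector per sentence
X = np.stack(features)

# Placeholder "fMRI responses": one value per sentence for each of 10 voxels.
rng = np.random.default_rng(0)
Y = rng.normal(size=(len(sentences), 10))

# Linear encoding model: how well do the model's features predict brain responses?
encoder = Ridge(alpha=10.0).fit(X, Y)
print("in-sample fit on toy data:", encoder.score(X, Y))
```

In real analyses, the regression would be evaluated on held-out sentences against recorded brain data; the sketch only shows the overall shape of the comparison Fedorenko describes.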