Human Brain Uses Next-Word Prediction to Drive Language Processing

In a new study, MIT neuroscientists suggest that the underlying function of predictive language models resembles the function of language-processing centers in the human brain. Computer models that perform well on other types of language tasks do not show this similarity, offering evidence that the human brain may use next-word prediction to drive language processing. Joshua Tenenbaum and Evelina Fedorenko are the senior authors of the study, which appears this week in the Proceedings of the National Academy of Sciences. Martin Schrimpf, an MIT graduate student working in the Center for Brains, Minds, and Machines (CBMM), is the first author of the paper.
Most notably, these artificial intelligence models excel at predicting the next word in a string of text; this technology helps search engines and texting apps predict the next word a user is going to type. The most recent generation of predictive language models appears to learn something about the underlying meaning of language: these models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion.
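The models in the study are large neural networks, but the next-word prediction task they are trained on can be illustrated with a much simpler sketch. The toy bigram model below (all names and the tiny corpus are illustrative, not from the study) just counts which word most often follows each word and uses that count to predict a continuation:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, how often each possible next word follows it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Tiny illustrative corpus.
corpus = "the cat sat on the mat and the cat ate fish"
model = train_bigram_model(corpus)

print(predict_next(model, "the"))  # "cat" follows "the" most often here
print(predict_next(model, "on"))   # "the" is the only word seen after "on"
```

Modern predictive language models replace these raw counts with a neural network conditioned on the entire preceding context, but the objective, assigning probabilities to the next word, is the same.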