Deep language models (DLMs) provide a novel computational paradigm for understanding how the brain processes natural language. Unlike the symbolic, rule-based models described in psycholinguistics, DLMs encode words and their context as continuous numerical vectors.
We found a striking correspondence between the layer-by-layer sequence of embeddings from GPT2-XL and the temporal sequence of neural activity in language areas.
In addition, we found evidence for the gradual accumulation of recurrent information along the linguistic processing hierarchy.
These findings point to a connection between language processing in humans and DLMs, in which the layer-by-layer accumulation of contextual information in DLM embeddings matches the temporal dynamics of neural activity in high-order language areas.
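To make the layer-by-layer comparison concrete, below is a minimal sketch of how per-layer contextual embeddings can be extracted from GPT2-XL using the Hugging Face transformers library. This is not the authors' analysis pipeline: the neural data here are random placeholders standing in for electrode recordings, and the correlation step is purely illustrative of the kind of layer-versus-lag comparison the abstract describes.

```python
# Sketch: extract the layer-by-layer embeddings from GPT2-XL and compare
# each layer against a (simulated) neural response at different time lags.
import numpy as np
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2Model.from_pretrained("gpt2-xl", output_hidden_states=True)
model.eval()

text = "The quick brown fox jumps over the lazy dog"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple of 49 tensors for GPT2-XL: the input
# embedding plus one per transformer layer (48 layers), each of shape
# (batch, n_tokens, 1600).
layerwise = torch.stack(outputs.hidden_states)  # (49, 1, n_tokens, 1600)

# Embedding of the final token at every layer: a (49, 1600) matrix whose
# row index plays the role of "processing depth" in the comparison with
# the temporal unfolding of neural activity.
final_token_by_layer = layerwise[:, 0, -1, :].numpy()

# Placeholder neural data: one 1600-dim pattern per time lag relative to
# word onset (real analyses would use recorded neural signals instead).
rng = np.random.default_rng(0)
neural = rng.standard_normal((20, 1600))  # 20 lags x 1600 dims

# Correlate each (layer, lag) pair. The correspondence reported in the
# abstract would appear as the best-matching lag increasing with depth.
corr = np.array([
    [np.corrcoef(layer_vec, lag_vec)[0, 1] for lag_vec in neural]
    for layer_vec in final_token_by_layer
])  # shape (49, 20)
print(corr.shape)
```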
Deep language models have transformed our ability to model language. Recent studies have connected these neural-network-based models to human representations of language. Here, we show a striking similarity between the sequence of representations induced by the model and the brain's encoding of language over time during real-life comprehension.