AI-powered study uncovers how the brain turns thoughts into words

AI-powered research reveals how the brain processes language in real conversations, offering insights into speech disorders and pointing the way to better speech technology.

Scientists used AI to study how the brain processes language, uncovering a neural pattern that mirrors deep learning models in speech and meaning. (CREDIT: CC BY-SA 4.0)

For decades, scientists have worked to understand how the brain turns thoughts into words and words into meaning. Language is one of the most complex human abilities, involving many different brain regions working together. Researchers have traditionally studied language in isolated parts—speech sounds, grammar, and meaning—without a unified model to connect them.

Now, a team of scientists has developed a new approach using artificial intelligence (AI) to explore how the brain processes speech in everyday conversations. By combining brain recordings with deep learning models, they have created a system that predicts neural activity during real-world discussions.

A recent study, published in Nature Human Behaviour, recorded over 100 hours of natural conversations using electrocorticography (ECoG). This method, which involves placing electrodes directly on the brain, allowed scientists to track neural activity with remarkable precision.

The study, led by Dr. Ariel Goldstein from the Hebrew University of Jerusalem in collaboration with researchers at NYU Langone Health and Princeton University, used a powerful AI system called Whisper to analyze how speech is processed in the brain.

An ecological, dense-sampling paradigm for modelling neural activity during real-world conversations. (CREDIT: Nature Human Behaviour)

AI and the Human Brain: A Perfect Match

Whisper, a deep learning model designed by OpenAI, processes spoken language without relying on traditional grammar rules or symbolic structures like phonemes. Instead, it learns patterns from massive amounts of audio data and converts sound into meaningful words. Scientists used this model to break down speech into three key levels:

  • Acoustic features – the raw sound waves of speech.
  • Speech patterns – the way sounds form words.
  • Word meanings – the deeper understanding behind sentences.

By mapping these layers onto brain activity, the team discovered striking similarities between AI and human neural processes. Whisper’s deep-learning model closely mirrored how the brain moves from hearing sounds to understanding words.
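The comparison described above is typically done with a linear "encoding model": a regression that predicts each electrode's activity from a model's embeddings, then scores predictions on held-out words. The sketch below illustrates that general technique with synthetic data; it is not the study's code, and the embeddings, electrode counts, and noise level are invented stand-ins.

```python
# Illustrative sketch, not the study's code: a linear encoding model that
# predicts electrode activity from per-word embeddings (e.g. one Whisper
# layer), the general technique used to compare model layers with the brain.
# All data here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)

n_words, embed_dim, n_electrodes = 500, 64, 8
embeddings = rng.standard_normal((n_words, embed_dim))   # one row per word
weights = rng.standard_normal((embed_dim, n_electrodes))
neural = embeddings @ weights + 0.5 * rng.standard_normal((n_words, n_electrodes))

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression mapping embeddings to neural signals."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# Fit on the first 400 words, then evaluate on 100 held-out words.
W = fit_ridge(embeddings[:400], neural[:400])
pred = embeddings[400:] @ W

# Encoding performance: correlation of predicted vs. actual signal per electrode.
corrs = [np.corrcoef(pred[:, e], neural[400:, e])[0, 1] for e in range(n_electrodes)]
mean_corr = float(np.mean(corrs))
```

Scoring on held-out words is what makes the result meaningful: a high correlation there shows the embeddings genuinely capture variance in the neural signal rather than memorizing it.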

“We found that deep learning models align remarkably well with brain activity during natural conversation,” said Dr. Goldstein. “This suggests that AI models process language in a way similar to how the brain does.”

How the Brain Moves from Thought to Speech

The study revealed that the brain follows a specific sequence when processing speech. Before a person speaks, neural activity shifts from higher-order language areas—where words are formed—to speech-related regions that control vocal movements. When listening, the process moves in reverse, starting with sound perception and ending with understanding.
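One way to detect this kind of temporal flow is to fit the encoding model at a sweep of time lags relative to each word's onset and find where prediction peaks: before onset suggests production, after onset suggests comprehension. The sketch below demonstrates the lag-sweep idea on synthetic data in which an "electrode" encodes each word 300 ms before it is spoken; the sampling rate, lag range, and signal are all invented for illustration.

```python
# Illustrative sketch: find the time lag (relative to word onset) at which
# embeddings best predict an electrode. Synthetic data simulates a
# production-like electrode that encodes each word 300 ms BEFORE onset.
import numpy as np

rng = np.random.default_rng(1)

n_words, dim, fs = 300, 16, 100              # words, embedding dim, samples/s
onsets = np.arange(n_words) * fs // 2 + 200  # one word every 0.5 s
emb = rng.standard_normal((n_words, dim))
w = rng.standard_normal(dim)

# Build the synthetic electrode: background noise plus a word-specific
# response placed 30 samples (300 ms) before each word onset.
T = onsets[-1] + 400
neural = 0.1 * rng.standard_normal(T)
true_lag = -30
neural[onsets + true_lag] += emb @ w

def encoding_corr(lag):
    """Held-out correlation between embedding projection and signal at this lag."""
    samples = neural[onsets + lag]
    wh = np.linalg.lstsq(emb[:150], samples[:150], rcond=None)[0]
    return np.corrcoef(emb[150:] @ wh, samples[150:])[0, 1]

lags = range(-50, 51, 10)                    # -500 ms to +500 ms in 100 ms steps
best = max(lags, key=encoding_corr)
print(best * 10, "ms")                       # peaks at -300 ms (pre-onset)
```

A negative peak lag means the word's representation is present in the signal before the word is uttered, which is the signature of production; during comprehension the same sweep would peak at a positive lag.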

Brain areas responsible for sound processing, such as the superior temporal cortex, showed strong alignment with speech patterns. Meanwhile, higher-level language regions, like the inferior frontal gyrus, were more closely linked to word meaning. This structured flow of information explains how people can quickly form sentences and interpret speech in real time.

Acoustic, speech and language encoding model performance during speech production and comprehension. (CREDIT: Nature Human Behaviour)

The team used AI to predict neural responses to specific words and sounds. Even for conversations not included in the original dataset, the model accurately mapped different brain regions to their corresponding language functions. These findings suggest that deep learning could provide a new framework for studying natural language processing in the brain.

A New Frontier for Neuroscience and Technology

Understanding how the brain processes speech has major implications for both science and technology. The findings could help improve AI-powered speech recognition, making virtual assistants and transcription tools more natural and efficient. They could also contribute to the development of better communication aids for individuals with speech disorders.

Beyond practical applications, this research provides insight into one of humanity’s most essential functions—conversation. “By connecting different layers of language, we’re uncovering the mechanics behind something we all do naturally—talking and understanding each other,” Dr. Goldstein explained.

Enhanced encoding for language embeddings fused with auditory speech features. (CREDIT: Nature Human Behaviour)

The study marks a step forward in bridging neuroscience and AI. By modeling human language with deep learning, researchers are beginning to decode the brain’s complex communication system. This could lead to breakthroughs in artificial intelligence, brain-computer interfaces, and treatments for speech-related conditions.

Language has long been studied as a set of separate components—sounds, grammar, and meaning. But as this research shows, these elements are deeply interconnected in both human brains and AI models.

By using technology to explore how people speak and understand language in the real world, scientists are getting closer to unlocking the mysteries of communication.

Note: Materials provided above by The Brighter Side of News. Content may be edited for style and length.


Joseph Shavit
Space, Technology and Medical News Writer
Joseph Shavit is the head science news writer with a passion for communicating complex scientific discoveries to a broad audience. With a strong background in science, business, product management, media leadership, and entrepreneurship, Joseph possesses the unique ability to bridge the gap between business and technology, making intricate scientific concepts accessible and engaging to readers of all backgrounds.