Generative AI can read your mind, if you let it

Subjects need to be willing and can thwart the system to maintain privacy. But generative AI can read your mind, find neuroscientists in a US study.
2 May 2023

Volunteers listened to hours of podcasts while lying in a functional magnetic resonance imaging (fMRI) machine to create subject-specific training datasets for the generative AI algorithms to decode. Image credit: Shutterstock Generate.


If you thought that the ability of large language models (LLMs) to predict the next word in a sequence – the basis for advanced chatbots such as OpenAI’s ChatGPT – was impressive, then you’re in for a treat. A study published this week in Nature Neuroscience, titled ‘Semantic reconstruction of continuous language from non-invasive brain recordings’, takes language decoding systems to a whole new level. And the results (also available on bioRxiv, a free-to-read preprint repository) make for fascinating reading as researchers in the US demonstrate how generative AI can read your mind.

Thought interface

Much progress has been made in decoding speech articulation and other motor signals from intracranial recordings. And these breakthroughs can bring quality-of-life improvements to stroke victims and other patients whose speech has been impaired. But there are limits to how widely such approaches can be rolled out.

“While effective, these decoders require invasive neurosurgery, making them unsuitable for most other uses,” write the scientists in their paper. “Non-invasive language decoders could be more widely adopted, and may eventually help all people interact with technological devices through thought.”

The Huth Lab team, based at the University of Texas at Austin, wanted to discover whether it was possible to reconstruct the words that a willing subject is perceiving or imagining from non-invasive brain recordings. And the group realized that transformer-based LLMs, which learn statistical patterns in language, could help to reconstruct the continuous language that brain-imaging machines struggle to capture directly.

Modern functional magnetic resonance imaging (fMRI) equipment does a great job of pinpointing where neurons are firing. But the relatively slow rise and fall of the blood-oxygen-level-dependent (BOLD) signal – which is used to measure brain activity – points to a hurdle facing neuroscientists. Subjects process language much more quickly than fMRI machines can generate images.

For spoken English, a single fMRI image could represent 20 words. However, decoders can overcome this data acquisition barrier by predicting not individual words, but the most likely sequences of words – a capability that’s built into LLMs thanks to their vast parameter counts and training corpora.
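To make that idea concrete, here is a minimal, self-contained Python sketch of the sequence-level strategy: propose candidate word sequences, predict the BOLD pattern each would evoke using an encoding model, and keep whichever candidates best match the measured data. Everything below – the six-word vocabulary, the toy predict_bold encoding model, and the noise levels – is an invented stand-in for illustration, not the authors’ code or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a tiny vocabulary and a fake "brain" of 50 voxels.
VOCAB = ["the", "dog", "ran", "home", "and", "slept"]
N_VOXELS = 50
WORD_FEATURES = {w: rng.normal(size=N_VOXELS) for w in VOCAB}

def predict_bold(words):
    # Toy encoding model: the predicted response is the mean of the word
    # features (a stand-in for the regression models fitted in the study).
    return sum(WORD_FEATURES[w] for w in words) / max(len(words), 1)

def fit(words, measured_bold):
    # Higher is better: negative squared error between the predicted and
    # the measured response. Note this toy score ignores word order; a
    # real decoder also weights candidates by language-model probability.
    return -np.sum((predict_bold(words) - measured_bold) ** 2)

def decode(measured_bold, length=5, beam_width=3):
    # Beam search: grow candidate sequences word by word, keeping only the
    # beam_width sequences whose predicted BOLD best matches the data.
    beams = [[]]
    for _ in range(length):
        candidates = [seq + [w] for seq in beams for w in VOCAB]
        candidates.sort(key=lambda seq: fit(seq, measured_bold), reverse=True)
        beams = candidates[:beam_width]
    return beams[0]

true_sequence = ["the", "dog", "ran", "home", "and"]
measured = predict_bold(true_sequence) + rng.normal(scale=0.01, size=N_VOXELS)
print("decoded:", " ".join(decode(measured)))
```

The real decoder differs in scale and detail – candidate continuations are proposed by a language model rather than enumerated from a vocabulary, and the encoding models are fitted to each subject’s recordings – but the propose-predict-compare loop is the same.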

To determine whether generative AI can read your mind, non-invasively, the team first had to build a subject-specific training dataset. This step involved a willing subject lying inside an fMRI machine for 16 hours listening to podcasts while researchers gathered BOLD data, so that they could correlate the narrated stories with the brain activity triggered by the spoken words. Once complete, the group then tested its algorithm by attempting to predict the contents of an 1800-word story from the brain activity of the same subject, who had never heard that narration before.
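For a sense of what that correlation step can look like in practice, here is a minimal sketch of fitting a per-subject encoding model with ridge regression – a standard choice for fMRI encoding models – mapping features of the heard words to voxel responses. The array shapes and synthetic data below are invented for illustration; the study fitted its models on features derived from the actual podcast transcripts.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-ins: 2,000 fMRI timepoints, 64-dimensional word
# features, 500 voxels. The real dataset spans ~16 hours of listening.
n_timepoints, n_features, n_voxels = 2000, 64, 500
stimulus = rng.normal(size=(n_timepoints, n_features))  # features of the heard words
weights = rng.normal(size=(n_features, n_voxels))       # hidden "true" mapping
bold = stimulus @ weights + rng.normal(scale=1.0, size=(n_timepoints, n_voxels))

# Fit a ridge-regularized linear map from stimulus features to every
# voxel's response; alpha would normally be tuned by cross-validation.
train, test = slice(0, 1600), slice(1600, None)
model = Ridge(alpha=10.0)
model.fit(stimulus[train], bold[train])

# A well-fitted encoding model predicts held-out responses, which is
# what lets a decoder score candidate word sequences later on.
print("held-out R^2:", round(model.score(stimulus[test], bold[test]), 3))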

Comparing exact matches, the decoder was able to recreate many of the same words and phrases as heard by the subject. And what’s more, when the team considered words and phrases that – although different – shared the same meaning (measured using BERTScore), the agreement between actual and decoded sequences became stronger.
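Readers curious to try that kind of meaning-level comparison can reach for the open-source bert-score package, which implements the metric directly (the exact model and settings used in the paper may differ). The sentence pair below is invented for illustration, not taken from the study’s results.

```python
# pip install bert-score
# Compare a decoded sentence with the actual stimulus by meaning rather
# than exact wording, using the open-source bert-score implementation.
from bert_score import score

actual = ["i got up from the mattress and pressed my face against the window"]
decoded = ["i just walked over to the window and looked out through the glass"]

precision, recall, f1 = score(decoded, actual, lang="en")
print(f"BERTScore F1: {f1.item():.3f}")  # higher = closer in meaning
```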

Looking at the results up close, it’s not a word-for-word transcript. But even if you doubt that generative AI can read your mind exactly, it can certainly gather the gist of the subject’s thoughts. Although there are some caveats – thankfully, as it turns out.

Privacy safeguards

The team repeated the training and testing sequence across several willing subjects to examine the potential of generative AI to impact mental privacy. Currently, the system has to be trained extensively on a willing subject to succeed and requires the use of a large and expensive fMRI machine. “A person needs to spend up to 15 hours lying in an MRI scanner, being perfectly still, and paying good attention to stories that they’re listening to before this really works well on them,” confirmed Alex Huth, who leads the research.

Also, the training data is only meaningful for the individual it was gathered from. Decoders that were trained on cross-subject data performed barely above chance, and significantly worse than decoders trained on within-subject data, noted the group. And importantly, subjects can send decoders off course by imagining scenarios that don’t tally with the audio track, which shows that while generative AI can read your mind, it’s possible to resist.

It’s not the first time that scientists have demonstrated that generative AI can read your mind. Earlier this year, neuroscientists in Japan provided a glimpse into the capability of AI to re-create what people see by reading their brain scans. In this case, the team used Stable Diffusion to render images of a teddy bear, an aeroplane, a clock tower, and a train, which were remarkably similar to the examples shown to volunteers.

Also, digging into why neural network language models work so well at predicting brain responses to natural language, Alex Huth and his colleague Richard Antonello provide food for thought in another study. The scientists caution that the recent success in using LLMs to fit brain data doesn’t necessarily mean that one explains the computational principles of the other. In fact, the researchers note that the ability of LLMs to capture a wide range of linguistic phenomena could offer an alternative explanation for the brain-like similarities, rather than a shared capability for upcoming-word prediction.