BrainGPT thoughts to text – how AI can read your mind

If you thought you had to lie inside an fMRI machine for AI to puzzle out your thoughts, think again. BrainGPT systems have become portable.
13 December 2023

High-performance portable multichannel wireless EEG acquisition systems can read the minds of willing volunteers.

• BrainGPT – a “mind-reading” app – is significantly closer than you might have imagined.
• A significant advance in sensor technology has made BrainGPT a portable prospect.
• At the moment, the technology achieves around 40% accuracy on the BLEU-1 scale – but speech recognition moved quickly, and so could this.

Volunteers used to have to lie inside an fMRI machine and listen to podcasts before large language models (LLMs) could decode their thoughts. But not anymore. Portable, high-performance multichannel wireless EEG acquisition systems have been shown to be capable of BrainGPT-enabled thought-to-text conversion, which is a big deal.

The team behind the latest breakthrough – a group of researchers based at the Human-centric Artificial Intelligence Center in Sydney, Australia – presented their results at NeurIPS 2023, which took place this week in the US.

As a video of the BrainGPT system (embedded in the section below) reveals, the setup connects a so-called multitask EEG encoder with LLMs to decode coherent and readable sentences from EEG signals. In other words, BrainGPT can turn thoughts (measured by wearing a cap of non-invasive electrodes) into words (shown on a screen).
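
At a high level, that's a two-stage pipeline: an encoder turns windows of EEG signal into a token-like discrete representation, and a pre-trained language model decodes those tokens into text. Here's a minimal structural sketch – every name and signature below is an illustrative assumption for explanation, not the UTS team's actual code:

```typescript
// Illustrative two-stage thought-to-text pipeline. All names and
// signatures are assumptions made for explanation purposes.

// Stage 1: an EEG encoder maps raw multichannel signals to a
// sequence of discrete codes (roughly, "brain tokens").
interface EEGEncoder {
  encode(eegWindow: Float32Array[]): number[]; // channels x samples -> code ids
}

// Stage 2: a pre-trained language model, aligned with those codes,
// decodes them into a readable sentence.
interface LanguageDecoder {
  decode(codes: number[]): string;
}

function thoughtToText(
  eegWindow: Float32Array[],
  encoder: EEGEncoder,
  decoder: LanguageDecoder
): string {
  const codes = encoder.encode(eegWindow);
  return decoder.decode(codes);
}
```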

There’s a lot to unpack here, so let’s begin with the EEG acquisition system.

How to design a portable brain scanner

Thought-to-text conversion relies on associating patterns of brain activity with certain words and phrases. And there are a few ways of gathering that information.

To give algorithms the best chance of finding unique patterns that can be tagged to text, high-resolution data is ideal. And the closer you can place electrodes to the neurons themselves, the better. However, there's also risk and convenience to consider.

It’s notable that the University of Technology Sydney (UTS) team has managed to perform thought-to-text conversion using a non-invasive headset with dry electrodes that sit over the wearer’s hair.


This is the most convenient scenario for users, but unlike wet electrodes applied with conductive gel, or surgically implanted hardware, the arrangement typically produces the noisiest signals – making it more difficult for AI systems, such as BrainGPT, to read someone's mind.

The UTS team gets around this in a few ways, starting with materials selection. Its non-invasive sensors for the brain-machine interface are based on micropatterned epitaxial graphene.

Graphene – a one-atom-thick form of carbon, first isolated using Scotch tape by Andre Geim and Konstantin Novoselov at the University of Manchester, UK, work that earned them the 2010 Nobel Prize in Physics – is a so-called ‘wonder material.’

Graphene is incredibly strong, thin, and a great conductor of both electricity and heat. That combination of properties makes the nanomaterial well-suited to measuring cortical electrical activity.

“[Its] very thin nature, combined with high electrical conductivity, biocompatibility, corrosion resistance, and stability in sweat, are all very attractive characteristics for use as sensors for both invasive and noninvasive neural interfaces,” write Chin-Teng Lin – director of the UTS Human-centric AI Center – and colleagues in a related paper.

The epitaxial graphene, which is grown on the tips of micro-patterned silicon structures, was able to detect EEG signals with a signal-to-noise ratio comparable to gold-standard wet sensors.

Historically, brain-computer interfaces with high signal quality have been heavy and large, but the researchers have shown that it’s possible to build a portable setup.

The wearable EEG measurement device used in their BrainGPT system is built around a Cortex-M4 microcontroller and transmits 24-bit, high-resolution sampled signals, gathered from up to 64 channels, over Wi-Fi.
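
As a back-of-the-envelope check, those figures imply only a modest wireless bandwidth requirement. The sampling rate isn't quoted here, so the 500 Hz used below is an assumption (a common rate for research-grade EEG):

```typescript
// Rough raw data rate for the headset. Channel count and bit depth
// come from the article; the 500 Hz sampling rate is an assumption.
const channels = 64;
const bitsPerSample = 24;
const samplingRateHz = 500; // assumed

const bitsPerSecond = channels * bitsPerSample * samplingRateHz;
console.log(`${bitsPerSecond / 1000} kbit/s`);    // 768 kbit/s
console.log(`${bitsPerSecond / 8 / 1024} KiB/s`); // 93.75 KiB/s
```

At well under 1 Mbit/s, the raw stream sits comfortably within ordinary Wi-Fi capacity.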

How LLMs help to convert thoughts to words

Aside from the cleverly engineered brain-scanning hardware, another key feature is the use of LLMs. The group uses a framework dubbed DeWave that aligns EEG-to-text encodings with pre-trained language models.

Coming soon to a cortex near you. Very, very near you…

This means that while there may still be some uncertainty in the raw measurements themselves, the LLM can pick out the most likely next word based on the context of the output so far. And this BrainGPT approach helps to push up the overall accuracy of the system.
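
To make that concrete, here's a toy rescoring step in the same spirit: the EEG decoder proposes candidate words with confidence scores, and a language model weighs them by how well they fit the sentence so far. This is a simplified illustration of the general idea, not the DeWave algorithm, and `lmProb` is a stub standing in for a real LLM query:

```typescript
// Toy next-word rescoring: combine a noisy EEG decoder's scores with
// language-model context scores. A simplified stand-in for what
// frameworks like DeWave do, not the actual algorithm.

type Candidate = { word: string; eegProb: number };

// Stub language model: probability of `word` given the context.
// A real system would query a pre-trained LLM here.
function lmProb(context: string[], word: string): number {
  const table: Record<string, number> = { hungry: 0.2, hungary: 0.0001 };
  return table[word] ?? 0.01;
}

function pickNextWord(context: string[], candidates: Candidate[]): string {
  let best = candidates[0].word;
  let bestScore = -Infinity;
  for (const c of candidates) {
    // combine in log space so neither source dominates numerically
    const score = Math.log(c.eegProb) + Math.log(lmProb(context, c.word));
    if (score > bestScore) {
      bestScore = score;
      best = c.word;
    }
  }
  return best;
}

// The raw EEG signal can't separate similar words, but context can:
console.log(
  pickNextWord(["I", "feel"], [
    { word: "hungary", eegProb: 0.55 },
    { word: "hungry", eegProb: 0.45 },
  ])
); // -> "hungry"
```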

The team reports a translation accuracy of around 40% on BLEU-1 – a score between zero and one (commonly quoted as a percentage) that measures how similar machine-generated output is to high-quality reference text.
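
For reference, BLEU-1 boils down to clipped unigram precision multiplied by a brevity penalty. A minimal single-reference sketch:

```typescript
// Minimal BLEU-1: clipped unigram precision x brevity penalty, for a
// single hypothesis scored against a single reference.
function bleu1(hypothesis: string, reference: string): number {
  const hyp = hypothesis.toLowerCase().split(/\s+/).filter(Boolean);
  const ref = reference.toLowerCase().split(/\s+/).filter(Boolean);

  // count reference unigrams
  const refCounts = new Map<string, number>();
  for (const w of ref) refCounts.set(w, (refCounts.get(w) ?? 0) + 1);

  // clipped matches: each reference occurrence is credited at most once
  let matches = 0;
  for (const w of hyp) {
    const remaining = refCounts.get(w) ?? 0;
    if (remaining > 0) {
      matches++;
      refCounts.set(w, remaining - 1);
    }
  }

  const precision = hyp.length > 0 ? matches / hyp.length : 0;
  // brevity penalty: punish hypotheses shorter than the reference
  const bp =
    hyp.length >= ref.length ? 1 : Math.exp(1 - ref.length / hyp.length);
  return precision * bp;
}

console.log(bleu1("the cat sat on a mat", "the cat sat on the mat")); // ~0.83
```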

Modern speech recognition systems have BLEU-1 scores closer to 90%. And while it’s clear that the BrainGPT system has some way to go, it’s worth recalling where speech recognition was a decade or so ago.

Speech recognition systems back then needed to be trained for hours by each user before they generated sensible results – whereas today, they work well straight out of the box.

Considering the progress being made by a raft of neurotech start-ups, the prospects for thought-to-text conversion are very bright indeed. Already, companies are making use of relatively crude brain wave measurements to inform users about their performance levels and alertness.

What’s more, brain signals can be converted not just into text, but also into speech and expressions that can be rendered using a digital avatar. And these breakthroughs will help people who would otherwise be unable to communicate.

In previous work, the UTS group has shown that eight non-invasive graphene sensors are sufficient to control a four-legged robot using brain activity alone. And there are lots of inspiring examples of what can be done using consumer hardware that’s starting to become available at relatively affordable prices.

Sour lemons and JavaScript

One of the most curious examples (assuming that the video ‘I literally connected my brain to GPT-4 with JavaScript’ is for real) is being able to trigger applications by simply thinking about sour lemons.

The approach makes use of an SDK available with the Neurosity Crown – a device that claims to increase your productivity by boosting your brain’s concentration.

As well as reporting on the wearer’s focus level, the Crown’s software interface lets users train algorithms to recognize custom thought patterns. Putting the feature to the test, the video’s author imagined biting into a sour lemon, relaxed, and repeated the exercise 30 times until the thought pattern had been fully captured.

Using JavaScript, it’s possible to subscribe to those thought-pattern events as observables and run code whenever one fires – letting that data trigger applications.
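
In outline, the pattern looks like the sketch below, which follows the publicly documented @neurosity/sdk conventions; treat the exact method names and the "sourLemon" training label as assumptions that may differ across SDK versions:

```typescript
// Triggering code from a trained thought pattern, following the
// pattern shown in the Neurosity SDK docs. Method names and the
// "sourLemon" label are assumptions; check the current SDK reference.
import { Neurosity } from "@neurosity/sdk";

const neurosity = new Neurosity({ deviceId: "your-device-id" });

async function main() {
  await neurosity.login({
    email: "you@example.com",
    password: "your-password",
  });

  // kinesis() returns an observable that emits whenever the trained
  // thought pattern (imagining biting a sour lemon) is detected
  neurosity.kinesis("sourLemon").subscribe(() => {
    console.log("Sour lemon detected – launching app...");
    // run anything here: an API call, a shell command, a GPT-4 prompt...
  });
}

main().catch(console.error);
```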

On TechHQ we’ve written previously about the rise of neuromarketing and using brainwaves to capitalize on sales opportunities, and the future could be a strange one indeed. For example, what if your smartphone notified you of nearby sandwich shops when it knew that you were feeling hungry?

Buckle up – brain-reading AI is on the horizon.