Neuromorphic roadmap: are brain-like processors the future of computing?

Neuromorphic chips could reduce energy bills for AI developers as well as emit useful cybersecurity signals.
11 December 2023

Rethinking chip design: brain-inspired asynchronous neuromorphic devices are gaining momentum as researchers report on progress.

• The future of computing might not look anything like computing as we know it.
• Neuromorphic chips would function much more like brains than the chips we have today.
• Neuromorphic chips and AI could be a combination that takes us much further – without the energy bills.

A flurry of new chips announced recently by Qualcomm, NVIDIA, and AMD has ramped up competition to build the ultimate PC processor. And while the next couple of years are shaping up to be good ones for consumers of laptops and other PC products, the future of computing could end up looking quite different to what we know right now.

Despite all of the advances in chipmaking, which have shrunk feature sizes and packed billions of transistors onto modern devices, the computing architecture remains a familiar one. General-purpose, all-electronic, digital PCs based on binary logic are, at their heart, so-called Von Neumann machines.

Von Neumann machines versus neuromorphic chips

At its most basic, a Von Neumann computing machine features a memory store to hold instructions and data; control and logic units; plus input and output devices.

Demonstrated more than half a century ago, the architecture has stood the test of time. However, bottlenecks have emerged – provoked by growing application sizes and exponential amounts of data.

Processing units need to fetch their instructions and data from memory. And while on-chip caches help reduce latency, there’s a disparity between how fast the CPU can run and the rate at which information can be supplied.

What’s more, having to bus data and instructions between the memory and the processor not only affects chip performance, it drains energy too.
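The scale of that fetch penalty can be sketched with a simple roofline-style estimate. All of the numbers below (peak compute rate, memory bandwidth) are invented for illustration, not measurements of any real chip:

```python
# Back-of-the-envelope illustration of the Von Neumann bottleneck:
# a processor that can do 100 GFLOP/s but whose memory bus delivers
# 20 GB/s spends most of its time waiting whenever the workload does
# little arithmetic per byte fetched. Figures are illustrative only.

def time_seconds(flops, bytes_moved, peak_flops=100e9, bandwidth=20e9):
    """Roofline-style estimate: runtime is bounded by the slower of
    the compute side and the memory side."""
    return max(flops / peak_flops, bytes_moved / bandwidth)

# A dot product of two vectors of n float64s: 2n FLOPs, 16n bytes read.
n = 1_000_000
compute_only = (2 * n) / 100e9           # time if compute were the limit
actual = time_seconds(2 * n, 16 * n)     # memory traffic dominates
print(actual / compute_only)             # the bus, not the ALU, sets the pace
```

For this workload the hypothetical chip spends roughly 40 times longer moving data than it would need for the arithmetic alone, which is exactly the disparity the article describes.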

Chip designers have loaded up processors with multiple cores, clustered CPUs, and engineered other workarounds to squeeze as much performance as they can from Von Neumann machines. But this complexity adds cost and requires cooling.

It’s often said that the best solutions are the simplest, and today’s chips based on Von Neumann principles are starting to look mighty complicated. There are resource constraints too, made worse by the boom in generative AI, and these could steer the future of computing away from its Von Neumann origins.

Neuromorphic chips and AI – a dream combination?

Large language models (LLMs) have wowed the business world and enterprise software developers are racing to integrate LLMs developed by OpenAI, Google, Meta, and other big names into their products. And competition for computing resources is fierce.

OpenAI had to pause new subscriptions to its paid-for ChatGPT service as it couldn’t keep up with demand. Google, for the first time, is reportedly spending more on compute than it is on people – as access to high-performance chips becomes imperative to revenue growth.

Writing in a Roadmap for Unconventional Computing with Nanotechnology (available on arXiv and submitted to Nano Futures), experts highlight the fact that the computational need for artificial intelligence is growing at a rate 50 times faster than Moore’s law for electronics.

LLMs feature billions of parameters – essentially a very long list of decimal numbers – which have to be encoded in binary so that processors can interpret whether artificial neurons fire or not in response to their software inputs.
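What "encoded in binary" means for a single artificial neuron can be sketched in a few lines. This is a minimal illustration, not how any production LLM runtime works; the weights and threshold are invented:

```python
import struct

# One artificial neuron: each weight is a decimal number that the
# processor actually stores as a bit pattern (here, IEEE-754 float32),
# and the neuron "fires" when its weighted input sum crosses a threshold.

def neuron_fires(inputs, weights, threshold=0.0):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return activation > threshold

weights = [0.5, -1.25, 2.0]
# The same weights, as the 32-bit patterns held in memory:
bit_patterns = [format(struct.unpack("<I", struct.pack("<f", w))[0], "032b")
                for w in weights]

print(neuron_fires([1.0, 1.0, 1.0], weights))  # 0.5 - 1.25 + 2.0 = 1.25 > 0 → True
```

Multiply this by billions of weights, each fetched from memory for every token generated, and the resource cost of running LLMs on conventional hardware becomes clear.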

So-called ‘neural engines’ can help accelerate AI performance by hard-coding common instructions, but running LLMs on conventional computing architecture is resource-intensive.

Researchers estimate that data processing and transmission worldwide could be responsible for anywhere between 5 and 15% of global energy consumption. And this forecast was made before ChatGPT existed.

But what if developers could switch from modeling artificial neurons in software to building them directly in hardware instead? Our brains can perform all kinds of supercomputing magic using a few watts of power (orders of magnitude less than computers) and that’s thanks to physical neural networks and their synaptic connections.


Rather than having to pay an energy penalty for shuffling computing instructions and data into a different location, calculations can be performed directly in memory. And developers are busy working on a variety of neuromorphic (brain-inspired) chip ideas to enable computing with small energy budgets, which brings a number of benefits.
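One popular in-memory scheme uses crossbar arrays, where weights are stored as device conductances and physics does the arithmetic. The toy model below assumes idealized devices (no noise or wiring resistance) and invented values:

```python
# Why crossbar arrays compute "in memory": with weights stored as
# conductances G[i][j], applying input voltages V to the columns yields
# output currents I[i] = sum_j G[i][j] * V[j] (Ohm's and Kirchhoff's
# laws) -- a matrix-vector multiply performed where the weights live,
# with no fetch from a separate memory.

def crossbar_mac(conductances, voltages):
    """Model the analog multiply-accumulate of one idealized crossbar read."""
    return [sum(g * v for g, v in zip(row, voltages)) for row in conductances]

G = [[0.1, 0.2],
     [0.3, 0.4]]      # weights stored as device conductances (siemens)
V = [1.0, 0.5]        # input vector applied as voltages
print(crossbar_mac(G, V))  # output currents ≈ [0.2, 0.5] amps
```

Because the multiply-accumulate happens in the array itself, the energy cost of shuttling weights over a bus simply disappears.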

“It provides hardware security as well, which is very important for artificial intelligence,” comments Jean Anne Incorvia – who holds the Fellow of Advanced Micro Devices (AMD) Chair in Computer Engineering at The University of Texas at Austin, US – in the roadmap paper. “Because of the low power requirement, these architectures can be embedded in edge devices that have minimal contact with the cloud and are therefore somewhat insulated from cloud‐borne attacks.”

Neuromorphic chips emit cybersecurity signals

What’s more, because neuromorphic computing devices would consume such tiny amounts of power, hardware attacks become much easier to detect: the tell-tale rise in energy demand would stand out clearly in side-channel monitoring.
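The monitoring idea reduces to a simple anomaly check on the power trace. The trace values, baseline, and threshold below are all invented for illustration:

```python
# Toy illustration of the side-channel point in the text: if a
# neuromorphic device normally sips a few milliwatts, an attack that
# drives up power draw stands out clearly in the monitored trace.

def flag_anomalies(power_mw, baseline_mw=5.0, factor=3.0):
    """Flag samples whose draw exceeds a multiple of the quiet baseline."""
    limit = baseline_mw * factor
    return [i for i, p in enumerate(power_mw) if p > limit]

trace = [4.8, 5.1, 4.9, 42.0, 41.5, 5.0]   # spike while an implant runs
print(flag_anomalies(trace))               # → [3, 4]
```

On a conventional chip burning tens of watts, the same spike would vanish into the noise; the low baseline is what makes the signal useful.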

The future of computing could turn out to be one involving magnetic neural network crossbar arrays, redox memristors, 3D nanostructures, biomaterials and more, with designers of neuromorphic devices using brain functionality as a blueprint.

“Communication strength depends on the history of synapse activity, also known as plasticity,” writes Aida Todri‐Sanial – who leads the NanoComputing Research Lab at Eindhoven University of Technology (TU/e) in The Netherlands. “Short‐term plasticity facilitates computation, while long‐term plasticity is attributed to learning and memory.”
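The two kinds of plasticity Todri‐Sanial describes can be sketched with a simple Hebbian-style model: a short-term facilitation term that decays between events, and a long-term weight change when pre- and post-synaptic neurons fire together. The rule and constants below are an illustrative toy, not taken from the roadmap paper:

```python
# Toy synapse with two timescales of plasticity:
# - short-term: a facilitation term boosted by recent activity, decaying
#   back toward rest between events (aids computation);
# - long-term: the weight itself grows on coincident pre/post firing
#   (learning and memory).

def update_synapse(weight, facilitation, pre_active, post_active,
                   lr=0.05, boost=0.2, decay=0.5):
    facilitation *= decay                      # short-term component fades
    if pre_active:
        facilitation += boost                  # recent activity facilitates
    if pre_active and post_active:
        weight += lr * (1.0 + facilitation)    # coincident firing -> learning
    return weight, facilitation

w, f = 1.0, 0.0
for pre, post in [(1, 1), (1, 0), (1, 1)]:     # a short spike history
    w, f = update_synapse(w, f, pre, post)
print(round(w, 4), round(f, 4))
```

The weight ends up larger after the second coincident event than the first, because the facilitation left over from recent activity amplifies the update – history shaping communication strength, as the quote describes.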

Neuromorphic computing is said to be much more forgiving of switching errors compared with Boolean logic. However, one issue holding back progress is poor tolerance to device-to-device variation. Conventional chip makers have taken years to optimize their fabrication processes, so the future of computing may not happen overnight.

However, different ways of doing things may help side-step some hurdles. For example, researchers raise the prospect of being able to set model weights using an input waveform rather than having to read through billions of individual parameters.

Also, the more we learn about how the brain functions, the more designers of future computing devices can mimic those features in their architectures.

Giving a new meaning to sleep mode

“During awake activity, sensory signals are processed through subcortical layers in the cortex and the refined outputs reach the hippocampus,” explain Jennifer Hasler and her collaborators, reflecting on what’s known about how the brain works. “During the sleep cycle, these memory events are replayed to the neocortex where sensory signals cannot disrupt the playback.”

Today, closing your laptop – putting the device to sleep – is mostly about power-saving. But perhaps the future of computing will see chips that utilize sleep more like the brain. With sensory signals blocked from disrupting memory events, sleeping provides a chance to strengthen synapses, encode new concepts, and expand learning mechanisms.

And if these ideas sound far-fetched, it’s worth checking out the computing capabilities of slime mold powered by just a few oat flakes. The future of computing doesn’t have to resemble a modern data center, and thinking differently could dramatically lower those energy bills.