Why graphics cards aren’t just for gamers anymore

GPU manufacturers are enjoying a new lease of life thanks to AI/ML deployments.
24 October 2018

Graphics processing units aren’t just for gamers – they’re well suited to AI, too. Source: Shutterstock

Anyone who’s played a computer game probably knows about graphics cards. These are the pieces of hardware that slot into a computer’s PCIe slots and create those almost life-like depictions of whatever game is the flavor of the month – from Call of Duty to Fortnite.

As hardware dedicated to the large-scale number crunching required to render game images, the cards have their own memory, their own cooling fans and heatsinks and, at their heart, their own GPUs (graphics processing units).

GPUs have been used for a few years now in cryptography and cryptocurrency mining, and more recently the units have been deployed in artificial intelligence and deep-learning analytics. GPUs’ ability to run millions of simple computations in parallel, getting through the workload in record time, makes them well suited to big data analysis.

If your business is looking to experiment with its own AI codebase, getting an array of GPUs to undertake the number-crunching may be a wise move. In the simplest layman’s terms, artificial intelligence in whatever guise usually relies on a learning period before it can be deployed successfully – and on ongoing fine-tuning of its “intellect” after it is turned to real-world tasks, too.

The computing power required to absorb so many data sets is considerable but, happily for GPU makers (which are enjoying boosted demand for their products outside the traditional gaming market), that workload suits the raw grunt GPUs provide.

In the grand scheme of things, graphics cards and GPUs are common, easy to source, relatively cheap, not especially power hungry, and easily addressable from your own code.

Programming libraries often expose Python interfaces (Python being one of the more common programming languages, and one widely deployed in ML/AI) and can spread computations across however many GPUs are available – broadly speaking, the more GPUs you use, the faster the learning process.
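
To make that concrete, here is a minimal sketch that assumes PyTorch as the Python library; the toy network, layer sizes, and batch shape are made up for illustration. It wraps the model in nn.DataParallel so each batch is split across every GPU the machine exposes.

    # Minimal sketch, assuming PyTorch; the model and sizes are illustrative only.
    import torch
    import torch.nn as nn

    # A small placeholder network; real workloads would be far larger.
    model = nn.Sequential(
        nn.Linear(1024, 512),
        nn.ReLU(),
        nn.Linear(512, 10),
    )

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # DataParallel splits each input batch across all visible GPUs
    # and gathers the results back together.
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)
    model = model.to(device)

    # One forward pass on a dummy batch; the work is shared across available GPUs.
    batch = torch.randn(256, 1024, device=device)
    output = model(batch)
    print(output.shape)  # torch.Size([256, 10])

The same pattern scales to full training loops: the library handles the per-GPU splitting, so adding cards mostly means more data processed in the same time.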

Across the enterprise, AI has already been applied in fields as far-reaching as astronomy, biology, chemistry, physics, data mining, manufacturing, and finance – all computationally intense areas.

Compared with CPU-only training, GPU-accelerated AI routines can run up to 50 times faster when working through the vast amounts of data needed to build the models on which their “intelligence” is based.
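
As a rough way to see that difference on your own hardware, the sketch below times the same large matrix multiplication on the CPU and then on a GPU, if one is present. It again assumes PyTorch, and the matrix size is arbitrary; real speedups depend entirely on the hardware and workload and will not necessarily reach 50 times.

    # Rough timing sketch: the same matrix multiplication on CPU and GPU.
    # Assumes PyTorch; the matrix size is arbitrary and results vary by hardware.
    import time
    import torch

    def time_matmul(device: torch.device, n: int = 4096) -> float:
        a = torch.randn(n, n, device=device)
        b = torch.randn(n, n, device=device)
        if device.type == "cuda":
            torch.cuda.synchronize()  # make sure setup work has finished
        start = time.perf_counter()
        _ = a @ b
        if device.type == "cuda":
            torch.cuda.synchronize()  # wait for the GPU kernel to complete
        return time.perf_counter() - start

    cpu_time = time_matmul(torch.device("cpu"))
    print(f"CPU: {cpu_time:.3f} s")

    if torch.cuda.is_available():
        gpu_time = time_matmul(torch.device("cuda"))
        print(f"GPU: {gpu_time:.3f} s ({cpu_time / gpu_time:.1f}x faster)")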

As a single example, Nvidia’s RAPIDS platform is available now to run in Docker environments and as source code; details are in the company’s press release.
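
For a flavor of what RAPIDS code looks like, the sketch below uses cuDF, the platform’s pandas-style DataFrame library that keeps data and computation on the GPU. It assumes a working RAPIDS installation (for instance inside one of Nvidia’s RAPIDS Docker images), and the column names and values are made up for illustration.

    # Minimal sketch of RAPIDS cuDF, which mirrors the pandas API on the GPU.
    # Assumes a working RAPIDS install; the data below is made up for illustration.
    import cudf

    # Build a small DataFrame directly in GPU memory.
    df = cudf.DataFrame({
        "store": ["north", "south", "north", "south"],
        "sales": [120.0, 80.0, 95.0, 130.0],
    })

    # Familiar pandas-style operations run as GPU kernels.
    print(df.groupby("store").mean())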