AWS’ Neo-AI drives machine learning at the edge

AWS launches open source Neo-AI to drive adoption of machine learning models in edge computing.
29 January 2019


Back in 2018, AWS released SageMaker Neo, a machine learning feature that lets developers train a model once and then run it anywhere, in the cloud or at the edge.

Now, the company is releasing the code as the open-source Neo-AI project under the Apache Software License.

This will allow processor vendors, device makers, and deep learning developers to bring new and independent innovations in machine learning to a variety of hardware platforms.

In most cases, optimizing a machine learning model for multiple hardware platforms can be difficult as developers need to tune models manually for each platform’s hardware and software configuration.

This is especially challenging for edge devices, which tend to be constrained in compute power and storage. These constraints limit the size and complexity of the models that they can run.

Therefore, developers spend weeks or months manually tuning a model to get the best performance. The tuning process requires rare expertise in optimization techniques and deep knowledge of the hardware. Even then, it typically requires considerable trial and error to get good performance because good tools aren’t readily available.

Differences in software further complicate this effort. If the device runs a different version of the framework than the one the model was built with, the model can be incompatible with the device.

This leads developers to limit themselves to only the devices that exactly match their model’s software requirements. All of this makes it very difficult to quickly build, scale, and maintain machine learning applications.

At its core, Neo-AI reduces the time and effort needed to tune machine learning models for deployment on multiple platforms by automatically optimizing these models with no loss of accuracy.

The models are converted into an efficient common format, which also reduces software compatibility problems and allows them to run on resource-constrained devices.
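The workflow described here, train a model once and then automatically produce an optimized artifact for each hardware platform, can be sketched in a few lines of Python. Everything below is purely illustrative: `compile_for` and the target table are hypothetical stand-ins, not the actual Neo-AI API.

```python
# Toy sketch of "train once, deploy anywhere": one trained model is
# lowered to a tuned artifact per hardware target. All names here are
# hypothetical stand-ins, not the real Neo-AI API.

# Per-target tuning knobs a compiler might otherwise force the
# developer to pick by hand for every platform.
TARGET_CONFIGS = {
    "intel-x86":  {"layout": "NCHW", "vector_width": 8},
    "nvidia-gpu": {"layout": "NHWC", "vector_width": 32},
    "arm-cortex": {"layout": "NCHW", "vector_width": 4},
}

def compile_for(model_name: str, target: str) -> dict:
    """Produce a (pretend) deployable artifact for one target."""
    cfg = TARGET_CONFIGS[target]
    return {"model": model_name, "target": target, **cfg}

# One model in, one optimized artifact per platform out --
# no manual per-platform tuning by the developer.
artifacts = {t: compile_for("resnet50", t) for t in TARGET_CONFIGS}
```

The point of the sketch is the shape of the workflow: the developer supplies one model and a list of targets, and the compiler owns the per-platform decisions.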

This, in turn, could unlock innovation in areas such as autonomous vehicles, home security, and anomaly detection.

Neo-AI currently supports platforms from Intel, NVIDIA, and ARM, with support for Xilinx, Cadence, and Qualcomm coming soon.

“Intel’s vision of Artificial Intelligence is motivated by the opportunity for researchers, data scientists, developers, and organizations to obtain real value from advances in deep learning,” said Naveen Rao, General Manager of the Artificial Intelligence Products Group at Intel.

“Xilinx provides the FPGA hardware and software capabilities that accelerate machine learning inference applications in the cloud and at the edge,” said Sudip Nag, Corporate Vice President at Xilinx.

Finally, Jem Davies, Fellow, General Manager, and Vice President of the Machine Learning Group at ARM, added: “ARM’s vision of a trillion connected devices by 2035 is driven by the additional consumer value derived from innovations like machine learning.”