Meta, Microsoft release new AI language model for commercial use

The tech giant unveiled its first large language model for anyone to use—for free.
19 July 2023

The latest AI model by Meta, LLaMA 2, is available to major cloud providers, including Microsoft Corp. Source: Shutterstock

• The latest AI model by Meta, LLaMA 2, is available to major cloud providers, including Microsoft.
• Qualcomm is scheduled to make LLaMA 2-based AI implementations available on flagship smartphones and PCs starting in 2024.
• LLaMA models are available at three levels of pre-training.

Meta has intensified the generative AI race by unveiling its latest large language model, LLaMA 2, which will be open-source and free for commercial and research use. The move puts the social media company in a position to go head-to-head with OpenAI’s GPT-4, which powers tools like ChatGPT and Microsoft Bing.

Meta’s press release frames the decision to open up LLaMA as a way to give businesses, startups, and researchers access to more AI tools, allowing for experimentation as a community. In short, the tech giant is sticking to its long-held belief that allowing all sorts of programmers to tinker with technology is the best way to improve it.

LLaMA 2 will not be limited to researchers. Meta said it is open-sourcing the AI model for commercial use through partnerships with major cloud providers, including Microsoft Corp. “We believe an open approach is the right one for the development of today’s AI models, especially those in the generative space where the technology is rapidly advancing,” Meta said in a blog post on Tuesday (July 18).

The Facebook parent company believes making its large language model open-source is a safer option. “Opening access to today’s AI models means a generation of developers and researchers can stress test them, identifying and solving problems fast, as a community. By seeing how others use these tools, our teams can learn from them, improve those tools, and fix vulnerabilities,” the company stated.

Separately, Mark Zuckerberg, in a post on his personal Facebook page, said Meta had a long history of open-sourcing its infrastructure and AI work. “From PyTorch, the leading machine learning framework, to models like Segment Anything, ImageBind, and Dino, to basic infrastructure as part of the Open Compute Project. This has helped us build better products by driving progress across the industry,” he claimed.

Mark Zuckerberg’s Facebook post.

The move also positions Meta alongside other tech giants as a pivotal contributor to the AI arms race. For context, Zuckerberg has said that incorporating AI improvements into all the company’s products and algorithms is a priority, and that Meta is spending record amounts on AI infrastructure. According to Meta, demand from researchers for LLaMA 1 has been massive, with more than 100,000 requests for access to the large language model.

What’s new with the latest AI model by Meta?

LLaMA 2 is the first project to come out of the company’s generative AI group, a new team assembled in February 2023. According to Zuckerberg, LLaMA 2 comes as pre-trained and fine-tuned models with 7 billion, 13 billion, and 70 billion parameters. “LLaMA 2 was pre-trained on 40% more data than LLaMA 1 and had improvements to its architecture,” he said.
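For a rough sense of scale (a back-of-the-envelope estimate, not a figure from Meta), storing the weights alone at 16-bit precision works out to about two bytes per parameter:

```python
# Back-of-the-envelope estimate (not a figure from Meta): raw weight
# storage for each LLaMA 2 size at fp16, i.e. 2 bytes per parameter.
def fp16_weight_gib(params_billions: float) -> float:
    """Approximate fp16 weight storage in GiB."""
    return params_billions * 1e9 * 2 / 2**30

for size in (7, 13, 70):
    print(f"{size}B parameters -> ~{fp16_weight_gib(size):.0f} GiB of weights")
```

Actual memory use during inference is higher once activations and the key-value cache are counted; this only sizes the checkpoint itself, but it makes clear why the 7B model is the one targeted at laptops and phones while 70B remains a cloud-scale workload.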

Meta also says LLaMA 2 “outperforms” other LLMs like Falcon and MPT on reasoning, coding, proficiency, and knowledge tests. For the fine-tuned models, Zuckerberg said Meta had collected more than one million human annotations and applied supervised fine-tuning and reinforcement learning with human feedback (RLHF), with leading results on safety and quality.

Meta developed and released the LLaMA 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Source: Meta

Meta also announced that Microsoft will distribute the new version of the AI model through its Azure cloud service, and that it will run on the Windows operating system.

Meta said in its blog post that Microsoft was its “preferred partner” for the release. In the generative AI race, Microsoft has emerged as the clear leader through its investment and technology partnership with ChatGPT creator OpenAI, which charges for access to its model.

“Starting today, LLaMA 2 is available in the Azure AI model catalog, enabling developers using Microsoft Azure to build with it and leverage their cloud-native tools for content filtering and safety features. It is also optimized to run locally on Windows, giving developers a seamless workflow as they bring generative AI experiences to customers across different platforms,” the tech giant said.

Meta said LLaMA 2 is available through Amazon Web Services (AWS), Hugging Face, and other providers.
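To give a flavor of what building with LLaMA 2 looks like for developers, the fine-tuned chat variants expect prompts in a specific layout: each user turn is wrapped in [INST] ... [/INST], with an optional <<SYS>> system block inside the first instruction. A minimal sketch of that layout in Python (the function name is illustrative, not part of any official API):

```python
# Minimal sketch of the LLaMA 2 chat prompt layout: a user message is
# wrapped in [INST] ... [/INST]; an optional <<SYS>> system block is
# prepended inside the instruction. Function name is illustrative.
def build_llama2_prompt(user_msg, system_msg=None):
    if system_msg is not None:
        return (
            f"<s>[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n"
            f"{user_msg} [/INST]"
        )
    return f"<s>[INST] {user_msg} [/INST]"

print(build_llama2_prompt("What is open-source AI?", "You are a concise assistant."))
```

The string this produces would then be tokenized and passed to whichever hosted or local copy of the model a developer is using, whether on Azure, AWS, Hugging Face, or on-device.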

Qualcomm partners with Meta to run LLaMA 2 on phones

Shortly after Meta unveiled LLaMA 2, Qualcomm announced that it is partnering with the tech giant on the new large language model. “Qualcomm Technologies Inc. and Meta are working to optimize the execution of Meta’s LLaMA 2 large language models directly on-device – without relying on the sole use of cloud services,” Qualcomm said.

For the US chip designer, the ability to run generative AI models like LLaMA 2 on devices such as smartphones, PCs, VR/AR headsets, and vehicles allows developers to save on cloud costs and provide users with private, more reliable, and personalized experiences. Qualcomm is scheduled to make LLaMA 2-based AI implementations available on devices powered by Snapdragon from 2024 onwards.

“We applaud Meta’s approach to open and responsible AI and are committed to driving innovation and reducing barriers-to-entry for developers of any size by bringing generative AI on-device,” said Durga Malladi, senior vice president and general manager of technology, planning, and edge solutions businesses, Qualcomm Technologies, Inc. 

Malladi believes that to scale generative AI into the mainstream effectively, AI will need to run on both the cloud and devices at the edge, such as smartphones, laptops, vehicles, and IoT devices.