Microsoft and Meta expand their AI partnership with Llama 2 on Azure and Windows 

The rapid pace of AI innovation over the past few months has captured our collective imagination with its potential to transform industries and the way we work.

Today at Microsoft Inspire, Meta and Microsoft announced support for the Llama 2 family of large language models (LLMs) on Azure and Windows. Llama 2 is designed to help developers and organizations build generative AI-powered tools and experiences. Meta and Microsoft share a commitment to democratizing AI and its benefits, and we are delighted that Meta is taking an open approach with Llama 2. We offer developers choice in the models they build on, supporting open and frontier models, and we are proud to be Meta's preferred partner as it releases its new version of Llama 2 to commercial customers for the first time.

Starting today, the 7B, 13B, and 70B-parameter Llama 2 models can be fine-tuned and deployed more easily and safely on Azure, the platform for the most widely used frontier and open models. In addition, Llama 2 is optimized to run locally on Windows. Windows developers will be able to use Llama 2 by targeting the DirectML execution provider through the ONNX Runtime, enabling a seamless workflow as they bring generative AI experiences into their applications.
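For Windows developers, the DirectML path looks roughly like the sketch below, which runs an exported ONNX model through ONNX Runtime with the DirectML execution provider. This is a minimal illustration; the model file name and the dummy inputs are placeholders, not artifacts shipped with this announcement.

```python
# Minimal sketch: run an exported ONNX model on Windows through ONNX Runtime,
# preferring the DirectML execution provider and falling back to CPU.
# "llama2_decoder.onnx" and the dummy token IDs are illustrative placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "llama2_decoder.onnx",  # hypothetical exported model file
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

# Feed dummy token IDs just to show the call pattern.
input_name = session.get_inputs()[0].name
dummy_ids = np.array([[1, 2, 3, 4]], dtype=np.int64)
outputs = session.run(None, {input_name: dummy_ids})
print(outputs[0].shape)
```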


Our growing partnership with Meta

Meta and Microsoft have been longtime AI partners, from collaborating on the integration of ONNX Runtime with PyTorch to improve the development experience for PyTorch on Azure, to Meta's choice of Azure as a strategic cloud provider. Today's announcement builds on that partnership to accelerate innovation in the era of AI and further extends Microsoft's position as the world's leading supercomputing platform for AI.

Azure's purpose-built AI supercomputing platform is designed from the ground up, across infrastructure, hardware, and software, to help the world's leading AI companies build, train, and deploy some of the most demanding AI workloads. With the Llama 2 models available through Azure AI, developers can take advantage of this robust infrastructure for model training, fine-tuning, and inference, and especially of the capabilities that support AI safety.

Bringing the Llama 2 models to Windows reinforces Windows as the best place for developers to build AI experiences tailored to their customers' needs, using world-class tools such as Windows Subsystem for Linux (WSL), Windows Terminal, Microsoft Visual Studio, and VS Code.

Expanding the Azure AI model catalog and Windows availability

Llama 2 is the latest addition to our growing Azure AI model catalog. The catalog, currently in public preview, serves as a hub of foundation models that empowers developers and machine learning (ML) professionals to easily discover, evaluate, customize, and deploy large pre-built AI models at scale.
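As a rough illustration of how catalog models can be discovered programmatically, the sketch below uses the Azure Machine Learning Python SDK (azure-ai-ml) to browse a model registry. The registry name and model name are assumptions for illustration and may differ from what the catalog actually exposes.

```python
# Hedged sketch using the azure-ai-ml SDK to browse a model registry.
# "azureml-meta" and "Llama-2-7b" are assumed names for illustration only.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Connect to a shared registry rather than a single workspace.
registry_client = MLClient(
    credential=DefaultAzureCredential(),
    registry_name="azureml-meta",  # assumed registry hosting Meta models
)

# List available models and inspect one variant.
for model in registry_client.models.list():
    print(model.name)

llama = registry_client.models.get(name="Llama-2-7b", label="latest")
print(llama.id)
```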

The catalog removes the need for users to manage all of the infrastructure dependencies when operationalizing Llama 2. It provides turnkey support for model fine-tuning and evaluation, including powerful optimization techniques such as DeepSpeed and ONNX Runtime that can significantly speed up fine-tuning.
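To give a feel for where DeepSpeed fits into a fine-tuning run, here is a hedged sketch using Hugging Face transformers with an inline DeepSpeed ZeRO configuration. It is not the catalog's managed pipeline; the model ID is a gated Hugging Face checkpoint and the tiny in-memory dataset is a placeholder.

```python
# Rough sketch of DeepSpeed-accelerated fine-tuning with Hugging Face
# transformers; launch with the deepspeed (or accelerate) launcher on GPU.
# The model id and toy dataset are illustrative placeholders.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "meta-llama/Llama-2-7b-hf"  # gated checkpoint; access required
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy dataset: a couple of short texts, tokenized with labels for causal LM.
texts = ["Azure and Windows now support Llama 2.", "Fine-tuning adapts a base model."]

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=32)
    enc["labels"] = enc["input_ids"].copy()
    return enc

train_ds = Dataset.from_dict({"text": texts}).map(tokenize, batched=True)

# Inline DeepSpeed ZeRO stage 2 config; "auto" values are filled in by the
# transformers integration from TrainingArguments.
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "zero_optimization": {"stage": 2},
}

args = TrainingArguments(
    output_dir="llama2-finetune-demo",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    deepspeed=ds_config,
)

Trainer(model=model, args=args, train_dataset=train_ds).train()
```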

Windows developers will be able to easily build new experiences with Llama 2, which can be accessed via the GitHub repo. With Windows Subsystem for Linux and capable GPUs, developers can fine-tune LLMs to meet their specific needs right on their Windows PCs.
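A minimal local workflow, assuming access to the gated meta-llama/Llama-2-7b-chat-hf checkpoint on Hugging Face, might look like the sketch below inside WSL on a GPU-equipped PC.

```python
# Hedged sketch: load a Llama 2 chat model locally (e.g. inside WSL) with
# Hugging Face transformers and generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated checkpoint; access required
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halve memory use on the local GPU
    device_map="auto",          # place layers on the available GPU(s)
)

prompt = "Explain what an execution provider is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```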

Building responsibly with Azure

Responsible AI is at the core of Microsoft's approach to AI, and of how we partner. We have invested heavily over the years to make Azure the home for responsible, cutting-edge AI innovation, whether customers build their own models from the ground up or use prebuilt, scalable models from Microsoft, Meta, OpenAI, and the open-source ecosystem.

At Microsoft, we take an iterative, layered approach that combines experimentation and measurement to mitigate the potential risks of using large language models. Azure AI customers can test Llama 2 with their own sample data to see how it performs for their particular use case. They can then use prompt engineering and retrieval augmented generation (RAG) techniques to develop, evaluate, and optimize meta-prompts for their application, delivering safer and more reliable experiences for end users.
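The RAG pattern mentioned above can be sketched in a few lines: retrieve the snippets of your own data that are most relevant to a question, then fold them into a meta-prompt that constrains the model to grounded answers. The toy retriever and documents below are illustrative only, not an Azure AI API.

```python
# Toy RAG sketch: naive keyword retrieval plus a meta-prompt template.
# A production system would use vector search and a deployed Llama 2 endpoint.
from typing import List

documents = [
    "Contoso's return policy allows refunds within 30 days of purchase.",
    "Support hours are 9am-5pm Pacific, Monday through Friday.",
    "Premium customers get free expedited shipping on all orders.",
]

def retrieve(query: str, docs: List[str], top_k: int = 2) -> List[str]:
    # Score documents by keyword overlap with the query (placeholder logic).
    terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(terms & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question, documents))
    # The meta-prompt keeps answers grounded in the retrieved context.
    return (
        "You are a support assistant. Answer ONLY from the context below; "
        "if the answer is not there, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

# The resulting string is what you would send to the deployed Llama 2 model.
print(build_prompt("What is the refund window?"))
```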

Services like Azure AI Content Safety add another layer of protection, helping make the use of AI applications safer online. Through our work with Meta, Llama 2 deployments in Azure AI now come with a layered safety approach by default, combining Meta's safety techniques with Azure AI Content Safety.
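As one example of adding that extra layer, the sketch below screens a candidate model response with the Azure AI Content Safety Python SDK (azure-ai-contentsafety). The endpoint and key are placeholders for your own resource, and the blocking threshold is an assumption you would tune.

```python
# Hedged sketch: check a model reply with Azure AI Content Safety before
# showing it to a user. Endpoint, key, and threshold are placeholders.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

candidate_reply = "Model output to be checked goes here."
result = client.analyze_text(AnalyzeTextOptions(text=candidate_reply))

# Each category (hate, self-harm, sexual, violence) returns a severity score;
# block or rewrite the reply if any severity exceeds your chosen threshold.
if any(item.severity and item.severity >= 2 for item in result.categories_analysis):
    print("Reply blocked by content safety checks.")
else:
    print(candidate_reply)
```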

With today's addition of Llama 2 to our model catalog and our collaboration with Meta, we have taken a significant step toward an open, responsible approach to AI.

Visit the Azure AI model catalog and start using Llama 2 today.


Tags: Azure, Azure AI, large language models, Llama 2, Meta, Microsoft Inspire
