
Build Your Own AI Supermodel Based on Llama 3 Using Nvidia’s “AI Foundry”

At a time when every enterprise and nation is trying to build its own sovereign AI, Nvidia has introduced a new offering under Nvidia AI Foundry that lets enterprises and nations build their own AI supermodels based on Llama 3.1, tailored to reflect their unique business and culture.

What is Nvidia AI Foundry?

Nvidia AI Foundry is a comprehensive platform designed to empower organizations in building, deploying, and scaling their custom AI supermodels. It seamlessly integrates with leading public clouds and leverages the power of NVIDIA DGX™ Cloud AI platform to provide scalable compute resources aligned with AI demands.

Previously, NVIDIA AI Foundry offered NVIDIA-created AI models like Nemotron and Edify, along with popular open foundation models, enabling users to train and deploy custom AI supermodels. Now, through its partnership with Meta, it extends its capabilities to include Meta's latest open-source model, Llama 3.1.

Nvidia founder and CEO Jensen Huang emphasized the significance of the Llama 3.1 integration with Nvidia AI Foundry, stating:

"Meta's openly available Llama 3.1 models mark a pivotal moment for the adoption of generative AI within the world's enterprises."

What is Meta’s Llama 3.1?

Llama 3.1 is Meta's latest collection of open-source large language models, comprising both pretrained and instruction-tuned text-in/text-out generative AI models in 8B, 70B, and 405B parameter sizes. The instruction-tuned Llama 3.1-405B stands out as the largest and most powerful open-source AI model.

When asked about Meta's latest open-source AI offering, Meta founder and CEO Mark Zuckerberg highlighted the importance of Llama 3.1 for open-source AI, stating:

"The new Llama 3.1 models are a super-important step for open source AI."

What makes this Meta and Nvidia collaboration so special is that all the large language models in the Llama 3.1 family were trained on over 16,000 NVIDIA H100 Tensor Core GPUs and are optimized for NVIDIA accelerated computing and software, whether in the data center, in the cloud, or locally on workstations with NVIDIA RTX™ GPUs or PCs with GeForce RTX GPUs.

For in-depth details on Meta’s Llama 3.1, refer to IBM’s blog.

How to train and deploy your own AI supermodel using Nvidia AI Foundry?

Training and Deployment with NVIDIA NeMo

To train and deploy your AI supermodel, you'll need NVIDIA's NeMo platform, an integral component of Nvidia AI Foundry. NeMo lets you train your AI model using either your own domain-specific datasets or synthetic data generated through the combined capabilities of Llama 3.1 405B and Nemotron-4 340B.

Once your AI model is trained, you can create NVIDIA NIM inference microservices to operationalize it within your preferred MLOps and AIOps platforms, on your chosen cloud platforms and NVIDIA-Certified Systems™.
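Deployed NIM microservices typically expose an OpenAI-compatible HTTP API. The sketch below, using only the Python standard library, shows how a client might build a chat-completions request against such a service; the endpoint URL and model name are assumptions for a hypothetical local deployment, not values from this article.

```python
import json
import urllib.request

# Hypothetical endpoint of a locally running NIM microservice; NIM
# containers commonly serve an OpenAI-compatible chat-completions API.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama-3.1-8b-instruct",  # model name is deployment-specific
    "messages": [
        {"role": "system", "content": "You answer in our company's brand voice."},
        {"role": "user", "content": "Summarize our onboarding policy."},
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

# Build the POST request with a JSON body and the appropriate header.
request = urllib.request.Request(
    NIM_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to send the request against a running NIM container:
# with urllib.request.urlopen(request) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
print(request.get_method(), request.full_url)
```

Because the request schema is OpenAI-compatible, the same payload shape works across MLOps stacks that already speak that API.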

Collaboration for Distillation

Nvidia’s partnership with Meta for the release of Llama 3.1 on AI Foundry has further led to the development of a distillation recipe for Llama 3.1. This recipe enables developers to create smaller, custom Llama 3.1 models tailored for generative AI applications. This advancement empowers enterprises to run Llama-powered AI applications on a wider range of accelerated infrastructure, including AI workstations and laptops.
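The article does not spell out the distillation recipe itself, but the core idea behind distilling a large Llama model into a smaller one can be illustrated generically: the student is trained to match the teacher's temperature-softened output distribution. A minimal pure-Python sketch of that loss term (not Nvidia's actual recipe) follows.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions,
    # the core term of a typical knowledge-distillation objective.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher exactly incurs zero loss;
# any mismatch yields a positive penalty to minimize during training.
teacher = [2.0, 0.5, -1.0]
print(round(distillation_loss(teacher, teacher), 6))   # 0.0
print(distillation_loss(teacher, [0.1, 0.2, 0.3]) > 0)  # True
```

In practice this term is computed per token over the vocabulary and often blended with the standard cross-entropy loss on ground-truth labels.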

Companies Already Using Nvidia AI Foundry to Deploy Their Own Llama-Based AI Models

Global professional services firm Accenture is the first company to adopt NVIDIA AI Foundry to build custom Llama 3.1 models using the Accenture AI Refinery™ framework, both for its own use and for clients seeking to deploy generative AI applications that reflect their culture, languages, and industries.

“The world’s leading enterprises see how generative AI is transforming every industry and are eager to deploy applications powered by custom models,” said Julie Sweet, chair and CEO of Accenture.

Apart from Accenture, other companies across healthcare, energy, financial services, retail, transportation, and telecommunications are already working with NVIDIA NIM microservices for Llama. Among the first to access the new NIM microservices for Llama 3.1 are Aramco, AT&T, and Uber.

Conclusion

With this latest offering in partnership with Meta, Nvidia has significantly lowered the barrier to entry for organizations and nations aspiring to develop their own AI supermodels.

Further, this offering positions Nvidia even more strongly in the AI space.



Senior Writer
Abhinav is a graduate of NIT Jamshedpur. He is an electrical engineer by profession and an analog engineer by passion. His articles at WireUnwired are part of him following his passion.
