
Meta Releases Llama 3, Beats Claude 3 Sonnet and Gemini Pro 1.5

The model is available in 8B and 70B parameter versions and was trained on over 15 trillion tokens, a dataset seven times larger than Llama 2's.


After teasing the world with a brief glimpse on Microsoft Azure, Meta has finally released Llama 3, the latest generation of its LLM, which it says offers state-of-the-art performance and efficiency.

The model is available on GitHub.

Llama 3 comes in 8B and 70B parameter versions, trained on over 15 trillion tokens, seven times more data than Llama 2. It offers enhanced reasoning and coding capabilities, and its training process is three times more efficient than its predecessor's.

The models are now also available on Hugging Face.

Meta is also training a model with more than 400 billion parameters, which Mark Zuckerberg said in an Instagram Reel will be the top-performing model out there.

The 8B model outperforms Gemma and Mistral on all benchmarks, and the 70B model outperforms Gemini Pro 1.5 and Claude 3 Sonnet.

Llama 3 models are now rolling out on Amazon SageMaker, Databricks, Google Cloud, Hugging Face, Kaggle, IBM WatsonX, Microsoft Azure, NVIDIA NIM, and Snowflake. Additionally, the models will be supported on hardware platforms from AMD, AWS, Dell, Intel, NVIDIA, and Qualcomm.

Beyond the model release, Meta has rebuilt Meta AI on Llama 3 and expanded its availability to more countries. Meta AI is accessible through Facebook, Instagram, WhatsApp, Messenger, and the web, letting users accomplish tasks, learn, create, and engage with their interests.

Additionally, users will soon have the opportunity to experience multimodal Meta AI on Ray-Ban Meta smart glasses.

Now available in 13 new countries, Meta AI includes improved search capabilities and new web experiences. The latest image-generation updates let users create, animate, and share images from a simple text prompt.

Llama 3 uses a tokenizer with a 128K-token vocabulary that encodes language more efficiently, contributing to significantly improved performance. To boost inference efficiency, grouped-query attention (GQA) is used in both the 8B and 70B parameter models. The models were trained on sequences of 8,192 tokens, with masking to ensure self-attention does not cross document boundaries.
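Grouped-query attention cuts the memory cost of the key/value cache by letting several query heads share a single key/value head. The following NumPy sketch is purely illustrative; the function name, shapes, and head counts are assumptions for exposition, not Meta's implementation:

```python
import numpy as np

def grouped_query_attention(q, k, v, n_kv_heads):
    """Illustrative GQA: n_q_heads query heads share n_kv_heads
    key/value heads (n_q_heads must be a multiple of n_kv_heads).

    q: (n_q_heads, seq, d)   k, v: (n_kv_heads, seq, d)
    """
    n_q_heads, seq, d = q.shape
    group = n_q_heads // n_kv_heads
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group  # query head h reads shared kv head h // group
        scores = q[h] @ k[kv].T / np.sqrt(d)          # (seq, seq)
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)                 # row-wise softmax
        out[h] = w @ v[kv]                            # (seq, d)
    return out
```

With standard multi-head attention every query head carries its own K and V tensors; here the cache shrinks by the factor `n_q_heads / n_kv_heads`, which is what makes inference cheaper.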

Llama 3's training data consists of over 15 trillion tokens sourced from publicly available data. The model was trained on two custom-built clusters of 24,000 GPUs each.

It includes four times more code than Llama 2's dataset and over 5% high-quality non-English data spanning 30+ languages, though performance in English remains strongest. Advanced data-filtering methods, including heuristic filters and semantic deduplication, were used to keep training data quality high.
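Meta has not published the details of its filtering pipeline, but as a rough illustration of deduplication, a near-duplicate filter can compare cheap document fingerprints and keep only sufficiently distinct texts. The sketch below uses character trigram sets and Jaccard similarity; all names and thresholds are assumptions:

```python
import re

def shingles(text, n=3):
    """Character n-gram set used as a cheap document fingerprint."""
    t = re.sub(r"\s+", " ", text.lower()).strip()
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def jaccard(a, b):
    """Overlap of two fingerprint sets, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def dedup(docs, threshold=0.8):
    """Keep a document only if it is not too similar to any kept one."""
    kept, fingerprints = [], []
    for doc in docs:
        fp = shingles(doc)
        if all(jaccard(fp, other) < threshold for other in fingerprints):
            kept.append(doc)
            fingerprints.append(fp)
    return kept
```

Production pipelines replace the exact pairwise comparison with scalable approximations such as MinHash/LSH, and "semantic" deduplication compares embedding vectors rather than surface n-grams; the structure of the filter is the same.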

Meta also shared a sneak preview of benchmark results for the upcoming 400-billion-parameter Llama 3 model.


Mohit Pandey

Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words.