
Google Likely to Release Gemma 3 Next Month

The competitive environment in which LLMs operate is changing quickly. For Google to stay in the market, it will be essential for it to innovate and set Gemma apart.


Google is hosting its next hardware launch event ‘Made by Google’ on August 13. The company has already confirmed that it will announce the Pixel 9, Pixel 9 Pro, and the Pixel 9 Pro Fold at the event in California.

At most product launch events, hardware announcements steal the limelight. However, Google’s software-related announcements are also highly anticipated. One of the key updates to look forward to concerns Gemma, Google’s family of open language models.

What about Gemma 3? 

Meta recently released Llama 3.1. It outperformed OpenAI’s GPT-4o on most benchmarks in categories such as general knowledge, reasoning, reading comprehension, code generation, and multilingual capabilities. 

Similarly, last week, OpenAI released GPT-4o mini, a cost-efficient LLM. Priced at 15 cents per million input tokens and 60 cents per million output tokens, GPT-4o mini is roughly 30x cheaper than GPT-4o and over 60% cheaper than GPT-3.5 Turbo.
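At those quoted rates, per-request costs are easy to estimate. A minimal sketch, using the article's published prices and a hypothetical request size (the token counts are illustrative, not a real workload):

```python
# Back-of-the-envelope request cost under GPT-4o mini's quoted pricing:
# $0.15 per million input tokens, $0.60 per million output tokens.
INPUT_PRICE_PER_M = 0.15   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the quoted rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A hypothetical chat turn: 2,000 input tokens, 500 output tokens.
print(f"${request_cost(2_000, 500):.6f}")  # $0.000600
```

Even a fairly long chat turn lands at a small fraction of a cent, which is the point of a "mini" model tier.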

Gemma 2’s last update, meanwhile, came over a month ago.

At Made by Google, the tech giant is most likely to release the updated version of Gemma, aka Gemma 3, to stay relevant. 

Limitations of Gemma 2 

Data engineer Maziyar Panahi highlighted issues with Gemma 2’s performance when compared with models like Llama-3-70B and Mixtral, after running them in a medical advanced-RAG pipeline.

Panahi noted, “Gemma-2 (27B) trailed… Gemma-2 missed several obvious documents—quite a few mistakes noted! Gemma-2 tends to over-communicate, overlook details, and add unsolicited safety notes.”

Initial technical problems also plagued Gemma 2, as noted by user mikael110 on Reddit. A tokeniser error was corrected relatively quickly, but a more critical issue related to “logit soft-capping” persisted.

This feature, crucial for the model’s performance, was initially omitted in some inference implementations because it conflicted with popular optimised attention kernels.

The Gemma model card on Hugging Face also notes that biases or gaps in the training data can lead to limitations in the model’s responses, and that the model can struggle to grasp subtle nuances, sarcasm, or figurative language.

Indian Developers Love Gemma 2 

Despite initial problems, Gemma 2 remains popular among Indian developers. They say they are more comfortable with Gemma than Llama. 

“750 billion tokens are spread across 30 languages, and considering an equal distribution over all 30 languages, it comes out to be 25 billion tokens per non-English language. A language like Hindi is very rich, so I feel it’s grossly underrepresented in Llama 3,” said Adarsh Shirawalmath, the founder of Tensoic.

Similarly, OdiaGenAI released Hindi-Gemma-2B-instruct, a 2-billion-parameter supervised fine-tuned (SFT) model trained on a large 187k-sample Hindi instruction set. The group said Gemma-2B was chosen as the base model because its 2B variant suits CPU and on-device applications, and because its tokeniser handles Indic languages more efficiently than those of other LLMs.

Recently, Telugu LLM Labs also experimented with Gemma and released Telugu Gemma.

“Models using Llama 2 extended its tokeniser by 20 to 30k tokens, reaching a vocabulary size of 50-60k. Continuous pre-training is crucial for understanding these new tokens. In contrast, Gemma’s tokeniser initially handles Indic languages well, requiring minimal fine-tuning for specific tasks,” said Adithya S Kolavi, the founder of Cognitive Lab.
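The tokenisation efficiency Kolavi describes is often summarised as “fertility” — the average number of tokens a tokeniser emits per word, where lower is better for a given language. A minimal sketch of the metric, using hypothetical token counts (not measurements of the actual Gemma or Llama tokenisers):

```python
# Tokeniser "fertility": tokens emitted per word of input text.
# A tokeniser whose vocabulary covers a script poorly (e.g. falling
# back to bytes for Devanagari) fragments words into many tokens.
# The counts below are hypothetical illustrations only.

def fertility(num_tokens: int, num_words: int) -> float:
    """Average tokens per word for a text sample."""
    return num_tokens / num_words

# Hypothetical 100-word Hindi passage:
extended_llama2_tokens = 380  # script-poor vocabulary fragments words
indic_aware_tokens = 160      # larger, Indic-aware vocabulary splits less

print(f"fragmenting tokeniser: {fertility(extended_llama2_tokens, 100):.2f}")
print(f"Indic-aware tokeniser: {fertility(indic_aware_tokens, 100):.2f}")
```

Lower fertility means shorter sequences for the same text, which translates directly into cheaper and faster inference on Indic-language inputs.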

Not Everything is Lost for Gemma

According to Kolavi’s leaderboard for Indic LLMs, Llama 3 performs significantly better than Llama 2 on most benchmarks. Compared to Gemma, however, Llama 3 still falls a little short. Gemma’s tokenisation of Devanagari is also efficient compared to Llama 2’s.

DeepMind engineer Rohan Anil wrote on X that Gemma 2 27B clearly outperforms Llama 3 70B and other open-weight models, thanks to excellent post-training.

“Gemma probably does a better job at Indic tokenisation than GPT-4 and Llama 3,” said Vivek Raghavan, the co-founder of Sarvam AI, in an exclusive interview with AIM. However, he added that Llama 3 has its own advantages. 

“I think Llama 3 looks quite good. There are many open models, and we have a strategy where we leverage all of them,” he added.



Anshul Vipat

Anshul Vipat is a tech aficionado, enthusiastic about the latest innovations in the digital world. He also holds a keen interest in travelling, exploring, and cooking.