Google announces Gemma 2, a 27-billion-parameter version of its open model, launching in June

Google on Tuesday announced a number of new additions to Gemma, its family of open (but not open source) models similar to Meta’s Llama and Mistral models, at its annual Google I/O 2024 developer conference.

The version grabbing the headlines here is Gemma 2, the next generation of Google’s open-weight Gemma models, which will launch in June with 27 billion parameters.

PaliGemma, a pre-trained Gemma variant that Google describes as “the first vision language model in the Gemma family” is already available for image captions, image tagging and visual Q&A use cases.

Until now, the standard Gemma models, which launched earlier this year, have only been available in 2-billion-parameter and 7-billion-parameter versions, making this new 27-billion-parameter model a significant step up.

In a press conference ahead of Tuesday’s announcement, Josh Woodward, Google’s vice president of Google Labs, noted that the Gemma models have been downloaded “millions of times” across the various services where they are available. He stressed that Google has optimized the 27-billion-parameter model to run on Nvidia’s next-generation GPUs, a single Google Cloud TPU host and the managed Vertex AI service.

However, size does not matter if the model is not good. Google hasn’t shared much data about Gemma 2 yet, so we’ll have to see how it performs once developers get their hands on it. “We’re already seeing some great quality. It’s outperforming models that are twice as big as they were,” Woodward said.


