Google CEO tells employees Gemini AI blunder is 'unacceptable'

Google CEO Sundar Pichai speaks with Emily Chang during the Asia-Pacific Economic Cooperation (APEC) CEO Summit at Moscone Center West in San Francisco on November 16, 2023.

Justin Sullivan | Getty Images News | Getty Images

In a note Tuesday evening, Google CEO Sundar Pichai addressed the company's AI missteps, which led to Google taking its Gemini image generation feature offline for further testing.

Pichai called the issues “problematic” and said they “offended our users and showed bias.” The news was first reported by Semafor.

Google introduced the image generator earlier this month through Gemini, the company's flagship suite of AI models. The tool lets users enter prompts to create images. Over the past week, users discovered historical inaccuracies in generated images that spread widely online, and the company pulled the feature last week, saying it would relaunch it in the coming weeks.

“I know some of its responses offended our users and showed bias — to be clear, this is completely unacceptable and we got it wrong,” Pichai said. “No AI is perfect, especially at this emerging stage in the industry’s development, but we know the bar is high for us.”

This news comes on the heels of Google changing the name of its chatbot from Bard to Gemini earlier this month.

Pichai's memo said teams were working around the clock to address the issues and that the company would put in place a clear set of procedures and structural changes, as well as “enhanced launch processes.”

“We have always strived to provide users with useful, accurate, and unbiased information in our products,” Pichai wrote in the memo. “That's why people trust them. This should be the approach we take across all our products, including emerging AI products.”


Read the full text of the memo here:

I'd like to address recent issues with problematic text and image responses in the Gemini (formerly Bard) app. I know some of its responses offended our users and showed bias – to be clear, this is completely unacceptable and we got it wrong.

Our teams are working around the clock to address these issues. We are already seeing significant improvement across a wide range of prompts. No AI is perfect, especially at this nascent stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes. We will review what happened and make sure we fix it at scale.

Our mission to organize the world's information and make it accessible and useful is sacred. We have always strived to provide users with useful, accurate and unbiased information in our products. That's why people trust them. This should be our approach across all of our products, including emerging AI products.

We will drive a clear set of actions, including structural changes, updated product guidelines, enhanced launch processes, robust evaluations, red-teaming and technical recommendations. We are looking across all of this and will make the necessary changes.

Even as we learn from the mistakes made here, we must also build on the product and technical announcements we've made in AI over the past few weeks. This includes some fundamental advances in our core models, for example, our breakthrough 1 million-token long context window and our open models, both of which have been very well received.


We know what it takes to create great products that are used and loved by billions of people and businesses, and with our infrastructure and research expertise, we have an incredible springboard for the AI wave. Let's focus on what's most important: creating useful products that deserve our users' trust.
