Google has announced that it's temporarily pausing Gemini's ability to generate images of people, after the tool was found to be producing inaccurate historical images. Gemini had been generating racially diverse depictions of groups like the US Founding Fathers and Nazi-era German soldiers, seemingly in an effort to counter the gender and racial stereotypes often seen in generative AI.
“We’re already working to address recent issues with Gemini’s image generation feature,” says Google in a statement posted on X. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”
Google's decision to pause image generation of people in Gemini comes shortly after the company apologized for inaccuracies in some of the historical images its AI model created. Gemini users asking for images of historical groups or figures like the Founding Fathers have found people of color in the results, which has fueled online conspiracy theories that Google is deliberately avoiding depicting white people.
Now that Google has turned off Gemini's ability to create images of people, here's the message you get when you ask the AI model for a picture of a person:
“We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does.”
Google introduced image generation in Gemini (formerly Bard) earlier this month, aiming to rival OpenAI and Microsoft's Copilot. Like its competitors, the tool generates a set of images from a text prompt. Google has confirmed that image generation is available worldwide in English, except in the European Economic Area, the UK, and Switzerland, which explains why attempts to test the feature from the UK didn't work.