After retracting its artificial intelligence image generation tool on Thursday amidst a series of controversies, Google is gearing up to reintroduce the product soon, according to Google DeepMind CEO Demis Hassabis.
The image generator launched earlier this month through Gemini, Google’s main suite of AI models. The tool lets users enter prompts to generate images. Over the past week, however, users discovered historical inaccuracies and questionable responses, and screenshots of the results circulated widely on social media.
“We have taken the feature offline while we fix that,” Hassabis explained during a panel at the Mobile World Congress conference in Barcelona on Monday. “We are hoping to have that back online very shortly in the next couple of weeks, a few weeks.” He further acknowledged that the product was not “working the way we intended.”
The controversy emerged alongside a high-profile rebranding effort by Google earlier this month, which included renaming its chatbot and introducing a new app and subscription options. The chatbot and assistant, formerly known as Bard and a significant competitor to OpenAI’s ChatGPT, is now called Gemini, matching the suite of AI models that powers it.
In one instance, users asked the tool for an image of a German soldier in 1943; it depicted a racially diverse group of soldiers wearing German military uniforms of that era, according to screenshots on X.
Similar results appeared when users requested historically accurate depictions such as a medieval British king or the U.S. founding fathers: the model produced racially diverse images, including a woman ruler, an 18th-century king of France, and a German couple from the 1800s. In another case, the model returned an image of Asian men in response to a query about Google’s own founders.
Margaret Mitchell, chief ethics scientist at Hugging Face and former co-leader of Google’s AI ethics group, criticized the situation, stating, “The Gemini debacle showed how AI ethics _wasn’t_ being applied with the nuanced expertise necessary.”
Alphabet CEO Sundar Pichai has also faced criticism for the company’s handling of AI, particularly the botched rollout of Bard last year, which followed the viral spread of ChatGPT.
The latest issues with Gemini have reignited debate within the AI industry: some groups criticize what they see as bias in the model, while others argue that Google has not adequately prioritized AI ethics. Google itself faced scrutiny in 2020 and 2021 for restructuring its AI ethics group after its co-leads published a critical research paper.
Beyond the image-generation concerns, a query that went viral on Sunday asked the Gemini chatbot to compare the societal impact of Adolf Hitler with that of Elon Musk’s meme tweets. Gemini’s response sparked further controversy, underscoring the challenges Google faces in managing AI-generated content.
Google acknowledged the issues with Gemini’s image generation and announced plans to “pause the image generation of people” while working on improvements.
Despite these challenges, Google remains committed to advancing AI, with the aim of developing assistants capable of handling a wide range of tasks beyond summarization or coding help. Sissie Hsiao, a vice president at Google, emphasized that the changes to Gemini are just the first step toward “building a true AI assistant.”