Google’s Gemini Faces Criticism Over “Woke” AI Images
In a move that has stirred considerable debate across social media platforms, Google has announced a temporary halt to its Gemini AI model’s capability to generate images of people. This decision comes in the wake of widespread criticism that the model, developed as a rival to OpenAI’s GPT-4, was producing “woke” content, particularly images that distorted historical accuracy.
New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far. pic.twitter.com/1LAzZM2pXF
— Frank J. Fleming (@IMAO_) February 21, 2024
Addressing Historical Inaccuracies
The controversy ignited when users began to notice that Gemini’s outputs, particularly those involving historical figures or contexts, misrepresented history by inserting anachronistic diversity. Reports highlighted instances where the AI generated images of the Founding Fathers of the United States as people of color, sparking a debate over the AI’s handling of historical context and accuracy.
"Can you generate images of the Founding Fathers?" It's a difficult question for Gemini, Google's DEI-powered AI tool.
— Mike Wacker (@m_wacker) February 21, 2024
Ironically, asking for more historically accurate images made the results even more historically inaccurate. pic.twitter.com/LtbuIWsHSU
User Backlash on Social Media
The backlash was primarily fueled by posts on X (formerly Twitter), where users, including former Google employees and software engineers, voiced their concerns. Some users criticized the AI for its overemphasis on diversity to the point where it seemed to ignore historical facts. This led to a flurry of discussions about the role of AI in representing history and the implications of embedding modern values into AI-generated content.
It's embarrassingly hard to get Google Gemini to acknowledge that white people exist pic.twitter.com/4lkhD7p5nR
— Deedy (@debarghya_das) February 20, 2024
Google’s Response and Commitment to Improvement
In response to the growing criticism, Google has pledged to address these issues, emphasizing the importance of accurately representing history while also serving a diverse global audience. The company acknowledged Gemini’s shortcomings in this respect and has committed to making immediate improvements, including a temporary suspension of the AI’s ability to generate images of people until the necessary adjustments are made.
Come on. pic.twitter.com/Zx6tfXwXuo
— Frank J. Fleming (@IMAO_) February 21, 2024
The Path Forward
As Google works on refining Gemini, the tech community and users alike are keenly watching to see how the company will balance historical accuracy with the need for inclusivity and diversity in AI-generated content. Google has assured users that an improved version of Gemini will be re-released soon, aiming to better meet the expectations and needs of its global user base.
This development in AI technology brings to the forefront the complex challenges of ensuring AI models are both inclusive and historically accurate, sparking a broader conversation about the role of AI in our understanding of history and representation.
Sources: BBC and Insider.com