About a week ago, Google released versions 1.0 Ultra and 1.0 Pro of its AI chatbot Gemini (née Bard). Tech folks immediately put Gemini through its paces.
The results were mixed. On the one hand, testers hailed its advanced capabilities, especially in coding and text understanding, and its ability to process very large amounts of data.
On the other hand, testers quickly realized that the new Gemini has a strong racial and gender bias. For instance, they found it impossible to have the bot generate images of white people (even when asking for an image of the Pope) or of men (even when asking for an image of professional football players). They also found that it manipulated history to be diverse (even showing a German WWII soldier as an Asian woman).
The outcry on “X” was loud and the mockery cruel among the high-tech folks testing Gemini. Within 24 hours, Google acknowledged “inaccuracies”, promised to fix them, and paused image generation on Gemini. That, however, only addressed the most obvious symptom of the biased underlying model: image generation. Textual responses remain biased.
Obviously, an LLM-based AI chatbot should provide objective, factually accurate information. Gemini currently fails to do so. What are the business implications?
- Reputational damage: The global embarrassment that was Gemini’s release will further harm Google’s reputation, if only, so far, among tech folks and developers. It will erode their confidence in Google’s ability to provide neutral, objective AI applications and to catch up with OpenAI. By extension, it will also further undermine trust in the accuracy and neutrality of Google’s search results, which have been criticized for bias for years.
Among average users, this will have little short-term impact. Most of them will never become aware of the matter unless Gemini’s bias remains as extreme as it is now. That could lead to a slow, long-term erosion of usership.
Advertisers on Google are also unlikely to defect. No advertising campaign can ignore Google, and the scandal is not big enough for them to stay away for fear of guilt by association. If, however, Google stays in the negative headlines more consistently, attrition of the ad business will set in.
- Operational issues: There are a lot of questions Google senior management has to answer. How could this have happened in the first place? Why did no one in Google management realize that the Gemini team was highly biased? If they did, why did they not rectify the situation? Why was Gemini (apparently) only tested within the bubble of the dev team, with the project manager initially maintaining on Twitter that “all […] answers look correct fwiw”? Why was there no testing by another team? It seems there was a failure of managerial oversight – unless upper management shares the Gemini team’s bias.
The situation was probably worsened by the fact that Google is playing catch-up to OpenAI. Intense pressure from senior management and a lack of time and resources may have made it impossible to deliver Gemini both on time and at high quality, leading to shortcuts and sloppy delivery.
- Competitive landscape: Rivals may capitalize on Google’s misstep by emphasizing their own AI models’ reliability and ethical practices. Businesses should learn from this incident to strengthen their own AI offerings.
- Long-term implications: In the short term, this scandal will likely have little, if any, impact on Google’s business. However, if a pattern of bias in search and AI persists, it will harm the company. Still, even if business and culture were broken at Google for good, the company could continue in a slow decline for many years – the bigger a company, the longer that decline can take before it becomes too serious to ignore and rescue attempts are undertaken. IBM’s history is a case in point.
The market reflected that expectation of little to no business impact: after Gemini’s release, the stock price of Alphabet (Google’s mothership) went up, not down, if only mildly, by 1%.
The long-term cultural implications of AI bias are a more serious concern. Imagine a world in which only a handful of general-purpose AI chatbots remain, each owned by a giant, unaccountable corporation, each one subject to the needs and whims of its owner, not to those of the people. Perhaps decentralized AI solutions would serve us better.