Google's Gemini experiences image and text generation errors

Gemini generated offensive material, and some users are calling the AI racist and accusing it of rewriting history

2 min. read


Readers help support Windows Report. We may get a commission if you buy through our links.

Read our disclosure page to find out how you can help Windows Report sustain the editorial team.

Gemini AI errors lead to offensive and inappropriate generations

Google is trying to fix Gemini’s text and image generation errors. The AI sometimes refuses to answer questions, marking them as sensitive, and some of the images it generates are offensive. Google needs to solve these problems as soon as possible.

What went wrong with Gemini?

Google constantly improves Gemini with new features, quality-of-life updates, and data. At the same time, the company is trying to prevent the AI from generating inappropriate content, which is a challenging thing to do.

So, Google adds various rules, limits, and regulations, and also tries to add diversity to the AI’s output. As a result, Gemini experiences some errors: if you ask it to generate an image of a football team, the members will be from all over the world.

Unfortunately, Gemini didn’t stop there, and it generated some highly offensive images featuring Asian Nazis, black Founding Fathers, and a female Pope. On top of that, its text generator also defended immoral behavior. Furthermore, some accusations claim that Google’s Gemini is racist because it won’t generate images featuring white people unless you specifically ask for them in the prompt.

If you want to see more examples, scroll through Frank J. Fleming’s media posts on X.

All the errors mentioned above show how unreliable Gemini and other large language models can be. Google itself admits that Gemini may generate inaccurate information about recent events, but it is concerning that the image generator cannot get historical details right. Moreover, the company recommends using Google Search instead of the AI for such queries.

In a nutshell, Gemini and most large language models are prone to errors, so make sure to get your information from official sources, research papers, and trusted authors. Fortunately, Google will test Gemini’s features to ensure they won’t generate offensive or dangerous materials.

What are your thoughts? Do you get information from AI? Let us know in the comments.

More about the topics: AI, Google, Google Bard