Google and Microsoft AI chatbots shared predictions instead of real information

Copilot and Bard shared unreliable data about the Super Bowl before the official results were in




Super Bowl LVIII featuring Copilot and Gemini on a cardboard background

AI isn't always reliable, especially when you're looking for factual information and get predictions instead. After all, chatbots pull their data from all kinds of sources, which aren't always trustworthy. On top of that, their tendency to generate plausible-sounding answers can sometimes lead to misinformation.

In case you missed it, a few days ago, before the game was even played, Gemini claimed that the San Francisco 49ers had defeated the Kansas City Chiefs in the Super Bowl. In reality, the Kansas City Chiefs won.

How is AI used for predictions?

AI chatbots, Gemini in particular, make predictions using data from Google searches and other sources. So we asked the new version of Bard how this works, and here's its answer:

As you can see, Gemini itself says it can analyze data and use statistical models to estimate probabilities. So, just to answer your question, the AI may end up generating a fictional answer.
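To give a rough idea of what "using statistical models to estimate probabilities" can mean in practice, here is a minimal, purely illustrative Python sketch. The team ratings are made-up numbers, not real statistics, and this is not how Gemini works under the hood; it only shows how a simple model turns data into an estimate rather than a fact.

```python
import math

# Hypothetical, made-up "power ratings" -- not real data.
ratings = {
    "Kansas City Chiefs": 1580,
    "San Francisco 49ers": 1595,
}

def win_probability(team_a: str, team_b: str) -> float:
    """Estimate team_a's chance of beating team_b with an Elo-style formula."""
    diff = ratings[team_b] - ratings[team_a]
    return 1 / (1 + 10 ** (diff / 400))

p = win_probability("San Francisco 49ers", "Kansas City Chiefs")
print(f"Estimated win probability: {p:.1%}")  # an estimate, not a result
```

The point is that the output is only a probability, which a chatbot can easily present as if it were a settled fact.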

Interestingly enough, Gemini's prediction behavior has already been fixed. Instead of making predictions, Google's AI now offers a list of things it can actually do to assist you.

The good news is that most AI chatbots receive regular updates to keep their data reliable. Their safety systems and answers are also verified periodically to avoid issues.

For example, if you are working on your thesis and ask a chatbot for book suggestions, you might search everywhere for a title that doesn't exist, or discover that the "book" is actually the name of an article. So you might want to ask the chatbot for its sources.

Ultimately, all you can do is verify your sources. You can also tell the AI it's wrong now and then, which prompts it to generate a new answer that you can compare with the previous one.

Here's everything you need to know if you want to learn more about Gemini.

What are your thoughts? Have you ever run into AI misinformation? If so, share it with us in the comments.

More about the topics: AI, Google Bard, Microsoft Copilot