Salesforce's Einstein Copilot AI chatbot promises less hallucination
The general availability of the AI chatbot for businesses was announced last week
Salesforce recently announced the general availability of Einstein Copilot for businesses, promising fewer hallucinations than other AI chatbots. AI hallucination is a problem that even top players like Google, Meta, and OpenAI have struggled to overcome.
Speaking about Einstein Copilot in a keynote at Salesforce World Tour NYC, the company’s Executive VP, Patrick Stokes, said that their AI chatbot is different.
Einstein Copilot promises fewer hallucinations, but they aren’t entirely preventable
Since AI hallucination isn’t completely preventable, Salesforce has backed Einstein Copilot with a hallucination detection feature. The AI chatbot also gathers customer feedback in real time and informs system administrators of its weaknesses.
In a recent interview with Quartz, Stokes explained why Einstein Copilot will have fewer hallucinations than other AI chatbots:
Before we send the question over to the LLM, we’re gonna go source the data. I don’t think we will ever completely prevent hallucinations.
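What Stokes describes is, in effect, retrieval-grounded prompting: look up trusted records first, then constrain the model to answer only from them. Here is a minimal Python sketch of that general pattern, with a hypothetical search_crm lookup and prompt builder for illustration, not Salesforce’s actual API:

```python
# A minimal sketch of retrieval-grounded prompting: fetch trusted data
# first, then hand it to the LLM so the model answers from sourced facts
# instead of improvising. All names here (search_crm, build_grounded_prompt)
# are hypothetical illustrations, not Salesforce's actual implementation.

def search_crm(question: str) -> list[str]:
    """Hypothetical retrieval step: find records relevant to the question."""
    # In a real system this would query a CRM, vector store, or search index.
    records = {
        "renewal": ["Acme Corp contract renews on 2024-09-01."],
    }
    return [
        fact
        for keyword, facts in records.items()
        if keyword in question.lower()
        for fact in facts
    ]

def build_grounded_prompt(question: str) -> str:
    """Prepend the retrieved facts and instruct the model to stay within them."""
    facts = search_cr = search_crm(question)
    context = "\n".join(f"- {fact}" for fact in facts) or "- (no records found)"
    return (
        "Answer using ONLY the facts below. "
        "If they are insufficient, say you don't know.\n"
        f"Facts:\n{context}\n"
        f"Question: {question}"
    )

# The resulting prompt would be sent to the LLM of your choice.
print(build_grounded_prompt("When is the Acme renewal?"))
```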
Stokes further added that it would be silly to imagine AI hallucination can be stopped completely:
There’s always going to be a way in. I think that’s true with AI as well. But what we can do is do everything that we possibly can to make sure that we are building transparent technology that can surface when that happens.
Salesforce’s President and CMO shares Stokes’ view, as he also believes LLMs were inherently built to hallucinate: that’s how they work, they have imagination. Notably, earlier this month the company released the beta version of the chatbot for Tableau.
AI hallucination is a major issue, but fixing it is very challenging
Although Salesforce promises fewer hallucinations with Einstein Copilot, hallucination remains one of the major issues plaguing AI chatbots. A chatbot hallucinates when it lacks the training data needed to answer a query yet generates a response anyway, presenting it as fact.
Honestly, resolving AI hallucinations is not as easy as one might think. AI models are trained on vast amounts of data, which makes tracing a specific problem back to its source quite challenging. And that’s not the only factor: inaccurate training data from various sources also contributes to hallucination.
Last year, The New York Times reported hallucination rates for different AI systems: about 3% for OpenAI, 5% for Meta, 8% for Anthropic, and 27% for Google’s PaLM.
Do you think Einstein Copilot can be a game-changer when it comes to AI hallucinations? If so, share your thoughts; we’d love to hear them.