Study finds exploit in ChatGPT, hackers can read all your conversations

Other AI-based chatbots had similar vulnerabilities, except Google Gemini





If you share personal matters with ChatGPT or ask questions that disclose private information, stop right away. A recent study suggests that chatbots, including ChatGPT, can be hacked and that all your conversations may be accessible to attackers!

In a study conducted at Israel’s Ben-Gurion University, researchers found a side-channel exploit in almost every popular AI-based chatbot except Google Gemini. It could reveal the entire conversation with high, though not perfect, accuracy.

In an email to Ars Technica, Yisroel Mirsky, head of the Offensive AI Research Lab at Ben-Gurion University, said:

Currently, anybody can read private chats sent from ChatGPT and other services. This includes malicious actors on the same Wi-Fi or LAN as a client (e.g., same coffee shop), or even a malicious actor on the Internet—anyone who can observe the traffic. The attack is passive and can happen without OpenAI or their client’s knowledge. OpenAI encrypts their traffic to prevent these kinds of eavesdropping attacks, but our research shows that the way OpenAI is using encryption is flawed, and thus the content of the messages are exposed.

Researchers shed light on the vulnerabilities in AI-based chatbots

The study is complex and may be tricky for a regular user to follow. In simple terms, the researchers exploited a side-channel vulnerability to capture the tokens (the small units of text an LLM generates one at a time) leaking from a chat session, then used them to infer the conversation with roughly 55% accuracy.

The researchers relied on a side-channel attack because, instead of attacking the system directly, it gathers information the system gives away inadvertently, such as the size of the encrypted packets it sends. That way, they could work around the built-in protections, including encryption, without ever breaking it.
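
To picture the kind of signal involved, here is a minimal sketch assuming the chatbot streams its reply one token per encrypted packet with a fixed per-packet overhead. The payload sizes and overhead value below are made up for illustration and are not taken from the study.

```python
# A minimal sketch of the idea, not the researchers' actual tooling.
# Assumption: the chatbot streams its reply one token per encrypted packet,
# so the sequence of ciphertext payload sizes (minus a fixed overhead)
# mirrors the sequence of token lengths in the hidden response.

# Hypothetical payload sizes an eavesdropper might capture with a packet sniffer
observed_payload_sizes = [29, 27, 31, 26, 33, 30]

PROTOCOL_OVERHEAD = 24  # assumed fixed per-packet overhead (headers, framing)

# The leaked side-channel signal: one length per generated token
token_lengths = [size - PROTOCOL_OVERHEAD for size in observed_payload_sizes]

print(token_lengths)  # e.g. [5, 3, 7, 2, 9, 6]
```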

The captured token data was then run through two specially trained LLMs (large language models), which translated it back into readable text, something that would be practically impossible to do manually.

However, since chatbots have a distinct writing style, the researchers were able to train LLMs to decipher the responses effectively. One LLM was trained to identify the first sentence of a response, while the other reconstructed the inner sentences based on context.
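
As a rough illustration of that two-model pipeline, the sketch below chains a "first sentence" step with an "inner sentences" step. The functions are placeholders standing in for the researchers' fine-tuned LLMs, not their actual code.

```python
# A simplified sketch of the two-model reconstruction step described above.
# Both functions are placeholders, not the researchers' models.

def first_sentence_model(token_lengths: list[int]) -> str:
    """Placeholder for the LLM fine-tuned to guess the opening sentence
    of a chatbot reply from its token-length pattern alone."""
    return "Sure, here is some general advice on that topic."

def inner_sentence_model(token_lengths: list[int], context: str) -> str:
    """Placeholder for the second LLM, which predicts each following
    sentence using both the length pattern and the text guessed so far."""
    return context + " It continues in the assistant's typical style."

def reconstruct_response(sentences: list[list[int]]) -> str:
    """Chain the two models: the first sentence seeds the context,
    then every later sentence is inferred from lengths plus context."""
    text = first_sentence_model(sentences[0])
    for lengths in sentences[1:]:
        text = inner_sentence_model(lengths, text)
    return text

# Token-length sequences split per sentence (illustrative values only)
print(reconstruct_response([[5, 3, 7, 2], [4, 6, 3, 8, 2]]))
```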

In the email to Ars Technica, the researchers explained it this way:

It’s like trying to solve a puzzle on Wheel of Fortune, but instead of it being a short phrase, it’s a whole paragraph of phrases and none of the characters have been revealed. However, AI (LLMs) are very good at looking at the long-term patterns and can solve these puzzles with remarkable accuracy given enough examples from other games.

This breakthrough is also mentioned in their research paper.

We observed that LLMs used in AI assistant services exhibit distinct writing styles and sometimes repeat phrases from their training data, a notion echoed by other researchers as well. Recognizing this characteristic enables us to conduct an attack similar to a known-plaintext attack. The method involves compiling a dataset of responses from the target LLM using public datasets or via sending prompts as a paid user. The dataset can then be used to further fine-tune the inference model. As a result, the inference model is able to reduce entropy significantly, and sometimes even predict the response R from T perfectly, word for word.
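
To show what compiling such a dataset could look like, here is a simplified sketch that pairs known chatbot responses with a stand-in length pattern (crude word lengths rather than real token lengths). Every value and helper in it is hypothetical and not taken from the paper.

```python
# A rough sketch of assembling (side-channel signal, known response) pairs
# for fine-tuning an inference model, as the paper describes.
# Assumption for illustration: one token roughly corresponds to one word here;
# real tokenizers split text differently.

public_responses = [
    "Sure, here are three tips for improving your sleep schedule.",
    "I'm sorry to hear that. Here are some steps you can try.",
]

def to_length_pattern(response: str) -> list[int]:
    # Stand-in for "observe the encrypted stream of this known response"
    return [len(word) for word in response.split()]

# Pairs of (leaked length pattern, known plaintext) used as training data
training_pairs = [(to_length_pattern(r), r) for r in public_responses]

for pattern, text in training_pairs:
    print(pattern, "->", text)
```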

The researchers have shared a demo video of the entire process, from Traffic Interception to Response Inference, on YouTube.

So, your ChatGPT chats aren’t as safe as you thought, and hackers may be able to read them! And even though the side-channel exploit wasn’t present in Google’s chatbot, researchers have previously hacked into Gemini AI and Cloud Console.

Besides, cyberattacks have risen significantly since AI went mainstream. A recent Microsoft report suggests that 87% of companies in the UK are at risk of AI-powered cyberattacks.

Microsoft President Brad Smith has voiced his own concerns about AI and called for immediate regulation!

What do you think about the rise of AI? Share with our readers in the comments section.
