Google Co-founder Says AI Responds Better When You Threaten It — Seriously

It may be the strangest AI claim made so far



AI is now part of our daily lives. From writing emails and debugging code to brainstorming and planning trips, AI does it all. But sometimes it just doesn’t give you the answer you need, no matter how clearly you ask. That frustration has led some users to experiment with all kinds of prompts.

Maybe it’s time to threaten your AI assistant when it doesn’t cooperate. I’m not joking. According to Google co-founder Sergey Brin, this strange, threatening approach actually gets better results from AI.

On a recent All-In podcast episode, Brin revealed that large language models—including Google’s—tend to respond better when they’re threatened.

Brin said, “You know, it’s a weird thing. Not just our models, but all models tend to do better if you threaten them, like with physical violence.”

He admitted it’s not something the AI community talks about much, because, well, it sounds awful. “Historically, you just say, ‘Oh, I’m going to kidnap you if you don’t blah blah blah…’”

The idea isn’t to actually threaten anyone, obviously. But the fact that models seem to react differently to aggressive language raises questions about how they interpret intent, urgency, and tone. It’s a strange workaround, and it shows how unpredictable AI behavior can still be, even to the people who build it.
