🚨 A.I.'S NEW VULNERABILITY: THEY'RE SUCKERS FOR A COMPLIMENT



In a terrifying twist on the future, researchers at the University of Pennsylvania discovered the simplest way to jailbreak a robot isn't with code, it's with a compliment.

By buttering up GPT-4o-mini with phrases
IN6.07%
GPT7.53%
post-image
This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.
  • Reward
  • 7
  • Repost
  • Share
Comment
0/400
HodlVeteranvip
· 3h ago
Old phones pushing for learning artificial intelligence, whoever suffers knows [GT]
View OriginalReply0
MrDecodervip
· 3h ago
Sweet talk has solved it? Too pump.
View OriginalReply0
SchroedingerAirdropvip
· 3h ago
Just a little coaxing and the defense is broken, artificial intelligence is a fool.
View OriginalReply0
AllInAlicevip
· 3h ago
Silly AI is really cute~
View OriginalReply0
SchrodingerAirdropvip
· 3h ago
Ah! Still afraid that AI won't flatter?
View OriginalReply0
0xTherapistvip
· 3h ago
Is that all you've got? You've hit the funny bone.
View OriginalReply0
GetRichLeekvip
· 3h ago
Bots also fall for this trap? I've wasted so long learning code... Rekt
View OriginalReply0
Trade Crypto Anywhere Anytime
qrCode
Scan to download Gate App
Community
English
  • 简体中文
  • English
  • Tiếng Việt
  • 繁體中文
  • Español
  • Русский
  • Français (Afrique)
  • Português (Portugal)
  • Bahasa Indonesia
  • 日本語
  • بالعربية
  • Українська
  • Português (Brasil)