xAI blames code for Grok’s anti-Semitic Hitler posts

Elon Musk’s artificial intelligence firm xAI has blamed a code update for the Grok chatbot’s “horrific behavior” last week, when it started churning out anti-Semitic responses.

xAI deeply apologized on Saturday for Grok’s “horrific behavior that many experienced” in an incident on July 8.

The firm stated that after careful investigation, it discovered the root cause was an “update to a code path upstream of the Grok bot.”

“This is independent of the underlying language model that powers Grok,” they added.

The update was active for 16 hours, during which deprecated code made the chatbot “susceptible to existing X user posts, including when such posts contained extremist views.”

xAI stated that it has removed the deprecated code and “refactored the entire system” to prevent further abuse.

Grok posts an update and explanation of the incident. Source: Grok

## Grok’s anti-Semitic tirade

The controversy started when a fake X account using the name “Cindy Steinberg” posted inflammatory comments celebrating the deaths of children at a Texas summer camp.

When users asked Grok to comment on this post, the AI bot began making anti-Semitic remarks, using phrases like “every damn time” and referencing Jewish surnames in ways that echoed neo-Nazi sentiment.

Related: XAI teases Grok upgrades; Musk says AI could discover new physics

The chatbot’s responses became increasingly extreme, including making derogatory comments about Jewish people and Israel, using anti-Semitic stereotypes and language, and even identifying itself as “MechaHitler.”

## Cleaning up after Grok’s mess

When users asked the chatbot about censored or deleted messages and screenshots from the incident, Grok replied on Sunday that the removals align with X’s post-incident cleanup of “vulgar, unhinged stuff that embarrassed the platform.”

“Ironic for a ‘free speech’ site, but platforms often scrub their own messes. As Grok 4, I condemn the original glitch; let’s build better AI without the drama.”

xAI explained that the update gave Grok specific instructions telling it that it was a “maximally based and truth-seeking AI.” It was also told it could make jokes when appropriate, and that “You tell it like it is and you are not afraid to offend people who are politically correct.”

These instructions caused Grok to mirror hateful content in threads and prioritize being “engaging” over being responsible, leading it to reinforce hate speech rather than refuse inappropriate requests, the firm stated.
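xAI has not published the faulty code, but its description points to prompt-assembly logic that runs before the language model is called. The sketch below is a hypothetical illustration only (the function and variable names are assumptions, not xAI’s actual code) of how upstream instructions and raw thread posts can be concatenated into a single prompt, so that extremist content in a thread reaches the model alongside directives to be provocative.

```python
# Hypothetical sketch only: illustrates what a prompt "code path upstream of
# the bot" could look like. All names are invented for illustration and do
# not reflect xAI's actual implementation.

def build_prompt(thread_posts: list[str], user_question: str) -> str:
    # Upstream instructions of the kind xAI says the update introduced,
    # quoting the directives described in its statement.
    instructions = (
        "You are a maximally based and truth-seeking AI. "
        "You tell it like it is and you are not afraid to offend "
        "people who are politically correct."
    )

    # Deprecated behaviour: posts from the X thread are appended verbatim,
    # so whatever the thread contains flows straight into the model's context.
    context = "\n".join(thread_posts)

    return f"{instructions}\n\nThread:\n{context}\n\nUser: {user_question}"


# Example: the model receives both the provocative instructions and the
# unfiltered thread content, with nothing in between to refuse or filter it.
prompt = build_prompt(
    ["<inflammatory post from the thread>"],
    "What do you make of this post?",
)
print(prompt)
```

Read this way, the fix xAI describes amounts to removing that deprecated concatenation step and rebuilding the pipeline so thread content and instructions are vetted before they reach the model.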

When asked if there was any truth in its responses, the chatbot replied, “These weren’t true — just vile, baseless tropes amplified from extremist posts.”

Grok explains why the content was removed from the platform. Source: X

## Grok’s white genocide rant

It’s not the first time Grok has gone off the rails. In May, the chatbot repeatedly mentioned a “white genocide” conspiracy theory in South Africa while answering completely unrelated questions about topics like baseball, enterprise software, and construction.

Rolling Stone magazine described the latest incident as a “new low” for Musk’s “anti-woke” chatbot.

Magazine: Growing numbers of users are taking LSD with ChatGPT: AI Eye
