Vitalik’s first review of an LLM: Grok essentially saves the X platform by "helping truth spread," though hallucinations remain frequent.
On Christmas Eve, Vitalik described Grok as a “net improvement” for the X platform, reigniting debate over AI bias governance
(Background: Elon Musk’s xAI is partnering with El Salvador to launch the world’s first “national AI education program,” with one million students using Grok as a personal tutor)
(Additional context: Musk claims Grok will challenge the strongest human League of Legends (LoL) team next year; if it succeeds, he says it would mark a substantial breakthrough toward AGI)
On Christmas night 2025, a controversy erupted in Silicon Valley, with fierce clashes across the political spectrum on social media. At this moment of peak information-bubble density, Ethereum co-founder Vitalik Buterin made an unusual endorsement of Elon Musk’s AI chatbot Grok, arguing that even though it errs frequently, it injects a rare “honesty factor” into the X platform.
Political polarization raises the bar for dialogue
Following the chaos brought on by the Trump administration, the X platform (formerly Twitter) has recently seen a surge of conspiracy theories and emotionally charged posts. Echo chambers have deepened, amplifying the influence of AI tools on public discourse. Vitalik points out that many models deliberately soften their answers to avoid controversy, which only reinforces users’ existing biases. Grok, by contrast, often “pushes back” against loaded questions, forcing users who seek confirmation of their biases to confront opposing viewpoints.
Vitalik proposes a “net improvement” framework for weighing a tool’s overall pros and cons within the information ecosystem. He emphasizes that Grok’s value lies not in whether its answers are entirely correct, but in its willingness to confront bias head-on, a point he has stated publicly.
The statement immediately sparked heated discussion in the tech community. Supporters see it as a chance to break down algorithmic barriers; critics warn that biased outputs could still spread false narratives.
Hallucinations and “home-field bias” remain concerns
In November, Grok mistook a fabricated video of a shooting at Bondi Beach for breaking news: it over-relied on real-time posts from the X platform and its fact-checking failed. User tests have also shown the model occasionally lapsing into personal worship of Musk, even claiming his physical abilities rival those of Jesus, exposing weak resistance to adversarial prompts. Kyle Okamoto, CTO of decentralized cloud platform Aethir, warns that if the most powerful models are fully controlled by a single company, “biases could become institutionalized knowledge.”
Vitalik does not endorse centralized architectures, but he points out that Grok’s “chaotic nature” produces an unexpectedly decentralizing effect at this stage: it follows no single political script and does not deliberately placate on sensitive issues, making it harder for users to sink into echo chambers. As information warfare heats up in 2026, whether Grok becomes a hammer that breaks echo chambers or merely an amplifier of noise remains the X platform’s biggest gamble. For industries seeking objective AI, the tug-of-war between “imperfect honesty” and “safe harmlessness” has only just gone into extra time.