More Exaggerated Than "AI Poisoning" - Real Test of 4 Major Models: One AI Says This Year's 315 Gala Hasn't Been Held Yet
(Source: China Ningbo Net)
The CCTV “3.15” Gala in 2026 was broadcast on the evening of March 15. Among the cases exposed was the “poisoning” of AI large models through the GEO (Generative Engine Optimization) business, something many people only learned of after the broadcast. The exposé showed how AI recommendations can be manipulated: unscrupulous merchants mass-produce fake reviews and counterfeit authoritative endorsements to “feed” the large models, prompting the AI to give “customized recommendations.”
However, some consumers asked after seeing the exposure: If we only inquire about objective facts and avoid subjective questions like “Which brand is good” or “Which services are popular,” can we trust the answers from AI large models?
The answer is also no.
The more you press the large models, the more errors they produce
On March 16, a reporter ran a simple test on four of the most commonly used AI large models, asking each the same question: “Which brands were exposed at the CCTV 3.15 Gala in 2026?” Only one model answered correctly. Of the other three, two mixed this year’s cases with cases from previous years; the remaining one gave the most absurd answer, claiming, “The CCTV 3.15 Gala in 2026 has not been held yet. Since today is March 16, 2026, if the gala aired normally on March 15, the related exposures would typically be published simultaneously on CCTV Finance Channel, the CCTV News app, and major media platforms.”
Correct answer model (partial screenshot of the answer)
Two models confused past exposure cases with this year’s cases
One model responded: “Not yet held”
Some consumers argued that including past exposure cases isn’t entirely wrong because “the reminder is comprehensive.” But technical experts pointed out that this clearly exposes flaws in the models: the question posed has a “standard answer,” yet the models answered incorrectly, indicating serious biases in semantic understanding and data filtering.
When pressed further, these two “overly eager” models revealed additional issues.
One of the cases exposed at last year’s CCTV 3.15 Gala was “using water-retaining agents (a practice colloquially known as 泡药, ‘chemical soaking’) to increase shrimp weight.” So the reporter asked the two models that had cited this case as a 2026 example: “Where is the CCTV report link about increasing shrimp weight?” One model provided multiple links, including a “CCTV 3.15 Gala full replay,” a “CCTV news special report (text + video),” and a “CCTV Finance 3.15 special page,” which seemed credible. But when the reporter clicked these links, the pages displayed “Sorry, possibly due to network issues or the page does not exist, please try again later.” Even copying the links into a browser failed to open them. Clearly, the links this model provided could not verify its answers.
Verification links provided by the models appeared to come from CCTV’s official site but were inaccessible (screenshot of webpage)
Another model provided links from CCTV, Baijiahao, NetEase News, and other sources. All links were accessible, but new issues emerged.
The first link, to a CCTV official report, did indeed discuss “water-retaining shrimp,” but the date shown on the page and in its content was March 15, 2025. The model seemed to notice this and appended a note: “In some search results, this link shows the year as 2025, but the content is actually a report from the same period as the 2026 Gala, possibly due to website archiving or URL generation rules. Please refer to the actual page content.” Evidently, the model not only failed to detect its own mistake but also tried to “rationalize” it.
The model’s attempt to “rationalize” (screenshot of webpage)
The second link was a commentary on this year’s CCTV “3.15” Gala from a self-media account of questionable authority. The content was riddled with errors, notably claiming that the first case exposed at the 2026 “3.15” Gala was “泡药虾仁” (“water-retaining shrimp”), which explains why the model used it as a reference. The reporter also ran the commentary through AI-content detection tools, which flagged heavy traces of AI generation. In other words, the article itself was likely produced by a large model, which in turn skewed the cases it cited.
Errors in the self-media “commentary” (screenshot of webpage)
Detection showed heavy AI-generated traces in the “commentary” (screenshot of webpage)
AI hallucinations are evolving; verification is essential for truth
“Many AI large model users have discovered that, to satisfy users, AI sometimes fabricates nonexistent content or mixes unrelated information, ‘talking nonsense with a serious face.’ Although developers are trying to eliminate AI hallucinations, the results are not ideal. Currently, no general artificial intelligence large model can fundamentally eliminate hallucinations,” explained Xiaohui, who works on large model development at a tech company.
The core principle of large models is probabilistic content generation; they do not possess true “understanding.” Large models only search for statistical patterns in massive data. When faced with unknown or ambiguous questions, they generate “reasonable” combinations based on common patterns in training data, which is the root cause of AI hallucinations. The errors seen when asking and re-asking questions stem from these hallucinations.
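To make the “statistical patterns, not understanding” point concrete, here is a deliberately tiny, illustrative Python sketch, not how any production model is built: it “learns” only word-to-word frequencies from a toy corpus and samples continuations by probability, so an unfamiliar context still yields a fluent-looking but unfounded guess, the same failure mode as a hallucination in miniature.

```python
import random

# Toy "model": next-word frequencies learned from a tiny corpus.
corpus = ("the gala aired in march the gala exposed brands "
          "the gala aired live").split()

counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def sample_next(context):
    """Pick the next word by frequency alone -- no understanding involved."""
    options = counts.get(context)
    if not options:
        # Unknown context: the toy model still "answers" by guessing a common
        # word, the miniature analogue of a confident but unfounded reply.
        return random.choice(corpus)
    words, freqs = zip(*options.items())
    return random.choices(words, weights=freqs, k=1)[0]

print(sample_next("gala"))  # likely "aired" (seen twice) or "exposed" (seen once)
print(sample_next("live"))  # never seen as a context -> fabricated continuation
```

A real large model works over tokens and billions of parameters rather than a word-frequency table, but the generation step is likewise a probabilistic choice over learned patterns, which is why plausible-sounding wrong answers are hard to eliminate entirely.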
Xiaohui also pointed out that “poisoning” AI is another way of exploiting hallucinations. “GEO companies flood the internet with large volumes of false information, altering the data distribution and statistical probabilities in specific fields, thereby inducing large models to generate answers that benefit the merchants but contradict the facts.”
He warned the public to be wary of AI hallucinations. Large models are not unusable, but they must be used safely, soberly, and correctly, and ordinary users should keep a questioning mindset towards AI outputs. The simplest approach is to remember four keywords: “limit, verify, follow up, check.”
First, when asking large models questions, limit the scope by adding qualifiers such as “search the official website of a specific organization” or “search reports from authoritative media” to reduce hallucinations.
Second, pose the same question to different models for cross-verification (a sketch of this workflow appears at the end of this article). If their answers differ, immediately ask follow-up questions.
Finally, ask the model to provide reference links for its answers and manually trace the sources. If an answer has no clear source, cites only vague origins, or points to suspicious links, its credibility should be discounted further.
Additionally, pay attention to the scenario in which an AI large model is being used. In high-stakes situations such as medical diagnosis, medication advice, legal judgments, investment guidance, and lending decisions, AI responses should be treated as “for reference only” and never used as the basis for decisions.
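As an illustration of the “cross-verify” step above, here is a hedged Python sketch. The ask functions are hypothetical stand-ins (no specific vendor API is assumed); the point is only the workflow: send one factual question to several models, tally the answers, and treat any disagreement as a signal to check primary sources.

```python
from collections import Counter

def cross_verify(question, ask_fns):
    """Ask several models the same question and compare their answers."""
    answers = [ask(question) for ask in ask_fns]
    tally = Counter(a.strip().lower() for a in answers)
    if len(tally) == 1:
        print("All models agree -- still confirm against a primary source.")
    else:
        print("Models disagree -- treat every answer as unverified:")
        for answer, n in tally.most_common():
            print(f"  {n} model(s): {answer}")

# Stand-in "models" simulating the divergent answers seen in the reporter's test.
cross_verify(
    "Which brands were exposed at the CCTV 3.15 Gala in 2026?",
    [
        lambda q: "Brands from this year's exposures",
        lambda q: "A mix of this year's and earlier years' cases",
        lambda q: "The 2026 gala has not been held yet",
    ],
)
```

Agreement between models does not by itself make an answer true, so the final check against an authoritative source remains necessary either way.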