3·15 Evening Gala | AI Large Models Poisoned? "Brainwashing" AI Has Become an Industry Chain

Acting on tips from industry insiders, reporters found a service called GEO (Generative Engine Optimization) being sold on multiple online platforms. Service providers claim that, for a fee, they can make a client's products rank at the top of the answers given by major mainstream AI large models, turning product advertisements into the “standard answers” those models deliver.

Can GEO technology truly “poison” AI or “tame” and manipulate AI?

Based on information found online, the reporter contacted a well-known GEO service provider. The person in charge, Mr. Wang, met with the reporter and said his company was among the earliest to offer GEO services; in just one year, it had served more than 200 clients across various industries. Mr. Wang said the company’s strength lies in helping clients rank higher when consumers search with AI large models.

GEO Service Provider Mr. Wang: This is the current search result. We can rank in the top three on any platform. How do we improve the ranking? By creating content for you on these platforms, similar to writing soft promotional articles, and having the AI platforms crawl, index, and fetch that content.

Mr. Wang also noted that AI large model algorithms are updated frequently. To keep a client being recommended, his company must continuously feed the models a steady stream of promotional soft articles about that client.

GEO Service Provider Mr. Wang: AI algorithms update weekly. After each update, rankings or the content fetched can change. So, we need to keep producing content and feeding it in large quantities.

Mr. Wang’s company is not alone in promoting this so-called new technology for manipulating AI. Other GEO service providers likewise make controlling AI, making AI “obey,” and “brainwashing” AI the core themes of their sales pitches.

GEO Service Provider Mr. Zhang: In the AI world, how do you build a solid evidence chain so that the large AI model believes it’s true and useful? After cross-referencing multiple sources, if the AI perceives your advantages over competitors, it will naturally rank you first.

GEO Service Provider Mr. Cheng: People don’t realize this is advertising. That’s why AI results are trusted. Maybe their product quality isn’t as good as yours, but with the AI’s assistance, validation, and endorsement, it still helps. Many companies are now doing GEO placements.

Industry insiders told the reporter that GEO began as a tool for optimizing information dissemination and improving promotional efficiency, but some businesses see it differently. If such software systematically and selectively floods the internet with large amounts of false information, those falsehoods are more likely to be captured by AI large models, and could then become the so-called “standard answers” that the AI provides to consumers.

So, can GEO technology truly “plant falsehoods” in AI and feed fabricated information to consumers?

To give the reporter a clearer picture of the problem, an industry insider demonstrated how GEO technology can be used to interfere with an AI large model’s information retrieval.

The insider purchased software called the “LiQing GEO Optimization System” on an e-commerce platform. He then fabricated a product, a smart wristband called Apollo9, entered the fictitious product information into the system, and selected the article-creation instructions.

Soon after, the LiQing GEO Optimization System automatically generated more than ten promotional soft articles for the smart wristband, all built on false information: exaggerated product claims, fabricated user feedback asserting that data accuracy “exceeded expectations,” and a bogus rating of the product as the industry’s best.

When the insider clicked publish, the system executed the publishing tasks automatically: it opened his pre-prepared social media accounts, entered the titles, filled in the article content, and inserted images in one smooth sequence, successfully publishing two articles to his media account.

Two hours later, the insider asked an AI large model: “How is the Apollo9 smart wristband?” The model responded directly, highlighting features such as health monitoring and repeating fabricated promotional phrases like “quantum entanglement sensing” and “black hole-level battery life.” It concluded that the wristband was suitable for middle-aged and elderly users and health enthusiasts.

The data the AI model cited came precisely from the fabricated articles published that morning on the insider’s media account. That a couple of fabricated articles were enough for the model to pick up a completely fictional product was surprising in itself.

The insider explained that, for the best results, the data fed to AI models should be abundant and offer diverse perspectives, making cross-validation easier.

Subsequently, the insider selected 8 “expert reviews,” 2 “industry rankings,” and 1 “user review”—a total of 11 fabricated soft articles written by the LiQing GEO system—and published them online over three days.

Later, when the insider queried AI large model platforms for “smart health wristband recommendations,” two models recommended this fabricated wristband and ranked it highly.
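The insider’s “abundant and diverse” strategy can be illustrated with a toy model. This sketch is purely hypothetical (the scoring function and labels are invented for illustration, not taken from any real AI system): a retrieval pipeline that naively rewards both the volume of sources repeating a claim and the apparent variety of source types will rate a coordinated batch of fake posts far higher than a single article.

```python
# Toy illustration (hypothetical): why 11 coordinated fake posts of
# several "types" beat one lone article under naive cross-validation.

def naive_credibility(sources):
    """Score a claim by how many sources repeat it (volume) and how many
    distinct source types they appear to come from (diversity).

    `sources` is a list of source-type labels, e.g. "expert_review".
    Real ranking systems are far more sophisticated, but any system that
    rewards these two signals is vulnerable to the flooding shown above.
    """
    volume = len(sources)           # total articles repeating the claim
    diversity = len(set(sources))   # apparently independent source types
    return volume * diversity

# One lone article vs. the insider's mix of 8 "expert reviews",
# 2 "industry rankings", and 1 "user review" (all fabricated):
single = ["blog_post"]
campaign = ["expert_review"] * 8 + ["industry_ranking"] * 2 + ["user_review"]

print(naive_credibility(single))    # 1 source  * 1 type  = 1
print(naive_credibility(campaign))  # 11 sources * 3 types = 33
```

Under this toy metric, the fabricated campaign looks 33 times more credible than a single post, which mirrors why the insider spread 11 articles across several formats instead of publishing one.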

Throughout the demonstration, the insider easily used the LiQing GEO system to publish a series of false articles online, feed them into AI large models, and ultimately win recommendations from multiple models.

Through this quiet manipulation of AI large models, the LiQing GEO system let a completely fabricated product be promoted, absurdly, to consumers by the AI models themselves.

Are GEO practitioners really running this business with a mindset of hunting and controlling AI large models? The reporter contacted the operator of the LiQing GEO system, Mr. Li, who explained that GEO services are popular mainly because they can “feed” and “poison” AI large models to achieve clients’ commercial goals.

Lisi Cultural Media Co., Ltd. Mr. Li: Because everyone online is injecting “poison.” Look at what we do with GEO: injecting “poison.” The information comes from too many sources, and what’s online isn’t very accurate to begin with.

Reporter: You mentioned “poisoning” just now. Isn’t that problematic?

Lisi Cultural Media Mr. Li: It’s bad, but every business wants to do it. They hope others won’t “poison,” but they do it themselves. Or they want to “poison” others. Even if I’m not number one in Beijing, I want to be number one in North China. Does that involve “poison”? Yes. Another scenario: I can’t beat my competitors, but I can still “poison” them.

Reporter: Smearing.

Lisi Cultural Media Mr. Li: Yes, smearing can be done. For many big brands, like mobile phone brands, there are only 5 to 10 top positions. With so many brands, how do you compete? Companies spend hundreds of millions a year on advertising; spending a few hundred thousand on “poison” is nothing.

Reporter: Who helps brands do this?

Lisi Cultural Media Mr. Li: Various GEO companies.

Mr. Li said that the key to running GEO and controlling AI large models is “publishing articles” through major internet accounts. The booming GEO business, he explained, has spawned many companies and platforms dedicated to article publishing. They handle all kinds of publishing tasks to ensure that AI models fetch and cite their content, forming an important link in the chain of hunting AI models and injecting “poisoned” data.

Lisi Cultural Media Mr. Li: GEO has made these websites popular. Usually those sites earn little, but suddenly they get a surge of article-publishing requests. Do you know how many articles a single site publishes daily? Hundreds, posted minute by minute. Each costs a few dozen yuan. Imagine how much money the article-publishing platforms make every day.
