WAIC 2025: AI will keep growing; keeping it from becoming the "ultimate villain" is humanity's dilemma.


Author: Jingyu

The World Artificial Intelligence Conference (WAIC) in Shanghai has reignited the spotlight!

On July 26, 2025, this year’s WAIC attracted over 1,200 guests from more than 30 countries and regions, including 12 recipients of top awards such as the Turing Award and Nobel Prize, over 80 academicians from both China and abroad, and representatives from several leading international laboratories.

At the opening ceremony on July 26, Geoffrey Hinton, the 2024 Nobel Prize laureate and a towering figure in the AI field, delivered a speech. The scholar, who has long held to the "AI threat theory," reiterated the dangers of letting AI develop unchecked and called for the establishment of a global "safety net" for AI research.

Additionally, Yan Junjie, founder and CEO of the fast-rising domestic AI startup MiniMax, said in his speech that there is almost no limit to how much more powerful AI can become, and that with training costs falling, "the future AI will be more inclusive."

At the same time, Peng Zhihui ("Zhihui Jun"), co-founder and chief technology officer of the domestic robotics startup Zhiyuan Robotics, which had just been rumored to be pursuing a backdoor listing, took the stage with the company's robot Lingxi X2 to perform a segment of "crosstalk," giving the "partnership" between robots and humans a more tangible form.

Both lions and babies need to be “supervised”.

Even though his students and followers make up half of Silicon Valley's AI circle, the "grandmaster" of the AI world, Geoffrey Hinton, has consistently sung a different tune from his industry peers, steadfastly insisting on the "AI threat theory."

At the WAIC conference on July 26, Hinton reiterated his concerns about the rapid development of AI in his speech.

Briefly tracing AI's development over the past 30 years up to the current large-model phase, Hinton argued that the way large models understand language today is similar to the way humans do.

"Humans may very well be large language models themselves, and, just like large language models, they can hallucinate and produce plenty of hallucinated language." This insight from Hinton is quite in line with the so-called "human-machine" meme prevalent on social media today.

However, compared with the human "carbon-based brain," the "silicon-based brain" that AI runs on has inherent advantages in storage, replication, and "instant transmission." That is why, as the technology develops, it is widely believed in the industry that the emergence of AI smarter than humans is only a matter of time. And as a form of existence, these AI "intelligent agents" will inevitably seek "survival" and "control."

Hinton believes that current AI may be like a "three-year-old child," still easily managed by humans, but the future is uncertain. He also compares today's AI to a young lion: when raising a lion, there are only two options, "either train it not to attack you, or eliminate it."

Given the current global momentum of AI development, no country can truly "eliminate AI" by halting the technology. The only path left is for the world to build an AI safety body that trains AI to "do good."

“How to train an AI that does not want to dominate humanity is the ultimate question facing humanity,” Hinton said at the end.

Some seek regulation, while others seek “deregulation”.

Remarks like these from the master are thought-provoking at WAIC, but in North America, where Hinton is based, they sound somewhat out of step with the times. Companies such as OpenAI and Anthropic, tied to Hinton's students, have already reached valuations in the hundreds of billions of dollars, to say nothing of the heavy bets Silicon Valley venture capitalists have placed on AI startups over the past two years.

One prominent sign is that, as AI companies' lobbying spending in Washington has climbed, U.S. regulators have formally loosened their grip on AI development.

Likewise, on July 23 local time, U.S. President Donald Trump released the AI Action Plan. In the document, U.S. regulators set out how to secure America's leading position in AI through levers such as data, standards, and talent:

Increase R&D Investment (Invest in R&D): Significantly increase the federal government's long-term investment in foundational and applied AI research, particularly in areas such as next-generation AI, AI safety, and trustworthy AI.

Unleash Data Resources (Unleash AI Data Resources): Promote the safe opening of the massive datasets held by the federal government to AI researchers and the public, providing high-quality "fuel" for model training.

Set AI Technical Standards: With the government taking the lead and working with industry and academia, establish global benchmarks, standards, and norms for AI technology to ensure AI systems are safe, reliable, explainable, and fair.

Cultivate AI Talent (Cultivate an AI-Ready Workforce): Reform STEM education, promote apprenticeship and retraining programs, attract and retain top global AI talent, and build a talent pipeline for the AI economy.

Strengthen International Collaboration: Form AI alliances with allies and partner countries, jointly set rules to counter the abuse of AI by "authoritarian countries," and promote open and democratic AI applications.

Protect Key Technologies: Strengthen protection of critical U.S. AI technologies, algorithms, and hardware (especially semiconductors) through measures such as export controls and investment reviews to prevent them from flowing to strategic competitors.

It is clear that the United States is giving AI development a green light at home while using geopolitics to keep competitors from lining up at the same starting line.

The robots' "era of experience" has arrived.

Humanoid robots were, without question, the most eye-catching part of this year's WAIC.

In the main forum session, Zhiyuan Robotics co-founder Peng Zhihui ("Zhihui Jun") brought the company's Lingxi X2 on stage for a "human-machine crosstalk" routine.

Crosstalk rests on the four skills of speaking, imitating, teasing, and singing, while human-robot interaction hinges on "understanding above all." Lingxi X2's appearance was meant to show that collaboration between humans and robots has to be built on a foundation of "consensus." But how can humans and robots establish that consensus and unlock the key to human-machine collaboration? Zhihui Jun said this is the track the company intends to dig into, and one it hopes to walk together with more peers.

To that end, Zhihui Jun also announced the "Zhiyuan Lingqu OS" open-source plan on site, hoping to work with more partners to consolidate today's fragmented robotics software ecosystem and drive breakthroughs in new embodied-intelligence technologies.

In what reads as an endorsement of embodied intelligence, Richard Sutton, the 2024 Turing Award winner and professor of computer science at the University of Alberta, joined the conference by video link. He believes the data currently used to train large models is nearly exhausted. But there is no need for despair: this signals that AI's next era, the "Era of Experience," is about to arrive.

Unlike in the past, when AI was trained on "static" data, future AI will gain knowledge and improve its abilities by "experiencing" the external environment and the objects in it, much like a human infant. That goal is still some way off, but many robotics startups are indeed already having their machines train and learn continuously in the "physical world."

This is also why top scholars in the industry, including Fei-Fei Li, have transitioned from “AI” to “Physical AI,” emphasizing that for artificial intelligence to truly enter the real world, it must understand and learn about the entire world from a three-dimensional perspective.
