Preventing "Poisoning" at the Source and Building a Solid Foundation for AI Development


Securities Times Reporter Wu Shun

As “Lobster” (OpenClaw) sweeps across the internet and raises concerns about its security, new reports reveal that generative engine optimization (GEO) is “poisoning” large AI models. Amid rapid advances in AI technology, risks are emerging in ever more covert and destructive forms. How to install “safety valves” and build sturdy security fences around this rapid development has become a defining question of the AI era.

AI risks fundamentally stem from an imbalance among technological development, ethical governance, and regulatory systems. On one hand, some AI companies prioritize “speed over safety,” leaving openings for malicious actors amid the technological surge. For example, the “Lobster” open-source community lacks strict review mechanisms for skill packages, making it easy for malicious plugins to slip in, while the sources and review processes of training data for large AI models remain opaque, creating opportunities for “poisoning.” On the other hand, traditional regulatory models lag behind the pace of technological iteration, often reacting only after risks have materialized, which leaves a governance vacuum in which capabilities arrive before the rules to govern them.

The first step, therefore, is to define safety boundaries for technological innovation and lay a solid foundation for sustainable development. Large AI model companies should be held to higher, more forward-looking risk-prevention standards, such as ensuring that training data is explainable and traceable and setting stricter review thresholds, so that “poisoning” is blocked at the source.

Second, a “multi-party collaboration” governance system should be built. Regulators need to accelerate the improvement of laws and regulations governing digital technology, clarify the responsibilities of platforms and users, and stiffen penalties for “poisoning” and the malicious use of open-source tools; industry associations should formulate unified ethical standards and safety norms to guide corporate self-discipline; research institutions and experts should deepen their study of technical risks to support regulation; and the public should improve its digital literacy and use technological tools rationally.

Finally, because technological development never stops, the forms that risk takes keep evolving as well, so rapid risk-monitoring and early-warning mechanisms are essential. The safety alerts about “Lobster” issued by various departments are a typical example of such proactive measures, heading off problems before they occur.

However, tightening the safety fences around rapid technological development does not mean stifling innovation. Only by balancing innovation and safety can we ensure that technological progress remains on a compliant and benevolent track, truly becoming a driver of social progress rather than a threat to public interests.

(Edited by: Wang Zhiqiang HF013)

【Disclaimer】This article reflects only the author’s personal views and has no relation to Hexun.com. Hexun.com remains neutral regarding the statements and opinions expressed in this article and does not provide any explicit or implicit guarantees regarding the accuracy, reliability, or completeness of the content. Readers are advised to use it as a reference and bear all responsibilities themselves. Email: news_center@staff.hexun.com
