Prevent "poisoning" at the source and strengthen the foundation for AI development


Securities Times Reporter Wu Shun

As "Lobster" (OpenClaw) sweeps across the internet, raising concerns about its security, new reports reveal that generative engine optimization (GEO) is being used to "poison" large AI models. Amid rapid advances in AI, risks are emerging in increasingly covert and destructive forms. How to install "safety valves" on this fast-moving development has become a critical question of the AI era.

AI risks fundamentally stem from an imbalance among technological development, ethical governance, and regulatory systems. On one hand, some AI companies prioritize speed over safety, leaving openings for malicious actors during rapid innovation. For example, the open-source community around "Lobster" lacks strict review mechanisms for skill packages, making it easy for malicious plugins to infiltrate, and the sources and review processes of training data for large AI models are opaque, creating opportunities for "poisoning." On the other hand, traditional regulatory approaches lag behind technological iteration, often responding passively only after risks emerge, creating a vacuum in which capability has arrived but governance has not.

Therefore, the first task is to define safety boundaries for technological innovation and build a solid foundation for sustainable development. Preventing "poisoning," for instance, requires higher, forward-looking risk-prevention standards for large AI model companies, such as ensuring that training data is explainable and traceable and establishing stricter review thresholds, so that contamination is blocked at the source.

Second, a “multi-party collaboration” governance system should be constructed. Relevant departments need to accelerate the improvement of laws and regulations related to digital technology, clarifying responsibilities for platforms and users, and increasing penalties for “poisoning” and malicious use of open-source tools; industry associations should develop unified ethical standards and safety protocols to guide corporate self-discipline; research institutions and experts must strengthen research on technical risks to support regulation; the public should improve digital literacy and use technological tools rationally.

Finally, technological development is open-ended and risk takes ever-changing forms, so rapid risk monitoring and early-warning mechanisms must be established. The safety alerts about "Lobster" issued by various government departments are a typical example of such proactive prevention, addressing problems before they occur.

However, tightening the safety measures around rapid technological development does not mean stifling innovation. Only by balancing innovation and safety can technology stay on a compliant and benevolent track, truly driving social progress rather than becoming a hidden threat to the public interest.
