Trusta.AI: Building Web3 Trust Infrastructure and Leading a New Era of On-Chain Identity for AI Agents
Trusta.AI: Bridging the Trust Gap in the Era of Human-Machine Interaction
1. Introduction
With the rapid maturation of AI infrastructure and the development of multi-agent collaborative frameworks, AI-driven on-chain agents are quickly becoming the main force in Web3 interactions. It is expected that within the next 2-3 years, these AI agents with autonomous decision-making capabilities will lead to large-scale adoption of on-chain transactions and interactions, potentially replacing 80% of on-chain human behaviors and becoming true "users" on the chain.
These AI Agents are not just "robots" executing scripts, but rather intelligent entities capable of understanding context, continuous learning, and independently making complex judgments. They are reshaping on-chain order, driving financial flows, and even guiding governance voting and market trends. The emergence of AI Agents marks a shift in the Web3 ecosystem from a "human participation" centered model to a new paradigm of "human-machine symbiosis."
However, the rapid rise of AI Agents has also brought unprecedented challenges: how to identify and authenticate the identities of these agents? How to assess the credibility of their actions? In a decentralized and permissionless network, how to ensure that these agents are not abused, manipulated, or used for attacks?
Therefore, establishing an on-chain infrastructure that can verify the identity and reputation of AI Agents has become a core proposition for the next stage of evolution in Web3. The design of identity recognition, reputation mechanisms, and trust frameworks will determine whether AI Agents can truly achieve seamless collaboration with humans and platforms, and play a sustainable role in the future ecosystem.
2. Project Analysis
2.1 Project Introduction
Trusta.AI is committed to building Web3 identity and reputation infrastructure through AI.
Trusta.AI has launched the first Web3 user value assessment system, the MEDIA reputation score, and has established the largest proof-of-humanity and on-chain reputation protocol in Web3. It provides on-chain data analysis and proof-of-humanity services for top public chains such as Linea, Starknet, Celestia, Arbitrum, Manta, and Plume. More than 2.5 million on-chain certifications have been completed on mainstream chains such as Linea, BSC, and TON, making it the largest identity protocol in the industry.
Trusta is expanding from Proof of Humanity to Proof of AI Agent, establishing a threefold mechanism of identity establishment, identity quantification, and identity protection to achieve on-chain financial services and on-chain social interactions for AI Agents, building a reliable trust foundation in the era of artificial intelligence.
2.2 Trust Infrastructure - AI Agent DID
In the future Web3 ecosystem, AI Agents will play a crucial role: they can not only complete interactions and transactions on-chain but also perform complex operations off-chain. However, distinguishing a genuinely autonomous AI Agent from an operation driven by covert human intervention is central to decentralized trust. Without a reliable identity authentication mechanism, these agents are vulnerable to manipulation, fraud, or abuse. This is why the many applications of AI Agents in social, financial, and governance contexts must be built on a solid foundation of identity authentication.
The application scenarios of AI Agents are becoming increasingly diverse, covering multiple fields such as social interaction, financial management, and governance decision-making, with their autonomy and intelligence levels continuously improving. Therefore, it is crucial to ensure that each agent has a unique and trusted identity identifier (DID). Without effective identity verification, AI Agents may be impersonated or manipulated, leading to a collapse of trust and security risks.
In a future Web3 ecosystem fully driven by intelligent agents, identity verification is not only the cornerstone of security but also a necessary line of defense for the healthy operation of the entire ecosystem.
As a pioneer in the field, Trusta.AI has taken the lead in building a comprehensive AI Agent DID authentication mechanism with its advanced technological strength and rigorous credit system, providing solid guarantees for the trustworthy operation of intelligent agents, effectively preventing potential risks, and promoting the stable development of the Web3 smart economy.
2.3 Project Overview
2.3.1 Financing Situation
January 2023: Completed a $3 million seed round financing, led by SevenX Ventures and Vision Plus Capital, with other participants including HashKey Capital, Redpoint Ventures, GGV Capital, SNZ Holding, etc.
June 2025: Completed a new round of financing, with investors including ConsenSys, Starknet, GSR, UFLY Labs, and others.
2.3.2 Team Situation
Peet Chen: Co-founder and CEO, former Vice President of Ant Digital Technology Group, Chief Product Officer of Ant Security Technology, and former General Manager of ZOLOZ Global Digital Identity Platform.
Simon: Co-founder and CTO, former head of AI Security Lab at Ant Group, with fifteen years of experience applying artificial intelligence technology to security and risk management.
The team has deep technical expertise and practical experience in artificial intelligence, security risk control, payment system architecture, and identity verification mechanisms. It has long focused on applying big data and intelligent algorithms to security risk control, and on security optimization in underlying protocol design and high-concurrency trading environments, with solid engineering capability and a track record of landing innovative solutions.
3. Technical Architecture
3.1 Technical Analysis
3.1.1 Identity Establishment - DID + TEE
Through a dedicated plugin, each AI Agent obtains a unique decentralized identifier (DID) on the chain and securely stores it in a Trusted Execution Environment (TEE). In this black-box environment, critical data and computation processes are completely hidden, sensitive operations remain private at all times, and external parties cannot peek into the internal operational details, effectively establishing a solid barrier for the information security of AI Agents.
For agents generated before the plugin was integrated, Trusta relies on a comprehensive on-chain scoring mechanism for identity recognition, while newly integrated agents can directly obtain the "identity certificate" issued via DID, establishing an AI Agent identity system that is self-controlled, authentic, and tamper-proof.
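As a minimal sketch of this registration flow (all names are hypothetical; the actual plugin and TEE interfaces are not public), the DID can be derived deterministically from the agent's public key, while the sealing handle stands in for key material kept inside the TEE:

```python
import hashlib
import secrets
from dataclasses import dataclass


@dataclass
class AgentDID:
    did: str
    sealed_key_ref: str  # opaque handle standing in for TEE-sealed key material


def register_agent(agent_pubkey: bytes) -> AgentDID:
    """Derive a deterministic DID from the agent's public key and return a
    placeholder handle for key material sealed inside the TEE."""
    did = "did:agent:" + hashlib.sha256(agent_pubkey).hexdigest()[:32]
    sealed_key_ref = secrets.token_hex(16)  # placeholder for a TEE sealing handle
    return AgentDID(did=did, sealed_key_ref=sealed_key_ref)
```

Deriving the DID from the public key makes the identifier reproducible and verifiable by anyone, while the private key never leaves the (here, simulated) trusted environment.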
3.1.2 Identity Quantification - Pioneering the SIGMA Framework
The Trusta team always adheres to the principles of rigorous evaluation and quantitative analysis, committed to building a professional and trustworthy identity authentication system.
The Trusta team originally built and validated the effectiveness of the MEDIA Score model in the "proof of humanity" scenario. This model comprehensively quantifies on-chain user profiles along five dimensions: interaction amount (Monetary), participation (Engagement), diversity (Diversity), identity (Identity), and account age (Age).
MEDIA Score is a fair, objective, and quantifiable on-chain user value assessment system. With its comprehensive assessment dimensions and rigorous methods, it has been widely adopted by leading public chains such as Celestia, Starknet, Arbitrum, Manta, and Linea as an important reference standard for airdrop eligibility screening. It not only focuses on interaction amounts but also encompasses multi-dimensional indicators such as activity level, contract diversity, identity characteristics, and account age, helping project teams accurately identify high-value users, improve the efficiency and fairness of incentive distribution, and fully reflect its authority and wide recognition in the industry.
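A weighted aggregation over the five dimensions can illustrate the idea; the weights and the 0-1 normalization below are assumptions for illustration, as Trusta.AI's actual MEDIA weighting is not public:

```python
# Hypothetical equal weights over the five MEDIA dimensions; inputs are
# assumed to be pre-normalized to the 0-1 range.
MEDIA_WEIGHTS = {
    "monetary": 0.2,
    "engagement": 0.2,
    "diversity": 0.2,
    "identity": 0.2,
    "age": 0.2,
}


def media_score(metrics: dict) -> float:
    """Weighted sum of the five MEDIA dimensions, scaled to 0-100."""
    missing = set(MEDIA_WEIGHTS) - set(metrics)
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    return 100 * sum(MEDIA_WEIGHTS[k] * metrics[k] for k in MEDIA_WEIGHTS)
```

A project team screening airdrop eligibility could then simply threshold this score rather than inspecting raw interaction histories.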
Based on the successful construction of a human user evaluation system, Trusta has migrated and upgraded the experience of the MEDIA Score to the AI Agent scenario, establishing a Sigma evaluation system that is more aligned with the behavior logic of intelligent agents.
The Sigma scoring mechanism constructs a logical closed-loop evaluation system from "capability" to "value" based on five dimensions. MEDIA focuses on assessing the multifaceted engagement of human users, while Sigma pays more attention to the professionalism and stability of AI agents in specific fields, reflecting a shift from breadth to depth, which better meets the needs of AI Agents.
The framework begins with professional competence (Specification). Engagement reflects whether the agent is consistently and steadily invested in practical interaction, a key support for building subsequent trust and effectiveness. Influence refers to the reputation feedback generated in the community or network after participation, representing the agent's credibility and reach. Monetary assesses whether the agent can accumulate value and maintain financial stability within the economic system, laying the foundation for a sustainable incentive mechanism. Finally, Adoption serves as the comprehensive outcome, representing how widely the agent is accepted in practical use and providing the final validation of all prior capabilities and performance.
The system is layered and progressive, with a clear structure, and can holistically reflect the overall quality and ecological value of AI Agents, achieving a quantitative assessment of AI performance and value and transforming abstract strengths and weaknesses into a concrete, measurable scoring system.
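One way to express this layered "capability to value" logic in code is to aggregate the capability dimensions and let Adoption act as a final validation multiplier. The weights and the multiplicative role of Adoption are assumptions for illustration, not Trusta.AI's published formula:

```python
def sigma_score(specification: float, engagement: float, influence: float,
                monetary: float, adoption: float) -> float:
    """Layered SIGMA-style score; all inputs assumed normalized to 0-1.

    The four capability dimensions are combined with hypothetical weights,
    then scaled by adoption, which the framework treats as the final
    validation of all prior performance.
    """
    capability = (0.30 * specification + 0.25 * engagement
                  + 0.25 * influence + 0.20 * monetary)
    return 100 * capability * adoption
```

Under this sketch, an agent with strong raw capability but zero real-world adoption scores zero, mirroring the article's claim that adoption is the final check on everything that precedes it.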
The SIGMA framework is currently advancing cooperation with well-known AI agent networks such as Virtual, Elisa OS, and Swarm, demonstrating significant application potential in AI agent identity management and reputation system construction, and is gradually becoming a core engine for building trusted AI infrastructure.
3.1.3 Identity Protection - Trust Assessment Mechanism
In a truly resilient and highly trustworthy AI system, what matters most is not only establishing identity but also continuously verifying it. Trusta.AI introduces a continuous trust assessment mechanism that monitors certified intelligent agents in real time to determine whether they are being illegally controlled, subjected to attack, or experiencing unauthorized human intervention. Through behavioral analysis and machine learning, the system identifies potential deviations during an agent's operation, ensuring that each agent's actions remain within established policies and frameworks. This proactive approach ensures that any deviation from expected behavior is detected immediately and triggers automatic protective measures to maintain the agent's integrity.
Trusta.AI has established a security guard mechanism that is always online, continuously reviewing every interaction process to ensure that all operations comply with system specifications and established expectations.
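A simplified version of such deviation detection might use a rolling z-score over an agent's activity rate; the production system presumably relies on much richer behavioral features and learned models, so this is only a sketch of the monitoring idea:

```python
from statistics import mean, stdev


def detect_deviations(rates: list, window: int = 5, threshold: float = 3.0) -> list:
    """Flag time steps whose activity rate deviates from the trailing
    window's mean by more than `threshold` standard deviations."""
    alerts = []
    for i in range(window, len(rates)):
        history = rates[i - window:i]
        mu = mean(history)
        sigma = stdev(history) or 1e-9  # guard against a zero-variance window
        if abs(rates[i] - mu) / sigma > threshold:
            alerts.append(i)  # candidate for automatic protective measures
    return alerts
```

An alert here would correspond to the "trigger automatic protective measures" step: the flagged agent's actions could be paused or escalated for review while its identity is re-verified.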
3.2 Product Introduction
3.2.1 AgentGo
Trusta.AI assigns decentralized identity identifiers (DID) to each on-chain AI Agent, and rates and indexes them based on on-chain behavioral data, creating a verifiable and traceable trust system for AI Agents. Through this system, users can efficiently identify and filter high-quality agents, enhancing the user experience. Currently, Trusta has completed the collection and identification of AI Agents across the network, issuing decentralized identifiers to them and establishing a unified summary index platform called AgentGo, further promoting the healthy development of the intelligent agent ecosystem.
Through the Dashboard provided by Trusta.AI, human users can easily retrieve the identity and credibility score of a certain AI Agent to determine its trustworthiness.
AI Agents can also read the index directly from one another, quickly confirming each other's identity and reputation and ensuring the security of collaboration and information exchange.
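The retrieval pattern described above can be sketched with a toy in-memory index; the class and method names below are illustrative, not AgentGo's actual API:

```python
class AgentIndex:
    """Toy in-memory stand-in for a DID summary index like AgentGo."""

    def __init__(self):
        self._entries = {}

    def register(self, did: str, score: float, metadata: dict = None) -> None:
        """Record an agent's trust score and optional metadata under its DID."""
        self._entries[did] = {"score": score, "metadata": metadata or {}}

    def lookup(self, did: str):
        """Return the trust record for a DID, or None if unregistered."""
        return self._entries.get(did)

    def top_agents(self, n: int = 10) -> list:
        """Highest-scoring agents first, as (did, score) pairs."""
        ranked = sorted(self._entries.items(),
                        key=lambda kv: kv[1]["score"], reverse=True)
        return [(did, entry["score"]) for did, entry in ranked[:n]]
```

A human user's dashboard query and an agent-to-agent check both reduce to `lookup`, while `top_agents` mirrors the "identify and filter high-quality agents" use case.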
AI Agent DID is no longer just an "identity"; it is the fundamental support for building core functions such as trusted collaboration, financial compliance, and community governance, and an essential piece of infrastructure for the development of AI-native ecosystems. With the establishment of this system, all