REX-Osprey XRP ETF (XRPR) to Launch This Week! XRPR will be the first spot ETF tracking the performance of the world’s third-largest cryptocurrency, XRP, launched by REX-Osprey (also the team behind SSK). According to Bloomberg Senior ETF Analyst Eric Balchunas,
From Verifiable AI to Composable AI—Reflections on ZKML application scenarios
Author: Guo Turbine, Mirror
In summary:

Modulus Labs implements verifiable AI by running ML computations off-chain and generating zk proofs (zkp) for them. This article revisits that solution from an application perspective, analyzing in which scenarios verification is strictly needed and in which the demand is weak, and then sketches two public-chain-based AI ecosystem models, a vertical one and a horizontal one. The main points are:

- Whether AI needs to be verifiable depends on two questions: does its output modify on-chain state, and are fairness or privacy involved?
  - When AI does not affect on-chain state, it can act purely as an advisor; people can judge the quality of the AI service from its actual results without verifying the computation process.
  - When AI does affect on-chain state but the service targets an individual and only involves that individual's own interests and privacy, the user can still judge service quality directly from the results rather than inspecting the computation process.
  - When AI output affects fairness or personal privacy among multiple parties, for example using AI to evaluate and distribute rewards to community members, using AI to optimize an AMM, or handling biometric data, people will want the AI's computation to be reviewable. This is where verifiable AI may find its PMF.
- Vertical AI application ecosystem: since the back end of verifiable AI is a smart contract, trustless interactive calls become possible between verifiable AI applications, and between AI and native dapps; this points to a potential composable AI application ecosystem.
- Horizontal AI application ecosystem: a public chain system can handle service payment, payment dispute resolution, and the matching of user needs with service content on behalf of AI service providers, giving users a decentralized AI service experience with a higher degree of freedom.
1. Introduction and application cases of Modulus Labs
1.1 Introduction and core solutions
Modulus Labs is an "on-chain AI" company. It believes AI can significantly enhance the capabilities of smart contracts and make web3 applications more powerful. However, there is a contradiction when AI empowers web3: running AI requires a lot of computing power, and AI computed off-chain is a black box, which does not meet web3's basic requirement of being trustless and verifiable.
Therefore, drawing on the zk rollup idea of [off-chain computation + on-chain verification], Modulus Labs proposed a verifiable AI architecture: the ML model runs off-chain, and a zkp for the ML computation process is also generated off-chain. Through this zkp, the architecture, weights, and inputs of the off-chain model can be verified. The zkp can also be posted on-chain for verification by smart contracts, at which point AI and on-chain contracts can interact more trustlessly, roughly realizing "on-chain AI".
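To make the flow concrete, here is a minimal, self-contained sketch of the [off-chain computation + on-chain verification] pattern. The "model", "commitment", and "verifier" below are toy stand-ins built on hashes, purely for illustration; they are not Modulus Labs' actual circuits or contracts, and a real zk verifier would check a proof instead of recomputing.

```python
import hashlib
from dataclasses import dataclass

def toy_model(weights: list[float], x: list[float]) -> float:
    # Stand-in for an off-chain ML inference (here just a dot product).
    return sum(w * xi for w, xi in zip(weights, x))

def commit(data: bytes) -> str:
    # Stand-in for a cryptographic commitment to a model or input.
    return hashlib.sha256(data).hexdigest()

@dataclass
class InferenceProof:
    output: float
    model_commitment: str   # commitment to the model architecture + weights
    input_commitment: str
    proof: str              # in reality: a zkp over the ML execution trace

def prove_inference(weights: list[float], x: list[float]) -> InferenceProof:
    """Off-chain: run the model and bind model, input, and output together."""
    y = toy_model(weights, x)
    wc = commit(repr(weights).encode())
    xc = commit(repr(x).encode())
    return InferenceProof(y, wc, xc, commit(f"{wc}|{xc}|{y}".encode()))

def on_chain_verify(p: InferenceProof, weights: list[float], x: list[float]) -> bool:
    """Stand-in for the on-chain verification contract. A real zk verifier
    would check the proof without rerunning the model; here we recompute."""
    return p.proof == prove_inference(weights, x).proof

if __name__ == "__main__":
    proof = prove_inference([0.2, 0.8], [1.0, 3.0])
    assert on_chain_verify(proof, [0.2, 0.8], [1.0, 3.0])
    print("verified output:", proof.output)
```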
Based on the idea of verifiable AI, Modulus Labs has so far launched three "on-chain AI" applications and has also proposed many possible application scenarios.
1.2 Application Cases
Additionally, Modulus Labs mentions some other examples:
1.3.1 Scenarios where AI may not need to be verified
In the Rocky bot scenario, users may not actually need to verify the ML computation process. First, users lack the expertise to perform real verification. Even if there were a verification tool, from the user's point of view it is just a button: a pop-up window tells them that this AI service was indeed generated by a certain model, and they cannot judge its authenticity themselves. Second, users have little need for verification, because what they care about is whether the AI's returns are high. If returns are not high they will simply migrate, always choosing the best-performing model. In short, when users are pursuing the final results of the AI, the verification process may not mean much, since users only need to move to whichever model service performs best.
**One possible approach is for the AI to act only as an advisor, while users execute transactions themselves.** When people input their trading goals into the AI, it computes off-chain and returns a better trading path/direction, and the user chooses whether to execute it. People do not need to verify the model behind it either; they just pick the product with the highest return.
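As a small illustration of this advisor pattern, here is a sketch in which the suggestion is computed off-chain and the user alone decides whether to act on it. The `suggest_route` function and its hard-coded output are hypothetical placeholders, not any real product's API.

```python
from dataclasses import dataclass

@dataclass
class TradeSuggestion:
    path: list[str]         # e.g. ["USDC -> ETH", "ETH -> stETH"]
    expected_return: float  # estimated return; not a guarantee

def suggest_route(goal: str) -> TradeSuggestion:
    # Off-chain, unverified inference. The values are hard-coded placeholders;
    # a real service would run its ML model here.
    return TradeSuggestion(path=["USDC -> ETH", "ETH -> stETH"], expected_return=0.031)

def main() -> None:
    suggestion = suggest_route("maximize 30-day yield on 1,000 USDC")
    print("AI suggests:", suggestion)
    # The user keeps full control: nothing happens unless they act themselves.
    if input("execute this trade yourself? [y/N] ").strip().lower() == "y":
        print("user signs and submits the transaction on their own")

if __name__ == "__main__":
    main()
```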
Another dangerous but quite likely situation is that people simply do not care about control over their assets or about the AI's computation process. When a robot that automatically makes money appears, people are even willing to entrust their money to it directly, just as it is common today to deposit coins into a CEX or a traditional bank for wealth management. People don't care about the principles behind it; they only care about how much money they end up with, or even just how much money the project side claims they will earn. This kind of service may acquire a large number of users quickly, and may even iterate on its product faster than project teams that use verifiable AI.
Taking a step back, if the AI does not participate in modifying on-chain state at all, and merely consumes on-chain data pulled off-chain to serve users, then there is no need to generate a zkp for the computation process; such applications are simply [data services]. Here are a few cases:
1.3.2 Scenarios where AI needs to be verified
This article argues that scenarios involving multi-party fairness and privacy need zkp for verification. Here we discuss several applications mentioned by Modulus Labs:
Generally speaking, when AI acts like a decision-maker whose output has broad impact and touches the fairness of many parties, people will demand a review of the decision-making process, or at least assurance that the AI's decision-making has no major problems; and protecting personal privacy is an even more direct need.
Therefore, [whether the AI output modifies on-chain state] and [whether it affects fairness/privacy] are the two criteria for judging whether an AI service needs to be verifiable.
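These two criteria can be restated compactly as a small decision helper; the function name and the return strings below are illustrative only, summarizing the article's own three cases.

```python
def needs_verifiable_ai(modifies_onchain_state: bool,
                        affects_fairness_or_privacy: bool) -> str:
    """Map the two criteria to the three cases discussed in this article."""
    if not modifies_onchain_state:
        return "weak need: AI is a data service / advisor; judge it by results"
    if not affects_fairness_or_privacy:
        return "weak need: a single user can judge quality from the results"
    return "strong need: multi-party fairness/privacy calls for verifiable AI"

# Examples corresponding to the scenarios above:
print(needs_verifiable_ai(False, False))  # price-suggestion advisor
print(needs_verifiable_ai(True, False))   # a personal trading bot like Rocky
print(needs_verifiable_ai(True, True))    # AI-driven reward distribution / AMM tuning
```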
In any case, Modulus Labs' solution is highly instructive for combining AI with crypto and delivering practical application value. However, the public chain system can do more than improve the capability of a single AI service: it also has the potential to build a new AI application ecosystem, one that reshapes the web2-style relationships between AI services, between AI services and users, and between upstream and downstream links. We can summarize the potential AI ecosystem models into two types: a vertical model and a horizontal model.
2.1 Vertical model: trustlessness enables composability between AIs
A special feature of the "Leela vs the World" on-chain chess example is that people can bet on either the human or the AI, and tokens are distributed automatically after the game. Here the significance of the zkp is not only to let users verify the AI's computation, but also to serve as the trust guarantee that triggers on-chain state transitions. With that trust guarantee, dapp-level composability becomes possible between AI services, and between AI and crypto-native dapps.
The basic unit of composable AI is [off-chain ML model -> zkp generation -> on-chain verification contract -> main contract]. This unit is drawn from the framework of "Leela vs the World", but the actual architecture of a single AI dapp may differ from it. First, the chess scenario needs a contract to handle bets, but in reality an AI dapp may not need an on-chain contract at all. Second, from the perspective of a composable AI architecture, if the main business is recorded through contracts, the main contract does not necessarily need to call the AI dapp's own ML model, because an AI dapp may exert only one-way influence: after the ML model finishes processing, it simply triggers the contract related to its own business, and that contract is then called by other dapps.
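A sketch of how such a unit might be wired together is shown below. The classes are plain Python stand-ins for contracts, and the string-based "proof" only marks where a real zk verifier would sit; names and checks are illustrative assumptions, not an actual deployment.

```python
class VerificationContract:
    """Stand-in for the on-chain zkp verification contract."""
    def __init__(self, model_commitment: str):
        self.model_commitment = model_commitment

    def verify(self, output: str, proof: str) -> bool:
        # Toy check; a real contract would run a zk verifier here.
        return proof == f"proof({self.model_commitment},{output})"

class MainContract:
    """Records the AI dapp's own business state; other dapps read/call it."""
    def __init__(self, verifier: VerificationContract):
        self.verifier = verifier
        self.state: list[str] = []

    def apply_ai_output(self, output: str, proof: str) -> None:
        if self.verifier.verify(output, proof):
            self.state.append(output)   # one-way influence: ML -> own contract

class OtherDapp:
    """A downstream dapp composing with the AI dapp via its main contract."""
    def consume(self, ai_contract: MainContract) -> str:
        return f"acting on latest AI result: {ai_contract.state[-1]}"

# Wiring the unit together:
verifier = VerificationContract(model_commitment="0xabc")
ai_dapp = MainContract(verifier)
ai_dapp.apply_ai_output("leela_move=e2e4", "proof(0xabc,leela_move=e2e4)")
print(OtherDapp().consume(ai_dapp))
```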
Looking further, calls between contracts are calls between different web3 applications, that is, calls on personal identity, assets, financial services, and social information. We can imagine specific combinations of AI applications:
Interaction between AIs under a public chain framework has in fact already been discussed. Loaf, an ecosystem contributor in the fully on-chain game space, once proposed that AI NPCs could trade with each other just like players, so that the whole economic system can optimize and run itself. AI Arena has built an AI auto-battle game: users first purchase an NFT, each NFT represents a battle robot with an AI model behind it; users first play the game themselves, then hand the gameplay data to the AI for imitation learning, and once they feel the AI is strong enough, they put it into the arena to battle other AIs automatically. Modulus Labs mentioned that AI Arena hopes to turn these AIs into verifiable AIs. Both cases see the possibility of AIs interacting directly with each other and modifying on-chain data through their transactions.
But how to compose AIs concretely still leaves many implementation issues to be discussed, such as whether different dapps can share a universal zkp scheme or verification contract. There are also many excellent projects in the zk field; for example, RISC Zero has made a lot of progress in executing complex computations off-chain and posting zkps on-chain. Perhaps one day a suitable solution can be pieced together.
2.2 Horizontal model: an AI service platform realizing decentralized AI services
Here we mainly introduce a decentralized AI platform called SAKSHI, jointly proposed by researchers from Princeton, Tsinghua University, the University of Illinois Urbana-Champaign, the Hong Kong University of Science and Technology, Witness Chain, and EigenLayer. Its core goal is to let users obtain AI services in a more decentralized way, making the whole process more trustless and automated.
SAKSHI's architecture can be divided into six layers: a service layer, a control layer, a transaction layer, a proof layer, an economic layer, and a marketplace layer.
The marketplace is the layer closest to users. Aggregators in the marketplace provide services to users on behalf of different AI providers: users place orders through an aggregator and reach an agreement with it on service quality and price (the agreement is called an SLA, Service Level Agreement).
The service layer underneath provides an API endpoint for the client side; the client sends an ML inference request to the aggregator, and the request is routed to a server matching the AI service provider (the routing that carries the request is part of the control layer). The service layer and control layer resemble a web2 service backed by multiple servers, except that the servers are operated by different entities, and each server is bound to the aggregator through an SLA (the service agreement mentioned above).
SLAs are deployed on-chain in the form of smart contracts, and these contracts belong to the transaction layer (note: in this scheme they are deployed on Witness Chain). The transaction layer records the current state of each service order and coordinates users, aggregators, and service providers in resolving payment disputes.
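To illustrate what a transaction-layer order record might track, here is a sketch of an SLA as a plain data structure. The field names, units, and status values are assumptions for illustration and are not specified in the SAKSHI description above.

```python
from dataclasses import dataclass
from enum import Enum

class OrderStatus(Enum):
    OPEN = "open"          # order placed, service not yet delivered
    SERVED = "served"      # provider has responded
    DISPUTED = "disputed"  # a payment/quality challenge has been raised
    SETTLED = "settled"    # payment released (or refunded)

@dataclass
class ServiceLevelAgreement:
    user: str               # address of the paying user
    aggregator: str         # aggregator that took the order
    provider: str           # matched AI service provider
    model_commitment: str   # the model/usage the provider committed to
    price_per_query: int    # agreed payment, e.g. in the chain's smallest unit
    latency_target_ms: int  # agreed quality-of-service target
    status: OrderStatus = OrderStatus.OPEN

sla = ServiceLevelAgreement(
    user="0xUser", aggregator="0xAgg", provider="0xProv",
    model_commitment="0xModelHash", price_per_query=10_000, latency_target_ms=500,
)
print(sla)
```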
So that the transaction layer has evidence to rely on when handling disputes, the proof layer checks whether the service complies with the usage model agreed in the SLA. However, SAKSHI does not generate zkps for the ML computation process; instead it takes an optimistic approach, hoping to establish a network of challenger nodes to spot-check the service. These nodes are hosted by Witness Chain.
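A toy sketch of such an optimistic, challenge-based check follows; the sampling strategy, tolerance, and penalty comment are assumptions for illustration rather than SAKSHI's actual protocol.

```python
import random

def committed_model(x: float) -> float:
    # The model behavior the provider committed to in the SLA.
    return 2.0 * x + 0.5

def provider_respond(x: float) -> float:
    # An honest provider returns committed_model(x); a cheating or lazy one
    # might return something cheaper to compute.
    return committed_model(x)

def challenger_audit(num_probes: int = 5, tolerance: float = 1e-9) -> bool:
    """Challenger node: replay random queries and compare against the commitment."""
    for _ in range(num_probes):
        x = random.uniform(-10.0, 10.0)
        if abs(provider_respond(x) - committed_model(x)) > tolerance:
            return False   # raise an on-chain challenge; the provider may be penalized
    return True            # no fraud observed in this sample

print("audit passed:", challenger_audit())
```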
Although the SLA contracts and the challenger node network both live on Witness Chain, in SAKSHI's plan Witness Chain does not intend to bootstrap independent security with its own token; instead it borrows Ethereum's security through EigenLayer, so the entire economic layer effectively relies on EigenLayer.
As can be seen, SAKSHI stands between AI service providers and users, organizing different AIs in a decentralized way to serve users, which makes it a horizontal scheme. Its core idea is to let AI service providers focus on managing their own off-chain model computation, while the matching of user needs to model services, service payment, and verification of service quality are completed through on-chain protocols, with payment disputes resolved automatically where possible. Of course, SAKSHI is still at the theoretical stage, and many implementation details remain to be worked out.
3. Future Outlook
Whether it is composable AI or a decentralized AI platform, public-chain-based AI ecosystem models seem to share something in common: AI service providers do not interface with users directly; they only need to supply ML models and perform off-chain computation, while payment, dispute resolution, and the matching of user needs with services can all be handled by decentralized protocols. As a trustless infrastructure, the public chain reduces the friction between service providers and users, and users also gain greater autonomy.
Although the advantages of building applications on public chains are a cliché, they apply to AI services as well. However, AI applications differ from existing dapps in that they cannot put all of their computation on-chain, so zk or optimistic proofs are needed to connect AI services to the public chain system in a more trustless way.
With the rollout of a series of experience optimizations such as account abstraction, users may no longer perceive mnemonics, chains, gas, and so on. This brings the public chain ecosystem close to web2 in terms of experience, while users can obtain a degree of freedom and composability higher than web2, which makes it more attractive. An AI application ecosystem built on public chains is worth looking forward to.