The Intersection of AI and Web3: New Opportunities and Mutual Empowerment
AI+Web3: Towers and Squares
TL;DR
Web3 projects built around AI concepts have become magnets for capital in both the primary and secondary markets.
Web3's opportunities in the AI industry lie in using distributed incentives to coordinate long-tail supply across data, storage, and computing, and in building open-source models and decentralized markets for AI Agents.
AI's main applications in the Web3 industry are on-chain finance (crypto payments, trading, and data analysis) and developer assistance.
The value of AI+Web3 lies in their complementarity: Web3 is expected to counteract AI's centralization, while AI is expected to help Web3 break out of its boundaries.
Introduction
Over the past two years, AI development has hit the accelerator. The wave set off by ChatGPT has not only opened up a new world of generative artificial intelligence but also stirred strong currents on the Web3 side.
Backed by AI narratives, fundraising in an otherwise slowing crypto market has clearly picked up. Media statistics show that 64 Web3+AI projects completed financing in the first half of 2024 alone, with the AI-based operating system Zyber365 raising the largest round, a $100 million Series A.
The secondary market has been even livelier. Data from crypto aggregation sites shows that in just over a year the AI sector's total market capitalization reached $48.5 billion, with 24-hour trading volume close to $8.6 billion. Progress in mainstream AI technology has delivered obvious tailwinds: after OpenAI released its Sora text-to-video model, the average price of the AI sector rose 151%. The AI effect has also spread to Meme coins, one of crypto's favorite fundraising sectors: GOAT, the first AI-Agent-themed MemeCoin, quickly gained popularity and reached a $1.4 billion valuation, setting off an AI Meme craze.
Research and discussion around AI+Web3 are heating up as well. From AI+DePin to AI Memecoins, and now AI Agents and AI DAOs, FOMO can barely keep pace with the speed at which new narratives rotate.
AI+Web3, a pairing awash with hot money, hype, and visions of the future, is inevitably viewed by some as a marriage arranged by capital. Beneath the gilded cloak, it is hard to tell whether this is a playground for speculators or the eve of a genuine breakout.
To answer that question, the key consideration for each side is whether it actually gets better with the other involved, and whether it can benefit from the other's model. Standing on the shoulders of earlier work, this article examines the pattern from both directions: how can Web3 play a role at each layer of the AI technology stack, and what new vitality can AI bring to Web3?
Part.1 What opportunities does Web3 have under the AI stack?
Before delving into this topic, we need to understand the technology stack of AI large models:
In plain terms, the whole process goes like this: a "large model" is like a human brain. At the start, this brain belongs to a newborn baby who must observe and absorb vast amounts of information from its surroundings to make sense of the world; this is the "data collection" phase. Because computers lack human senses such as sight and hearing, the large volumes of unlabeled information from the outside world must first be converted, through "preprocessing", into a format the computer can understand and use before training begins.
Once the data is fed in, "training" builds a model with understanding and predictive ability, much like a baby gradually coming to understand the outside world; the model's parameters are like the baby's language ability, continuously adjusted as it learns. When the learning material becomes more specialized, or feedback from interaction with others is used to make corrections, the model enters the "fine-tuning" phase.
As the child grows up and learns to speak, it can understand meaning and express its feelings and thoughts in new conversations. This stage resembles "inference" in AI large models: the model can predict and analyze new language and text inputs. An infant uses language to express feelings, describe objects, and solve problems, much as a large model, once trained and deployed, is applied during the inference phase to specific tasks such as image classification and speech recognition.
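To make these stages concrete, here is a toy walkthrough of preprocessing, training, fine-tuning, and inference on a tiny text classifier. It uses scikit-learn purely as a stand-in for a real large-model pipeline; the miniature dataset and labels are invented for illustration.

```python
# A minimal, illustrative sketch of the stages described above, using
# scikit-learn as a stand-in for a real large-model pipeline.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# "Data collection" + "preprocessing": raw text becomes numeric features.
raw_texts = ["the cat sat on the mat", "stocks fell sharply today",
             "the dog chased the ball", "markets rallied after the report"]
labels = [0, 1, 0, 1]                      # 0 = everyday talk, 1 = finance
vectorizer = HashingVectorizer(n_features=2**10)
X = vectorizer.transform(raw_texts)

# "Training": model parameters are adjusted to fit the data.
model = SGDClassifier()
model.partial_fit(X, labels, classes=[0, 1])

# "Fine-tuning": further updates on a smaller, more specialized batch.
finetune_texts = ["bond yields rose", "the kitten slept all day"]
model.partial_fit(vectorizer.transform(finetune_texts), [1, 0])

# "Inference": the trained model makes predictions on unseen input.
print(model.predict(vectorizer.transform(["central bank raised rates"])))
```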
AI Agents are closer to the next form of large models: able to execute tasks independently and pursue complex goals, they not only think but can also remember, plan, and use tools to interact with the world.
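As a rough illustration of that "remember, plan, use tools" loop, here is a toy agent skeleton. The planner is hard-coded and the single tool is a trivial calculator; a real AI Agent would delegate planning to a large model, so every name here is a placeholder rather than any project's actual API.

```python
# Toy agent loop: plan -> call tools -> record results in memory.
from typing import Callable

def calculator(expression: str) -> str:
    """A trivial 'tool' the agent can call (builtin-free eval)."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS: dict[str, Callable[[str], str]] = {"calculator": calculator}
memory: list[str] = []          # the agent's running record of past actions

def plan(goal: str) -> list[tuple[str, str]]:
    # A real agent would ask an LLM to decompose the goal; here it is hard-coded.
    return [("calculator", "16000 * 35000")]

def run_agent(goal: str) -> None:
    for tool_name, tool_input in plan(goal):
        result = TOOLS[tool_name](tool_input)
        memory.append(f"{tool_name}({tool_input}) -> {result}")
    print("\n".join(memory))

run_agent("estimate GPU hardware spend")
```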
In response to the pain points at each layer of the AI stack, Web3 has begun to form a multi-layered, interconnected ecosystem covering every stage of the AI model process.
1. Base Layer: An Airbnb for Computing Power and Data
▎Computing Power
Currently, one of the largest costs in AI is the computing power and energy required for model training and inference.
For example, Meta's LLaMA 3 needs 16,000 NVIDIA H100 GPUs (a top-tier graphics processing unit designed for artificial intelligence and high-performance computing workloads) for about 30 days to complete training. The 80GB version of the H100 is priced between $30,000 and $40,000, implying a computing hardware investment of $400-700 million (GPUs plus network chips), while monthly training consumes 1.6 billion kilowatt-hours, with energy costs approaching $20 million per month.
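As a back-of-envelope check of the hardware figure quoted above (illustrative only; a real bill of materials also covers networking, storage, and facilities):

```python
# Rough sanity check of the GPU hardware cost cited in the text.
num_gpus = 16_000                                   # H100s cited for LLaMA 3
unit_price_low, unit_price_high = 30_000, 40_000    # USD per 80GB H100

gpu_cost_low = num_gpus * unit_price_low            # $480M
gpu_cost_high = num_gpus * unit_price_high          # $640M
print(f"GPU-only cost: ${gpu_cost_low/1e6:.0f}M - ${gpu_cost_high/1e6:.0f}M")
# Consistent with the $400-700 million (GPUs plus network chips) range above.
```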
The decentralization of AI computing power is also the first area where Web3 intersects with AI: DePin (Decentralized Physical Infrastructure Networks). The DePin Ninja data site currently lists more than 1,400 projects, with io.net, Aethir, Akash, and Render Network among the representative GPU computing power sharing projects.
The core logic is that the platform lets individuals or entities with idle GPU resources contribute their computing power in a permissionless, decentralized way, creating an online marketplace of buyers and sellers similar to Uber or Airbnb and raising the utilization of underused GPUs, while end users get more cost-effective computing resources. At the same time, a staking mechanism ensures that resource providers face penalties if they violate quality-control rules or the network is interrupted (a simplified sketch of this logic follows the list below).
Its characteristics are:
Gathering idle GPU resources: the suppliers are mainly third-party independent small and mid-sized data centers, surplus computing capacity from operators such as crypto mining farms, and mining hardware idled by the shift to PoS, such as Filecoin and Ethereum mining rigs. Some projects are also lowering the entry threshold: exolab, for instance, uses local devices such as MacBooks, iPhones, and iPads to build a computing power network for running large-model inference.
Serving the long-tail market for AI computing power:
a. "From a technical perspective," a decentralized computing power market is more suitable for inference steps. Training relies more on the data processing capabilities brought by large-scale GPU clusters, while inference has relatively lower requirements for GPU computing performance, such as Aethir, which focuses on low-latency rendering work and AI inference applications.
b. From the demand side, small and mid-sized consumers of computing power will not train their own large models from scratch; most will simply optimize and fine-tune around a handful of leading models, and those workloads are a natural fit for distributed idle computing resources.
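A minimal sketch of the marketplace-plus-staking logic described above might look like the following. The class names, naive matching rule, and 10% slash rate are assumptions for illustration, not the actual mechanics of io.net, Aethir, Akash, or Render Network.

```python
# Hypothetical decentralized compute marketplace with a staking penalty.
from dataclasses import dataclass, field

@dataclass
class Provider:
    name: str
    gpus: int
    stake: float            # tokens locked as a quality/uptime bond
    online: bool = True

@dataclass
class ComputeMarket:
    providers: list[Provider] = field(default_factory=list)
    slash_rate: float = 0.10           # fraction of stake forfeited per violation

    def register(self, provider: Provider) -> None:
        self.providers.append(provider)

    def match(self, gpus_needed: int) -> Provider | None:
        # Naive matching: first online provider with enough idle GPUs.
        for p in self.providers:
            if p.online and p.gpus >= gpus_needed:
                return p
        return None

    def report_violation(self, provider: Provider) -> float:
        # Quality-control breach or network interruption: slash part of the stake.
        penalty = provider.stake * self.slash_rate
        provider.stake -= penalty
        return penalty

market = ComputeMarket()
market.register(Provider("idle-datacenter-01", gpus=64, stake=5_000.0))
chosen = market.match(gpus_needed=8)
print(chosen.name, market.report_violation(chosen))
```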
▎Data
Data is the foundation of AI. Without data, computation is as useless as duckweed without roots, and the relationship between data and models echoes the saying "garbage in, garbage out": the quantity and quality of the input determine the quality of the final model's output. For today's AI models, data determines a model's language ability, comprehension, and even its values and human-like behavior. Currently, AI's data-demand dilemma centers on the following four aspects:
Data hunger: AI model training relies on massive data input. Public information shows that OpenAI's GPT-4 was trained with a parameter count reaching the trillion level.
Data quality: as AI is adopted across industries, new requirements have emerged for data timeliness, data diversity, the depth of vertical-domain data, and emerging sources such as social media sentiment.
Privacy and compliance issues: Currently, various countries and enterprises are gradually recognizing the importance of high-quality datasets and are imposing restrictions on dataset scraping.
High data-processing costs: data volumes are large and processing is complex. Public information shows that more than 30% of AI companies' R&D costs go to basic data collection and processing.
Currently, Web3 solutions address this in the following four areas:
Web3's vision is to let users who make real contributions share in the value their data creates, and to use distributed networks and incentive mechanisms to obtain more private and more valuable data from users at low cost.
Grass is a decentralized data layer and network: users run Grass nodes to contribute idle bandwidth and relay traffic, capturing real-time data from across the internet and earning token rewards.
Vana introduces the concept of a Data Liquidity Pool (DLP): users upload their private data (such as shopping records, browsing habits, and social media activity) to a specific DLP and flexibly decide whether to authorize specific third parties to use it.
On PublicAI, users can post on X with #AI or #Web3 as a classification tag and @PublicAI to have their content collected as data.
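The consent mechanics behind a DLP-style model can be pictured in a few lines. The record layout and grant/revoke flow below are hypothetical and for illustration only; they do not reflect Vana's actual data structures.

```python
# Hypothetical data-contribution record with owner-controlled authorization.
from dataclasses import dataclass, field

@dataclass
class DataContribution:
    owner: str
    category: str                 # e.g. "shopping", "browsing", "social"
    encrypted_blob: bytes         # contributed data is never stored in the clear
    authorized_parties: set[str] = field(default_factory=set)

    def grant(self, third_party: str) -> None:
        self.authorized_parties.add(third_party)

    def revoke(self, third_party: str) -> None:
        self.authorized_parties.discard(third_party)

    def can_access(self, third_party: str) -> bool:
        return third_party in self.authorized_parties

record = DataContribution("alice", "shopping", encrypted_blob=b"...")
record.grant("model-trainer-dao")
print(record.can_access("model-trainer-dao"), record.can_access("ad-broker"))
```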
Currently, Grass and OpenLayer are both considering adding data annotation as a key component.
Synesis introduced the concept of "Train2earn", which emphasizes data quality: users earn rewards by providing labeled data, annotations, or other forms of input.
The data labeling project Sapien gamifies the labeling tasks and allows users to stake points to earn more points.
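The incentive logic behind "Train2earn" and point staking can likewise be summarized compactly. The reward formula, quality score, and stake multiplier below are assumptions, not parameters used by Synesis or Sapien.

```python
# Illustrative-only reward rule for a train-to-earn labeling scheme.
def label_reward(base_reward: float, quality_score: float,
                 staked_points: float, stake_multiplier: float = 0.001) -> float:
    """Reward grows with label quality (0..1) and with the points a user has staked."""
    if not 0.0 <= quality_score <= 1.0:
        raise ValueError("quality_score must be in [0, 1]")
    boost = 1.0 + staked_points * stake_multiplier
    return base_reward * quality_score * boost

# A high-quality annotation from a user who has staked 500 points:
print(label_reward(base_reward=10.0, quality_score=0.9, staked_points=500))  # 13.5
```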
The currently common privacy technologies in Web3 include:
Trusted Execution Environments (TEE), such as Super Protocol;
Fully Homomorphic Encryption (FHE), such as BasedAI, Fhenix.io, or Inco Network;
Zero-knowledge proofs (ZK), such as the Reclaim Protocol, which uses zkTLS to generate zero-knowledge proofs over HTTPS traffic, letting users securely import activity, reputation, and identity data from external websites without exposing sensitive information.
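To show what computing on data without ever seeing it means in the FHE case, here is a minimal example using the open-source TenSEAL library, which serves only as a generic FHE stand-in here and is not the stack used by BasedAI, Fhenix.io, or Inco Network (assumes `pip install tenseal`).

```python
import tenseal as ts

# Encryption context for the CKKS scheme (approximate arithmetic over reals).
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40

# The data owner encrypts their private values.
enc = ts.ckks_vector(ctx, [0.5, 1.5, 2.5])

# A third party computes on the ciphertext without ever seeing the plaintext.
enc_result = enc * 2 + [1.0, 1.0, 1.0]

# Only the secret-key holder can decrypt the result.
print(enc_result.decrypt())   # approximately [2.0, 4.0, 6.0]
```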
However, the field is still in its early stages, and most projects are still exploring. One current dilemma is that computational costs remain far too high; some examples include: