A Brief Analysis of McKinsey's Lilli: What Development Ideas Does It Provide for the Enterprise AI Market?
McKinsey's Lilli case offers key development insights for the enterprise AI market: the potential market opportunity of Edge Computing + small models. This AI assistant, which integrates 100,000 internal documents, not only reached a 70% adoption rate among employees but is used an average of 17 times per week, a level of product stickiness that is rare among enterprise tools. Below, I would like to share my thoughts:
Enterprise data security is a genuine pain point: the core knowledge assets McKinsey has accumulated over 100 years, as well as the proprietary data held by many small and medium-sized enterprises, are highly sensitive and not suited to processing on public clouds. Finding a balance where "data never leaves the premises and AI capability is not compromised" is a real market demand, and Edge Computing is one direction worth exploring;
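The "data never leaves the premises" constraint can be made concrete with a toy request router: queries touching sensitive internal material go to a local (edge) small model, while generic questions may use a cloud API. This is a hedged illustrative sketch; the names (`route_query`, `SENSITIVE_TAGS`) are hypothetical and not from Lilli or any real product.

```python
# Toy router illustrating the on-premises data constraint.
# All names and tags here are illustrative assumptions.

SENSITIVE_TAGS = {"client", "internal", "financial", "personnel"}

def route_query(query: str, tags: set[str]) -> str:
    """Route a query: anything tagged as touching sensitive internal
    data is served by the local edge small model; generic questions
    may be sent to a cloud API."""
    if tags & SENSITIVE_TAGS:
        return "edge-small-model"  # data stays on the local network
    return "cloud-api"             # no sensitive data involved

print(route_query("Summarize client X's engagement history", {"client"}))
# edge-small-model
```

In a real deployment the tagging step would itself be a local classifier, so that sensitive content is never inspected off-premises.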
Specialized small models will replace general large models: what enterprise users need is not a "hundred-billion-parameter, all-purpose" general model, but a specialized assistant that can accurately answer questions in a specific domain. There is a natural tension between a large model's generality and its professional depth, and in enterprise scenarios small models are often the better fit.
Balancing the cost of self-built AI infrastructure against API calls: although the Edge Computing + small model combination requires a large upfront investment, its long-term operating costs are significantly lower. Imagine the large model frequently used by 45,000 employees being served entirely through API calls: as usage scales, that dependency and the growing call volume make self-built AI infrastructure a rational choice for medium and large enterprises.
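The break-even reasoning above can be sketched as a back-of-envelope calculation. All the cost figures below are illustrative assumptions (only the 45,000 employees and 17 uses per week come from the case cited above); the point is the structure of the comparison, not the specific numbers.

```python
# Hedged back-of-envelope sketch: cumulative API spend vs. a one-time
# self-hosted build-out. Cost figures are assumptions for illustration.

EMPLOYEES = 45_000
USES_PER_WEEK = 17               # usage figure cited in the case above
COST_PER_CALL = 0.05             # assumed blended API cost, USD/call
SELF_HOSTED_CAPEX = 2_000_000    # assumed upfront hardware + setup, USD
SELF_HOSTED_OPEX_WEEKLY = 5_000  # assumed power/ops/maintenance, USD

def weeks_to_break_even() -> int:
    """First week at which cumulative self-hosted cost drops below
    cumulative API cost."""
    api_weekly = EMPLOYEES * USES_PER_WEEK * COST_PER_CALL
    week, api_total, self_total = 0, 0.0, float(SELF_HOSTED_CAPEX)
    while self_total >= api_total:
        week += 1
        api_total += api_weekly
        self_total += SELF_HOSTED_OPEX_WEEKLY
    return week

print(weeks_to_break_even())  # 61 weeks (about 14 months) under these assumptions
```

The takeaway is qualitative: per-call API costs scale linearly with usage, while self-hosted costs are front-loaded, so sufficiently heavy internal usage eventually favors owning the infrastructure.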
New opportunities in the edge hardware market: large model training relies on high-end GPUs, but edge inference has completely different hardware requirements. Chip makers such as Qualcomm and MediaTek see a market opportunity in processors optimized for edge AI. As every enterprise tries to build its own "Lilli", edge AI chips designed for low power consumption and high efficiency will become infrastructure necessities.
The decentralized web3 AI market stands to benefit as well: once enterprise demand for computing power, fine-tuning, and algorithms around small models takes off, resource scheduling becomes a real problem. Traditional centralized scheduling will show its limits, which will directly create significant market demand for decentralized small-model fine-tuning networks, decentralized computing power platforms, and similar web3 AI services.
While the market is still debating the general capabilities of AGI, it is encouraging to see many enterprise users already exploring the practical value of AI. Clearly, compared with the past monopolistic race in computing power and algorithms, a market shift toward the Edge Computing + small model approach will bring far greater market vitality.