A Brief Analysis of McKinsey's Lilli: What Lessons Does It Offer the Enterprise AI Market?
McKinsey's Lilli case offers key lessons for the enterprise AI market, pointing to the market opportunity in edge computing plus small models. This AI assistant, which draws on 100,000 internal documents, has not only reached a 70% adoption rate among employees but is used an average of 17 times per week, a level of product stickiness rare among enterprise tools. My thoughts below:
Enterprise data security is a real pain point: the core knowledge assets McKinsey has accumulated over a century, like the sensitive proprietary data held by many small and medium-sized enterprises, are too sensitive to process on public clouds. Finding a balance where data never leaves the premises yet AI capability is not compromised is a genuine market demand, and edge computing is one direction worth exploring;
Specialized small models will displace general large models in the enterprise: what enterprise users need is not a hundred-billion-parameter, do-everything model but a specialized assistant that answers domain-specific questions accurately. There is a natural tension between a large model's generality and its professional depth, and in enterprise scenarios small models are often valued more.
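The "specialized assistant over internal documents" pattern behind a system like Lilli can be illustrated with a minimal retrieval sketch. This is purely illustrative: the document set, ids, and bag-of-words scoring are stand-ins I made up, not McKinsey's actual architecture, which is not public.

```python
import math
from collections import Counter

def bow(text):
    """Lowercased bag-of-words vector for a text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical internal knowledge base -- tiny stand-ins for the
# 100,000 documents a real deployment would index with embeddings.
docs = {
    "m&a-playbook": "merger integration checklist and synergy tracking",
    "pricing-study": "b2b saas pricing elasticity study for mid-market",
    "supply-chain": "resilience levers for semiconductor supply chains",
}

def retrieve(query, k=1):
    """Return the top-k document ids most similar to the query."""
    q = bow(query)
    ranked = sorted(docs, key=lambda d: cosine(q, bow(docs[d])), reverse=True)
    return ranked[:k]

print(retrieve("how should we price a saas product"))  # → ['pricing-study']
```

A production system would swap the bag-of-words scoring for learned embeddings and feed the retrieved passages to a locally hosted small model, so neither the documents nor the queries ever leave the premises.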
Balancing the cost of self-built AI infrastructure against API calls: although edge computing plus small models demands a large upfront investment, long-term operating costs drop significantly. Imagine that the model used frequently by 45,000 employees were billed per API call: as usage scales, that dependency makes self-built AI infrastructure the rational choice for medium and large enterprises.
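A back-of-envelope calculation makes the break-even logic concrete. Every number below (tokens per interaction, API price, self-hosted budget) is an assumption chosen for illustration, not a figure from the Lilli case; only the 45,000 employees and 17 uses per week come from the text above.

```python
# Back-of-envelope: API-metered vs. self-hosted inference cost.
# All unit prices and token counts are illustrative assumptions.

employees = 45_000
uses_per_week = 17           # usage figure cited in the text above
tokens_per_use = 8_000       # assumed: RAG prompts with long context
api_price_per_mtok = 10.0    # assumed USD per million tokens

weekly_tokens = employees * uses_per_week * tokens_per_use
annual_api_cost = weekly_tokens / 1e6 * api_price_per_mtok * 52

# Assumed self-hosted budget: amortized hardware + power + ops per year.
self_hosted_annual = 2_000_000

print(f"annual API cost:         ${annual_api_cost:,.0f}")
print(f"annual self-hosted cost: ${self_hosted_annual:,.0f}")
```

Under these assumptions the API bill (~$3.2M/year) already exceeds a $2M self-hosted budget, and the gap widens as usage grows, which is the core of the argument for owning the infrastructure at scale.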
New opportunities in the edge hardware market: large-model training depends on high-end GPUs, but edge inference has very different hardware requirements. Chip makers such as Qualcomm and MediaTek see a market opportunity in processors optimized for edge AI. As every business tries to build its own "Lilli", edge AI chips designed for low power consumption and high efficiency will become infrastructure necessities.
The decentralized web3 AI market also stands to gain: once enterprise demand for compute, fine-tuning, and algorithms around small models picks up, resource scheduling becomes the bottleneck. Traditional centralized scheduling will struggle, which creates direct market demand for decentralized small-model fine-tuning networks, decentralized compute service platforms, and the like in web3 AI.
While the market is still debating AGI's general capabilities, it is encouraging to see so many enterprise users already exploring AI's practical value. Compared with past leaps driven by monopolies on compute and algorithms, a market shift toward edge computing plus small models should bring far greater vitality.