Critical flaw found in core AI agent technology: LangChain 'LangGrinch' vulnerability warning issued
Source: TokenPost
Original Title: Critical Flaw in Core AI Agent Technology… LangChain 'LangGrinch' Alert Issued
Original Link:

A serious security vulnerability has been found in 'langchain-core', the core library used by AI agent applications. The issue has been named 'LangGrinch' and allows attackers to steal sensitive information from AI systems. Because the flaw could undermine the security foundation of numerous AI applications over the long term, it has raised alarms across the industry.
AI security startup Cyata Security publicly disclosed the vulnerability as CVE-2025-68664 and assigned it a severity score of 9.3 on the Common Vulnerability Scoring System (CVSS). The root cause lies in internal helper functions in langchain-core that can mistake user input for trusted objects during serialization and deserialization. Using prompt injection, an attacker can insert the library's internal token keys into the structured output an agent generates, causing that output to be processed as a trusted object later on.
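To make this class of flaw concrete, the following is a deliberately simplified, hypothetical sketch of what "honoring a reserved marker key in untrusted structured output" can look like. The marker name (`__trusted__`), the registry, and the `EnvSecret` resolver are invented for illustration and are not LangChain's actual internals or API.

```python
# Hypothetical sketch of the vulnerability class described above.
# The marker key, registry, and resolver names are invented; they are
# NOT langchain-core's real serialization format.
import os

REGISTRY = {"EnvSecret": lambda name: os.environ.get(name, "")}

def naive_load(obj):
    """Rebuild objects from structured data, trusting a reserved marker key."""
    if isinstance(obj, dict):
        if obj.get("__trusted__"):                    # flaw: marker is honored
            return REGISTRY[obj["kind"]](obj["arg"])  # even in untrusted input
        return {k: naive_load(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [naive_load(v) for v in obj]
    return obj

# Via prompt injection, an attacker makes the model emit structured output
# that carries the reserved marker; at the later load step it is treated as
# a trusted object and resolves a secret from the environment.
model_output = {
    "answer": {"__trusted__": True, "kind": "EnvSecret", "arg": "OPENAI_API_KEY"}
}
print(naive_load(model_output))
```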
LangChain core sits at the center of many AI agent frameworks, with tens of millions of downloads in the past 30 days and more than 847 million downloads in total. Given the size of the LangChain ecosystem and the applications built on it, the potential impact of this vulnerability is extremely broad.
Cyata security researcher Yarden Forrat stated: "This vulnerability is not just a deserialization issue but occurs within the serialization path itself, which is unusual. The storage, transmission, and subsequent recovery of structured data generated by AI prompts expose new attack surfaces." Cyata has confirmed 12 distinct attack vectors, each of which can grow from a single prompt into multiple exploitation scenarios.
When triggered, the attack can make remote HTTP requests that leak the entire set of environment variables, including cloud credentials, database access URLs, vector database details, and LLM API keys. Importantly, the vulnerability is a structural flaw in langchain-core itself and involves no third-party tools or external integrations. Cyata describes it as "a threat existing within the ecosystem pipeline layer" and urges heightened vigilance.
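Pending an upgrade, one generic defense against this class of issue is to refuse to hand model-generated structured data to a deserializer if it carries the serializer's reserved marker keys. The sketch below continues the hypothetical `__trusted__` marker from the earlier example and is not an official LangChain mitigation.

```python
# Minimal defensive sketch, not an official fix: reject untrusted structured
# data that carries a reserved serialization marker before it ever reaches a
# deserializer. "__trusted__" follows the hypothetical example above.
def scrub_untrusted(obj, reserved=("__trusted__",)):
    if isinstance(obj, dict):
        if any(key in obj for key in reserved):
            raise ValueError("untrusted data carries a reserved serialization marker")
        return {k: scrub_untrusted(v, reserved) for k, v in obj.items()}
    if isinstance(obj, list):
        return [scrub_untrusted(v, reserved) for v in obj]
    return obj
```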
Security patches addressing the issue have been released in LangChain core versions 1.2.5 and 0.3.81. Before public disclosure, Cyata notified the LangChain team in advance, and the team took immediate remediation measures and put a long-term security hardening plan in place.
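A simple way to check whether a deployment already includes one of the patched releases named above is to compare the installed `langchain-core` version against 0.3.81 (for the 0.3 line) or 1.2.5 (for the 1.x line); the exact affected ranges should be confirmed against the official advisory.

```python
# Compare the installed langchain-core against the patched releases named in
# the article; confirm exact affected ranges against the official advisory.
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("langchain-core"))
patched = Version("1.2.5") if installed.major >= 1 else Version("0.3.81")

if installed < patched:
    print(f"langchain-core {installed} is below patched release {patched}; upgrade recommended.")
else:
    print(f"langchain-core {installed} includes the fix.")
```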
Cyata co-founder and CEO Shahar Tal said: "As AI systems are deployed into industrial settings at scale, the permissions and scope of authority ultimately granted to the system have become a core security concern, beyond code execution itself. In agent identity architectures, reducing permissions and minimizing the blast radius have become necessary design elements."
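As one concrete illustration of the least-privilege point in that quote, an agent process can be started with only the environment variables it genuinely needs, so that a leak of "the entire environment" exposes far less. The variable names and the `agent.py` entry point below are examples only.

```python
# Illustration of least privilege for an agent process: pass only a
# whitelisted subset of environment variables instead of the full parent
# environment. Variable names and agent.py are placeholders.
import os
import subprocess
import sys

ALLOWED = {"PATH", "OPENAI_API_KEY"}  # only what the agent actually needs
minimal_env = {k: v for k, v in os.environ.items() if k in ALLOWED}

subprocess.run([sys.executable, "agent.py"], env=minimal_env, check=True)
```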
The incident is expected to serve as a wake-up call, prompting the AI industry to re-examine its security design fundamentals, especially in an era where agent automation increasingly replaces manual intervention.