What is MCP?

Intermediate · 4/24/2025, 8:52:31 AM
MCP (Model Context Protocol) is an emerging standard that has recently attracted attention from Web2 tech companies like Google. This article provides an in-depth analysis of the principles and positioning of the MCP protocol, explaining how it delivers context to large language models (LLMs) through standardized communication with applications, and surveys what developers are building with it.

Forward the Original Title ‘AI’s USB-C Standard: Understanding MCP’

During my years at Alliance, I’ve watched countless founders build specialized tools and data integrations into their AI agents and workflows. However, these algorithms, formalizations, and unique datasets end up locked away behind custom integrations that few people will ever use.

This has been rapidly changing with the emergence of the Model Context Protocol (MCP). MCP is an open protocol that standardizes how applications communicate with and provide context to LLMs. One analogy I really like is that “MCP is to AI applications what USB-C is to hardware”: standardized, plug-and-play, versatile, and transformative.

Why MCP?

LLMs like Claude, GPT-4, and LLaMA are incredibly powerful, but they’re limited to the information available to them at inference time. They typically have knowledge cutoffs, can’t browse the web independently, and don’t have direct access to your personal files or specialized tools without some form of integration.

Until now, developers faced three major challenges when connecting LLMs to external data and tools:

  1. Integration Complexity: Building separate integrations for each AI platform (Claude, ChatGPT, etc.) required duplicating effort and maintaining multiple codebases
  2. Tool Fragmentation: Each tool functionality (e.g., file access, API connections, etc.) needed its own specialized integration code and permission model
  3. Limited Distribution: Specialized tools were confined to specific platforms, limiting their reach and impact

MCP solves these problems by providing a standardized way for any LLM to securely access external tools and data sources through a common protocol. Now that we understand what MCP does, let’s look at what people are building with it.

What Are People Building with MCP?

The MCP ecosystem is currently exploding with innovation. Here are some recent examples I found on Twitter of developers showcasing their work:

  • AI-Powered Storyboarding: An MCP integration that enables Claude to control GPT-4o, automatically generating complete Ghibli-style storyboards without any human intervention.
  • ElevenLabs Voice Integration: An MCP server that gives Claude and Cursor access to ElevenLabs’ entire AI audio platform through simple text prompts. The integration is powerful enough to create voice agents that can make outbound phone calls, demonstrating how MCP can extend current AI tools into the audio realm.
  • Browser Automation with Playwright: An MCP server that allows AI agents to control web browsers without requiring screenshots or vision models. This creates new possibilities for web automation by giving LLMs direct control over browser interactions in a standardized way.
  • Personal WhatsApp Integration: A server that connects to personal WhatsApp accounts, enabling Claude to search through messages and contacts, as well as send new messages.
  • Airbnb Search Tool: An Airbnb apartment search tool that showcases MCP’s simplicity and power for creating practical applications that interact with web services.
  • Robot Control System: An MCP controller for a robot. The example bridges the gap between LLMs and physical hardware, showing MCP’s potential for IoT applications and robotics.
  • Google Maps and Local Search: Connecting Claude to Google Maps data, creating a system that can find and recommend local businesses like coffee shops. This extends AI assistants with location-based services.
  • Blockchain Integration: The Lyra MCP project brings MCP capabilities to StoryProtocol and other web3 platforms. This allows interaction with blockchain data and smart contracts, opening up new possibilities for decentralized applications enhanced by AI.

What makes these examples particularly compelling is their diversity. In just a short time since its introduction, developers have created integrations spanning creative media production, communication platforms, hardware control, location services, and blockchain technology. All these varied applications follow the same standardized protocol, demonstrating MCP’s versatility and potential to become a universal standard for AI tool integration.

For a comprehensive collection of MCP servers, check out the official MCP servers repository on GitHub. A word of caution, though: before using any MCP server, be careful about what you are running and what permissions you grant it.

Promise vs. Hype

With any new technology, it’s worth asking: Is MCP truly transformative, or just another overhyped tool that will fade away?

Having watched numerous startups in this space, I believe MCP represents a genuine inflection point for AI development. Unlike many trends that promise revolution but deliver incremental change, MCP is a productivity boost that solves a fundamental infrastructure problem that has been holding back the entire ecosystem.

What makes it particularly valuable is that it’s not trying to replace or compete with existing AI models; rather, it makes them all more useful by connecting them to external tools and the data they need.

That said, there are legitimate concerns around security and standardization. As with any protocol in its early days, we’ll likely see growing pains as the community works out best practices around audits, permissions, authentication, and server verification. Developers need to verify the functionality of these MCP servers rather than trust them blindly, especially as they become abundant. This article discusses some recent vulnerabilities exposed by blindly using MCP servers that have not been carefully vetted, even when running them locally.

The Future of AI is Contextual

The most powerful AI applications won’t be standalone models but ecosystems of specialized capabilities connected through standardized protocols like MCP. For startups, MCP represents an opportunity to build specialized components that fit into these growing ecosystems. It’s a chance to leverage your unique knowledge and capabilities while benefiting from the massive investments in foundation models.

Looking ahead, we can expect MCP to become a fundamental part of AI infrastructure, much like HTTP became for the web. As the protocol matures and adoption grows, we’ll likely see entire marketplaces of specialized MCP servers emerge, allowing AI systems to tap into virtually any capability or data source imaginable.

Appendix

For those interested in understanding how MCP actually works beneath the surface, the following appendix provides a technical breakdown of its architecture, workflow, and implementation.

Under the Hood of MCP

Just as HTTP standardized the way the web accesses external data sources and information, MCP does the same for AI frameworks, creating a common language that allows different AI systems to communicate seamlessly. So let’s explore how it does that.

MCP Architecture and Flow

The main architecture follows a client-server model with four key components working together:

  • MCP Hosts: Desktop AI applications like Claude or ChatGPT, IDEs like Cursor or VSCode, or other AI tools that need access to external data and capabilities
  • MCP Clients: Protocol handlers embedded within hosts that maintain one-to-one connections with MCP servers
  • MCP Servers: Lightweight programs exposing specific functionalities through the standardized protocol
  • Data Sources: Your files, databases, APIs, and services that MCP servers can securely access

Now that we have discussed the components, let’s look at how they interact in a typical workflow:

  1. User Interaction: It begins with a user asking a question or making a request in an MCP Host, e.g., Claude Desktop
  2. LLM Analysis: The LLM analyzes the request and determines it needs external information or tools to provide a complete response
  3. Tool Discovery: The MCP Client queries connected MCP Servers to discover what tools are available
  4. Tool Selection: The LLM decides which tools to use based on the request and available capabilities
  5. Permission Request: The Host asks the user for permission to execute the selected tool, a step crucial for transparency and security
  6. Tool Execution: Upon approval, the MCP Client sends the request to the appropriate MCP Server, which executes the operation with its specialized access to data sources
  7. Result Processing: The server returns the results to the client, which formats them for the LLM
  8. Response Generation: The LLM incorporates the external information into a comprehensive response
  9. User Presentation: Finally, the response is displayed to the end user
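On the wire, the discovery and execution steps above map to JSON-RPC 2.0 messages exchanged between the MCP Client and Server. The sketch below shows the shape of those messages; the tool name and result text are illustrative assumptions, not output from a real server:

```python
import json

# Step 3 (Tool Discovery): the client asks a connected server which tools it exposes.
discover = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Step 6 (Tool Execution): after user approval, the client invokes the chosen tool.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "find_nearby_places",  # illustrative tool name
        "arguments": {"query": "coffee shops near Central Park"},
    },
}

# Messages are serialized as JSON and sent over the transport (stdio or HTTP/SSE).
wire_message = json.dumps(call)

# Step 7 (Result Processing): the server replies with structured content
# that the client formats for the LLM.
result = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {"content": [{"type": "text", "text": "Joe's Coffee, 123 W 57th St"}]},
}
```

Because every server speaks this same message format, a client can discover and call tools from any server without server-specific integration code.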

What makes this architecture powerful is that each MCP Server specializes in a specific domain while using a standardized communication protocol. So rather than rebuilding integrations for each platform, developers can build their tools once for the entire AI ecosystem.

How To Build Your First MCP Server

Now let’s look at how one can implement a simple MCP server in a few lines of code using the MCP SDK.

In this simple example, we want to extend Claude Desktop’s ability to answer questions like “What are some coffee shops near Central Park?” using Google Maps. You can easily extend this to fetch reviews or ratings, but for now, let’s focus on the MCP tool find_nearby_places, which will allow Claude to get this information directly from Google Maps and present the results in a conversational way.
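Since the original code listing is not reproduced here, the following is an illustrative reconstruction rather than the author’s exact code. The helper names and the GOOGLE_MAPS_API_KEY environment variable are assumptions; in a real MCP server, find_nearby_places would additionally be registered with the MCP Python SDK (e.g., via an @mcp.tool() decorator on a FastMCP instance):

```python
# Hypothetical sketch of the find_nearby_places tool logic.
import json
import os
import urllib.parse
import urllib.request

def find_nearby_places(query: str) -> str:
    """1) Transform the user query into a Google Places text search,
    2) return the top results in a structured, LLM-friendly format."""
    params = urllib.parse.urlencode(
        {"query": query, "key": os.environ.get("GOOGLE_MAPS_API_KEY", "")}
    )
    url = f"https://maps.googleapis.com/maps/api/place/textsearch/json?{params}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return format_places(data.get("results", []))

def format_places(results: list) -> str:
    """Format the top five places as one line each: name - address (rating)."""
    lines = []
    for place in results[:5]:
        name = place.get("name", "unknown")
        addr = place.get("formatted_address", "address unknown")
        rating = place.get("rating")
        suffix = f" (rating {rating})" if rating is not None else ""
        lines.append(f"{name} - {addr}{suffix}")
    return "\n".join(lines) or "No places found."
```

Keeping the formatting separate from the HTTP call makes the tool easy to test and easy to extend with extra fields like reviews.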

The code is really simple: 1) it transforms the query into a Google Maps API search, and 2) it returns the top results in a structured format. This information is then passed back to the LLM for further decision-making.

Now we need to let Claude Desktop know about this tool, so we register it in its configuration file as follows.

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
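An entry for our hypothetical server might look like the following, under the `mcpServers` key (the server name, command, path, and API-key variable are all illustrative assumptions):

```json
{
  "mcpServers": {
    "google-maps": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"],
      "env": {
        "GOOGLE_MAPS_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

After saving the file, restart Claude Desktop so it picks up the new server.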

And voilà, you are done. You have just extended Claude to find real-time locations from Google Maps.

Disclaimer:

  1. This article is reprinted from [X]. Forward the Original Title ‘AI’s USB-C Standard: Understanding MCP’. All copyrights belong to the original author [@Drmelseidy]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.

  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.

  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.

* Investing involves risk; enter the market with caution. This article is not intended as investment, financial, or any other kind of advice offered by Gate.io.
* Copying, distributing, or plagiarizing this article without mentioning Gate.io violates the Copyright Act, and Gate.io reserves the right to pursue legal liability.
