
The Model Context Protocol (MCP) is an open standard that defines a universal interface for connecting AI models to external tools, data sources, and services. Originally developed by Anthropic and announced in November 2024 [1], MCP has rapidly evolved into the dominant interoperability layer for agentic AI — often described as the "USB-C of AI integrations" [2]. By April 2026, the protocol has crossed 97 million monthly SDK downloads, supports over 10,000 public servers, and is governed by the Linux Foundation's Agentic AI Foundation (AAIF) with backing from Anthropic, OpenAI, Google, Microsoft, AWS, and Cloudflare [3][4].
MCP solves a fundamental problem: without a shared protocol, every AI application must build bespoke integrations for every external system it needs to access. MCP replaces this N×M integration matrix with a single standardized contract built on JSON-RPC 2.0, enabling any compliant client to discover and use any compliant server's capabilities at runtime [5].
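For a feel of the wire format, a `tools/call` request is just an ordinary JSON-RPC 2.0 object (the tool name and arguments here are illustrative):

```python
import json

# Any MCP message is a JSON-RPC 2.0 object: a version tag, a method,
# optional params, and an id that correlates request and response.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_files",            # illustrative tool name
        "arguments": {"pattern": "*.md"},  # illustrative arguments
    },
}

wire = json.dumps(request)
print(wire)
```

Because every message shares this envelope, a client can talk to any compliant server without knowing anything about its implementation.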
MCP uses a three-layer architecture with clearly separated roles:
```mermaid
graph TD
    subgraph Host["Host (AI Application)"]
        C1[MCP Client 1]
        C2[MCP Client 2]
        C3[MCP Client N]
    end
    C1 <-->|JSON-RPC 2.0| S1[MCP Server A<br/>e.g. Filesystem]
    C2 <-->|JSON-RPC 2.0| S2[MCP Server B<br/>e.g. GitHub API]
    C3 <-->|JSON-RPC 2.0| S3[MCP Server C<br/>e.g. Database]
    S1 --- D1[(Local Files)]
    S2 --- D2[GitHub REST API]
    S3 --- D3[(PostgreSQL)]
```
| Role | Description |
|---|---|
| Host | The AI application (e.g., Claude Desktop, Cursor, a custom agent) that orchestrates the session. A host creates and manages one or more MCP clients. |
| Client | A connector within the host that maintains a 1:1 stateful session with a single MCP server. Handles capability negotiation, request routing, and lifecycle management. |
| Server | A lightweight program that exposes specific capabilities — tools, resources, and prompts — to the client. The server knows nothing about the host's internal implementation. [5][6] |
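Every session opens with a capability handshake; a sketch of those first two messages (the capability contents and client/server names are illustrative):

```python
# Client opens the session by declaring its protocol version and
# the features it offers back to servers (e.g. sampling).
initialize_request = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {"sampling": {}},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Server answers with the version it settled on and what it exposes.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 0,
    "result": {
        "protocolVersion": "2025-06-18",
        "capabilities": {"tools": {}, "resources": {}, "prompts": {}},
        "serverInfo": {"name": "example-server", "version": "0.1.0"},
    },
}

# After this exchange the client sends an initialized notification
# and normal traffic (tools/list, tools/call, ...) begins.
print(initialize_response["result"]["serverInfo"]["name"])
```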
A session begins with the client sending an `initialize` request declaring its supported protocol version and capabilities; the server responds with its own. After the client sends an `initialized` notification, normal operation proceeds through requests such as `tools/list`, `tools/call`, and `resources/read`. MCP servers expose capabilities through three core primitives, each serving a distinct purpose in the agent's reasoning loop:
Tools are executable functions that the AI model can invoke to perform actions — querying databases, calling APIs, running computations, or manipulating files. Each tool is uniquely identified by a name and described by a JSON Schema defining its input parameters [7].
```json
{
  "name": "search_products",
  "description": "Search the product catalog by keyword",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": { "type": "string" },
      "limit": { "type": "integer", "default": 10 }
    },
    "required": ["query"]
  }
}
```
Tools return structured content (text, images, audio, resource links, or embedded resources) with optional annotations for audience, priority, and timestamps. The 2025-06-18 spec added structured tool output, allowing servers to define an outputSchema so clients can programmatically parse results [8].
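A sketch of how an `outputSchema` and a matching structured result pair up (the schema, values, and validation check are illustrative; real SDKs generate and validate these for you):

```python
# A tool can advertise an outputSchema so clients can parse results
# programmatically instead of scraping text.
output_schema = {
    "type": "object",
    "properties": {
        "total": {"type": "integer"},
        "items": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["total"],
}

# The tool result carries both a human-readable content block and a
# machine-readable structuredContent payload matching the schema.
tool_result = {
    "content": [{"type": "text", "text": "Found 2 products"}],
    "structuredContent": {"total": 2, "items": ["widget", "gadget"]},
}

# Tiny illustrative check that required fields are present.
missing = [k for k in output_schema["required"]
           if k not in tool_result["structuredContent"]]
assert not missing
```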
The 2025-11-25 spec introduced experimental Tasks support — any tool call can now return a task handle, enabling "call-now, fetch-later" patterns for long-running operations [9].
Resources are data sources the agent can read for context — files, logs, database records, API responses, configuration documents. Unlike tools, resources are not actions; they represent what the AI can know rather than what it can do. This distinction shapes how agents reason about available information [6].
Resources are identified by URIs. Clients call `resources/list` for discovery and `resources/read` to fetch content.
Prompts are reusable templates that structure interactions with language models. Instead of hardcoding system prompts in the application layer, servers expose them through MCP, enabling different agents to discover and use the same high-quality prompt engineering without duplication across codebases [6].
Prompts support parameterization and can include multi-turn message sequences, making them useful for few-shot examples, system instructions, and workflow templates.
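A sketch of what a parameterized prompt might look like when fetched via `prompts/get` (the helper function and its text are illustrative, not SDK API):

```python
def render_code_review_prompt(language: str) -> dict:
    """Illustrative helper: expand a prompt template with its arguments,
    producing the message sequence a prompts/get result would carry."""
    return {
        "description": "Ask the model to review code",
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": f"Please review the following {language} code "
                            f"for bugs and style issues.",
                },
            }
        ],
    }

prompt = render_code_review_prompt("Python")
print(prompt["messages"][0]["content"]["text"])
```

Because the template lives on the server, every connected agent gets the same wording without copying it into its own codebase.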
MCP is not purely server-to-client. Clients can offer features back to servers:
| Feature | Description |
|---|---|
| Sampling | Server-initiated LLM interactions — the server can request the client to run an inference, enabling recursive agentic behaviors. Users must explicitly approve sampling requests. [5] |
| Roots | Server-initiated queries about filesystem or URI boundaries the server should operate within. |
| Elicitation | Server-initiated requests for additional information from the user (added in 2025-06-18), including URL-mode elicitation for OAuth flows (added in 2025-11-25). [8][9] |
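Sampling in particular enables recursive agentic patterns; a sketch of the server-initiated request (`sampling/createMessage` is the spec method; the message text and token limit are illustrative):

```python
# The server asks the client to run an LLM inference on its behalf.
# The client surfaces this to the user for approval before sampling.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 12,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {"role": "user",
             "content": {"type": "text", "text": "Summarize this diff."}}
        ],
        "maxTokens": 200,
    },
}

print(sampling_request["method"])
```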
MCP separates the data layer (JSON-RPC messages) from the transport layer (how those messages are delivered). The protocol currently defines two production transports, with a third deprecated:
With the stdio transport, the client spawns the MCP server as a child process and communicates over standard input/output streams: zero network overhead, zero infrastructure, clean process isolation [6][10].
Use when: The server runs on the same machine as the client — CLI tools, desktop applications, local development.
```mermaid
sequenceDiagram
    participant Client as MCP Client
    participant Server as MCP Server (child process)
    Client->>Server: spawn process
    Client->>Server: JSON-RPC via stdin
    Server->>Client: JSON-RPC via stdout
    Client->>Server: close stdin
    Server-->>Client: process exits
```
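The exchange above can be sketched with the newline-delimited JSON framing the stdio transport uses (`frame`/`unframe` are illustrative helpers, not SDK API):

```python
import json

# stdio transport: each JSON-RPC message is one line of JSON on
# stdin/stdout, so embedded newlines are forbidden in the payload.
def frame(message: dict) -> bytes:
    line = json.dumps(message, separators=(",", ":"))
    assert "\n" not in line
    return (line + "\n").encode()

def unframe(stream: bytes) -> list[dict]:
    return [json.loads(line) for line in stream.splitlines() if line]

ping = {"jsonrpc": "2.0", "id": 7, "method": "ping"}
pong = {"jsonrpc": "2.0", "id": 7, "result": {}}

# Round-trip two messages the way they would cross the pipe.
stream = frame(ping) + frame(pong)
decoded = unframe(stream)
print(len(decoded))
```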
Introduced in the 2025-03-26 specification, Streamable HTTP is the recommended transport for remote MCP servers. The client sends JSON-RPC requests as HTTP POST to a single endpoint (e.g., /mcp), and the server responds with either a standard JSON response or an SSE stream for streaming results [10][11].
Key advantages over legacy SSE:

- A single endpoint handles all traffic (legacy SSE required two)
- Servers can return a plain JSON response when no streaming is needed
- Sessions are optional, carried in the `Mcp-Session-Id` header, so stateless deployments are possible
- Dropped streams can be resumed using SSE event IDs
- Works cleanly with standard HTTP infrastructure such as load balancers, proxies, and serverless platforms [11][12]
The original Server-Sent Events transport required two separate endpoints — one for the SSE stream and one for client-to-server POST requests. It was deprecated in the 2025-03-26 spec in favor of Streamable HTTP. Legacy SSE is still supported by some clients for backward compatibility, but new integrations should use Streamable HTTP [10][11].
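A sketch of the single-endpoint request shape (the URL and session id below are illustrative):

```python
import json

# Streamable HTTP: every JSON-RPC message is POSTed to one endpoint.
# The Accept header tells the server the client can consume either a
# plain JSON reply or an SSE stream for incremental results.
ENDPOINT = "https://example.com/mcp"   # illustrative URL

body = json.dumps({
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/list",
}).encode()

headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
    # Optional session continuity across requests:
    "Mcp-Session-Id": "illustrative-session-id",
}

# A real client would now POST `body` with `headers` to ENDPOINT and
# branch on the response Content-Type (JSON vs. text/event-stream).
print(headers["Accept"])
```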
| Transport | Topology | Session State | Infrastructure | Best For |
|---|---|---|---|---|
| stdio | Local (same machine) | Process lifetime | None | CLI tools, desktop apps, local dev |
| Streamable HTTP | Remote (network) | Optional (session headers) | HTTP server | Cloud services, multi-client, production |
| SSE (deprecated) | Remote (network) | Connection lifetime | HTTP + SSE server | Legacy integrations only |
WebSocket transport has been proposed for long-lived bidirectional connections with session persistence, but is not yet in the spec as of early 2026 [6].
The MCP spec has gone through four major revisions, each driven by real-world adoption feedback:
| Version | Date | Key Changes |
|---|---|---|
| 2024-11-05 | Nov 2024 | Initial stable release. Core primitives (tools, resources, prompts), stdio and SSE transports, JSON-RPC 2.0 messaging. [8] |
| 2025-03-26 | Mar 2025 | Streamable HTTP replaces SSE as recommended remote transport. Deprecation of legacy SSE. [10][11] |
| 2025-06-18 | Jun 2025 | Structured tool output. OAuth Resource Server classification with RFC 8707 Resource Indicators. Elicitation. Resource links. Removed JSON-RPC batching. [8][13] |
| 2025-11-25 | Nov 2025 | Experimental Tasks (async call-now, fetch-later). OpenID Connect Discovery. Icons metadata. Incremental scope consent. Client ID Metadata Documents (CIMD). Enterprise-managed authorization. Machine-to-machine auth. [9][14][15] |
Security has been a major focus of the spec's evolution, moving from simple API keys to OAuth 2.1 with PKCE, RFC 8707 Resource Indicators, enterprise-managed authorization, and machine-to-machine auth in the span of a year [13][14][15].
The MCP project maintains official SDKs in multiple languages, all implementing the full protocol lifecycle:
| SDK | Repository | Maturity |
|---|---|---|
| TypeScript | modelcontextprotocol/typescript-sdk | Production — most mature, reference implementation |
| Python | modelcontextprotocol/python-sdk | Production — widely used for data/ML servers |
| Java | modelcontextprotocol/java-sdk | Stable |
| Kotlin | modelcontextprotocol/kotlin-sdk | Stable |
| C# | modelcontextprotocol/csharp-sdk | Stable |
| Ruby | modelcontextprotocol/ruby-sdk | Stable |
| Go | Community-maintained | Stable [17][18] |
The project also maintains an official registry at registry.modelcontextprotocol.io for discovering and publishing MCP servers [19][20]. As of early 2026, MCP client support is widespread across the AI tooling landscape.
The server ecosystem has exploded across a wide range of categories.
Multiple discovery platforms compete: the official MCP Registry, GitHub's awesome-mcp-servers lists, Smithery, mcp.so, and various curated directories — some tracking 8,000+ entries [20][23].
On December 9, 2025, Anthropic donated MCP to the newly formed Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation [4][24]. The AAIF provides vendor-neutral stewardship for the protocol and its surrounding ecosystem.
Platinum members include AWS, Google, Microsoft, OpenAI, Anthropic, Cloudflare, Block, and others. The move mirrors the governance model that helped Linux, Kubernetes, and PyTorch scale through transparent, community-driven evolution [4][24][25].
A minimal Python MCP server exposing a single tool:
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

if __name__ == "__main__":
    mcp.run(transport="stdio")
```
A client (e.g., Claude Desktop) discovers this tool at runtime via tools/list, sees the auto-generated JSON Schema from the type hints, and can invoke it via tools/call — no hardcoded integration required [6][17].
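For illustration, the advertisement the client receives for `add` would look roughly like this (exact schema details vary by SDK version):

```python
# Roughly what tools/list reports for the `add` tool above: the
# docstring becomes the description and the type hints become the
# JSON Schema (exact output varies by SDK version).
add_tool = {
    "name": "add",
    "description": "Add two numbers together.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "a": {"type": "integer"},
            "b": {"type": "integer"},
        },
        "required": ["a", "b"],
    },
}

print(sorted(add_tool["inputSchema"]["properties"]))
```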
- **MCP is the de facto standard** for connecting AI agents to external systems in 2026, with industry-wide adoption across Anthropic, OpenAI, Google, Microsoft, and the broader open-source ecosystem.
- **Three primitives cover the design space:** Tools (actions), Resources (context), and Prompts (templates) provide a clean separation of concerns for agent capabilities.
- **Streamable HTTP is the future of remote transport**, replacing SSE with a simpler, more infrastructure-friendly approach; stdio remains the optimal choice for local servers.
- **Security has matured rapidly**, from API keys to OAuth 2.1 with PKCE, Resource Indicators, enterprise-managed authorization, and machine-to-machine auth in under a year.
- **Neutral governance** under the Linux Foundation's AAIF ensures the protocol evolves as a community standard rather than a single-vendor project.
- **The ecosystem is vast but fragmented:** with 10,000+ servers and multiple competing registries, discovery and quality curation remain active challenges.
- **The spec continues to evolve:** experimental Tasks support, WebSocket transport proposals, and deeper OAuth integration signal that MCP is still in its rapid growth phase.
[1] Anthropic, "Introducing the Model Context Protocol," November 2024. https://www.anthropic.com/news/model-context-protocol
[2] SparkCo, "Why Model Context Protocol Became the USB-C of AI Agents," 2026. https://sparkco.ai/blog/mcp-in-2026-why-model-context-protocol-became-the-usb-c-of-ai-agents
[3] MarsDev, "MCP (Model Context Protocol): Developer Guide [2026]," 2026. https://www.marsdevs.com/blog/model-context-protocol-mcp
[4] Linux Foundation, "Announces the Formation of the Agentic AI Foundation (AAIF)," December 2025. https://www.linuxfoundation.org/press/linux-foundation-announces-the-formation-of-the-agentic-ai-foundation
[5] Model Context Protocol, "Specification — Latest," https://modelcontextprotocol.io/specification/latest
[6] ByteIota, "Model Context Protocol: Complete Developer Implementation Guide 2026," April 2026. https://byteiota.com/model-context-protocol-complete-developer-implementation-guide-2026/
[7] Model Context Protocol, "Tools — Server Specification," https://modelcontextprotocol.io/specification/latest/server/tools
[8] Model Context Protocol Info, "Specification Versions," https://modelcontextprotocol.info/specification/
[9] WorkOS, "MCP 2025-11-25 Spec Update: Async Tasks, Better OAuth, Extensions," 2025. https://workos.com/blog/mcp-2025-11-25-spec-update
[10] APIGene, "MCP SSE vs Stdio: Transport Options Explained (2026)," 2026. https://apigene.ai/blog/mcp-sse-vs-stdio
[11] fka.dev, "Why MCP Deprecated SSE and Went with Streamable HTTP," June 2025. https://blog.fka.dev/blog/2025-06-06-why-mcp-deprecated-sse-and-go-with-streamable-http/
[12] Fast.io, "MCP Streamable HTTP Transport Guide," 2026. https://fast.io/resources/mcp-streamable-http-transport/
[13] DasRoot, "Securing Model Context Protocol: OAuth, mTLS, Zero Trust," February 2026. https://dasroot.net/posts/2026/02/securing-model-context-protocol-oauth-mtls-zero-trust/
[14] Aaron Parecki, "Client Registration and Enterprise Management in the November MCP Spec," November 2025. https://aaronparecki.com/2025/11/25/1/mcp-authorization-spec-update
[15] Auth0, "MCP November 2025 Specification Update," 2026. https://auth0.com/blog/mcp-november-2025-specification-update.md
[16] Alibaba Cloud, "Comprehensive Analysis of New Features in the MCP Specification," 2025. https://www.alibabacloud.com/blog/602206
[17] Model Context Protocol, "Build an MCP Client — Quickstart," https://modelcontextprotocol.io/quickstart/client
[18] Stainless, "MCP SDK Comparison: Python vs TypeScript vs Go," https://www.stainless.com/mcp/mcp-sdk-comparison-python-vs-typescript-vs-go-implementations
[19] Model Context Protocol Info, "Tools — MCP Registry, Inspector, Debugging," https://modelcontextprotocol.info/tools/
[20] GitHub, "modelcontextprotocol/registry," https://github.com/modelcontextprotocol/registry
[21] OpenAI, "Model Context Protocol (MCP) — OpenAI Agents SDK," https://openai.github.io/openai-agents-js/guides/mcp
[22] Web4Agents, "Model Context Protocol (MCP) — Docs," February 2026. https://web4agents.org/en/docs/mcp
[23] APIGene, "MCP Marketplace Guide: Find the Right Server (2026)," 2026. https://apigene.ai/blog/mcp-marketplace
[24] WindowsForum, "MCP Joins Linux Foundation AAIF," December 2025. https://windowsforum.com/threads/mcp-joins-aaif-under-linux-foundation-to-standardize-agent-interoperability.393291/
[25] Gend.co, "OpenAI & Linux Foundation Launch Agentic AI Foundation," December 2025. https://www.gend.co/blog/openai-agentic-ai-foundation-linux