Why MCP?

The reason MCP is necessary lies in the M×N integration problem. As the number of AI applications (M) and external tools (N) grows, the amount of custom integration code multiplies, making the system increasingly expensive to scale. MCP reduces this to M+N.
The M×N Integration Problem
A World Without MCP

Without MCP, each AI application must implement dedicated code for every tool it wants to connect to.
Consider a scenario with three AI apps and three tools:
AI App 1 → custom code for Database
AI App 1 → custom code for File System
AI App 1 → custom code for Calculator API
AI App 2 → custom code for Database (separate implementation)
AI App 2 → custom code for File System (separate implementation)
AI App 2 → custom code for Calculator API (separate implementation)
AI App 3 → ... (same, three more)

Three apps × three tools = 9 separate integrations required.
```mermaid
graph LR
    subgraph "Without MCP (M×N = 9 integrations)"
        A1[AI App 1] -->|custom code| T1[(Database)]
        A1 -->|custom code| T2[File\nSystem]
        A1 -->|custom code| T3[Calculator API]
        A2[AI App 2] -->|custom code| T1
        A2 -->|custom code| T2
        A2 -->|custom code| T3
        A3[AI App 3] -->|custom code| T1
        A3 -->|custom code| T2
        A3 -->|custom code| T3
    end
```

Why the Problem Gets Worse
This structure has the following issues:
| Problem | Description |
|---|---|
| Exploding development costs | Integration code grows multiplicatively as M and N increase |
| Maintenance difficulty | When a tool’s API changes, every AI app using it must be updated |
| Inconsistent quality | Each team implements independently, leading to varied error handling and security |
| Burden on tool providers | Must provide separate SDKs or plugins for each AI platform |
In practice, LLM providers are multiplying (OpenAI, Anthropic, Google, Mistral, etc.), and tools are diversifying (Slack, GitHub, Notion, various databases, etc.). Without a standard, the number of integration combinations grows without bound.
Understanding the M×N Problem With Real Services
Abstract explanations can be hard to grasp, so let me use real service names to make this concrete.
A Concrete Scenario
3 AI apps: Claude Desktop, Cursor, Windsurf
3 external tools: GitHub, Slack, Figma
Without MCP, here is what the integration table looks like:
| AI App | GitHub Integration | Slack Integration | Figma Integration |
|---|---|---|---|
| Claude Desktop | Custom impl. A | Custom impl. B | Custom impl. C |
| Cursor | Custom impl. D | Custom impl. E | Custom impl. F |
| Windsurf | Custom impl. G | Custom impl. H | Custom impl. I |
3 × 3 = 9 separate pieces of “glue code” are needed.
And if GitHub updates its API, I have to fix custom implementations A, D, and G — all three, separately.
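The glue-code table above can be modeled in a few lines. This is purely an illustration (the dict of strings stands in for what would really be nine separate codebases); none of these names correspond to real integration code:

```python
apps = ["Claude Desktop", "Cursor", "Windsurf"]
tools = ["GitHub", "Slack", "Figma"]

# One glue implementation per (app, tool) pair: M × N of them.
glue = {(app, tool): f"custom glue: {app} <-> {tool}" for app in apps for tool in tools}
print(len(glue))  # 9

# A GitHub API change touches every app's GitHub glue, one copy per app:
affected = [pair for pair in glue if pair[1] == "GitHub"]
print(len(affected))  # 3
```

The key point is that `affected` scales with the number of apps: every new AI app adds one more GitHub implementation that must be fixed on every API change.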
With MCP
The same scenario with MCP:
- GitHub publishes one GitHub MCP server
- Slack publishes one Slack MCP server
- Figma publishes one Figma MCP server (→ Figma Native MCP)
- Claude Desktop, Cursor, and Windsurf each implement one MCP client
Total: 3 (AI apps) + 3 (tools) = just 6 implementations.
If GitHub updates its API, only one place needs updating — the GitHub MCP server.
The Difference Grows With Scale
| Scale | Without MCP | With MCP | Reduction |
|---|---|---|---|
| 3 apps × 3 tools | 9 | 6 (3+3) | 33% |
| 10 apps × 20 tools | 200 | 30 (10+20) | 85% |
| 100 apps × 100 tools | 10,000 | 200 (100+100) | 98% |
The more apps and tools there are, the greater the impact of MCP.
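The table's figures come straight from the M×N vs. M+N arithmetic; here is a throwaway sketch that reproduces them (`integration_count` is a hypothetical helper for illustration, not part of any MCP SDK):

```python
def integration_count(m: int, n: int) -> tuple[int, int]:
    """Return (without_mcp, with_mcp): M×N glue integrations vs. M+N protocol implementations."""
    return m * n, m + n

for m, n in [(3, 3), (10, 20), (100, 100)]:
    without, with_mcp = integration_count(m, n)
    reduction = 1 - with_mcp / without
    print(f"{m} apps × {n} tools: {without} -> {with_mcp} ({reduction:.0%} fewer)")
```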
The MCP Solution: M+N
A World With MCP

Introducing MCP fundamentally changes the structure:
- Each AI app implements an “MCP client” exactly once
- Each tool implements an “MCP server” exactly once
- Because they speak a common protocol (MCP), no custom code is needed for new combinations
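To make "implement once against a common protocol" concrete, here is a deliberately simplified, pure-Python sketch. It is NOT the real MCP wire protocol (which runs over JSON-RPC); `ToolServer`, `CalculatorServer`, and `client_call` are hypothetical names used only to show the shape of the idea:

```python
from abc import ABC, abstractmethod
from typing import Any

class ToolServer(ABC):
    """Stand-in for an MCP server: one uniform surface per tool."""

    @abstractmethod
    def list_tools(self) -> list[str]:
        """Names of the tools this server offers."""

    @abstractmethod
    def call_tool(self, name: str, **kwargs: Any) -> Any:
        """Invoke one of the offered tools."""

class CalculatorServer(ToolServer):
    """One concrete server (the N side): written once by the tool provider."""

    def list_tools(self) -> list[str]:
        return ["add"]

    def call_tool(self, name: str, **kwargs: Any) -> Any:
        if name == "add":
            return kwargs["a"] + kwargs["b"]
        raise ValueError(f"unknown tool: {name}")

def client_call(server: ToolServer, tool: str, **kwargs: Any) -> Any:
    """The client (the M side): written once, works with ANY ToolServer."""
    if tool not in server.list_tools():
        raise ValueError(f"server does not offer {tool!r}")
    return server.call_tool(tool, **kwargs)

print(client_call(CalculatorServer(), "add", a=2, b=3))  # 5
```

Because `client_call` depends only on the shared interface, adding a new server (or a new client) never requires touching the other side: each new participant is one implementation, not one per counterpart.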
```mermaid
graph LR
    subgraph "With MCP (M+N = only 6 implementations)"
        A1[AI App 1\nMCP Client] --> MCP{MCP\nProtocol}
        A2[AI App 2\nMCP Client] --> MCP
        A3[AI App 3\nMCP Client] --> MCP
        MCP --> T1[(Database\nMCP Server)]
        MCP --> T2[File System\nMCP Server]
        MCP --> T3[Calculator API\nMCP Server]
    end
```

Comparing the Reduction
Section titled “Comparing the Reduction”| Scenario | Without MCP | With MCP |
|---|---|---|
| 3 apps × 3 tools | 9 integrations | 6 (3+3) |
| 10 apps × 20 tools | 200 integrations | 30 (10+20) |
| 100 apps × 100 tools | 10,000 integrations | 200 (100+100) |
| Impact of a tool’s API change | All AI apps affected | Only 1 MCP server needs updating |
Again, the larger the ecosystem grows, the more dramatic the difference becomes.
Real-World Benefits
For AI App Developers

Once I implement an MCP client, I can immediately use every MCP-compatible tool available. When new tools appear, I do not need to change my app’s code.
For Tool Providers
Once I publish an MCP server, every MCP-compatible AI app can access it. I no longer need to develop custom integrations for specific AI platforms.
For End Users
When I switch AI apps, I can continue using the same tools without any changes. The freedom to combine tools and apps increases significantly.
Summary
- Without MCP, integrating M AI apps with N tools requires M×N implementations
- With MCP, each side implements once, reducing the total to M+N
- The more apps and tools that exist, the more dramatic this reduction becomes
Next Steps
- MCP Architecture — A detailed look at the three-layer structure of Host, Client, and Server
- MCP Capabilities — Tools, Resources, and Prompts that MCP servers can provide
- What is MCP? — Return to the MCP overview and definition
- Remote MCP vs Local MCP — The difference between local and remote MCP, using Figma Native MCP as an example
Frequently Asked Questions
Q: How was the M×N problem solved before MCP existed?
A: Each AI provider offered its own plugin system (e.g., OpenAI Plugins, LangChain’s tool functionality). However, these were not standardized, so a plugin built for one AI app could not be reused in another. MCP aims to solve this problem as an industry-wide common standard.
Q: How is MCP different from frameworks like LangChain or LlamaIndex?
A: LangChain and LlamaIndex are frameworks written in specific programming languages (primarily Python/TypeScript) — libraries for building AI applications. MCP is a language-agnostic protocol (communication standard). It is entirely possible to combine them: a LangChain app implementing an MCP client and communicating with MCP servers.
Q: Does every AI app support MCP?
A: As of March 2026, MCP adoption is growing rapidly. Major AI development tools including Claude Desktop, Cursor, Windsurf, and Zed support MCP. However, not every AI app has adopted it yet, and adoption is still expanding.