
MCP Capabilities

MCP servers can provide three types of capabilities to clients: Tools, Resources, and Prompts. Each differs in terms of who controls the invocation, whether side effects are involved, and what kind of data is handled.

| Type | Controller | Side Effects | Primary Use |
|---|---|---|---|
| Tools | AI model (autonomous) | Yes | Executing operations, calling external APIs |
| Resources | Application (Host) | No | Providing read-only data |
| Prompts | User / Developer | No | Managing prompt templates |

Tools are functions that execute something on behalf of the AI model. They perform operations, computations, or requests to external APIs — that is, operations with side effects.

  • The AI model decides autonomously to call them (model-driven)
  • Perform operations or computations that have side effects
  • A mechanism to request user permission before execution is recommended

How Tools Work: A Weather Retrieval Example


When a user asks “What is the weather in Tokyo?”, the following flow occurs:

  1. The LLM determines that it needs to call the get_weather tool to look up the weather
  2. The MCP client sends get_weather(location="Tokyo") to the server
  3. The server calls the weather API and returns the result as JSON
  4. The LLM generates the answer “Tokyo is 18°C and sunny” based on the result
// Example tool call request
{
  "name": "get_weather",
  "arguments": {
    "location": "Tokyo"
  }
}

// Example server response
{
  "temperature": 18,
  "condition": "sunny",
  "humidity": 55
}
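The dispatch step in the flow above can be sketched in a few lines. This is an illustrative sketch, not the official MCP SDK: the registry, decorator, and `handle_tool_call` function are hypothetical names, and the weather data is hard-coded where a real server would call an external API.

```python
# Illustrative sketch of server-side tool dispatch (not the MCP SDK).

TOOLS = {}

def tool(name):
    """Register a handler function under a tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("get_weather")
def get_weather(location):
    # A real server would call a weather API here; we return fixed data.
    return {"temperature": 18, "condition": "sunny", "humidity": 55}

def handle_tool_call(request):
    """Look up the named tool and invoke it with the given arguments."""
    handler = TOOLS[request["name"]]
    return handler(**request["arguments"])

result = handle_tool_call({"name": "get_weather",
                           "arguments": {"location": "Tokyo"}})
print(result)  # {'temperature': 18, 'condition': 'sunny', 'humidity': 55}
```

The key point is that the server only maps a tool name to a handler; deciding *when* to call `get_weather` remains entirely the model's responsibility.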

Tools involve operations that act on the outside world — writing files, sending emails, making API requests, and so on. For this reason, Hosts like Claude Desktop are designed to ask the user “May I use this tool?” before proceeding.

Examples of operations requiring user permission

  • Creating, updating, or deleting files
  • Sending emails or messages
  • Sending data to external APIs
  • Writing to a database
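A Host can enforce this permission step with a simple gate in front of tool execution. The sketch below is hypothetical (the tool names, the `ask_user` callback, and the return shapes are all assumptions); a real Host like Claude Desktop shows a confirmation dialog instead.

```python
# Illustrative sketch: a Host-side permission gate for side-effecting tools.

SIDE_EFFECTING = {"write_file", "send_email", "post_to_api", "db_write"}

def run_tool(name, args, ask_user):
    """Run a tool, requiring explicit user approval for side effects."""
    if name in SIDE_EFFECTING and not ask_user(f"May I use the tool '{name}'?"):
        return {"error": "denied by user"}
    # Read-only tools (not in SIDE_EFFECTING) pass through without a prompt.
    return {"ok": True, "tool": name, "args": args}

# A real Host would show a dialog; here we hard-code a "no" answer.
print(run_tool("send_email", {"to": "a@example.com"}, ask_user=lambda q: False))
# → {'error': 'denied by user'}
```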

Resources are a mechanism for providing read-only data to the AI model. They function like a database or knowledge base, and their defining characteristic is that they have no side effects.

  • Controlled by the application (Host) — the AI model does not access them autonomously
  • Provide read-only data (no side effects)
  • Identified by URI (file://, https://, db://, etc.)
  • High safety from a privacy standpoint
URI example: file:///Users/user/documents/report.pdf

// Resource metadata
{
  "uri": "file:///Users/user/documents/report.pdf",
  "name": "Quarterly Report",
  "mimeType": "application/pdf"
}

When the Host application determines that “the content of this file should be included in the context,” it adds the Resource to the AI model’s context.
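Conceptually, a Resource capability is little more than a read-only lookup table keyed by URI. The sketch below is an assumption-laden illustration (the registry and function names are invented, and it reuses the report URI from the metadata example above); real servers expose this via the MCP `resources/list` and `resources/read` operations.

```python
# Illustrative sketch: a read-only Resource registry keyed by URI.

RESOURCES = {
    "file:///Users/user/documents/report.pdf": {
        "name": "Quarterly Report",
        "mimeType": "application/pdf",
    },
}

def list_resources():
    """Return metadata for every registered resource (no side effects)."""
    return [{"uri": uri, **meta} for uri, meta in RESOURCES.items()]

def read_resource(uri):
    """Look up a resource by URI; fail loudly if it is not registered."""
    if uri not in RESOURCES:
        raise KeyError(f"unknown resource: {uri}")
    return RESOURCES[uri]

print(list_resources()[0]["name"])  # Quarterly Report
```

Note that nothing in this interface can write or delete anything, which is exactly what makes Resources safe for the Host to attach to context freely.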

| Comparison | Tools | Resources |
|---|---|---|
| Controller | AI model (autonomous) | Application (Host) |
| Side effects | Yes (writing, sending, etc.) | No (read-only) |
| Security | User permission recommended | Relatively safe |
| Execution timing | When the LLM determines it is needed | When the Host determines it is needed |

Prompts are predefined prompt templates or conversation flows managed by an MCP server. They allow standardized instructions and conversation patterns to guide AI behavior, all managed centrally on the server side.

  • Controlled by the user or developer — the AI model does not select them autonomously
  • Managed server-side, so they can be updated without changing the client app
  • Can accept arguments (parameters) to generate dynamic prompts
  • Enable standardization of reusable prompt patterns
// Example prompt definition
{
  "name": "code_review",
  "description": "Prompt template for code review",
  "arguments": [
    {
      "name": "language",
      "description": "Programming language",
      "required": true
    },
    {
      "name": "focus",
      "description": "Review focus (security/performance/readability)",
      "required": false
    }
  ]
}

When a user selects the code_review template and specifies language=Python, the server generates a concrete prompt that reflects those parameters and returns it to the client.
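That server-side rendering step can be sketched with a plain template. The template text and default value below are hypothetical (the `code_review` definition above specifies only the argument names); a real server would return the filled prompt over the MCP `prompts/get` operation.

```python
# Illustrative sketch: rendering the code_review template server-side.
from string import Template

PROMPTS = {
    "code_review": {
        "template": Template(
            "Review the following $language code. Focus: $focus."
        ),
        # 'focus' is optional in the definition above, so supply a default.
        "defaults": {"focus": "general quality"},
    },
}

def render_prompt(name, **args):
    """Fill a named template, applying defaults for optional arguments."""
    entry = PROMPTS[name]
    values = {**entry["defaults"], **args}
    return entry["template"].substitute(values)

print(render_prompt("code_review", language="Python"))
# → Review the following Python code. Focus: general quality.
```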

Managing Prompts on the server side provides the following benefits:

  • Centralized management: Prompt improvements are made server-side; clients automatically use the latest version
  • Reusability: The same prompt templates can be shared across a team
  • Version control: The history of prompt changes can be managed on the server

Here is a decision guide for selecting the appropriate type:

graph TD
    A[What capability to provide?] --> B{Side effects?}
    B -->|Yes\ne.g., file write, API send| C[Tools]
    B -->|No| D{Is it AI-driven\nautonomously?}
    D -->|Yes| E[Tools\nNote: even without side effects,\nmodel-driven use means Tools]
    D -->|No| F{Data or prompt?}
    F -->|Data provision| G[Resources]
    F -->|Prompt\ntemplate| H[Prompts]
| Scenario | Appropriate Type | Reason |
|---|---|---|
| Fetching weather information | Tools | AI calls autonomously, accesses external API |
| Reading a document file | Resources | Read-only, controlled by Host |
| Code review prompt | Prompts | Reusable template, selected by user |
| Sending an email | Tools | Has side effects, requires user permission |
| Reading from a database | Resources | No side effects, data retrieval only |
| Project-specific instructions | Prompts | Standardized flow defined by dev team |
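The decision flow above is simple enough to encode as a function, which can be a handy sanity check when designing a server. This is a sketch of the guide's logic only, not part of any MCP API.

```python
# Illustrative sketch: the decision guide above, encoded as a function.

def choose_capability(has_side_effects, model_driven, is_prompt_template):
    """Return which MCP capability fits, following the decision graph."""
    if has_side_effects or model_driven:
        # Model-driven use implies Tools even without side effects.
        return "Tools"
    return "Prompts" if is_prompt_template else "Resources"

print(choose_capability(True, True, False))    # Tools (e.g. sending an email)
print(choose_capability(False, False, False))  # Resources (e.g. reading a file)
print(choose_capability(False, False, True))   # Prompts (e.g. review template)
```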
  • Tools: “Execution functions” the AI model calls autonomously. Side effects require careful security management.
  • Resources: Read-only data provision. Controlled by the Host with no side effects, providing high safety.
  • Prompts: Prompt templates managed on the server side. Can be updated without changing the client.
  • What is MCP? — Return to the MCP overview and definition
  • Why MCP? — The M×N integration problem explained
  • MCP Architecture — The three-layer structure of Host, Client, and Server

Q: Are Tools and Function Calling the same thing?

A: The concepts are similar but distinct. Function Calling is a feature specific to certain LLMs (such as those from OpenAI), where the AI interprets function signatures and generates arguments. Tools is a concept within the MCP protocol — a broader mechanism that may include Function Calling. MCP Tools are sometimes implemented internally using Function Calling.

Q: What is the difference between Resources and RAG (Retrieval-Augmented Generation)?

A: RAG (Retrieval-Augmented Generation) is a technique for searching external documents and adding them to the AI’s context. Resources can be used as one mechanism for implementing RAG, but Resources are not limited to RAG. Resources can provide any read-only data — files, databases, API results, and so on.

Q: Can a single MCP server provide all three types — Tools, Resources, and Prompts?

A: Yes, a single MCP server can provide all three types. For example, a GitHub MCP server could provide repository information as Resources, file creation as Tools, and PR creation templates as Prompts.

