
Model Context Protocol Explained: Why MCP Is the Standard for AI Tooling

Until MCP, every AI integration was bespoke — custom adapters, custom auth, custom error handling. Model Context Protocol changes that. This is the technical primer we wish we'd had six months ago.

Codecanis Admin

9 min read
An MCP-connected agent reading from three internal systems simultaneously.

If you've built more than one AI integration, you know the shape of the problem. Every tool is a bespoke adapter. Every data source has its own auth flow. Every model provider has a slightly different idea of what "tool calling" means. The result is that a year of agent engineering produces a pile of code that solves the same problems over and over again with subtle incompatibilities.

Model Context Protocol (MCP), introduced by Anthropic in late 2024 and now broadly adopted, is the standardisation layer that the AI tooling ecosystem has been quietly converging on. It's the closest thing the industry has to USB: a wire protocol for LLM clients to talk to external capabilities, with everyone agreeing on the shape of the conversation.

This is the technical primer.

What MCP Actually Is

MCP is a client-server protocol over JSON-RPC, designed to let any LLM-powered client (Claude Desktop, Cursor, your custom agent runtime) discover and use capabilities from any compatible server (a CRM connector, a filesystem provider, a database query tool, a private wiki).

The components:

  • MCP Host — the application that wants AI capabilities (Claude Desktop, an IDE, your custom agent).
  • MCP Client — embedded in the host, manages one connection to one MCP server.
  • MCP Server — exposes capabilities (tools, resources, prompts) to a client over a defined transport (stdio for local servers, streamable HTTP for remote ones).

The protocol is intentionally narrow. It does not tell you how to build an agent, how to do reasoning, how to handle memory, or how to authenticate users. It tells you exactly one thing: how an LLM-powered client and a capability provider talk to each other. That narrowness is its strength.

The Three Primitives

Everything in MCP reduces to three primitives: resources, prompts, and tools. Most people start with tools because they map cleanly to function calling, but resources and prompts are where MCP gets interesting.

Resources

Resources are data the client can read. They have a URI, a MIME type, and contents that can be text or binary. The classic example is a file: file:///etc/hosts is a resource. A database row, a wiki page, a Jira ticket — all resources.

Resources are discoverable: clients can ask the server "what resources do you have?" and the server can return a list. Resources are also subscribable: a client can ask to be notified when a resource changes. That's a powerful primitive — it means an MCP server can act as a live data feed, not just a request/response endpoint.
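In wire terms, discovery, reads, and subscriptions are plain JSON-RPC calls. A minimal sketch of the request envelopes (the helper and the request IDs are illustrative; the method names come from the MCP spec):

```python
import json

def rpc_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request envelope as MCP uses it."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# "What resources do you have?" — discovery.
list_req = rpc_request(1, "resources/list", {})

# Read one resource by URI; the result carries contents plus MIME type.
read_req = rpc_request(2, "resources/read", {"uri": "file:///etc/hosts"})

# Ask to be notified when that resource changes (requires the server to
# declare the "subscribe" capability).
sub_req = rpc_request(3, "resources/subscribe", {"uri": "file:///etc/hosts"})

print(json.dumps(read_req, indent=2))
```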

Prompts

Prompts are templated workflows the server offers to the client. The server defines reusable prompt templates with named parameters; the client surfaces them in its UI (typically as a slash command or a suggested action), the user picks one, and the server returns a populated message sequence that the client sends to the LLM.

This is the primitive most people miss when they first encounter MCP. It moves prompt engineering from "scattered across every integration" to "owned by the server that knows its domain best." A GitHub MCP server can ship a /review-pr prompt that knows the right way to ask for a review in its domain — and every MCP-compatible client gets that workflow for free.
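Concretely, the client fills a template's named parameters and the server hands back ready-to-send messages. A sketch of that exchange, using a hypothetical review-pr prompt (the request shape follows the spec; the result contents are illustrative):

```python
# Client -> server: fetch the "review-pr" template with arguments filled in.
get_prompt = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "prompts/get",
    "params": {"name": "review-pr", "arguments": {"pr_number": "1234"}},
}

# Server -> client: a populated message sequence the client forwards to the
# LLM verbatim (illustrative contents).
prompt_result = {
    "description": "Review a pull request",
    "messages": [
        {
            "role": "user",
            "content": {"type": "text",
                        "text": "Please review PR #1234 for correctness ..."},
        }
    ],
}
```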

Tools

Tools are actions the LLM can take — the part of MCP that maps directly to function calling. A tool has a name, a description, a JSON Schema for its inputs, and a handler that returns a result. When the LLM decides to call a tool, the client invokes it via MCP and feeds the result back into the conversation.

The difference between MCP tools and bespoke function calling: an MCP tool is portable across clients. The same tool definition works in Claude Desktop, Cursor, custom agents — anything that speaks MCP. You write the tool once, and it composes with every host.
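That portability comes from the descriptor itself: a tool is just a name, a human-readable description, and a JSON Schema for its inputs. An illustrative tools/list entry for the search_issues tool that appears in the wire examples:

```python
# An illustrative tools/list entry: name, description, and a JSON Schema
# the host uses to validate the LLM's arguments before invoking the tool.
search_issues_tool = {
    "name": "search_issues",
    "description": "Search issues in a GitHub repository.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "repo":  {"type": "string", "description": "owner/name"},
            "query": {"type": "string", "description": "GitHub search syntax"},
            "limit": {"type": "integer", "default": 20},
        },
        "required": ["repo", "query"],
    },
}
```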

The Wire Protocol

Under the hood, MCP is JSON-RPC 2.0 over a transport (stdio for local servers, streamable HTTP for remote ones). The handshake looks roughly like this:

// Client → Server: initialize
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-06-18",
    "capabilities": {
      "roots": { "listChanged": true },
      "sampling": {}
    },
    "clientInfo": { "name": "claude-desktop", "version": "1.4.2" }
  }
}

// Server → Client: capability declaration
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-06-18",
    "capabilities": {
      "tools":     { "listChanged": true },
      "resources": { "subscribe": true, "listChanged": true },
      "prompts":   { "listChanged": true }
    },
    "serverInfo": { "name": "github-mcp", "version": "0.7.1" }
  }
}

Once the handshake completes, the client can list tools, resources, and prompts, and call them. A tool invocation looks like:

// Client → Server: tools/call
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "search_issues",
    "arguments": {
      "repo": "anthropics/anthropic-sdk-python",
      "query": "is:open label:bug",
      "limit": 20
    }
  }
}

// Server → Client: result (content blocks)
{
  "jsonrpc": "2.0",
  "id": 42,
  "result": {
    "content": [
      { "type": "text", "text": "Found 7 open issues with label:bug. ..." }
    ],
    "isError": false
  }
}

The content array is the same shape used by Claude's messages API, which makes round-tripping trivial.
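On the stdio transport, each of those messages travels as one line of JSON. A minimal framing sketch, assuming newline-delimited JSON (the client and version strings are illustrative):

```python
import json

def frame(msg: dict) -> bytes:
    """Serialize one JSON-RPC message for the stdio transport:
    compact JSON, one message per line."""
    return (json.dumps(msg, separators=(",", ":")) + "\n").encode()

def parse(line: bytes) -> dict:
    """Inverse of frame(): decode one received line back into a message."""
    return json.loads(line.decode())

init = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {},
        "clientInfo": {"name": "demo-client", "version": "0.1.0"},
    },
}

wire = frame(init)
round_tripped = parse(wire)
```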

Why It Matters: The Composition Win

The standout property of MCP isn't any single feature. It's composition. Once a capability is wrapped in an MCP server, every MCP-compatible client gets it. The first time you connect Claude Desktop to a filesystem MCP server, a GitHub MCP server, and your company's internal CRM MCP server simultaneously, then ask Claude to "find the latest support ticket from Acme Corp and draft a status update PR" and watch it actually work, the value of standardisation hits you.

Pre-MCP, that workflow required custom integration code linking three systems together. Post-MCP, it's three independent servers that the host composes at runtime. Adding a fourth system (say, Linear) means writing or installing one more server — not rewriting the orchestration logic.
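In Claude Desktop, that composition is just configuration: each entry spawns one stdio server, and the host wires them all into the same conversation. A sketch of the mcpServers config (the filesystem and GitHub packages are Anthropic's reference servers; the crm entry and its module name are hypothetical, and the token is a placeholder):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/work"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_..." }
    },
    "crm": {
      "command": "python",
      "args": ["-m", "acme_crm_mcp"]
    }
  }
}
```

Adding Linear would be a fourth entry in this file, not a change to any of the other three.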

The Ecosystem in Early 2026

Reference clients: Claude Desktop and Cursor ship native MCP support. The Anthropic API supports MCP servers as a first-class concept in agent loops. There's a growing list of third-party clients (Zed, Continue, several emerging agent frameworks).

Reference servers: Anthropic publishes reference servers for filesystem, git, GitHub, Postgres, Slack, Google Drive, and more under modelcontextprotocol/servers on GitHub. Most are TypeScript; some are Python. They're all small enough to read in an afternoon and adapt to your needs.

SDKs: official SDKs exist for TypeScript, Python, Rust, Java, and Go. Building a new server in TypeScript or Python takes about 30 minutes once you've read one reference implementation.
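To see what those SDKs are doing for you, here is a from-scratch sketch of a server's core dispatch loop over stdio: newline-delimited JSON, a hypothetical echo tool, and no capability negotiation or error recovery (an SDK handles all of that properly):

```python
import json
import sys

# One hypothetical demo tool, declared the way tools/list expects.
TOOLS = [{
    "name": "echo",
    "description": "Echo the input text back.",
    "inputSchema": {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    },
}]

def handle(req: dict) -> dict:
    """Dispatch one JSON-RPC request and build the response envelope."""
    method = req["method"]
    params = req.get("params", {})
    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call" and params.get("name") == "echo":
        result = {
            "content": [{"type": "text", "text": params["arguments"]["text"]}],
            "isError": False,
        }
    else:
        return {"jsonrpc": "2.0", "id": req["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

def serve(stdin=sys.stdin, stdout=sys.stdout):
    """Read one request per line, write one response per line."""
    for line in stdin:
        stdout.write(json.dumps(handle(json.loads(line))) + "\n")
        stdout.flush()
```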

Comparison: MCP vs Bespoke Function Calling

The question we hear most: "Do I need MCP if I already have function calling working with OpenAI/Anthropic/Gemini?"

The honest answer: not always, but increasingly yes. If you're building a single-purpose chatbot that calls three internal APIs, raw function calling is fine. If you're building anything that:

  • Needs to expose its capabilities to other AI clients (Claude Desktop, Cursor, future hosts).
  • Composes capabilities from multiple servers/teams.
  • Wants to ship reusable prompt templates alongside tools.
  • Needs subscribable resources that push updates rather than being polled.

...then MCP is doing real work for you, and the ~30 minutes of overhead per server pays back fast.

Key Takeaways

  • MCP is a JSON-RPC protocol for LLM clients to talk to capability servers. Narrow by design.
  • Three primitives: resources (readable data), prompts (templated workflows), tools (actions). Most people underuse prompts.
  • The composition win is the headline: write a capability once, every MCP client can use it.
  • Reference clients (Claude Desktop, Cursor) and reference servers from Anthropic make adoption cheap.
  • For multi-system or multi-client work, MCP is now the default. For single-purpose bots, raw function calling is still fine.

Want to work together?

If this article made you think about your architecture, your roadmap, or a problem you haven't solved yet — let's talk.