
Building Your First MCP Server: An End-to-End Walkthrough

MCP servers are simpler than they look. By the end of this walkthrough you'll have a real server exposing two tools and a resource, connected to Claude Desktop and ready to deploy. Total time: under an hour.

Codecanis Admin

11 min read
Live build session — wiring up an internal-CRM MCP server for a B2B sales team.

The MCP ecosystem can look intimidating from outside — protocols, transports, capability negotiation. The reality is that a working MCP server is under 200 lines of TypeScript. This post walks through building one end-to-end: project setup, defining a tool, defining a resource, wiring it to Claude Desktop, and deploying it for remote access. We're going to build a small internal-CRM server that exposes two tools (search contacts, log an interaction) and one resource (a contact's full record).

Project Setup

Start with a clean directory. We use tsx for dev because it runs TypeScript without a separate build step.

mkdir crm-mcp && cd crm-mcp
npm init -y
npm install @modelcontextprotocol/sdk zod zod-to-json-schema
npm install -D typescript tsx @types/node

# tsconfig.json
npx tsc --init --target ES2022 --module Node16 --moduleResolution Node16 \
  --outDir dist --rootDir src --esModuleInterop true --strict true

Add a "bin" field to package.json so the server can be installed as an executable later, and a "dev" script:

{
  "type": "module",
  "bin": { "crm-mcp": "dist/index.js" },
  "scripts": {
    "dev": "tsx src/index.ts",
    "build": "tsc"
  }
}

The Skeleton: Capability Declaration

Every MCP server starts with a Server instance, a capability declaration, and a transport. For local Claude Desktop integration we use stdio — the host runs your server as a child process and pipes JSON-RPC over its stdin/stdout.

// src/index.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "crm-mcp", version: "0.1.0" },
  {
    capabilities: {
      tools: {},
      resources: {},
    },
  },
);

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // Note: do NOT console.log to stdout — it's the protocol channel.
  // Use console.error for diagnostics; the host typically pipes it to a log file.
  console.error("crm-mcp ready");
}

main().catch((err) => {
  console.error("Fatal:", err);
  process.exit(1);
});

That single "don't write to stdout" rule has bitten every team we've seen build their first MCP server. The protocol owns stdout. Use stderr for logs.
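One way to make the rule hard to violate is to route all diagnostics through a single helper that only ever touches stderr. A minimal sketch — the `formatLine`/`log` helpers and level names here are our own convention, not part of the SDK:

```typescript
// src/log.ts — all diagnostics go through here; stdout stays reserved for JSON-RPC
type Level = "debug" | "info" | "error";

export function formatLine(level: Level, msg: string, ts: Date = new Date()): string {
  return `${ts.toISOString()} [${level}] ${msg}`;
}

export function log(level: Level, msg: string): void {
  // process.stderr.write, never console.log — stdout is the protocol channel
  process.stderr.write(formatLine(level, msg) + "\n");
}
```

Calling `log("info", "crm-mcp ready")` in place of the bare `console.error` keeps every diagnostic line timestamped and levelled, and leaves no tempting `console.log` anywhere in the codebase.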

A Mock CRM Backend

For the walkthrough we'll fake the CRM. In production this would be HTTP calls to Salesforce, HubSpot, or your internal API. The point is to show the MCP shape, not the CRM shape.

// src/crm.ts
export type Contact = {
  id: string;
  name: string;
  email: string;
  company: string;
  status: "lead" | "prospect" | "customer" | "churned";
  interactions: Interaction[];
};

export type Interaction = {
  id: string;
  contactId: string;
  type: "email" | "call" | "meeting" | "note";
  summary: string;
  timestamp: string;
};

// Annotate the Map explicitly so "status" narrows to the union type, not string
const contacts = new Map<string, Contact>([
  ["c_1", { id: "c_1", name: "Ada Lovelace", email: "ada@analytical.eng",
            company: "Analytical Engines Ltd", status: "customer", interactions: [] }],
  ["c_2", { id: "c_2", name: "Grace Hopper", email: "grace@unisys.example",
            company: "Unisys", status: "prospect", interactions: [] }],
  ["c_3", { id: "c_3", name: "Linus Torvalds", email: "linus@kernel.example",
            company: "Linux Foundation", status: "customer", interactions: [] }],
]);

export function searchContacts(query: string, limit = 10): Contact[] {
  const q = query.toLowerCase();
  return [...contacts.values()]
    .filter(c =>
      c.name.toLowerCase().includes(q) ||
      c.company.toLowerCase().includes(q) ||
      c.email.toLowerCase().includes(q))
    .slice(0, limit);
}

export function getContact(id: string): Contact | undefined {
  return contacts.get(id);
}

export function logInteraction(input: Omit<Interaction, "id" | "timestamp">): Interaction {
  const contact = contacts.get(input.contactId);
  if (!contact) throw new Error(`Contact ${input.contactId} not found`);
  const interaction: Interaction = {
    ...input,
    id: `i_${Date.now()}_${Math.random().toString(36).slice(2, 7)}`,
    timestamp: new Date().toISOString(),
  };
  contact.interactions.push(interaction);
  return interaction;
}

Defining Tools

Tools have a name, a description, and a JSON Schema for inputs. We use Zod to define schemas in TypeScript and convert them to JSON Schema for the protocol.

// src/index.ts (continued)
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { searchContacts, getContact, logInteraction } from "./crm.js";

const SearchContactsInput = z.object({
  query: z.string().min(1).describe("Name, email, or company substring to search for"),
  limit: z.number().int().min(1).max(50).default(10),
});

const LogInteractionInput = z.object({
  contactId: z.string().describe("The CRM contact ID, e.g. c_1"),
  type: z.enum(["email", "call", "meeting", "note"]),
  summary: z.string().min(1).max(2000)
    .describe("Short factual summary of what was discussed"),
});

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "search_contacts",
      description: "Search the CRM for contacts by name, email, or company.",
      inputSchema: zodToJsonSchema(SearchContactsInput) as Record<string, unknown>,
    },
    {
      name: "log_interaction",
      description: "Record an interaction (email, call, meeting, or note) against a contact. " +
                   "Use this after the user has actually had the interaction.",
      inputSchema: zodToJsonSchema(LogInteractionInput) as Record<string, unknown>,
    },
  ],
}));

Tool descriptions are part of the model's reasoning context. The LLM uses them to decide when to call a tool. Spend disproportionate effort on these — descriptions that are vague, missing examples, or that fail to clarify when not to call the tool are the most common cause of agent misbehaviour.
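As a concrete illustration (our wording, not anything from the SDK), compare a description the model has to guess around with one that states the boundaries:

```typescript
// Vague: the model can't tell when this applies or what "stuff" means.
const vague = {
  name: "log_interaction",
  description: "Logs stuff to the CRM.",
};

// Sharp: says when to use it, when NOT to, and shows a concrete call.
const sharp = {
  name: "log_interaction",
  description:
    "Record a completed interaction (email, call, meeting, or note) against a CRM contact. " +
    "Use ONLY after the user confirms the interaction actually happened; " +
    "do NOT use it to schedule future meetings or draft emails. " +
    'Example arguments: { "contactId": "c_3", "type": "call", "summary": "Discussed kernel scheduling." }',
};
```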

The Tool Handler

The handler dispatches on tool name, validates input through Zod, executes the underlying logic, and returns content blocks. Errors should be returned as structured tool results with isError: true — not thrown — so the model can reason about them.

server.setRequestHandler(CallToolRequestSchema, async (req) => {
  const { name, arguments: rawArgs } = req.params;

  try {
    if (name === "search_contacts") {
      const args = SearchContactsInput.parse(rawArgs);
      const results = searchContacts(args.query, args.limit);
      return {
        content: [{
          type: "text",
          text: results.length
            ? results.map(c => `${c.id} | ${c.name} (${c.company}) — ${c.status}`).join("\n")
            : `No contacts matched "${args.query}".`,
        }],
      };
    }

    if (name === "log_interaction") {
      const args = LogInteractionInput.parse(rawArgs);
      const interaction = logInteraction(args);
      return {
        content: [{
          type: "text",
          text: `Logged ${interaction.type} interaction ${interaction.id} at ${interaction.timestamp}.`,
        }],
      };
    }

    return {
      content: [{ type: "text", text: `Unknown tool: ${name}` }],
      isError: true,
    };
  } catch (err) {
    const message = err instanceof z.ZodError
      ? `Invalid arguments: ${err.issues.map(i => `${i.path.join(".")}: ${i.message}`).join("; ")}`
      : err instanceof Error ? err.message : String(err);

    return {
      content: [{ type: "text", text: message }],
      isError: true,
    };
  }
});

Defining a Resource

Resources are read-only data the client can fetch by URI. We'll expose each contact as a resource at crm://contact/{id}.

server.setRequestHandler(ListResourcesRequestSchema, async () => {
  const all = ["c_1", "c_2", "c_3"]
    .map(id => getContact(id)!)
    .map(c => ({
      uri: `crm://contact/${c.id}`,
      name: `${c.name} — ${c.company}`,
      description: `Full CRM record for ${c.name}, including interaction history`,
      mimeType: "application/json",
    }));
  return { resources: all };
});

server.setRequestHandler(ReadResourceRequestSchema, async (req) => {
  const uri = req.params.uri;
  const match = /^crm:\/\/contact\/(.+)$/.exec(uri);
  if (!match) throw new Error(`Unsupported URI: ${uri}`);

  const contact = getContact(match[1]);
  if (!contact) throw new Error(`Contact not found: ${match[1]}`);

  return {
    contents: [{
      uri,
      mimeType: "application/json",
      text: JSON.stringify(contact, null, 2),
    }],
  };
});

The host's UI now shows these as attachable resources — the user can pin a contact's record into their conversation, and the LLM gets the full structured data without a tool call.

Connecting to Claude Desktop

Claude Desktop discovers MCP servers through a JSON config file. On macOS the path is ~/Library/Application Support/Claude/claude_desktop_config.json; on Windows it's under %APPDATA%\Claude\. Note that values in "env" are passed to the child process verbatim — Claude Desktop does not expand shell-style variable references.

{
  "mcpServers": {
    "crm": {
      "command": "npx",
      "args": ["-y", "tsx", "/Users/you/dev/crm-mcp/src/index.ts"],
      "env": {
        "CRM_API_KEY": "your-actual-api-key"
      }
    }
  }
}

Quit and reopen Claude Desktop. The MCP indicator in the input bar should show "crm" with two tools and three resources. Ask Claude: "search the CRM for anyone at the Linux Foundation, then log a note that we discussed kernel scheduling." If it works, you'll see two tool calls happen and the right structured response come back.

Deploying for Remote Access (HTTP + SSE)

stdio is great for local use. For a server multiple people share, you want an HTTP transport with Server-Sent Events for streaming. (Newer SDK releases also ship a streamable HTTP transport; the SSE pattern shown here is the longest-established one.) The change to your code is minimal:

// src/http.ts — alternative entry point for remote deployment
import express from "express";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
// import `server` from your shared module

const app = express();

// One transport per connected client, keyed by the session ID the transport generates
const transports = new Map<string, SSEServerTransport>();

app.get("/sse", async (req, res) => {
  // Require auth on real deployments — Bearer token, OAuth, mTLS, your call
  if (req.headers.authorization !== `Bearer ${process.env.MCP_TOKEN}`) {
    res.status(401).send("unauthorized");
    return;
  }

  const transport = new SSEServerTransport("/messages", res);
  transports.set(transport.sessionId, transport);
  res.on("close", () => transports.delete(transport.sessionId));
  await server.connect(transport);
});

// The transport advertises /messages?sessionId=... to the client; route each POST
// back to the matching transport. Don't mount express.json() here —
// handlePostMessage reads the raw request body itself.
app.post("/messages", async (req, res) => {
  const transport = transports.get(String(req.query.sessionId));
  if (!transport) {
    res.status(400).send("unknown session");
    return;
  }
  await transport.handlePostMessage(req, res);
});

app.listen(8080, () => console.error("crm-mcp listening on :8080"));

Put this behind your reverse proxy (Caddy, nginx, Cloudflare Tunnel) with TLS, add real auth, and you have a remote MCP server any compatible client can connect to.
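"Rate limit per client" can start as small as a per-client token bucket checked at the top of each handler. A minimal sketch — the `allowRequest` helper, bucket size, and refill rate are our own illustration, not SDK API:

```typescript
// Per-client token bucket: `capacity` requests burst, refilled at `refillPerSec` tokens/second
type Bucket = { tokens: number; last: number };

const buckets = new Map<string, Bucket>();
const capacity = 20;
const refillPerSec = 5;

export function allowRequest(clientId: string, now: number = Date.now()): boolean {
  const b = buckets.get(clientId) ?? { tokens: capacity, last: now };
  // Refill proportionally to elapsed time, capped at capacity
  b.tokens = Math.min(capacity, b.tokens + ((now - b.last) / 1000) * refillPerSec);
  b.last = now;
  if (b.tokens < 1) {
    buckets.set(clientId, b);
    return false; // caller should respond 429
  }
  b.tokens -= 1;
  buckets.set(clientId, b);
  return true;
}
```

In the `/sse` and `/messages` handlers, a `if (!allowRequest(clientId)) { res.status(429).send("rate limited"); return; }` guard after the auth check is enough to stop one misbehaving agent from starving the rest.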

Production Checklist

  • Validate every tool input with a schema. Return structured errors, don't throw.
  • Log to stderr only. Never write to stdout.
  • Tool descriptions: include when to use, when NOT to use, and a concrete example.
  • For remote deployments: require auth, set request size limits, rate limit per client.
  • Wrap any tool with side effects (write/delete/send) in idempotency keys.
  • Emit structured logs (tool name, latency, success/failure) for observability.
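The idempotency-key item above can be sketched as a thin wrapper: the caller supplies a key (e.g. as an extra tool argument), and a repeat call with the same key replays the cached result instead of re-running the side effect. This is a minimal in-memory sketch — the `withIdempotency` helper and `idempotencyKey` argument are our own illustration; a production version would back the cache with Redis or a database and expire old keys:

```typescript
// Cache the result of each side-effecting call by caller-supplied idempotency key
const seen = new Map<string, unknown>();

export function withIdempotency<T>(key: string, run: () => T): T {
  if (seen.has(key)) {
    // Replay: return the prior result without re-running the side effect
    return seen.get(key) as T;
  }
  const result = run();
  seen.set(key, result);
  return result;
}
```

In the tool handler, wrapping the write as `withIdempotency(args.idempotencyKey, () => logInteraction(args))` makes a retried log_interaction call safe instead of creating a duplicate record.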

Key Takeaways

  • An MCP server is under 200 lines of TypeScript for a real, useful integration.
  • Tool descriptions are part of the model's reasoning — invest in them like product copy.
  • Return errors as isError: true content blocks; the model handles structured errors gracefully.
  • stdio for local Claude Desktop integration; HTTP+SSE for shared remote deployment.
  • Resources let users attach structured data without a tool call — underused but powerful.