16 min read

What Is MCP? The Model Context Protocol Developer Guide 2026

MCP connects Claude, Cursor, and VS Code to your local data. Learn what Model Context Protocol is, how it works, and how to expose MCP servers remotely.

🤖 MCP · AI Tools · Protocol · 2026

What Is MCP? The Model Context Protocol Developer Guide

Claude knows everything in its training data. It knows nothing about your database, your internal docs, your running services, or the file you opened five minutes ago. Every AI tool ships with a different plugin system, a different context format, and a different way to pull in external data. That's not a feature ecosystem. That's fragmentation. MCP, the Model Context Protocol, fixes this with a single open standard that any AI client can speak and any server can implement. This guide explains exactly what MCP is, how it works in Claude Desktop, Cursor, and VS Code, and how to run your own MCP server, including how to expose it securely to remote clients using a tunnel.

🤖 Works with Claude · Cursor · VS Code · Copilot 🔌 Open standard by Anthropic, 2024 🌐 Local or remote MCP server support

The Problem MCP Solves: Every AI Tool Is an Island

Before MCP, connecting an AI model to an external data source required custom code for every combination. Want Claude to read your Notion docs? Write a Notion integration. Want Copilot to query your database? Write a different integration. Want both to work at the same time? Write two integrations and maintain them separately as both APIs change.

This N×M problem scales badly. There are dozens of AI clients and thousands of possible data sources. Every integration is one-off glue code: fragile, non-transferable, and invisible to the AI model itself.

MCP replaces all of that glue code with one protocol: you build an MCP server once and any MCP-compatible AI client can connect to it.

What Is MCP? The Model Context Protocol Explained

MCP (Model Context Protocol) is an open protocol published by Anthropic in late 2024. It defines a standard way for AI models to communicate with external servers that provide tools, data, and instructions. Think of it as a USB-C standard for AI context: one connector, hundreds of compatible devices.

The protocol runs between two sides. The MCP client is the AI application (Claude Desktop, Cursor, VS Code with Copilot, your own app). The MCP server is a process you run locally or remotely that exposes capabilities: file system access, database queries, API calls, custom functions. The client asks the server what it can do, the server lists its tools and resources, and the model decides when and how to call them during a conversation.

MCP is transport-agnostic. The same server logic can communicate over stdio (a local subprocess pipe) or over HTTP with Server-Sent Events (SSE). Clients that run local subprocesses use stdio. Clients that connect to remote or shared servers use HTTP+SSE. This distinction matters when you want more than one machine to share the same MCP server.

🛠 Tools Executable functions the AI model can call. Examples: run_sql_query, search_files, send_email. Each tool has a name, description, and JSON schema for its parameters.
📡 Resources Static or dynamic data the model can read. A resource has a URI (e.g., file:///home/user/report.md or db://customers/recent) and returns content when fetched.
💬 Prompts Pre-written instruction templates the server provides to the client. Useful for defining reusable workflows like "summarize this code diff" or "review this PR against our style guide."
🔌 Transports: stdio vs HTTP+SSE Stdio is for local servers launched as subprocesses; no network needed. HTTP+SSE is for remote or shared servers; requires a reachable URL and works across machines.
🔑 Capability negotiation When a client connects, the server declares exactly what it supports. The model sees those capabilities and can invoke them mid-conversation without any manual tool registration on the client side.
🌐 Open standard, many SDKs Anthropic published the spec. Official SDKs exist for Python (mcp), TypeScript (@modelcontextprotocol/sdk), and community ports for Go, Rust, and Java.
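
Under the hood, this negotiation is plain JSON-RPC 2.0. As a sketch, tool discovery comes down to a `tools/list` request and a response carrying the tool schemas (the `get_time` tool here is illustrative):

```
→ client request
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

← server response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_time",
        "description": "Returns the current server time",
        "inputSchema": { "type": "object", "properties": {} }
      }
    ]
  }
}
```

The SDKs generate and parse these messages for you; you only implement the handlers.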

🤖 MCP in Claude Desktop

Claude Desktop was the first major MCP client. It supports both stdio servers (local subprocesses) and HTTP+SSE servers (remote URLs). Configuration lives in a single JSON file at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS, or %APPDATA%\Claude\claude_desktop_config.json on Windows.

claude_desktop_config.json — stdio server (local subprocess)
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/Documents"]
    },
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "/Users/you/data.db"]
    }
  }
}

With stdio servers, the server process starts and stops with Claude Desktop. Only your local machine can use it, and only one Claude Desktop instance at a time. If you want a colleague to access the same MCP server, stdio won't cut it. You need HTTP+SSE, and the server needs a public URL.

claude_desktop_config.json — HTTP+SSE server (remote URL via Localtonet)
{
  "mcpServers": {
    "shared-tools": {
      "url": "https://yourname.localto.net/sse"
    }
  }
}

When you expose your local MCP server through a Localtonet tunnel (covered in detail below), every teammate can add that URL to their own claude_desktop_config.json and use the same tools and resources without running anything locally. No cloud deployment. No Docker registry. The server keeps running on your machine.
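Note: some Claude Desktop builds only accept command/args entries in claude_desktop_config.json and reject a bare url key. If yours does, the mcp-remote npm package can bridge the gap: Claude launches it as a local stdio server and it forwards everything to the remote SSE URL. A sketch:

```
{
  "mcpServers": {
    "shared-tools": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://yourname.localto.net/sse"]
    }
  }
}
```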

⚡ MCP in Cursor

Cursor added MCP support in version 0.43. It works in both Agent mode and normal chat, giving the AI access to any tools your MCP server exposes. Configuration lives in ~/.cursor/mcp.json (global) or .cursor/mcp.json at the root of a project (per-project).

~/.cursor/mcp.json
{
  "mcpServers": {
    "my-tools": {
      "command": "node",
      "args": ["/home/you/mcp-servers/my-tools/index.js"]
    },
    "remote-db": {
      "url": "https://yourname.localto.net/sse"
    }
  }
}

In Cursor's Agent mode, the model automatically decides when to call MCP tools based on the task. Ask it to "check the customers table for orders placed in the last 7 days" and it calls run_sql_query on your locally running MCP server, without you writing any query yourself.

The per-project .cursor/mcp.json is particularly useful for team repositories: commit it to your repo and every developer who opens the project in Cursor gets the same MCP tools, pointed at the same remote URL.

💻 MCP in VS Code: GitHub Copilot and Continue.dev

VS Code gained MCP support through two routes. GitHub Copilot (version 1.99+) added native MCP support in its Agent mode. Continue.dev, the open-source AI coding extension, added MCP support in v0.9.

GitHub Copilot (VS Code 1.99+):

Add your MCP servers to VS Code's settings.json under the top-level mcp key (or put them in a .vscode/mcp.json file in your workspace). This works in Copilot's Agent mode, where the model can call tools mid-conversation.

VS Code settings.json — GitHub Copilot MCP
{
  "mcp": {
    "servers": {
      "my-mcp-server": {
        "type": "sse",
        "url": "https://yourname.localto.net/sse"
      }
    }
  }
}

Continue.dev:

In Continue's config.json (at ~/.continue/config.json), MCP servers live under the experimental key as a modelContextProtocolServers array (newer config.yaml versions use a top-level mcpServers key instead). Continue passes the server's tools to whatever model you have configured (GPT-4o, Claude, a local Ollama model).

~/.continue/config.json — Continue.dev MCP
{
  "experimental": {
    "modelContextProtocolServers": [
      {
        "transport": {
          "type": "sse",
          "url": "https://yourname.localto.net/sse"
        }
      }
    ]
  }
}

Both extensions reload MCP server connections automatically when you update the config file. No restart required. This is useful during active development of the server itself.

How to Build and Expose an MCP Server: Step-by-Step (2026)

1

Create a minimal MCP server with HTTP+SSE transport

HTTP+SSE transport is required for remote access. The server listens on a local port and accepts SSE connections at /sse and POST messages at /messages.

Node.js — minimal MCP server (server.mjs)
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import { CallToolRequestSchema, ListToolsRequestSchema } from "@modelcontextprotocol/sdk/types.js";
import express from "express";

const app = express();
// Note: no express.json() middleware here — the SSE transport parses the
// raw POST body itself, and a global JSON body parser would consume it first.

const server = new Server(
  { name: "my-mcp-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "get_time",
    description: "Returns the current server time",
    inputSchema: { type: "object", properties: {} }
  }]
}));

server.setRequestHandler(CallToolRequestSchema, async (req) => {
  if (req.params.name === "get_time") {
    return { content: [{ type: "text", text: new Date().toISOString() }] };
  }
  throw new Error("Unknown tool");
});

// Track one transport per SSE session so multiple clients can connect.
const transports = new Map();
app.get("/sse", async (req, res) => {
  const transport = new SSEServerTransport("/messages", res);
  transports.set(transport.sessionId, transport);
  res.on("close", () => transports.delete(transport.sessionId));
  await server.connect(transport);
});
app.post("/messages", async (req, res) => {
  const transport = transports.get(req.query.sessionId);
  if (!transport) return res.status(400).send("Unknown sessionId");
  await transport.handlePostMessage(req, res);
});

app.listen(3000, "0.0.0.0", () => console.log("MCP server on http://0.0.0.0:3000"));
Install dependencies and start
npm install @modelcontextprotocol/sdk express
node server.mjs
2

Create a free Localtonet account

Go to localtonet.com/register and sign up. No credit card required.

3

Install the Localtonet Client and log in with your token

Download the Client from localtonet.com/download for your platform (Windows, Linux, macOS, or Docker). Open it and log in using the token from localtonet.com/usertoken. The Client stays running in the background and handles all tunnel traffic.

4

Reserve a fixed subdomain for your MCP server

Go to localtonet.com/tunnel/http in the dashboard. Before saving, set a custom subdomain (e.g., mytools) so the URL becomes https://mytools.localto.net. This matters because every AI client config file hardcodes your MCP server URL. A changing URL on restart means updating config files on every machine.

5

Create the HTTP tunnel pointing at port 3000

Set the host to localhost, port to 3000, choose a server region close to your team, and click Save. Then click Start on the dashboard. Your MCP server is now reachable at https://mytools.localto.net.

6

Add the URL to your AI client config

Use the HTTPS URL in Claude Desktop, Cursor, VS Code Copilot, or Continue.dev as shown in the sections above. Confirm the connection works by asking the AI "what tools do you have?" and checking that your server's tools appear.

Quick verification with curl
curl -N https://mytools.localto.net/sse
⚠️ Most tutorials skip this step: binding to 0.0.0.0, not 127.0.0.1

Many HTTP frameworks bind to 127.0.0.1 (loopback only) by default — uvicorn, for example. A loopback bind works as long as the Localtonet Client runs on the same host, but fails as soon as the Client runs in a Docker container or on another machine, because loopback is unreachable from outside the host. Binding to 0.0.0.0 covers both cases. In the example above, app.listen(3000, "0.0.0.0", ...) does exactly this. For Python MCP servers using Starlette or FastAPI: uvicorn server:app --host 0.0.0.0 --port 3000. This does not expose your port to the internet directly. The tunnel handles all public traffic.

🔑 Stdio vs HTTP+SSE: which transport should your MCP server use?

Use stdio if your MCP server is for personal use on one machine only. It's simpler, zero network config, and Claude Desktop or Cursor launches it automatically. Use HTTP+SSE as soon as you need any of these: more than one AI client connecting simultaneously, a server running on a different machine, or teammates sharing the same tools. Localtonet only works with HTTP+SSE servers. If you're currently using stdio and want to share the server, switching transport takes about 15 lines of code change using the examples above.

🛠 Tips for Running MCP Servers in Production

🌐 Always use a fixed subdomain AI client configs hardcode MCP server URLs. If your URL changes after a tunnel restart, every teammate's Claude Desktop, Cursor, and VS Code breaks simultaneously. Reserve a subdomain from day one.
🔒 Restrict access with IP allowlisting Your MCP server may have access to sensitive internal tools or databases. Use the Localtonet dashboard to restrict the tunnel to specific IP addresses (your office IP, a VPN egress, or individual teammates' IPs).
🔑 Add a Bearer token check For shared servers, add a simple Bearer token middleware in Express or FastAPI. Clients pass the token in the Authorization header. Unauthenticated SSE connections get a 401 before the MCP handshake starts.
♻️ Keep the server process alive with PM2 On Linux, run your MCP server under PM2 (pm2 start server.mjs --name mcp) and the Localtonet Client as a systemd service. Both restart automatically after reboots or crashes.
📡 Run multiple MCP servers on separate ports A database MCP server on port 3000, a filesystem server on port 3001, and a Slack integration on port 3002 can each get their own Localtonet tunnel and their own stable subdomain.
🛠 Test tools with the MCP Inspector Before connecting a real AI client, use the official MCP Inspector (npx @modelcontextprotocol/inspector) to verify your server's tool list, schemas, and return values locally. Faster iteration than restarting Claude Desktop each time.
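
The Bearer-token tip above can be sketched as a few lines of Express-style middleware. This is a minimal illustration, not SDK API: the MCP_TOKEN variable and requireBearer name are made up for this example.

```javascript
// Minimal Bearer-token check, usable as Express middleware:
// register app.use(requireBearer) before the /sse and /messages routes.
const MCP_TOKEN = process.env.MCP_TOKEN ?? "change-me";

function requireBearer(req, res, next) {
  const header = req.headers["authorization"] ?? "";
  if (header === `Bearer ${MCP_TOKEN}`) {
    return next(); // token matches: let the MCP handshake proceed
  }
  // Reject before any MCP traffic is processed.
  res.writeHead(401, { "Content-Type": "text/plain" });
  res.end("Unauthorized");
}
```

Clients then send the token in an Authorization: Bearer header. Check whether your AI client supports custom headers on remote MCP URLs; where it doesn't, keep IP allowlisting as the primary access control.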

Frequently Asked Questions

What is MCP (Model Context Protocol)?

MCP is an open protocol published by Anthropic in 2024 that standardizes how AI models communicate with external data sources and tools. An MCP server exposes tools (callable functions), resources (readable data), and prompts (reusable instructions). An MCP client, such as Claude Desktop, Cursor, or VS Code Copilot, connects to the server and makes its capabilities available to the AI model during a conversation. Build one MCP server and any compatible AI client can use it.

How do I add an MCP server to Claude Desktop?

Edit the config file at ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows). Add your server under the mcpServers key. For local stdio servers, provide a command and args. For remote HTTP+SSE servers, provide a url pointing to your server's SSE endpoint (e.g., https://mytools.localto.net/sse). Restart Claude Desktop after saving.

How do I expose a local MCP server to the internet?

Your MCP server must use HTTP+SSE transport and bind to 0.0.0.0 (not 127.0.0.1). Then install the Localtonet Client, log in with your token, and create an HTTP tunnel in the dashboard pointing at your server's port (e.g., 3000). Reserve a custom subdomain so the URL stays stable. Start the tunnel from the dashboard. Your MCP server is now reachable at https://yoursubdomain.localto.net/sse from any AI client anywhere.

What's the difference between stdio and HTTP+SSE transport in MCP?

Stdio transport means the AI client launches your server as a child process and communicates over stdin/stdout pipes. It's simpler but only works locally, for one client at a time. HTTP+SSE transport means your server runs as an independent HTTP process and clients connect over the network. This is required for remote access, multiple simultaneous clients, or sharing a server with teammates. If you want to expose your MCP server via a tunnel, HTTP+SSE is the only option.

Does Cursor support MCP servers?

Yes, Cursor has supported MCP since version 0.43. Configure servers in ~/.cursor/mcp.json (global) or .cursor/mcp.json in your project root (per-project). Both stdio and HTTP+SSE servers work. In Cursor's Agent mode, the model calls MCP tools automatically when it decides they're relevant to the current task.

Can multiple AI tools connect to the same MCP server at once?

Yes, with HTTP+SSE transport. Each client opens its own SSE connection to the server. You can have Claude Desktop, Cursor, and VS Code Copilot all connected to the same MCP server simultaneously, each getting its own session. Stdio transport doesn't support this: it's one process per client, launched on demand.

What programming languages can I use to build an MCP server?

Anthropic maintains official SDKs for Python (pip install mcp) and TypeScript/Node.js (npm install @modelcontextprotocol/sdk). Community-maintained SDKs also exist for Go, Rust, Java, and Kotlin. The underlying protocol is JSON-RPC 2.0 over either stdio or HTTP, so you can implement it in any language that can handle HTTP and JSON, even without an SDK.
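
As a concrete illustration of that JSON-RPC layer, invoking the get_time tool from the server example earlier comes down to two messages (timestamp value illustrative):

```
→ client request
{ "jsonrpc": "2.0", "id": 2, "method": "tools/call",
  "params": { "name": "get_time", "arguments": {} } }

← server response
{ "jsonrpc": "2.0", "id": 2,
  "result": { "content": [{ "type": "text", "text": "2026-02-07T12:00:00.000Z" }] } }
```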

Is it safe to expose an MCP server publicly via a tunnel?

The tunnel encrypts all traffic with TLS and requires no open inbound ports on your machine. The risk is at the application level: your MCP server's tools may have access to sensitive data or actions. Mitigate this with two layers: first, restrict the Localtonet tunnel to specific IP addresses using the dashboard's allowlist feature; second, add a Bearer token check in your server middleware so unauthenticated clients get a 401 before the MCP session starts.

Ready to share your MCP server with your team?

Create a free Localtonet account, install the Client, and expose your local MCP server at a stable HTTPS URL in under five minutes. No credit card, no domain, no router config.

Get Started Free →

Localtonet is a secure multi-protocol tunneling and proxy platform designed to expose localhost, devices, private services, and AI agents to the public internet, supporting HTTP/HTTPS tunnels, TCP/UDP forwarding, mobile proxy infrastructure, file server publishing, latency-optimized game connectivity, and developer-ready AI agent endpoint exposure from a single unified control plane.
