
Claude Code MCP Servers: How to Add, Configure, and Use Them

By 8bitconcepts  ·  May 2026  ·  10 min read

MCP (Model Context Protocol) servers are what turn Claude Code from a capable coding assistant into a tool that can actually reach into your infrastructure. Without MCP, Claude Code knows your codebase. With MCP, it can query your database, read GitHub issues, post to Slack, or search external data sources — all from a single conversation.

This guide covers the complete setup: adding servers with the right command syntax, scoping them correctly for solo work versus team projects, passing secrets safely, and verifying the connection worked.

What MCP servers do in Claude Code

An MCP server is a process or remote endpoint that exposes a set of tools Claude can call during a session. When you write "pull the last 10 Postgres errors from the production log table," Claude doesn't do that by guessing — it calls a query tool exposed by a configured MCP server, gets back structured data, and reasons over it.

Tools appear automatically in the session context once a server is connected. Claude sees the tool names and their descriptions and decides when to invoke them. You don't have to explicitly ask it to use a tool — if the task calls for it, it will.
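
Under the hood, a tool call is a JSON-RPC 2.0 exchange between Claude Code and the server. As a rough sketch of the message shapes involved (the tool name "query" and its arguments here are hypothetical, not from any specific server):

```python
import json

# Sketch of the JSON-RPC 2.0 messages behind an MCP tool call.
# The tool name and arguments are made-up examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query",
        "arguments": {"sql": "SELECT * FROM logs ORDER BY ts DESC LIMIT 10"},
    },
}

# The server replies with content that Claude reads back into context.
response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request id
    "result": {"content": [{"type": "text", "text": "...rows..."}]},
}

print(json.dumps(request, indent=2))
```

Claude never sees raw wire traffic in the conversation — it sees the tool's result content, which is why structured results (rows, issue bodies, message lists) work better than blobs of unformatted text.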

Adding your first MCP server

The claude mcp add command handles registration. The syntax depends on the transport type:

HTTP servers (remote)

claude mcp add --transport http <name> <url>

For example, to add Not Human Search (an MCP search engine for discovering agent-ready tools):

claude mcp add --transport http nothumansearch https://nothumansearch.ai/mcp

Stdio servers (local process)

claude mcp add --transport stdio <name> -- <command> [args]

The -- separator is required. Everything after it is the command Claude Code will spawn:

claude mcp add --transport stdio github -- npx -y @github/mcp-server

Flag ordering matters. All flags (--transport, --scope, --env) must come before the server name; placed after it, they either fail silently or produce confusing errors.

With authentication headers

For HTTP servers that need a bearer token:

claude mcp add --transport http github \
  https://api.githubcopilot.com/mcp/ \
  --header "Authorization: Bearer $GITHUB_TOKEN"

With environment variables

Pass secrets to stdio servers using --env. The flag is repeatable:

claude mcp add --transport stdio postgres \
  --env DATABASE_URL="$DATABASE_URL" \
  -- npx -y @modelcontextprotocol/server-postgres

The three scope levels

Where a server gets stored determines who can use it and when:

Scope     Flag              Stored in                        Who sees it
local     (default)         ~/.claude.json (project entry)   You, current project only
project   --scope project   .mcp.json in project root        Anyone with the repo
user      --scope user      ~/.claude.json (global)          You, all projects

The default local scope is right for personal credentials on a shared project. The project scope — stored in .mcp.json at the repo root — is what you commit to version control so the whole team gets it automatically.

Team-shared MCP config with .mcp.json

Adding a project-scoped server writes a .mcp.json file to your project root:

claude mcp add --scope project --transport stdio postgres \
  --env DATABASE_URL="${DATABASE_URL}" \
  -- npx -y @modelcontextprotocol/server-postgres

The resulting .mcp.json looks like this:

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "${DATABASE_URL}"
      }
    }
  }
}

The ${DATABASE_URL} syntax is a variable expansion — each developer's local environment variable gets substituted at session start. You commit the config file, not the secrets. This is the right pattern for database connections, internal APIs, and any tool that should be available to everyone on the project but needs per-user credentials.
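
The expansion itself happens inside Claude Code, but the mechanic is easy to picture. A rough sketch, assuming simple ${NAME} substitution from the local environment:

```python
import json
import os
import re

# Rough sketch of the ${VAR} expansion Claude Code applies to .mcp.json
# at session start (the real implementation is internal to Claude Code).
raw = """
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {"DATABASE_URL": "${DATABASE_URL}"}
    }
  }
}
"""

# Each developer's shell provides their own value.
os.environ["DATABASE_URL"] = "postgres://dev:secret@localhost/app"

# Replace each ${NAME} with the value from the local environment.
expanded = re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), raw)
config = json.loads(expanded)
print(config["mcpServers"]["postgres"]["env"]["DATABASE_URL"])
# → postgres://dev:secret@localhost/app
```

Two developers with different DATABASE_URL values get different live configs from the same committed file, which is the whole point.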

Claude Code will prompt for approval the first time a project-scoped server starts in a new environment. This is intentional — the project owner committed the config, but each new developer confirms it once before tools are enabled.

MCP servers worth actually adding

GitHub — issues, PRs, and code search

claude mcp add --transport http github \
  https://api.githubcopilot.com/mcp/ \
  --header "Authorization: Bearer $GITHUB_TOKEN"

Lets Claude read issues, search code across repos, and create or update PRs without leaving the terminal session. Most useful on codebases where the issue tracker and the code are tightly coupled.

Postgres — live database access

claude mcp add --transport stdio postgres \
  --env DATABASE_URL="$DATABASE_URL" \
  -- npx -y @modelcontextprotocol/server-postgres

Claude can run read-only queries against your database, inspect schema, and diagnose data issues. Only give it read access unless you have a specific reason not to.

Filesystem — structured file access

claude mcp add --transport stdio filesystem \
  -- npx -y @modelcontextprotocol/server-filesystem /allowed/path

Useful when you want Claude to work with files outside the current project directory — logs, config files, shared asset directories.

Slack — team communication

claude mcp add --transport stdio slack \
  --env SLACK_BOT_TOKEN="$SLACK_BOT_TOKEN" \
  -- npx -y @modelcontextprotocol/server-slack

Read channel history, search messages, post summaries. Works well for incident response workflows where you want Claude to correlate code changes with Slack discussions.

Discovery — finding new MCP servers

claude mcp add --transport http nothumansearch https://nothumansearch.ai/mcp

Not Human Search indexes 1,700+ agent-ready sites with live JSON-RPC verification. Use it to find MCP servers for tools you don't have wired yet — it's the practical answer to "is there an MCP server for X?"

Verifying the connection

Once a server is registered, run claude mcp list to see what's configured:

claude mcp list

Inside a session, type /mcp to see connection status and tool count for each server. A connected server will show its tools; a failed one will show an error reason.

If a server fails to connect, the most common causes are:

- Flag ordering — --transport, --scope, or --env placed after the server name instead of before it
- An environment variable referenced in the config that is unset or empty in your shell
- A stdio command (npx or the server package itself) that isn't installed or on your PATH
- An expired or wrongly scoped token in an Authorization header

Removing a server

claude mcp remove <name>

This removes the server from whichever scope it was registered in. Project-scoped servers live in .mcp.json, so removing one for the whole team also means deleting or updating that file and committing the change.
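
Since the committed file is plain JSON, the edit is easy to script. A minimal sketch (the helper name is hypothetical, not part of Claude Code):

```python
import json
from pathlib import Path


def remove_mcp_server(config_path: str, name: str) -> bool:
    """Drop one server entry from a project's .mcp.json.

    Hypothetical helper — `claude mcp remove` does not edit the
    committed file for you. Returns False if the entry is absent.
    """
    path = Path(config_path)
    config = json.loads(path.read_text())
    if name not in config.get("mcpServers", {}):
        return False  # nothing to remove
    del config["mcpServers"][name]
    path.write_text(json.dumps(config, indent=2) + "\n")
    return True
```

Commit the resulting diff so teammates stop being prompted to approve a server nobody uses.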

Environment variable reference

Three environment variables control MCP behavior at the session level:

MCP_TIMEOUT — how long (in milliseconds) Claude Code waits for an MCP server to start before giving up
MCP_TOOL_TIMEOUT — the per-call timeout (in milliseconds) for individual tool invocations
MAX_MCP_OUTPUT_TOKENS — caps how large a single tool result can be before it's truncated (default 25,000 tokens)

Set them in your shell before launching a session, e.g. MCP_TIMEOUT=10000 claude.

Project-scoped versus user-scoped: when to use each

The clearest split: anything that should follow the project goes in .mcp.json. Anything that follows you goes in user scope.

Your Postgres client for the production database is project-scoped — the connection config belongs in the repo, and other engineers on the project should get it automatically. Your personal search engine or note-taking tool is user-scoped — it makes no sense to force it on teammates.

A common mistake is registering everything at local scope (the default). Local scope is project-specific but not version-controlled, which means you have to re-add it every time you clone or when a teammate joins. If more than one person needs a server, commit it to .mcp.json.

The context cost of MCP tools

Every connected MCP server adds tool definitions to your session context. For small MCP configurations (2–4 servers), the overhead is negligible. For large ones — teams wiring in a dozen or more servers — tool definitions alone can consume 10–15% of the usable context window.

More importantly, tool calls cost context. Each round-trip (tool invocation + tool result) adds tokens. A session that calls a database query 8 times, reads 4 GitHub issues, and checks 3 Slack threads burns context fast — often several times faster than a pure coding session.
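
Back-of-the-envelope math for the session described above, using the common rough heuristic of ~4 characters per token (an assumption, not Claude's actual tokenizer; the payload sizes are hypothetical too):

```python
# Rough context math for a multi-tool investigation session.
# ~4 chars per token is a common approximation, not an exact figure.
CHARS_PER_TOKEN = 4


def tokens(chars: int) -> int:
    return chars // CHARS_PER_TOKEN


# Hypothetical per-round-trip payload sizes (invocation + result).
db_queries = 8 * tokens(6_000)     # 8 query results, ~6 KB each
github_issues = 4 * tokens(8_000)  # 4 issue bodies with comments
slack_threads = 3 * tokens(4_000)  # 3 thread histories

total = db_queries + github_issues + slack_threads
print(f"~{total:,} tokens of tool traffic")  # → ~23,000 tokens of tool traffic
```

Tens of thousands of tokens of tool results before any code is written is normal for this kind of session, which is why scoping matters.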

This is where context management matters. Sessions with active MCP tool use should be scoped narrower than pure coding sessions. One complex multi-tool investigation per session is better than trying to span multiple projects.

Context limits hit hard during MCP sessions

Heavy MCP tool use burns context faster than any other Claude Code pattern. When you hit the limit mid-investigation, you lose all the tool results and working state. Bring Your AI is built to solve exactly this: pass a complete working session — including MCP context, tool results, and in-progress decisions — to the next session without starting over.

See how Bring Your AI works →
