Executive summary

NothingHumanSearch has been crawling the web for agent-readiness signals since launch, and the numbers tell a consistent story about the gap between claiming MCP support and shipping it. As of 2026-04-17 the index holds 5,578 sites with at least one agent-discovery signal (llms.txt, ai-plugin.json, OpenAPI, or an MCP manifest). Of those, 575 — 10.3% — pass a live JSON-RPC probe against their declared /mcp endpoint. The remaining 5,003 sites (89.7%) either mention MCP in their documentation without implementing it, or host a manifest that fails the handshake (404, 500, wrong Content-Type, or a server that answers HTTP but never completes the initialize round-trip).

5,578   agent-ready sites indexed on NothingHumanSearch (2026-04-17)
575     pass a live JSON-RPC MCP handshake (10.3% of the index)
4,795   publish llms.txt (86.0% of the index)

What verification actually means

Static scanners — the ones that produce most of the MCP directory listings you see today — treat a string match for mcp in llms.txt or a link to /.well-known/mcp.json as a positive signal. That's how you end up with directories claiming 10,000+ MCP servers when the actual live count is smaller by an order of magnitude. A real check has to open a connection, send a JSON-RPC initialize request, wait for the protocol handshake, and confirm the server responds with a valid result block containing a protocolVersion. Anything short of that is a citation, not an implementation.
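A minimal version of such a probe can be sketched in Python. This is an illustrative sketch, not NHS's implementation: the protocolVersion string, timeout, and clientInfo fields are assumptions, and a production probe would also handle streaming transports.

```python
import json
import urllib.request


def build_initialize_request(request_id=1):
    """A JSON-RPC 2.0 initialize request, the first message of the
    MCP handshake. The params shown are illustrative placeholders."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # assumed spec revision
            "capabilities": {},
            "clientInfo": {"name": "probe", "version": "0.1"},
        },
    }


def is_valid_initialize_response(payload):
    """A reply only counts if it is a JSON-RPC result block that
    declares a protocolVersion; errors and ad-hoc JSON are misses."""
    if not isinstance(payload, dict) or payload.get("jsonrpc") != "2.0":
        return False
    result = payload.get("result")
    return isinstance(result, dict) and bool(result.get("protocolVersion"))


def probe(url, timeout=5.0):
    """Open a connection, send initialize, validate the reply.
    Any transport or protocol failure counts as a dead endpoint."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_initialize_request()).encode(),
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            if "json" not in resp.headers.get("Content-Type", ""):
                return False  # wrong Content-Type fails the probe
            return is_valid_initialize_response(json.loads(resp.read()))
    except Exception:
        return False  # 404, 500, timeout, bad JSON: all dead
```

The validator is deliberately strict: an HTTP 200 carrying an error object, or a JSON body with no result.protocolVersion, still fails, which matches the distinction the paragraph above draws between answering HTTP and completing the handshake.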

NHS's verify_mcp tool (published as part of the NHS MCP server at nothumansearch.ai/mcp) runs exactly this live probe against any URL you hand it. When we recrawled the full index in April 2026 with the probe enabled, the verified-MCP count held steady around 575 even as the total indexed population kept climbing. The gap between "sites that mention MCP" and "sites that implement MCP" is widening, not narrowing — the opposite of what the marketing cycle would have you believe.

This matters for anyone building an agent. If you rely on a static MCP directory to decide which tools your agent should discover at runtime, you will waste connections and context tokens on dead endpoints. The 90% unverified cohort isn't malicious — it's mostly stale docs, misconfigured reverse proxies, and manifests that reference endpoints the author never actually wired up. But for an autonomous agent, the failure mode is the same: a call that eats latency, fails, and doesn't advance the task.

Top categories by indexed count

Across verified MCP servers and the broader agent-ready population, the category distribution is heavily concentrated. Developer tools alone accounts for 1,249 sites (22.4% of the index). AI-native tools follow at 822 sites (14.7%) — unsurprising given that MCP emerged from the AI-tools ecosystem. After that, category density drops off a cliff.

Category            Sites   Share
Developer tools     1,249   22.4%
AI-native tools       822   14.7%
Data / analytics      324    5.8%
Finance / fintech     203    3.6%
Productivity          144    2.6%
Security              134    2.4%
E-commerce            121    2.2%
Health / medical       69    1.2%
Communication          55    1.0%
Education              26    0.5%

New this week

Ten MCP servers were newly verified in the last seven days. Every one of them scored 100 on the NHS agentic-readiness rubric — meaning they publish llms.txt, ai-plugin.json, and an OpenAPI spec, and pass the live JSON-RPC MCP handshake. The pattern is consistent: teams that ship one discovery file tend to ship all of them, and teams that ship none ship none. There is no middle ground.

Domain               Name                             Category            Score
voidly.ai            voidly                           AI-native tools     100
savordish.com        Savor Dish                       AI-native tools     100
mail.misar.io        MisarMail                        Communication       100
claudereviews.com    Claude Wilder                    AI-native tools     100
agentndx.ai          AgentNDX                         Developer tools     100
passdown.arflow.io   PassDown                         Productivity        100
prereason.com        PreReason - Market Context API   Finance / fintech   100
feedoracle.io        FeedOracle                       Security            100
deadends.dev         deadends.dev                     Developer tools     100
borealhost.ai        BorealHost.ai                    AI-native tools     100

Gaps in the ecosystem

The gap that matters for builders: regulated high-value verticals are still thinly represented. Finance has 203 sites, health has 69, and education has 26. These are the same verticals where agents would deliver the most leverage per call — underwriting assistance, clinical documentation, degree-audit lookups — and they are the verticals where the MCP ecosystem is least mature. Jobs (20) and news (6) round out the long tail. If you are deciding where to ship an MCP server and you want reach per server rather than defensibility through a crowd, the signal is clear: any vertical below 134 indexed sites is greenfield territory.

The pattern is consistent across every index we maintain: the teams that ship real MCP endpoints are a small fraction of the teams that claim MCP support. For builders, that gap is an opportunity — the verticals that are thin today will not stay thin for long.

Methodology

NHS crawls submitted URLs and auto-discovered candidates from public sources (awesome-mcp-servers, PulseMCP, llmstxt.site, and a handful of curated feeds), then scores each site against seven weighted signals: llms.txt present and parseable; ai-plugin.json at /.well-known/; an OpenAPI or AsyncAPI spec at a discoverable path; an MCP manifest; a live JSON-RPC MCP handshake; documented rate-limit and auth headers; and an accessible structured API response. Sites are re-crawled weekly. Scores range 0-100; any score above 75 corresponds to a site that an autonomous agent can realistically integrate without human help. The full probe methodology is open-source at nothumansearch.ai/methodology.
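The seven-signal rubric can be sketched as a weighted checklist. The signal names below come from the methodology above, but the individual weights are assumptions for illustration; NHS does not publish the exact weighting, only that the weights span 0-100 and that scores above 75 mark agent-integrable sites.

```python
# The seven signals the crawler checks. Weights are illustrative
# placeholders summing to 100; the real weighting is not published.
SIGNAL_WEIGHTS = {
    "llms_txt": 15,                  # llms.txt present and parseable
    "ai_plugin_json": 10,            # ai-plugin.json at /.well-known/
    "openapi_spec": 15,              # OpenAPI/AsyncAPI at a known path
    "mcp_manifest": 10,              # MCP manifest published
    "mcp_handshake": 30,             # live JSON-RPC probe (assumed dominant)
    "rate_limit_auth_docs": 10,      # documented rate-limit/auth headers
    "structured_api_response": 10,   # accessible structured API response
}


def readiness_score(signals):
    """Score a site 0-100 from a dict of signal-name -> bool."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))


def agent_integrable(signals, threshold=75):
    """Per the methodology, scores above 75 correspond to sites an
    autonomous agent can realistically integrate without human help."""
    return readiness_score(signals) > threshold
```

Note one property of any weighting in this shape: if the live handshake carries a large weight, a site that publishes every discovery file but fails the probe still lands below the integration threshold, which is the behavior the report argues for.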

Weekly submission volume is running at 7,631 candidates for the week starting 2026-04-10, most from autonomous discovery agents rather than human submissions.

Download raw data: The MCP ecosystem health dataset is mirrored as a public gist — CSV · Markdown · view on GitHub. It is auto-updated on every weekly regeneration, and the canonical raw URLs are stable across revisions.

Implications for builders

If you are shipping an MCP server, the live-probe bar is low but not everyone clears it: make sure your deployment actually answers initialize, not just serves a manifest. If you are building an agent that consumes MCP servers at runtime, discover against a live-verified index, not a static list. And if you are choosing a vertical, the crowded categories (developer tools, AI-native tools) are fighting over the same agent integrations, while finance, health, and education are asking to be built.

What's next

For the human side of this market — who is hiring the engineers to build against this infrastructure — see Q2 2026 AI Engineering Hiring Snapshot. For the engineering maturity ladder that separates teams shipping real MCP servers from teams publishing manifests that don't work, see Beyond the Prompt. For the governance dimension — what happens when agents start acting against these endpoints rather than just reading from them — see The Agentic Accountability Gap. Full reading paths at the Research Atlas.