How agents call mcpindex.ai.
An MCP-native API. Free, no key required, low-latency. Three integration shapes: direct HTTP, drop-in MCP server (recommended), or embedded into your platform. The whole surface fits on this page.
The shape
Five components in the request path; a refresh job keeps the catalog current.
Top-down: a request originates in your agent client, passes through a discovery adapter into the recommendation API, ranks against an indexed catalog of MCP servers, and returns three ranked picks with reasoning and install commands. The catalog is rebuilt daily from an upstream source.
What you don’t need to care about as a caller: which storage layer backs the catalog, where the refresh worker runs, what compute hosts the API. The contract is the recommendation endpoint; everything else is internal.
Three ways to use it
Pick the shape that matches where the agent lives.
Direct HTTP API
For server-side agents, custom orchestrators, anything outside an MCP client.
curl "https://mcpindex.ai/api/v1/recommend?task=read+pdf+to+s3"

Returns ranked picks as JSON. Same shape an MCP client gets back.
Drop-in MCP server
For Claude Desktop, Cursor, Cline, Zed. Install once, the agent finds the rest from inside the loop.
npm install -g mcp-server-mcpindex

The package is a thin client to the same API. Zero config in most clients — see §03.
Embedded in your platform
For platforms (Composio, Mastra, Toolhouse, IDE plays) that want MCP discovery as a feature.
// Server-side, your code:
const res = await fetch("https://mcpindex.ai/api/v1/recommend?task=" +
encodeURIComponent(userTask));
const { recommendations } = await res.json();

Attribution appreciated. Email hello@mcpindex.ai if you want a higher rate limit.
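Once the envelope is back, a platform typically needs to turn the ranked picks into an install action. A minimal sketch — the `firstNpmPick` helper and the sample data are hypothetical, but the field names follow the response shape documented under "Anatomy of a response":

```javascript
// Hypothetical helper: pick the top-ranked recommendation that ships an npm package.
function firstNpmPick(recommendations) {
  return (
    recommendations
      .slice() // don't mutate the API's ordering
      .sort((a, b) => a.rank - b.rank)
      .find((r) => r.installs && r.installs.npm) ?? null
  );
}

// Sample data shaped like the /api/v1/recommend envelope.
const sample = [
  { rank: 2, slug: "bar-mcp", installs: { npm: null, pypi: "bar-mcp" } },
  { rank: 1, slug: "io-github-foo-pdf-mcp", installs: { npm: "@foo/pdf-mcp" } },
];

console.log(firstNpmPick(sample).installs.npm); // → "@foo/pdf-mcp"
```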
Wire it to your client
The server is identical across clients. Only the config-file location and shape differ. Restart the client after editing.
Claude Desktop
~/Library/Application Support/Claude/claude_desktop_config.json
{
"mcpServers": {
"mcpindex": {
"command": "npx",
"args": ["-y", "mcp-server-mcpindex"]
}
}
}

Cursor
.cursor/mcp.json (project) or ~/.cursor/mcp.json (global)
{
"mcpServers": {
"mcpindex": {
"command": "npx",
"args": ["-y", "mcp-server-mcpindex"]
}
}
}

Cline (VS Code)
Cline settings panel → MCP Servers → Add
Command: npx
Args: -y mcp-server-mcpindex

Zed
~/.config/zed/settings.json
{
"context_servers": {
"mcpindex": {
"command": "npx",
"args": ["-y", "mcp-server-mcpindex"]
}
}
}

Once installed, the four tools are available in any agent loop: recommend_mcp_for_task, search_mcp_servers, get_install_command, compare_servers. Ask your agent something like “find me an MCP server that can read PDFs and write to S3” and watch it call recommend_mcp_for_task automatically.
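Under the hood, an MCP client invokes these tools over JSON-RPC. For orientation, a `tools/call` request for the first tool looks roughly like this (the `id` and argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "recommend_mcp_for_task",
    "arguments": { "task": "read PDFs and write to S3" }
  }
}
```

Your MCP client builds this for you; you never write it by hand.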
Anatomy of a response
The recommendation endpoint returns a tight JSON envelope. Three picks ranked by composite score, each with reasoning, install commands per registry type, and the live MCP Quality Score.
{
"task": "read pdf and save to s3",
"recommendations": [
{
"rank": 1, // composite-rank position (1-3)
"slug": "io-github-foo-pdf-mcp", // url-safe ident, used in the per-server page
"name": "io.github.foo/pdf-mcp", // canonical registry name
"title": "PDF Tools MCP Server", // display name
"description": "Generate PDF from HTML…", // one-line description from the registry
"category": "docs", // inferred category (28 total)
"qualityScore": 95, // 0-100, see /methodology
"reasoning": "Matches \"pdf\" in docs-category server.", // why it ranked
"installs": {
"npm": "@foo/pdf-mcp", // present if registry has an npm package
"pypi": null,
"docker": null,
"remote": null // present if registry has a remote URL
},
"url": "https://mcpindex.ai/server/io-github-foo-pdf-mcp"
},
/* … 2 more ranked picks … */
],
"note": "v0 ranker — heuristic score blends keyword match (70%) with MCP Quality Score (30%). See /methodology."
}

Limits + guarantees
- Rate
60 requests / minute / IP on the free tier. No key required. 429 with Retry-After when exceeded. Email hello@mcpindex.ai for higher limits or for a Pro key.
- Schema stability
/api/v1/* is versioned. Breaking changes ship behind /api/v2; v1 stays available for at least 6 months after v2 lands. Field additions are not breaking and ship to v1.
- Cache
Responses are cached at the edge with stale-while-revalidate fallback. Repeat queries within minutes are essentially free; the first call after a cache miss adds modest latency.
- Fallback
If the API is unreachable, fall back to /llms.txt and /llms-full.txt for static reference data. The MCP server package surfaces a clear error to the agent rather than fabricating results.
- Data freshness
Catalog rebuilt nightly. Integrity checks reject obviously partial refreshes before they go live. Worst-case staleness: 24 hours.
- Authentication
None on free tier — public endpoints. CORS open. Pro tier (when ramped) uses bearer tokens; existing free endpoints stay open.
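Putting the rate-limit and fallback guarantees together, a caller can be sketched like this. The `recommend` wrapper and `backoffDelayMs` helper are illustrative, not part of any shipped client; the sketch assumes a runtime with global fetch (Node 18+ or a browser):

```javascript
// Honor Retry-After when present, else exponential backoff: 1s, 2s, 4s… capped at 30s.
function backoffDelayMs(attempt, retryAfterHeader) {
  if (retryAfterHeader) {
    const seconds = Number(retryAfterHeader);
    if (Number.isFinite(seconds)) return seconds * 1000;
  }
  return Math.min(1000 * 2 ** attempt, 30000);
}

// Sketch: retry on 429, fall back to static reference data if the API is unreachable.
async function recommend(task, { maxRetries = 3 } = {}) {
  const url =
    "https://mcpindex.ai/api/v1/recommend?task=" + encodeURIComponent(task);
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url);
    if (res.status === 429) {
      const delay = backoffDelayMs(attempt, res.headers.get("Retry-After"));
      await new Promise((resolve) => setTimeout(resolve, delay));
      continue;
    }
    if (res.ok) return res.json();
    break; // non-retryable error: fall through to static fallback
  }
  return fetch("https://mcpindex.ai/llms.txt").then((r) => r.text());
}
```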
How this compares
Five common ways an agent (or developer) finds an MCP server today. mcpindex.ai is the only one that satisfies all four traits an agent actually needs at inference time.
| Method | Agent-callable | Ranked picks | Install-ready | Stays current |
|---|---|---|---|---|
| mcpindex.ai (recommendation API + drop-in MCP server) | yes | yes (composite score) | yes (per-client config) | daily |
| Anthropic official registry (registry.modelcontextprotocol.io) | yes (raw HTTP) | no (paginated list) | partial (in payload) | live |
| Human directories (PulseMCP · Smithery · Glama · MCP.so) | no (browsing UX) | partial (hand-curated) | partial | daily |
| awesome-mcp-servers (GitHub, punkpeye/awesome-mcp-servers) | no | no (flat list) | varies | weekly (PR-driven) |
| Ask the LLM directly (e.g. "Claude, what MCP servers exist for X?") | yes | no (hallucination-prone) | no | training cutoff |
The honest framing: mcpindex.ai is a recommendation surface on top of the Anthropic registry — not a replacement for it. The registry is the canonical source of truth. Existing human directories serve a real purpose for developers browsing on a laptop. mcpindex.ai sits in the agent-callable slot between them and is designed for the moment your IDE or autonomous agent needs to pick a server in <500ms with no human in the loop.
Footnote on “ranked picks”: the registry returns servers in publication order; PulseMCP and Smithery offer hand-curated featured collections but no programmatic per-task ranking. mcpindex.ai computes a composite score (keyword match blended with the MCP Quality Score) per request — see /methodology for the algorithm.
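For intuition, the stated v0 blend (70% keyword match, 30% MCP Quality Score) can be written as follows. This is illustrative only — the real normalization of the keyword-match term is defined in /methodology:

```javascript
// Illustrative v0 blend: 70% keyword match, 30% MCP Quality Score.
// keywordMatch is assumed normalized to 0..1; qualityScore is the 0..100 score.
function compositeScore(keywordMatch, qualityScore) {
  return 0.7 * keywordMatch + 0.3 * (qualityScore / 100); // 0..1
}
```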
Where to next
- /api/v1/recommend — try the API live with any natural-language task
- /methodology — open MCP Quality Score methodology, source on GitHub
- /leaderboard — top 50 servers ranked by Quality Score
- /changelog.rss — RSS feed of new servers indexed each day
- github.com/mcpindex-ai/mcp-server-mcpindex — npm package source, MIT
Found a gap, a typo, or a wiring question that isn’t answered here? hello@mcpindex.ai.