MCP Quality Score
A 0–100 composite computed from public registry data only. No private telemetry, no opaque inputs. Source: lib/quality.ts. PRs welcome to refine weights or add signals.
- Freshness (0–25)
Recency of the latest registry update. 25 if updated within 30 days, decays linearly to 0 by 365 days. Stale servers slip down even if otherwise complete.
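The linear decay above can be sketched as follows. This is illustrative only — the function and parameter names are assumptions, and lib/quality.ts is the source of truth:

```typescript
// Sketch of the freshness component: 25 points if updated within 30 days,
// decaying linearly to 0 by 365 days. `freshnessScore` is a hypothetical name.
function freshnessScore(updatedAt: Date, now: Date = new Date()): number {
  const days = (now.getTime() - updatedAt.getTime()) / 86_400_000; // ms per day
  if (days <= 30) return 25;  // fresh: full marks
  if (days >= 365) return 0;  // stale: zero
  // Linear interpolation between the two cutoffs.
  return (25 * (365 - days)) / (365 - 30);
}
```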
- Completeness (0–25)
Five points each for: a distinct title (vs. the raw name), a description of ≥50 characters, a repository URL, a website URL, and an icon. Penalizes drive-by registrations.
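A sketch of the five-point checklist, assuming a hypothetical entry shape (the real field names in lib/quality.ts may differ):

```typescript
// Hypothetical registry entry shape for illustration.
interface RegistryEntry {
  name: string;
  title?: string;
  description?: string;
  repositoryUrl?: string;
  websiteUrl?: string;
  iconUrl?: string;
}

// 5 points per signal, 25 max. `completenessScore` is an illustrative name.
function completenessScore(e: RegistryEntry): number {
  let score = 0;
  if (e.title && e.title !== e.name) score += 5;      // distinct title, not the raw name
  if ((e.description ?? "").length >= 50) score += 5; // non-trivial description
  if (e.repositoryUrl) score += 5;
  if (e.websiteUrl) score += 5;
  if (e.iconUrl) score += 5;
  return score;
}
```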
- Installability (0–25)
25 if the entry has any runnable path — npm/pypi/docker package OR a remote streamable-http/SSE URL. 0 otherwise. The single biggest signal of usable vs. theoretical.
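A minimal sketch of the all-or-nothing check, assuming package and remote lists on the entry (field names are assumptions):

```typescript
// 25 if any runnable path exists, else 0. Names are illustrative.
interface InstallTargets {
  packages?: { registry: string }[]; // e.g. npm, pypi, docker
  remotes?: { type: string }[];      // e.g. streamable-http, sse
}

function installabilityScore(e: InstallTargets): number {
  const hasPackage = (e.packages ?? []).some((p) =>
    ["npm", "pypi", "docker"].includes(p.registry)
  );
  const hasRemote = (e.remotes ?? []).some((r) =>
    ["streamable-http", "sse"].includes(r.type)
  );
  return hasPackage || hasRemote ? 25 : 0;
}
```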
- Documentation (0–15)
5 for a repository being present, plus 0–10 for env-var documentation coverage (every required env var has description text). Servers without env vars get the full 10 by definition.
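The coverage arithmetic can be sketched like this (a hypothetical helper; rounding behavior is an assumption):

```typescript
// 5 points for a repository + up to 10 for env-var description coverage.
// Entries with no env vars get the full 10 by definition.
function documentationScore(
  hasRepo: boolean,
  envVars: { name: string; description?: string }[]
): number {
  const repoPts = hasRepo ? 5 : 0;
  if (envVars.length === 0) return repoPts + 10;
  const documented = envVars.filter(
    (v) => (v.description ?? "").trim().length > 0
  ).length;
  // Proportional coverage, rounded to a whole point (assumption).
  return repoPts + Math.round((10 * documented) / envVars.length);
}
```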
- Stability (0–10)
10 if the version is ≥1.0.0, 5 if 0.x, 0 if missing. Crude proxy for whether the author has shipped a stable contract.
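A sketch of the version check, assuming plain semver strings (the parsing here is deliberately crude, matching the "crude proxy" framing):

```typescript
// 10 for a >=1.0.0 version, 5 for 0.x, 0 if missing or unparseable.
function stabilityScore(version?: string): number {
  if (!version) return 0;
  const major = parseInt(version.split(".")[0], 10);
  if (Number.isNaN(major)) return 0;
  return major >= 1 ? 10 : 5;
}
```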
Signals deliberately excluded:
- GitHub stars (gameable, lagging, not in the registry)
- Download counts (no canonical source for MCP)
- Sentiment from issues/discussions (noisy, biased)
- Vendor pay-to-rank (never)
When upstream data improves (e.g., the official registry adds a verified-by-vendor field), this score will absorb that signal. v1 may add: tool-count, last-commit activity (cached daily from public Git providers), and aggregate-error rate from the recommendation API itself.
Bharti, G. "MCP Quality Score." mcpindex.ai/methodology, 2026.
https://mcpindex.ai/methodology
Or just link to a server's detail page — the score is rendered there with the breakdown.