io.github.capyBearista/gemini-researcher
Stateless MCP server that proxies research queries to the Gemini CLI, reducing agent context and model usage
{
  "mcpServers": {
    "gemini-researcher": {
      "command": "npx",
      "args": [
        "-y",
        "gemini-researcher"
      ],
      "env": {
        "GEMINI_API_KEY": "<your-gemini-api-key>",
        "PROJECT_ROOT": "<project_root>",
        "RESPONSE_CHUNK_SIZE_KB": "<response_chunk_size_kb>",
        "CACHE_TTL_MS": "<cache_ttl_ms>",
        "DEBUG": "<debug>",
        "GOOGLE_APPLICATION_CREDENTIALS": "<google_application_credentials>",
        "GOOGLE_CLOUD_PROJECT": "<google_cloud_project>",
        "VERTEX_AI_PROJECT": "<vertex_ai_project>"
      }
    }
  }
}

Run with: npx -y gemini-researcher

Environment variables:

GEMINI_API_KEY: Gemini API key (optional if you have already authenticated the Gemini CLI via "gemini" login)
PROJECT_ROOT: Overrides the project root directory used for path validation (defaults to the current working directory)
RESPONSE_CHUNK_SIZE_KB: Chunk size threshold in KB for large responses (default: 10)
CACHE_TTL_MS: Chunk cache TTL in milliseconds (default: 3600000, i.e. 1 hour)
DEBUG: Enables debug logging (set to "true" or "1")
GOOGLE_APPLICATION_CREDENTIALS: Vertex AI / Google auth; path to a service account JSON credentials file
GOOGLE_CLOUD_PROJECT: Vertex AI / Google auth; GCP project ID (used by some auth configurations)
VERTEX_AI_PROJECT: Vertex AI / Google auth; Vertex AI project identifier (used by some auth configurations)
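As a sketch, a configuration that authenticates through Vertex AI rather than an API key might look like the following. The credential path and project IDs are placeholders, and which of these variables your setup actually needs depends on your Google Cloud auth configuration:

```json
{
  "mcpServers": {
    "gemini-researcher": {
      "command": "npx",
      "args": ["-y", "gemini-researcher"],
      "env": {
        "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/service-account.json",
        "GOOGLE_CLOUD_PROJECT": "my-gcp-project",
        "VERTEX_AI_PROJECT": "my-vertex-project"
      }
    }
  }
}
```

If you have already authenticated the Gemini CLI locally, the `env` block can be left empty and the server falls back to that login.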