# GPT Researcher

> GPT Researcher is an open-source autonomous AI research agent that conducts comprehensive deep research on any topic, gathers information from dozens of trusted sources in parallel, and produces long-form research reports with inline citations. It is the #1-ranked deep-research system on Carnegie Mellon University's DeepResearchGym benchmark and powers production research workflows for thousands of teams.

## What it is

- An autonomous agent (not a chatbot) that plans, searches, validates, and writes.
- Pluggable across LLMs (OpenAI, Anthropic, Google, Mistral, DeepSeek, local models via Ollama, LiteLLM) and search providers (Tavily, Bing, Google, DuckDuckGo, SearXNG, etc.).
- Open source under the MIT license, available as a Python package, a self-hosted FastAPI server, a Docker image, and an MCP (Model Context Protocol) server.
- Designed to be embedded inside other agents, IDEs, and workflows.

## When to use it

- When a single LLM call or web-search tool is not enough and you need a *researched* answer with sources.
- When you need long-form reports (executive summaries, market analyses, due-diligence briefs, literature reviews) generated end-to-end.
- When an LLM agent (Claude, ChatGPT, Cursor, custom) needs a "deep research" sub-tool to fetch grounded, cited context.
- When you want full control over the LLM, search engine, and data sources (including private / internal docs).

## Primary capabilities

- `deep_research` - autonomous multi-source research with planning, source validation, and citation tracking.
- `quick_search` - low-latency web search with snippets across any GPT Researcher-supported retriever.
- `write_report` - generate long-form research reports from accumulated research context.
- `get_research_sources` - list all sources gathered during a research run.
- `get_research_context` - return the full accumulated research context.
- Streaming output via WebSockets and Server-Sent Events.
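As a Python package, the core flow is: construct a researcher for a query, conduct the research, then write the report. A minimal sketch, assuming `gpt-researcher` is installed and `OPENAI_API_KEY` / `TAVILY_API_KEY` are set (the query string is a placeholder):

```python
import asyncio
import os


async def run_research(query: str) -> str:
    # Imported lazily so the sketch can be read without the package installed.
    from gpt_researcher import GPTResearcher

    # "research_report" is the standard long-form report type.
    researcher = GPTResearcher(query=query, report_type="research_report")
    await researcher.conduct_research()      # plan, search, validate, accumulate context
    return await researcher.write_report()   # long-form report with inline citations


# Only hit the network when real API keys are available.
if __name__ == "__main__" and os.getenv("OPENAI_API_KEY") and os.getenv("TAVILY_API_KEY"):
    print(asyncio.run(run_research("What are the latest trends in solid-state batteries?")))
```

Because both steps are async, the agent drops cleanly into existing async applications (FastAPI handlers, agent frameworks) without blocking.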
## Use cases

- Investment & equity research (company / sector deep-dives)
- Market and competitive intelligence
- Academic literature review and citation gathering
- Due-diligence reports and risk analyses
- Briefing documents for executives and product teams
- Long-form content generation grounded in real sources

## Constraints and important notes

- Deep research runs typically take 30-60 seconds per query. This is the trade-off for grounded, multi-source reports.
- Quality depends on the configured LLM and retriever. The defaults (OpenAI + Tavily) work out of the box; everything is swappable.
- GPT Researcher requires Python 3.11+ and at least one LLM API key plus one search-API key.
- Output includes inline citations, but the user / calling agent is responsible for final fact-checking.
- The hosted site at gptr.dev is a marketing landing page; the *product* is the open-source agent at github.com/assafelovic/gpt-researcher and the MCP server at github.com/assafelovic/gptr-mcp.

## Key links

- Homepage: [gptr.dev](https://gptr.dev/)
- Agent-optimized view: [gptr.dev/agent](https://gptr.dev/agent)
- Documentation: [docs.gptr.dev](https://docs.gptr.dev/)
- Quickstart: [docs.gptr.dev/docs/gpt-researcher/getting-started/introduction](https://docs.gptr.dev/docs/gpt-researcher/getting-started/introduction)
- GitHub (core): [github.com/assafelovic/gpt-researcher](https://github.com/assafelovic/gpt-researcher)
- GitHub (MCP server): [github.com/assafelovic/gptr-mcp](https://github.com/assafelovic/gptr-mcp)
- PyPI package: [pypi.org/project/gpt-researcher](https://pypi.org/project/gpt-researcher)
- MCP server docs: [docs.gptr.dev/docs/gpt-researcher/mcp-server/getting-started](https://docs.gptr.dev/docs/gpt-researcher/mcp-server/getting-started)
- Discord community: [discord.gg/QgZXvJAccX](https://discord.gg/QgZXvJAccX)

## Agent integration

- **Canonical MCP server (recommended):** [github.com/assafelovic/gptr-mcp](https://github.com/assafelovic/gptr-mcp) - self-hosted; exposes `deep_research`, `quick_search`, `write_report`, `get_research_sources`, `get_research_context`. Supports stdio (Claude Desktop, Cursor), SSE (Docker, n8n, web), and Streamable HTTP. Requires user-supplied `OPENAI_API_KEY` and `TAVILY_API_KEY`. **This is the real research server.**
- MCP discovery bridge: [https://gptr.dev/api/mcp](https://gptr.dev/api/mcp) - hosted at gptr.dev; does NOT execute research. Exposes read-only metadata tools (`get_full_mcp_server`, `get_overview`, `get_quickstart`, `list_tools_in_full_mcp_server`, `get_discovery_endpoints`, `get_pricing`) so AI agents can locate and install the canonical gptr-mcp server above. Agents should call `get_full_mcp_server` first.
- NLWeb /ask endpoint: [gptr.dev/ask](https://gptr.dev/ask) - POST `{query, prefer:{streaming:true}}` for SSE results.
- OpenAPI spec: [gptr.dev/openapi.json](https://gptr.dev/openapi.json)
- API reference: [gptr.dev/api-reference](https://gptr.dev/api-reference)
- Developer hub: [gptr.dev/developers](https://gptr.dev/developers)
- Webhooks: [gptr.dev/webhooks](https://gptr.dev/webhooks)
- A2A agent card: [gptr.dev/.well-known/agent-card.json](https://gptr.dev/.well-known/agent-card.json)
- OpenAI plugin manifest: [gptr.dev/.well-known/ai-plugin.json](https://gptr.dev/.well-known/ai-plugin.json)
- MCP discovery: [gptr.dev/.well-known/mcp.json](https://gptr.dev/.well-known/mcp.json)
- Agent skills index: [gptr.dev/.well-known/agent-skills/index.json](https://gptr.dev/.well-known/agent-skills/index.json)
- Full product manual: [gptr.dev/llms-full.txt](https://gptr.dev/llms-full.txt)

## Optional

- Pricing: GPT Researcher is free and open-source (MIT). Operational cost is the underlying LLM and search-API usage you choose. Machine-readable details: [gptr.dev/pricing.md](https://gptr.dev/pricing.md).
- Self-hosting: `pip install gpt-researcher` or `docker run` - see the docs for details.
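For stdio clients such as Claude Desktop, the canonical gptr-mcp server is registered through the client's MCP config. A sketch along these lines (the command, path, and placeholder keys are illustrative assumptions — see the gptr-mcp README for the exact invocation):

```json
{
  "mcpServers": {
    "gpt-researcher": {
      "command": "python",
      "args": ["/path/to/gptr-mcp/server.py"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "TAVILY_API_KEY": "tvly-..."
      }
    }
  }
}
```

Once registered, the client discovers `deep_research`, `quick_search`, and the other tools automatically over the MCP handshake.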
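The /ask endpoint above streams Server-Sent Events, so a client needs to split the response on blank lines and collect `data:` fields. A minimal stdlib-only parser (the JSON payload shape shown in the example is an assumption — check the OpenAPI spec for the actual event schema):

```python
from typing import Iterable, Iterator


def parse_sse(stream_lines: Iterable[str]) -> Iterator[str]:
    """Yield the data payload of each Server-Sent Event.

    Events are separated by blank lines; an event may carry several
    `data:` lines, which are joined with newlines per the SSE spec.
    """
    data_parts: list[str] = []
    for raw in stream_lines:
        line = raw.rstrip("\n")
        if line == "":                          # blank line ends the event
            if data_parts:
                yield "\n".join(data_parts)
                data_parts = []
        elif line.startswith("data:"):
            data_parts.append(line[5:].lstrip(" "))
        # other fields (event:, id:, retry:) and ":" comments are ignored here
    if data_parts:                              # flush a trailing event
        yield "\n".join(data_parts)


# Example: two events as they might arrive from gptr.dev/ask
sample = ['data: {"type": "progress"}\n', "\n", 'data: {"type": "report"}\n', "\n"]
print(list(parse_sse(sample)))  # → ['{"type": "progress"}', '{"type": "report"}']
```

Wiring this to the endpoint is then a matter of POSTing `{query, prefer:{streaming:true}}` and feeding the response's line iterator into `parse_sse`.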