# GPT Researcher Pricing

> GPT Researcher is free and open-source software under the MIT License.
> There is no SaaS subscription on gptr.dev. You pay only the underlying
> LLM and search-API providers you configure.

## Plans

### Open Source - $0

- All features of the GPT Researcher Python package.
- All features of the official `gptr-mcp` MCP server.
- Self-host on your own machine, server, or container platform.
- Use any supported LLM provider and any supported retriever.
- License: [MIT](https://github.com/assafelovic/gpt-researcher/blob/master/LICENSE).

| Feature | Open Source |
|---|---|
| Deep research agent | Included |
| Quick search | Included |
| Long-form report writing | Included |
| MCP server (`gptr-mcp`) | Included |
| All LLM providers (OpenAI, Anthropic, Google, Mistral, DeepSeek, local) | Included |
| All retrievers (Tavily, Bing, Google, DuckDuckGo, SearXNG, MCP, local docs) | Included |
| Streaming via WebSocket / SSE | Included |
| Docker image | Included |
| Multi-agent orchestration | Included |
| PDF / DOCX / Markdown report export | Included |

## Operating costs

You pay your chosen providers directly. Typical per-query cost depends on:

- **LLM provider and model** - e.g. GPT-4o, Claude Sonnet, Gemini, DeepSeek, or a local model.
- **Retriever** - Tavily, Bing, Google CSE, etc.; DuckDuckGo and SearXNG can be free.
- **Number of subtopics and iterations** - configurable; defaults aim for 10-30 sources.
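Providers are selected entirely through environment variables. A minimal sketch for an OpenAI + Tavily setup follows; the variable names reflect GPT Researcher's documented configuration, but the key values are placeholders, and you should check the project docs for the exact options your version supports:

```shell
# LLM provider: API keys are read from the standard env vars
export OPENAI_API_KEY="sk-..."

# Retriever: Tavily needs its own key; swapping RETRIEVER to
# "duckduckgo" or "searx" avoids a paid search API entirely
export TAVILY_API_KEY="tvly-..."
export RETRIEVER="tavily"
```

Because nothing here is baked into the package, switching providers is a config change, not a code change.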

As an indicative ballpark, a single deep-research run with default settings on OpenAI + Tavily costs a few cents to a few dollars in provider spend.
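That ballpark can be sanity-checked with simple arithmetic. The sketch below is not part of GPT Researcher; the token counts, per-million-token prices, and per-search fee are illustrative placeholders, not current provider rates:

```python
def run_cost(input_tokens: int, output_tokens: int,
             price_in_per_m: float, price_out_per_m: float,
             search_calls: int = 0, price_per_search: float = 0.0) -> float:
    """Estimate provider spend for one research run, in USD."""
    # LLM cost: tokens are billed per million, input and output separately
    llm = (input_tokens / 1e6) * price_in_per_m \
        + (output_tokens / 1e6) * price_out_per_m
    # Search-API cost: a flat fee per query issued by the retriever
    return llm + search_calls * price_per_search

# Hypothetical run: 400k input tokens, 30k output tokens at
# $2.50 / $10.00 per million, plus 15 searches at $0.005 each.
estimate = run_cost(400_000, 30_000, 2.50, 10.00,
                    search_calls=15, price_per_search=0.005)
print(f"${estimate:.2f}")
```

Plugging in your own providers' published rates gives a per-run budget before you ever kick off a query.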

## Enterprise / Hosted

There is no first-party hosted SaaS today. If you need:

- A managed deployment of GPT Researcher inside your VPC,
- Custom retrievers connected to private corpora,
- SLA, support, or onboarding,

reach out via [Discord](https://discord.gg/QgZXvJAccX) or email `assaf.elovic@gmail.com`.

## Frequently asked questions

- **Is GPT Researcher free forever?** The OSS project is MIT-licensed and free. Future managed offerings (if any) would be separate from the OSS distribution.
- **Will my data leave my machine?** Only via the LLM and retriever providers you configure. Self-host with a local model and SearXNG to keep everything on-prem.
- **Are there usage limits?** None imposed by GPT Researcher. Limits come from your underlying LLM / retriever providers' rate limits.
- **Is there a free tier of the hosted MCP server?** The hosted MCP server is community-run; the recommended path is self-hosting `gptr-mcp` so you control credentials and cost.
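The fully on-prem setup mentioned in the FAQ can be sketched as environment config. This assumes an Ollama-served local model and a self-hosted SearXNG instance; the variable names follow GPT Researcher's documented config, but treat the model identifiers and URLs as placeholders to adapt to your install:

```shell
# Local LLM via Ollama: no tokens leave the machine
export OLLAMA_BASE_URL="http://localhost:11434"
export FAST_LLM="ollama:llama3"
export SMART_LLM="ollama:llama3"
export EMBEDDING="ollama:nomic-embed-text"

# Self-hosted metasearch instead of a paid search API
export RETRIEVER="searx"
export SEARX_URL="http://localhost:8080"
```

With this configuration, both inference and retrieval stay inside your network, so the only operating cost is your own hardware.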
