Get up and running in under a minute. LLMTest works in two ways: as an MCP server in your IDE, and as an API proxy in your production app.
Sign up at llmtest.io/signup, add $5 in credits, and copy your API key (it starts with llmt_) from the dashboard.
For your IDE. Get model suggestions, run benchmarks, and optimize costs directly in Claude Code, Cursor, or Windsurf.
For your production app. Route all LLM calls through LLMTest for cost tracking, automatic fallbacks, and JSON recovery.
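To make "route all LLM calls through LLMTest" concrete, here is a minimal sketch of what such a request looks like on the wire, using only the Python standard library. The /chat/completions path follows the OpenAI API convention the proxy advertises compatibility with; the API key and model name are placeholders, and nothing is actually sent.

```python
import json
import urllib.request

# OpenAI-format chat completion request, aimed at the LLMTest proxy.
# API_KEY is a placeholder; real keys start with llmt_ per the quickstart.
API_KEY = "llmt_your_key_here"

payload = {
    "model": "gpt-4o-mini",  # hypothetical choice; use any model your account supports
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    "https://llmtest.io/v1/chat/completions",  # assumed OpenAI-compatible path
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

# urllib.request.urlopen(req) would send it; omitted to keep this sketch offline.
print(req.full_url)
```

Because the payload shape is standard OpenAI format, the proxy can inspect it for cost tracking and rewrite or retry it for fallbacks and JSON recovery without your application changing anything.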
Most teams use the proxy in production for reliability and cost tracking, and the MCP server during development for benchmarking and model selection. Both share the same API key and the same data.
The proxy lives at https://llmtest.io/v1 and is compatible with any OpenAI-format SDK: point your existing client at that base URL and your code keeps working as-is. Starting fresh? Even easier: set the base URL and pick a model.