Prompt caching breaks even at 1.3 requests. Here's the math.

Prompt caching cuts LLM costs by up to 90% on Anthropic and 50% on OpenAI, but only when your workload fits. Here's the exact break-even math per provider.

Apr 27, 2026 · 5 min read · infracost, prompt-caching
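As a quick preview of where the 1.3 figure comes from: with caching, a prompt of n requests costs one cache write plus (n - 1) cache reads, versus n full-price input passes without it. The sketch below solves for the break-even point, assuming Anthropic's commonly documented multipliers (cache writes at ~1.25x base input price, cache reads at ~0.1x); treat the exact multipliers as assumptions to check against current pricing.

```python
def break_even_requests(write_mult: float, read_mult: float) -> float:
    """Requests needed before caching pays off, in units of base input price.

    With caching:    write_mult + read_mult * (n - 1)
    Without caching: n
    Setting them equal and solving for n:
        n = (write_mult - read_mult) / (1 - read_mult)
    """
    return (write_mult - read_mult) / (1 - read_mult)

# Assumed Anthropic multipliers: 1.25x cache write, 0.1x cache read
print(round(break_even_requests(1.25, 0.10), 2))  # → 1.28
```

At these multipliers the break-even lands at roughly 1.28 requests, which rounds to the 1.3 in the headline: if the cached prefix is reused even once, caching already saves money.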