Feb 3, 2025

Introducing Coder Model

Today, we're excited to announce Coder Model—the world's first virtual model designed specifically for agentic coding. It's not just another LLM, but a lovingly crafted OpenAI-compatible endpoint that selects the best LLM for each request, performs meta-analysis on your agentic flows, and applies a deterministic enhancement layer to reduce unnecessary, costly interactions.

Why Coder Model?

As developers increasingly rely on AI coding agents, we've observed three critical challenges:

  1. Model Selection: Different coding tasks require different model capabilities, but manually switching between models is impractical.
  2. Error Prevention: AI models can make mistakes that lead to costly back-and-forth iterations.
  3. Token Efficiency: Unnecessary interactions waste tokens and increase costs.

Coder Model addresses these challenges by automatically selecting the optimal model for each request and enhancing the interaction with meta-analysis and deterministic checks. This reduces mistakes and saves you tokens—at the same per-token cost as using Sonnet.

How It Works

Using Coder Model is simple:

  1. Configure Your Agent: Set your cline or OpenHands configuration to use Coder Model's base URL and API key.
  2. Send Your Request: Your coding agent forwards requests through our API.
  3. Dynamic Model Selection: Our system automatically chooses the most suitable model from across the ecosystem—often Sonnet, but also others when needed (for example, to break out of repetitive loops, handle long contexts, or spend compute more efficiently).
  4. Meta-Analysis & Enhancement: We perform real-time meta-analysis on your agentic flow. Our enhancement layer then applies deterministic checks and feedback to minimize mistakes and avoid costly interactions.
  5. Deliver the Result: The final, refined output is returned to your agent as if it came straight from a single LLM.
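Because Coder Model exposes an OpenAI-compatible endpoint, a request from your agent is just a standard chat-completions call pointed at our base URL. Here is a minimal sketch of what such a request looks like; the base URL and model name below are illustrative placeholders, not official values—substitute the endpoint and key from your account.

```python
# Sketch of an OpenAI-compatible chat-completions request to Coder Model.
# BASE_URL and the "coder-model" name are hypothetical placeholders.
import json

BASE_URL = "https://coder-model.example.com/v1"  # illustrative endpoint
API_KEY = "YOUR_CODER_MODEL_API_KEY"


def build_request(prompt: str) -> tuple[str, dict, bytes]:
    """Return (url, headers, body) for a standard chat-completions call."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "coder-model",  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, headers, body


url, headers, body = build_request("Refactor this function to be iterative.")
```

Any agent that lets you override the OpenAI base URL and API key (such as cline or OpenHands) can be pointed at this endpoint without further changes—dynamic model selection and enhancement happen server-side.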

Simple, Transparent Pricing

We believe in simple, transparent pricing. Coder Model uses the same per-token rates as Sonnet:

  • Input: $3 / MTok
  • Prompt caching write: $3.75 / MTok
  • Prompt caching read: $0.30 / MTok
  • Output: $15 / MTok

You gain all the benefits of dynamic model selection and LLM enhancement at no extra per-token cost, with fewer costly agent interactions as a result.
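To make the rates concrete, here is a small worked example of the per-request cost under the pricing above. The token counts are purely illustrative:

```python
# Per-token rates from the pricing list above, expressed in dollars per token.
INPUT_RATE = 3.00 / 1_000_000        # $3 / MTok
CACHE_WRITE_RATE = 3.75 / 1_000_000  # $3.75 / MTok
CACHE_READ_RATE = 0.30 / 1_000_000   # $0.30 / MTok
OUTPUT_RATE = 15.00 / 1_000_000      # $15 / MTok


def request_cost(input_tokens, output_tokens, cache_write=0, cache_read=0):
    """Dollar cost of a single request under the published rates."""
    return (input_tokens * INPUT_RATE
            + cache_write * CACHE_WRITE_RATE
            + cache_read * CACHE_READ_RATE
            + output_tokens * OUTPUT_RATE)


# A hypothetical request with 100k input tokens and 10k output tokens:
print(f"${request_cost(100_000, 10_000):.2f}")  # prints "$0.45"
```

Output tokens dominate the bill, so reducing unnecessary back-and-forth iterations is where the real savings come from.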

Pre-Order Today

We're offering a special pre-launch discount of 20% off all token credits. Pre-order now to lock in this discount and be among the first to experience the future of agentic coding.

Have questions? Contact us—we'd love to hear from you.