Calcis
Verified pricing across OpenAI, Anthropic, and Google

Cost certainty,
before the request.

Precise spend forecasting for every LLM call. Calcis models your prompt against the latest pricing from the frontier providers, so the bill never catches you off guard.
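At its core, a forecast is just token counts multiplied by per-million-token rates. A minimal sketch in TypeScript — the model rates and numbers below are illustrative placeholders, not live Calcis pricing:

```typescript
// Per-million-token rates in USD. Placeholder values only — real rates
// come from each provider's published pricing page.
interface ModelPricing {
  inputPerMTok: number;
  outputPerMTok: number;
}

// Forecast the cost of a single call from its token counts.
function forecastCost(
  inputTokens: number,
  estimatedOutputTokens: number,
  pricing: ModelPricing
): number {
  return (
    (inputTokens / 1_000_000) * pricing.inputPerMTok +
    (estimatedOutputTokens / 1_000_000) * pricing.outputPerMTok
  );
}

// Example: a 1,200-token prompt with an estimated 400-token reply,
// at hypothetical rates of $3/MTok in and $15/MTok out.
const cost = forecastCost(1200, 400, { inputPerMTok: 3, outputPerMTok: 15 });
```

With those placeholder rates, the call above works out to roughly a penny: $0.0036 of input plus $0.006 of output.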

Exact tokenizers

js-tiktoken for OpenAI, Anthropic's countTokens API for Claude, and the Google GenAI SDK for Gemini.

Verified pricing

Every model row carries a source URL and a calibration date. No stale numbers.

Auto response length

We read your prompt and forecast how long the model's reply will be, so the estimate covers output cost, not just input.
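Calcis's actual estimator is its own; purely to illustrate the idea, a naive heuristic might scale the reply estimate off the prompt length and clamp it to a sane range. Every number here (the 0.5 ratio, the 64 and 2,048 bounds) is an assumption for the sketch, not a Calcis parameter:

```typescript
// Naive output-length heuristic, for illustration only — the real
// estimator reads the prompt's content, not just its length.
// Assumption: replies run roughly half the prompt's length, clamped
// so tiny or huge prompts don't skew the cost forecast.
function estimateOutputTokens(
  promptTokens: number,
  ratio = 0.5,
  minTokens = 64,
  maxTokens = 2048
): number {
  const raw = Math.round(promptTokens * ratio);
  return Math.min(maxTokens, Math.max(minTokens, raw));
}
```

For example, a 1,200-token prompt yields an estimate of 600 output tokens, while a 40-token prompt is clamped up to the 64-token floor.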

Start free. Upgrade when you outgrow it.

Three tiers, no surprises. Free covers most weekend projects. Pro is for solo engineers shipping production AI. Max is for teams.

Free

$0

For tinkerers sizing up a weekend project.

  • Instant token count for every frontier model
  • Cost forecasts on OpenAI, Anthropic, and Google
  • Response-length auto-estimate from the prompt
  • Forever free, no card required
Most popular

Pro

$12/month

For solo engineers shipping production AI.

  • Everything in Free
  • 10,000 forecasts/month
  • Saved prompt library with per-prompt history
  • Side-by-side model comparisons
  • Monthly spend projections at your scale
Coming soon

Max

$100/month

For teams with real budget pressure.

  • Everything in Pro
  • Unlimited forecasts (fair use)
  • Team workspaces with shared prompt library
  • CSV + API export of forecasts
  • Priority support + early access to new models

Limited to 100 seats across the full product.


Stop guessing what your prompts cost.

Drop a prompt into the estimator and see the projected spend before you ever hit Send.