Product · April 24, 2026 · 6 min read

Why Froots is model-agnostic by default

Locking yourself into one model ages badly. We let you swap brains per task — here's the architecture that makes it boring to do.

The Froots team
Froots

Pick the best model today and you’ll be wrong in six months. That’s not pessimism; that’s the last three years of releases. We built Froots on the assumption that the frontier moves, often, and you shouldn’t have to migrate your entire workflow every time it does.

The bet

Most AI apps pick a horse. They wire their prompts, tools, and evals to one model’s quirks, and ship something that works really well — until that model is no longer the right model. We made the opposite bet. Every Froots primitive — chat, skills, routines, memory — is model-agnostic by construction.

What that looks like in practice

Each conversation has a model selector. Each routine has its own. Each skill can declare a preferred model or fall back to whatever’s active. You can run a $0.0002-per-call Haiku on triage and route the hard cases up to Opus, in the same chat thread, without restarting anything.
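To make the fallback chain concrete, here is a minimal sketch of how that resolution could work. The names (`Skill`, `Conversation`, `resolveModel`) are illustrative, not Froots' actual API:

```typescript
// Hypothetical sketch: a skill may declare a preferred model,
// otherwise it falls back to whatever is active in the conversation.
type ModelId = string;

interface Skill {
  name: string;
  preferredModel?: ModelId; // optional override, per skill
}

interface Conversation {
  activeModel: ModelId; // the chat's model selector
}

function resolveModel(skill: Skill, convo: Conversation): ModelId {
  return skill.preferredModel ?? convo.activeModel;
}
```

A triage skill with no preference inherits the conversation's cheap model; a skill that declares `preferredModel: "opus"` gets routed up, all within the same thread.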

Under the hood we use a thin adapter layer that normalizes tool-calling, streaming, and content-block formats across providers. The model sees a consistent shape; we translate at the edge. New providers ship as a single file.
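The shape of such an adapter layer might look like the following sketch. The interface and the `echo` stub are hypothetical, written only to show how a provider can ship as one file behind a normalized streaming contract:

```typescript
// Hypothetical sketch of a thin adapter layer: every provider implements
// one interface, and the rest of the app only sees the normalized shape.
type ContentBlock =
  | { kind: "text"; text: string }
  | { kind: "tool_call"; name: string; args: Record<string, unknown> };

interface ProviderAdapter {
  id: string;
  // Normalized streaming: each provider's wire format is translated
  // into the same sequence of content blocks at the edge.
  stream(prompt: string): AsyncIterable<ContentBlock>;
}

// "New providers ship as a single file" — e.g. a trivial stub adapter:
const echoAdapter: ProviderAdapter = {
  id: "echo",
  async *stream(prompt) {
    yield { kind: "text", text: prompt };
  },
};
```

The discriminated `ContentBlock` union is the key design choice: downstream code switches on `kind` and never learns which provider produced the block.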

Why this matters for you

It means when a new model lands — and one will, soon — you don’t rebuild. You change a dropdown. Your skills keep working, your routines keep firing, your memory graph stays intact. The upgrade tax is gone.

It also means lower cost. The price of the cheapest capable model has dropped roughly 10x per year for two years running, and model-agnostic design is the only way to actually capture that. Lock into one provider and you pay their price; let the work flow to whichever model fits, and the average cost of every task trends toward the floor.
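"Let the work flow to whichever model fits" can be sketched as a tiny selection rule. The names and numbers here are invented for illustration and are not Froots' routing logic:

```typescript
// Hypothetical sketch: pick the cheapest model that clears a task's
// capability bar. The capability scale is an illustrative assumption.
interface ModelOption {
  id: string;
  costPerCallUsd: number;
  capability: number; // higher = more capable, on some internal scale
}

function cheapestCapable(
  models: ModelOption[],
  minCapability: number
): ModelOption | undefined {
  return models
    .filter((m) => m.capability >= minCapability)
    .sort((a, b) => a.costPerCallUsd - b.costPerCallUsd)[0];
}
```

When a cheaper model crosses the capability bar for a task, the router starts picking it automatically; nothing upstream changes.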

What we’re not promising

We’re not promising every model is equal. They aren’t. Claude is still the best at long agent loops; GPT is still the most ecosystem-rich; Gemini still wins on raw context; Llama wins when you need it offline. Model-agnostic doesn’t mean “the model doesn’t matter” — it means *you* pick which one matters, per task, and we make swapping painless.