
The local-first AI workspace stack, 2026

If you're building (or just using) a local-first AI workspace in 2026, these are the ten building blocks — editor, vault, runtime, sync, tools, model routing, memory, inbox, widgets, plugins — and how the serious options stack up across each.

Apr 8, 2026 · 6 min read · By Jordan Reed · local-first · stack · comparison · agents

Every year someone publishes "the [X] stack for [Y]." They're always a little reductive and a little useful. This is the 2026 one for local-first AI workspaces — the apps that treat your data as yours, run agents against it, and don't require a cloud to function.

We build one of these (Froots), so we're biased. We've tried to make the comparisons honest anyway.

The ten layers

A local-first AI workspace is a stack. Not every app implements every layer; the ones that miss a layer usually pair with a companion app that fills the gap.

  1. Editor — where you write
  2. Vault — where the writing lives on disk
  3. Runtime — what runs agents and tools
  4. Sync — how it moves between devices
  5. Tools — what agents can do (filesystem, shell, network, integrations)
  6. Models — what language models are in use, and how they're routed
  7. Memory — what the agent remembers between sessions
  8. Inbox — how outside messages (email, Slack, iMessage, etc.) come in
  9. Widgets — how the workspace shows up outside the app (lock screen, menu bar, desktop)
  10. Plugins — how third parties extend the surface

Below, how each layer looks in 2026, and how the serious options compare.

1. Editor

The minimum: markdown, with wikilinks, backlinks, and live preview. The table stakes above that: syntax-highlighted code blocks (Shiki), block-level anchors, checklists, tables you can tab through, math (KaTeX), and Mermaid diagrams rendered live.
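As a concrete target, here's a hypothetical note that exercises most of that feature set (the note title, wikilink targets, and data are invented for illustration):

````markdown
# Sync design — [[2026-04-08 standup]]

- [x] Ship CRDT merge fix (see [[release-notes#v2.3]])
- [ ] Review widget layout spec

| Layer | Status  |
| ----- | ------- |
| Sync  | shipped |

Latency target: $p_{95} < 120\,\mathrm{ms}$

```mermaid
flowchart LR
  Editor --> Vault --> Runtime
```
````

If an editor renders all of that live, it clears the bar; if any of it falls back to raw text, it doesn't.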

The current good options:

2026 change: AI-assisted writing is now assumed. A note-taking app without inline agent hints feels dated. All five options above have some version of it.

2. Vault

The minimum: plain .md files on your disk, in a folder you choose. Anything proprietary is disqualifying — you don't own notes you can't read with cat.

Layouts vary but have converged on some conventions:

~/Vault/
├── notes/            your notes
├── attachments/      or .assets/
├── templates/        starter files
├── .git/             optional version control
└── .obsidian/ .froots/ .bear/   per-app config (never your content)

Froots adds:

├── routines/         saved automations (*.routine.md)
├── agents/           your agent identities
└── memory/           what agents learned about you

That's deliberate — in a workspace-grade tool, the agent's stuff is also yours, also plain markdown, also in the vault. No magic database.

3. Runtime

Where the agents actually run. This is the layer that's diverged most in the last 18 months.

Canonical options:

The 2026 trend is hybrid: a sidecar runtime (local or cloud) that the app talks to over MCP or an OpenAI-compatible API. This gives you the best of all three — tight UX, swappable runtimes, and auditability.
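A minimal sketch of the hybrid pattern, assuming a sidecar runtime that exposes an OpenAI-compatible HTTP endpoint on localhost (the port, base URL, and model name are placeholders, not a documented endpoint of any particular app):

```python
import json
import urllib.request

# Assumption: a local sidecar runtime (e.g. an Ollama-style server)
# listening here with an OpenAI-compatible API. Placeholder values.
BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble the JSON body for a /v1/chat/completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(model: str, prompt: str) -> str:
    """Send one chat turn to the sidecar and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the surface is just HTTP plus JSON, swapping runtimes means changing BASE_URL and nothing else; that's the auditability and swappability the hybrid approach buys you.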

4. Sync

Local-first doesn't mean single-device. The question is how your vault ends up on your phone.

Four patterns:

| Pattern | Example | Tradeoff |
| --- | --- | --- |
| Cloud drive (you pick) | Obsidian + iCloud/Dropbox | Zero cost, imperfect conflict handling |
| Git remote | Anyone + GitHub | Free, great for devs, bad for large attachments |
| CRDT sync | Froots, Logseq | Conflict-free, usually a managed service, sometimes paid |
| Encrypted blob | Obsidian Sync, Froots Pro | End-to-end encrypted, paid, usually the best UX |

2026 default: pick CRDT sync when it's available (Yjs is the standard); fall back to git for devs who want version control of their notes.
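To see why CRDT sync sidesteps conflicts, here is a toy last-writer-wins register in Python. It is a deliberately simplified stand-in: Yjs and friends use sequence CRDTs that merge concurrent text edits character-by-character, but the convergence property is the same.

```python
import time

# Toy last-writer-wins register: the simplest possible CRDT.
class LWWRegister:
    def __init__(self, value=None, ts=0.0):
        self.value, self.ts = value, ts

    def set(self, value, ts=None):
        self.ts = time.time() if ts is None else ts
        self.value = value

    def merge(self, other):
        # Merge is commutative and idempotent: replicas converge
        # regardless of the order in which they exchange state.
        if other.ts > self.ts:
            self.value, self.ts = other.value, other.ts

laptop, phone = LWWRegister(), LWWRegister()
laptop.set("draft intro", ts=1.0)   # edited offline on the laptop
phone.set("rewrite intro", ts=2.0)  # edited later on the phone
laptop.merge(phone)
phone.merge(laptop)
assert laptop.value == phone.value == "rewrite intro"
```

No conflict dialog, no ".sync-conflict" duplicate files: both replicas deterministically agree, which is exactly what cloud-drive sync can't guarantee.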

5. Tools

What an agent can do, concretely. In 2026 the expected minimum is:

More ambitious workspaces add:

Everyone has filesystem + shell. Differentiation happens in the middle (OS integrations, vault operations) and at the top (browser + app automation).

6. Models

Not which model is best — which model for which task. The 2026 pattern is per-agent, per-task model routing:

agents:
  lime:    { model: llama-3.3-8b-local }  # fast, local, reads a lot
  yuzu:    { model: claude-4-sonnet }     # quality writing
  clem:    { model: gpt-4.1-mini }        # good at tool use
  cherry:  { model: claude-4-haiku }      # cheap cleanup

The best workspaces let you set this per-agent and override per-task. The worst workspaces have one model, globally, configured once at setup. Get the first kind.

Local model support matters here because the "cheap, frequent" tasks (summarize this paragraph, draft this sentence, rename this note) don't need the best model. A local 8B can do them faster and for free.
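The routing logic itself is small. In this sketch the agent names and models mirror the config above, and the per-task override table is illustrative, not any app's documented API:

```python
# Hypothetical routing tables; names mirror the config sketch above.
AGENT_MODELS = {
    "lime": "llama-3.3-8b-local",  # fast, local, reads a lot
    "yuzu": "claude-4-sonnet",     # quality writing
}
TASK_OVERRIDES = {
    # Escalate lime's occasional hard task to a stronger model.
    ("lime", "deep-reasoning"): "claude-4-sonnet",
}

def route(agent: str, task: str, default: str = "llama-3.3-8b-local") -> str:
    """Per-task override wins, then the agent's model, then a cheap local default."""
    return TASK_OVERRIDES.get((agent, task), AGENT_MODELS.get(agent, default))
```

So route("lime", "summarize") stays local and free, while route("lime", "deep-reasoning") escalates to the cloud model only when the task warrants it.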

7. Memory

What the agent remembers about you, across sessions. The 2026 state of the art has three layers:

Froots and a few others implement all three with plain markdown files:

The key property: you can read and edit any memory. No opaque embedding blob. If the agent decides your name is "Danny" and you're actually "Dylan," you go find that line and fix it.
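In that style, a memory file might be nothing fancier than a list you can grep (the filename and contents here are hypothetical):

```markdown
# memory/profile.md (facts agents may rely on)
- Name: Dylan
- Prefers short, direct replies
- Timezone: US/Eastern
```

Spot a wrong line, delete it, done. That's the whole editing workflow.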

8. Inbox

Messages from the outside world: email, Slack, iMessage, Telegram, etc. A workspace with an inbox is substantively different from one without — the agent can read, summarize, and reply on your behalf, making the app a genuine communications cockpit rather than just a notepad.

Every channel is a plugin. The question is which channels ship out of the box. As of April 2026, Froots ships iMessage, Gmail, IMAP/SMTP, Slack, Discord, Telegram, Signal, WhatsApp, X, Bluesky, and Mastodon. Others ship fewer; most ship none.

9. Widgets

Native OS widgets — macOS Notification Center, Windows Widget Board, iOS Lock Screen, Android AOD — are the 2026 equivalent of what browser tabs were in 2016: the workspace bleeding out into the rest of your life.

Most workspaces don't do this yet. Froots ships a generative widget system where an agent can emit a native widget from a short JSON layout spec (vstack, hstack, text, gauge, action). It's an emerging category; expect the rest of the field to catch up by 2027.
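A layout spec in that style might look like the following. The exact schema is Froots-internal; the field names here are a guess extrapolated from the primitives listed above, so treat this as a shape sketch, not documentation:

```json
{
  "type": "vstack",
  "children": [
    { "type": "text", "value": "Inbox" },
    { "type": "gauge", "value": 7, "max": 20, "label": "unread" },
    { "type": "action", "label": "Open", "routine": "open-inbox" }
  ]
}
```

The appeal of a declarative spec like this is that the agent never touches native UI code; the host app renders whatever layout the agent emits.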

10. Plugins

Third-party extension. Two models compete:

The 2026 convergence point is MCP servers — plugins that expose tools over the Model Context Protocol. Any MCP server can plug into any workspace that speaks MCP, which most of the serious ones now do.
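Concretely, invoking a tool on an MCP server is a JSON-RPC 2.0 request with method "tools/call". The message shape below follows the Model Context Protocol; the tool name and arguments are hypothetical:

```python
import json

def tools_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request per the MCP spec.

    The tool name and argument keys are up to the server; the ones
    used in the example below are made up for illustration.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = json.loads(tools_call("vault_search", {"query": "meeting notes"}))
assert msg["method"] == "tools/call"
assert msg["params"]["arguments"]["query"] == "meeting notes"
```

Because every workspace and every plugin speaks this same envelope, an MCP server written once works everywhere, which is what makes it the convergence point.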

The short recommendation

If you're picking a workspace today, evaluate on these axes:

  1. Vault format is plain markdown. Non-negotiable.
  2. Agent runtime is swappable. You should be able to point at a different model or a local runtime without replacing the app.
  3. Memory is readable. You can open the memory file in any text editor.
  4. At least three inbox channels. Email is the floor.
  5. Per-agent model routing. Global-model-only is a 2024 design.
  6. Open sync backend. You can point sync at storage you control.

If you want one that checks all six today, try Froots. If you want the most auditable runtime as a standalone, look at Hermes. If you want the most mature editor layer standalone, Obsidian is still the answer.

The meta-point is: in 2026 you don't have to choose between "AI tool" and "notes tool." The stack has merged. The question is whose version of that merger fits your brain.

Try Froots free →

FAQ

What does "local-first" actually mean in 2026?

Local-first means your data lives on your device first. The app works fully offline, the canonical copy is on your disk, and cloud sync (if any) is an optional optimization — not a dependency. The phrase was coined by Ink & Switch; in 2026 it's a de-facto requirement for serious note/agent apps.

Do I need all ten layers to have a workspace?

No. A workspace is whichever layers you actually use. The point of listing them is to expose design trade-offs — an app that does editor + vault + runtime well can be a workspace; an app that ignores sync or memory is half a workspace. Pick deliberately.

Is local-first actually private if I bring my own cloud model?

Partially. Your notes stay local, but the text you send to the model leaves your machine. If privacy is absolute, run a local model (Ollama / LM Studio / llama.cpp / mlx). If you're fine with "the model sees what I ask it, not the rest of my vault," cloud models are fine. Most workspaces let you pick per-agent; Froots does.

What's the single most important layer to get right?

The vault format. Everything else can be replaced — the editor, the runtime, the sync backend. But if your vault is locked in a proprietary binary format, you're locked in forever. Plain markdown (or Org-mode, or plaintext + metadata) is the one non-negotiable.

Will local models replace cloud models for workspace agents in 2026?

Partially. For reading, summarizing, drafting, and agent routing — local 7B-13B models are already good enough. For deep reasoning, complex code generation, and multi-hop tool use, cloud models still lead. A well-designed workspace routes: cheap local models for the 80% case, cloud for the 20% that needs it.

Jordan Reed · Head of Product · Froots

Try Froots — free, local, yours.

One app for notes, routines, and four always-on agents. Runs on your machine; your data never leaves unless you say so.

Download Froots