
Memory

A semantic search index over the notes your agent writes. Three living markdown files plus a vector store, all local to your machine.


What it is

Memory in Froots is two things working together: three core markdown files that hold the working state of the agent, and a semantic vector index over every other note the agent writes. Both live on your machine.

The three files are context.md (what you’re working on right now), decisions.md (choices made and the reasoning behind them), and learnings.md (mistakes worth remembering). The agent reads and edits them like any other file. They’re plain markdown — you can open them in any editor.

Everything else the agent writes into workspace/memory/ or workspace/kb/ gets automatically chunked, embedded, and indexed so it can be retrieved by similarity later.

  • 384 — dimensions per embedding (BGE-small)
  • 0.45 — minimum cosine similarity to surface
  • 8 — results retrieved per turn (configurable)

How retrieval works

When you send a prompt, Froots embeds your message and runs an approximate nearest-neighbor search against the vector index. The store is libsql with a DiskANN index on a 384-dim F32_BLOB column — fast enough to return top results in milliseconds even on years of notes.

Anything above the similarity threshold is rescored exactly, ranked, and injected into the system prompt as a markdown context block before the model runs. You can tune the threshold and the result count per assistant.
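The threshold-then-rank step can be sketched in plain Python. This is a minimal illustration, not the Froots implementation: the in-memory list stands in for the DiskANN index (a real ANN search returns approximate candidates first; exact rescoring then reorders them), and the query is assumed to be already embedded.

```python
import math

SIM_THRESHOLD = 0.45  # minimum cosine similarity to surface
TOP_K = 8             # results injected per turn

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index):
    """index: list of (chunk_text, embedding) pairs.
    Score every candidate, keep those above the threshold,
    and return the top-K by similarity."""
    scored = [(cosine(query_vec, emb), text) for text, emb in index]
    hits = [(s, t) for s, t in scored if s >= SIM_THRESHOLD]
    hits.sort(reverse=True)
    return hits[:TOP_K]

def to_context_block(hits):
    """Format surviving chunks as the markdown block that is
    prepended to the system prompt."""
    lines = ["## Relevant memories"]
    for score, text in hits:
        lines.append(f"- ({score:.2f}) {text}")
    return "\n".join(lines)
```

Tuning the assistant's threshold or result count maps directly onto the two constants above.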

| Layer | What it does | Where it lives |
| --- | --- | --- |
| Core files | Hand-edited working state — context, decisions, learnings | workspace/memory/*.md |
| Indexed notes | Automatic semantic recall over everything else the agent writes | motive-x.db (libsql) |
| Active prompt | Top-K snippets injected per turn | System prompt |
Local-only by default. The libsql file sits next to the app. There’s no remote sync in v1 — your memory never leaves your machine unless you point it somewhere.

How writes happen

Memory writes are file-driven, not turn-driven. The agent edits a markdown file in the workspace. A file-watcher notices the change and runs the indexer: it chunks the file by markdown heading (with ~20% overlap so sentences near a boundary don’t get split), batch-embeds the chunks, and atomically updates the memories table.
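Heading-based chunking with overlap can be sketched as follows. This is an illustrative version, not the actual indexer: the split-on-heading regex and the word-level ~20% overlap are assumptions about how the boundary carry-over works.

```python
import re

def chunk_by_heading(markdown: str, overlap_frac: float = 0.2):
    """Split a markdown document at headings, then prepend the tail
    (~20% of the words) of the previous chunk to the next one, so
    sentences near a boundary appear in both chunks."""
    # Split at the start of any line beginning with 1-6 '#' characters.
    parts = re.split(r"(?m)^(?=#{1,6} )", markdown)
    sections = [p.strip() for p in parts if p.strip()]

    chunks = []
    prev_tail: list[str] = []
    for section in sections:
        words = section.split()
        chunks.append(" ".join(prev_tail + words))
        tail_len = max(1, int(len(words) * overlap_frac))
        prev_tail = words[-tail_len:]
    return chunks
```

Each chunk would then be batch-embedded and written to the memories table in one transaction.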

Each row stores the chunk text, its embedding, the source file path, a category, and bookkeeping timestamps (created_at, last_retrieved, retrieval_count). Tracking retrievals is what powers the “most-used memories” views in the UI.
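A schema matching that description might look like the sketch below. Plain sqlite3 stands in for libsql here (the F32_BLOB type and the DiskANN index are libsql vector extensions); the table and timestamp column names come from the text, but the remaining column names and types are assumptions.

```python
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS memories (
    id              INTEGER PRIMARY KEY,
    chunk_text      TEXT NOT NULL,
    embedding       F32_BLOB(384),         -- 384-dim vector (libsql type)
    source_path     TEXT NOT NULL,         -- file the chunk came from
    category        TEXT,
    created_at      TEXT DEFAULT (datetime('now')),
    last_retrieved  TEXT,
    retrieval_count INTEGER DEFAULT 0      -- powers "most-used" views
);
"""

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
conn.execute(
    "INSERT INTO memories (chunk_text, source_path, category) VALUES (?, ?, ?)",
    ("Chose libsql over alternatives", "workspace/memory/decisions.md", "decision"),
)
```

Bumping `retrieval_count` and `last_retrieved` on every hit is what makes usage-based views cheap to compute later.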

You see every write. Memories are markdown files you can edit, diff, and version-control. There’s no hidden internal state — if it’s in memory, it’s in a file you can open.

What it isn’t (yet)

Memory is currently ranked by cosine similarity alone. Recency, retrieval frequency, and supersession (where a new decision replaces an old one) are tracked in the schema but don’t yet affect ranking. Cloud sync, multi-device memory, and team-shared memory are planned, not shipped.

If you’ve seen the graph view in the Memories tab, it’s a UI visualization derived from the vector index on the fly — clusters and edges are computed from category overlap and a 0.62 similarity threshold. The underlying store is flat; the graph is the picture.
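Deriving such a graph from a flat store is straightforward. A minimal sketch, assuming an edge appears when two memories either share a category or clear the 0.62 similarity cutoff (the exact combination rule is an assumption):

```python
import math

GRAPH_THRESHOLD = 0.62  # similarity needed to draw an edge

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def graph_edges(nodes):
    """nodes: list of (id, embedding, category).
    Compare every pair once and emit an edge on category
    overlap or sufficient embedding similarity."""
    edges = []
    for i, (id_a, emb_a, cat_a) in enumerate(nodes):
        for id_b, emb_b, cat_b in nodes[i + 1:]:
            if cat_a == cat_b or cosine(emb_a, emb_b) >= GRAPH_THRESHOLD:
                edges.append((id_a, id_b))
    return edges
```

Because the edges are recomputed from the index each time, nothing graph-shaped ever needs to be persisted.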

What you can do with it

  • Resume threads. Ask “what was I doing with the API rewrite last month?” — the index surfaces the decisions and notes that mention it.
  • Hand-curate the core. Edit context.md directly when you want the agent to know something without re-explaining it. The next turn picks it up.
  • Share across assistants. Each assistant has its own memory directory. Point a routine’s assistant at the same folder and they share recall.