# Memory
A semantic search index over the notes your agent writes. Three living markdown files plus a vector store, all local to your machine.
## What it is
Memory in Froots is two things working together: three core markdown files that hold the working state of the agent, and a semantic vector index over every other note the agent writes. Both live on your machine.
The three files are context.md (what you’re working on right now), decisions.md (choices made and the reasoning behind them), and learnings.md (mistakes worth remembering). The agent reads and edits them like any other file. They’re plain markdown — you can open them in any editor.
Everything else the agent writes into workspace/memory/ or workspace/kb/ gets automatically chunked, embedded, and indexed so it can be retrieved by similarity later.
## How retrieval works
When you send a prompt, Froots embeds your message and runs an approximate nearest-neighbor search against the vector index. The store is libsql with a DiskANN index on a 384-dim F32_BLOB column, fast enough to return top results in milliseconds even on years of notes.
Anything above the similarity threshold is rescored exactly, ranked, and injected into the system prompt as a markdown context block before the model runs. You can tune the threshold and the result count per assistant.
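As a sketch of that read path in code, assuming libsql's native vector functions (`vector_top_k`, `vector_distance_cos`, `vector32`) via the `@libsql/client` package; the table and index names, the over-fetch factor, the default threshold, and the `embed` callback are illustrative, not Froots internals:

```ts
import { createClient } from "@libsql/client";

const db = createClient({ url: "file:motive-x.db" });

// ANN candidates from the DiskANN index, exact cosine rescore in SQL,
// then threshold + top-K. Both knobs map to the per-assistant settings.
export async function recall(
  embed: (text: string) => Promise<Float32Array>, // any 384-dim model
  prompt: string,
  k = 8,
  threshold = 0.6, // illustrative default; tune per assistant
): Promise<string[]> {
  const q = `[${Array.from(await embed(prompt)).join(",")}]`;
  const res = await db.execute({
    sql: `SELECT m.text,
                 1 - vector_distance_cos(m.embedding, vector32(?)) AS score
            FROM vector_top_k('memories_idx', vector32(?), ?) AS t
            JOIN memories m ON m.rowid = t.id
           ORDER BY score DESC`,
    args: [q, q, k * 4], // over-fetch candidates, filter below
  });
  return res.rows
    .filter((r) => Number(r.score) >= threshold)
    .slice(0, k)
    .map((r) => String(r.text));
}
```

The surviving snippets are what get wrapped in the markdown context block and prepended to the system prompt.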
| Layer | What it does | Where it lives |
|---|---|---|
| Core files | Hand-edited working state — context, decisions, learnings | workspace/memory/*.md |
| Indexed notes | Automatic semantic recall over everything else the agent writes | motive-x.db (libsql) |
| Active prompt | Top-K snippets injected per turn | System prompt |
## How writes happen
Memory writes are file-driven, not turn-driven. The agent edits a markdown file in the workspace. A file-watcher notices the change and runs the indexer: it chunks the file by markdown heading (with ~20% overlap so sentences near a boundary don’t get split), batch-embeds the chunks, and atomically updates the memories table.
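A minimal sketch of that chunking step, assuming the overlap is carried over from the tail of the preceding section; the split regex and the exact overlap mechanics are assumptions:

```ts
// Split a markdown file at headings, carrying ~20% of each section's
// tail into the next chunk so boundary sentences appear in both.
function chunkByHeading(markdown: string, overlapRatio = 0.2): string[] {
  const sections = markdown
    .split(/^(?=#{1,6}\s)/m) // split *before* each heading line
    .map((s) => s.trim())
    .filter(Boolean);

  return sections.map((section, i) => {
    if (i === 0) return section;
    const prev = sections[i - 1];
    const tail = prev.slice(Math.floor(prev.length * (1 - overlapRatio)));
    return tail + "\n\n" + section;
  });
}
```

The resulting chunks then go to the embedder in a single batch before the table update.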
Each row stores the chunk text, its embedding, the source file path, a category, and bookkeeping timestamps (created_at, last_retrieved, retrieval_count). Tracking retrievals is what powers the “most-used memories” views in the UI.
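In libsql DDL, that row shape might look like the following; the column names track the fields described above, while the types and defaults are assumptions:

```ts
import { createClient } from "@libsql/client";

const db = createClient({ url: "file:motive-x.db" });

await db.execute(`
  CREATE TABLE IF NOT EXISTS memories (
    id              INTEGER PRIMARY KEY,
    text            TEXT NOT NULL,
    embedding       F32_BLOB(384),   -- 384-dim vector column
    source_path     TEXT NOT NULL,
    category        TEXT,
    created_at      TEXT DEFAULT (datetime('now')),
    last_retrieved  TEXT,
    retrieval_count INTEGER NOT NULL DEFAULT 0
  )`);

// DiskANN index over the embedding column, using libsql's syntax.
await db.execute(
  `CREATE INDEX IF NOT EXISTS memories_idx
     ON memories (libsql_vector_idx(embedding))`,
);

// Bumped whenever a chunk is injected into the prompt; this is what
// the "most-used memories" view reads.
await db.execute({
  sql: `UPDATE memories
           SET retrieval_count = retrieval_count + 1,
               last_retrieved  = datetime('now')
         WHERE id = ?`,
  args: [42], // id of the retrieved chunk (illustrative)
});
```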
## What it isn't (yet)
Memory is currently ranked by cosine similarity alone. Recency, retrieval frequency, and supersession (where a new decision replaces an old one) are tracked in the schema but don’t yet affect ranking. Cloud sync, multi-device memory, and team-shared memory are planned, not shipped.
If you've seen the graph view in the Memories tab, it's a UI visualization derived on the fly from the vector index: clusters and edges are computed from category overlap and a 0.62 similarity threshold. The underlying store is flat; the graph is the picture.
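One way such a graph can fall out of a flat store, treating either a category match or a similarity above 0.62 as enough for an edge (the section above doesn't say how the two signals combine, so that's an assumption):

```ts
type MemoryNode = { id: number; category: string; embedding: Float32Array };

// Plain cosine similarity over two equal-length vectors.
function cosine(a: Float32Array, b: Float32Array): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// O(n^2) pairwise pass -- acceptable for a UI view recomputed on demand.
function graphEdges(nodes: MemoryNode[], threshold = 0.62): [number, number][] {
  const edges: [number, number][] = [];
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const related =
        nodes[i].category === nodes[j].category ||
        cosine(nodes[i].embedding, nodes[j].embedding) >= threshold;
      if (related) edges.push([nodes[i].id, nodes[j].id]);
    }
  }
  return edges;
}
```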
## What you can do with it
- Resume threads. Ask “what was I doing with the API rewrite last month?” — the index surfaces the decisions and notes that mention it.
- Hand-curate the core. Edit context.md directly when you want the agent to know something without re-explaining it. The next turn picks it up.
- Share across assistants. Each assistant has its own memory directory. Point a routine's assistant at the same folder and they share recall.