AI agents that can run tools on your machine are powerful for knowledge work, but they’re only as useful as the context they have. Rowboat is an open-source, local-first app that turns your work into a living knowledge graph (stored as plain Markdown with backlinks) and uses it to accomplish tasks on your computer.
For example, you can say "Build me a deck about our next quarter roadmap." Rowboat pulls priorities and commitments from your graph, loads a presentation skill, and exports a PDF.
Our repo is https://github.com/rowboatlabs/rowboat, and there’s a demo video here: https://www.youtube.com/watch?v=5AWoGo-L16I
Rowboat has two parts:
(1) A living context graph: Rowboat connects to sources like Gmail and meeting-notes tools like Granola and Fireflies, extracts decisions, commitments, deadlines, and relationships, and writes them locally as linked, editable Markdown files (Obsidian-style), organized around people, projects, and topics. As new conversations happen (including voice memos), related notes update automatically. If a deadline changes in a standup, it links back to the original commitment and updates it.
(2) A local assistant: On top of that graph, Rowboat includes an agent with local shell access and MCP support, so it can use your existing context to actually do work on your machine. It can act on demand or run scheduled background tasks. Example: “Prep me for my meeting with John and create a short voice brief.” It pulls relevant context from your graph and can generate an audio note via an MCP tool like ElevenLabs.
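For a concrete feel of the graph side, a person note might look like the sketch below. The names, sections, and layout here are our illustration, not a prescribed format:

```markdown
# Sarah Chen

Role: Head of Product at [[Acme Corp]]
Projects: [[Q3 Roadmap]], [[Pricing Revamp]]

## Commitments
- 2025-06-12: Pricing proposal due June 20 (from [[Standup 2025-06-12]])
- 2025-06-14: Deadline moved to June 24 (links back to the original commitment)

## Recent context
- Prefers async updates; syncs on Tuesdays.
```

Because these are plain Markdown files with `[[backlinks]]`, reading, editing, or deleting a note is just working with a file.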
Why not just search transcripts? Passing gigabytes of email, docs, and calls directly to an AI agent is slow and lossy. And search only answers the questions you think to ask. A system that accumulates context over time can track decisions, commitments, and relationships across conversations, and surface patterns you didn't know to look for.
Rowboat is Apache-2.0 licensed, works with any LLM (including local ones), and stores all data locally as Markdown you can read, edit, or delete at any time.
Our previous startup was acquired by Coinbase, where part of my work involved graph neural networks. We're excited to be working with graph-based systems again. Work memory feels like the missing layer for agents.
We’d love to hear your thoughts and welcome contributions!
In practice, I connected Gmail and asked it: "can you archive emails that have an unsubscribe link in them (that are not currently archived)?" and it got stuck on "I'll check what MCP tools are available for email operations first." But I connected Gmail through your interface, and I don't see anything in settings about it also having configured the MCP? I also looked at the knowledge graph and it had 20 entities, NONE of which I had any idea what they were. I'm guessing it's just putting people trying to spam me into the contacts? It didn't finish running, but I didn't want to burn endless tokens trying to see if it would find actual people I care about, so I shut it down. One "proxy" for "people I care about" might be "people I send emails to"? I could see how this is a hard problem. I also think that, regardless, I want things to be more transparent. So for the moment, I'm sticking with Craft Code for this; even though it is missing some major things, at least it's clearer what it is: it's Claude Code with a nice UI.
Hope this was helpful. I know there are multiple people working on things in this family, and it will probably be "largely solved" by the end of 2026, and then we will want it to do the next thing! Good luck, I will watch for updates, and these are some nice ideas!
On Gmail actions - we currently don't take write actions on inboxes like archiving or categorizing emails. The Google connection is read-only and used purely to build the knowledge graph. We're working on adding write actions, but we're being careful about how we implement them. That's probably also why the agent was confused and was looking for an MCP to accomplish the same job.
On noise in the knowledge graph — this is something we're actively tuning. We currently have different note-strictness levels, auto-inferred from inbox volume (configurable in ~/.rowboat/config/note-creation.json), that control what qualifies as a new node. Higher strictness prevents most emails from creating new entities and instead only updates existing ones. That said, this needs to be surfaced in the product and better calibrated. Using "people I send emails to" as a proxy for importance is a really good idea.
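As a minimal sketch of the idea (the thresholds and function name here are hypothetical, not Rowboat's actual implementation), a volume-based strictness gate might look like:

```python
def should_create_entity(mention_count: int, inbox_volume: int) -> bool:
    """Hypothetical heuristic: the busier the inbox, the stricter the bar
    for creating a new node, so one-off spam senders don't become entities."""
    if inbox_volume > 1000:
        threshold = 5   # high strictness: need repeated mentions
    elif inbox_volume > 200:
        threshold = 3   # medium strictness
    else:
        threshold = 1   # low strictness: most mentions become nodes
    return mention_count >= threshold
```

A "people I send emails to" signal could feed into the same gate, e.g. by counting an outbound email as several mentions.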
Would love to know what kind of scripts or plugins you’re using in Logseq, and what you’re primarily using it for.
If I get some time later today, I'll post my scripts.
I've seen a whole heap of graph-based startups start and then pivot or fail, because having a graph doesn't seem to add any value beyond what a SQLite or Postgres database offers. That is, saying you have a "context graph" is just marketing speak; it doesn't really add any new possibility or feature that isn't possible using other search and database tools.
Also, this kind of tool, using AI to extract a decision from Granola for example, is possible to one-shot prompt. I.e. you don't need any special tool to do it, you just need a single prompt. Granola itself has this kind of functionality.
I think you're trying to solve a problem that doesn't really exist. What patterns are you going to surface that someone doesn't know to look for? Either there is a todo from a meeting or note or email, or there isn't.
Writing notes or prep for a meeting isn't rocket science, and you don't need a graph database to "surface patterns" to prepare your pre-meeting notes. If you use something like Granola, 99% of the time (or maybe 100% really), the Granola summary is all you need. If you want something more you copy the whole transcript and send it to Claude for some specific reason.
Since you're a YC company, I assume this will become paid at some point, but why would I pay when just having an AI note-taker and Claude access is already perfect?
1. Do you see any downsides to storing your graph as Markdown files on the filesystem, rather than, say, a graph DB? I have little experience with either, but I imagine there would be perf advantages for certain operations on a graph DB at least?
2. If you're using Obsidian-like .md files, why not use the Obsidian format? I bet some folks would love to have an AI coworker helping build and maintain their Obsidian vault.
1. We chose Markdown deliberately so each node is human-readable and editable. The idea is that a project or person note represents the current state of that entity, so you can just open it and understand what’s going on. That also lets users add updates manually, for example from offline conversations that aren’t captured in email or meetings.
In terms of performance, the graph mainly acts as an index over structured notes, and retrieval happens at the note level rather than through complex graph queries. So for our use case, plain files have been sufficient and keep the system simple and transparent.
2. It’s actually Obsidian-compatible. The notes use Obsidian-style backlinks, and you can open the folder directly as an Obsidian vault if you’d like.
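The "graph as an index over notes" idea works with plain files because Obsidian-style links are just text. A minimal sketch (not Rowboat's actual code) that scans a vault folder and builds a backlink map:

```python
import os
import re
from collections import defaultdict

# Matches [[Target]] and [[Target|display alias]] wikilinks.
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

def build_backlinks(vault_dir: str) -> dict:
    """Map each note title -> set of note titles that link to it."""
    backlinks = defaultdict(set)
    for root, _, files in os.walk(vault_dir):
        for name in files:
            if not name.endswith(".md"):
                continue
            source = name[:-3]  # note title = filename without .md
            with open(os.path.join(root, name), encoding="utf-8") as f:
                for target in WIKILINK.findall(f.read()):
                    backlinks[target.strip()].add(source)
    return backlinks
```

Retrieval at the note level then reduces to "read this file plus the files that link to it", no graph database required.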
One design choice we made was to make each node human-readable and editable. For example, a project note contains a clear summary of its current state derived from conversations and tasks across tools like Gmail or Granola. It’s stored as plain Markdown with Obsidian-style backlinks so the user can read, understand, and edit it directly.
All the knowledge is stored in Markdown files on disk. You can edit them through the Rowboat UI (including the backlinks) or any editor of your choice. You can use the built-in AI to edit them as well.
On background tasks - there is an assistant-skill that lets it schedule and manage background tasks. For now, background tasks cannot execute shell-commands on the system. They can execute built-in file handling tools and MCP tools if connected. We are adding an approval system for background tasks as well.
There are three types of schedules: (a) cron, (b) schedule in a window (run every morning at most once between 8-10am), (c) run once at x-time. There is also a manual enable/disable (kill switch) in the UI.
What are the plans for monetization?
Prompting is a very specialized skill, average users just don't know what to ask for to get the most out of the LLMs.
Ideally the UX should organize and surface information to the user that is important automatically, without needing to be prompted.
For contradictory or stale information, since these are based on emails and conversations, we use the timestamp of the conversation to determine the latest information when updating the corresponding note. The agent operates on that current state.
That said, handling contradictions more explicitly is something we’re thinking about. For example, flagging conflicting updates for the user to manually review and resolve. Appreciate you raising it.
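Latest-wins by conversation timestamp can be sketched as follows (a simplification; the field names are hypothetical):

```python
def resolve(updates: list[dict]) -> dict:
    """Pick the current value for each field from timestamped updates.

    updates: dicts like {"field": "deadline", "value": "June 24", "ts": 2}.
    Sorting by timestamp means later conversations overwrite earlier ones,
    so the note always reflects the most recent statement.
    """
    current = {}
    for u in sorted(updates, key=lambda u: u["ts"]):
        current[u["field"]] = u["value"]
    return current
```

Explicit contradiction handling would keep the overwritten values around and flag fields that changed, instead of silently discarding them.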
That's a great idea. The inconsistencies in a given graph are just where attention is needed. Like an internal semantic diff. If you aim it at values it becomes a hypocrisy or moral complexity detector.
His manual approach can still be done, though it won't work if applied directly (or more specifically, it will, but it would be unnecessarily labor-intensive, and on a big enough set prohibitively so), because it would require constantly re-filtering and re-evaluating all emails.
As for the exact approach, it's a slightly longer answer, because it is a mix of small things.
I try to track which LLMs excel at which tasks (and assign tasks based on those tracking scores). It may seem irrelevant at first, but small things like a "can it handle structured JSON" rubric will make a difference.
Then we get to the personas that process the request, and those may make a difference in a corporate environment. Again, as silly as it sounds, you want to effectively have a Dwight and a Jim (yes, it is an Office reference) looking at those (more if you have a use case that requires more complex lens-crafting), as they will both be looking for different things. Jim and Dwight each add their comments noting the sender, what they seem to be trying to do, and any issues they noted.
The notes from Jim and Dwight for a given message are passed to a third persona, which attempts to reconcile them, noting discrepancies between Jim and Dwight and checking against other similar notes.
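The two-personas-plus-reconciler flow can be sketched like this, where `llm` is a stand-in for whatever model call you use (the persona wording is illustrative, not a tested prompt):

```python
def review_message(msg: str, llm) -> str:
    """Run two contrasting reviewer personas over a message, then a
    third pass that reconciles their notes and flags discrepancies."""
    jim = llm(
        "You are a skeptical, big-picture reviewer. Note the sender, "
        "what they seem to be trying to do, and any issues:\n" + msg
    )
    dwight = llm(
        "You are a literal, rule-focused reviewer. Note the sender, "
        "what they seem to be trying to do, and any issues:\n" + msg
    )
    return llm(
        "Reconcile these two reviews of the same message, noting "
        "discrepancies between them and checking against similar notes:\n"
        f"Review A: {jim}\nReview B: {dwight}"
    )
```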
...and so it goes.
As for flagging itself, that is a huge topic just by itself. That said, at least in its current iteration, I am not trying to do anything fancy. Right now, it is almost literally: if you see something contradictory (X said Y then, X says Z now), show it in a summary. It doesn't solve for multiple email accounts, personas, or anything like that.
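That "X said Y then, says Z now" check can be sketched as a simple grouping pass (a toy version, not the commenter's actual code):

```python
from collections import defaultdict

def flag_contradictions(statements: list[tuple]) -> dict:
    """statements: (speaker, topic, claim, ts) tuples.

    Groups claims by (speaker, topic) in time order and flags any pair
    where the claim changed over time, for display in a summary.
    """
    history = defaultdict(list)
    for speaker, topic, claim, _ts in sorted(statements, key=lambda s: s[3]):
        history[(speaker, topic)].append(claim)
    return {
        key: claims for key, claims in history.items()
        if len(set(claims)) > 1  # the claim changed at some point
    }
```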
Anyway, hope it helps.
Google Mail should not be used, nor its use encouraged. Nor should you encourage the use of LLMs of large corporations which suck in user data for mining, analysis, and surveillance purposes.
I would also be worried about energy use, and would not trust an "agent" to have shell access, that sounds rather unsafe.
The weird: the JavaScript it's supposed to run is included as part of the prompt, for the LLM to write to a file via tool calls.
The naive: "Never actually send emails - only create drafts" yeaah the text generator really doesn't work like that.
https://github.com/rowboatlabs/rowboat/blob/f68887496bcb608e...
https://github.com/rowboatlabs/rowboat/blob/f68887496bcb608e...
The raw sync layer (Gmail, calendar, transcripts, etc.) is idempotent and file-based. Each thread, event, or transcript is stored as its own Markdown file keyed by the source ID, and we track sync state to avoid re-ingesting the same item. That layer is append-only and not deduplicated.
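Idempotent, file-based ingestion keyed by source ID can be sketched like this (a simplified illustration, not Rowboat's actual sync code; the state-file format is hypothetical):

```python
import json
import os

def sync_item(raw_dir: str, state_path: str, source_id: str, markdown: str) -> bool:
    """Write one thread/event/transcript as its own Markdown file, skipping
    items already ingested (tracked in a simple sync-state file).
    Returns True if the item was newly written, False if it was a repeat."""
    seen = set()
    if os.path.exists(state_path):
        with open(state_path, encoding="utf-8") as f:
            seen = set(json.load(f))
    if source_id in seen:
        return False  # already ingested; the append-only layer stays untouched
    with open(os.path.join(raw_dir, f"{source_id}.md"), "w", encoding="utf-8") as f:
        f.write(markdown)
    seen.add(source_id)
    with open(state_path, "w", encoding="utf-8") as f:
        json.dump(sorted(seen), f)
    return True
```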
Entity consolidation happens in a separate graph-building step. An LLM processes batches of those raw files along with an index of existing entities (people, orgs, projects and their aliases). Instead of relying on string matching, the model decides whether a mention like “Sarah” maps to an existing “Sarah Chen” node or represents a new entity, and then either updates the existing note or creates a new one.
Thanks! How much context does the model get for the consolidation step? Just the immediate file? Related files? The existing knowledge graph? If the graph, does it need to be multi-pass?
Before each batch, we rebuild an index of all existing entities (people, orgs, projects, topics) including aliases and key metadata. That index plus the batch’s raw content goes into the prompt. The agent also has tool access to read full notes or search for entity mentions in existing knowledge if it needs more detail than what’s in the index.
It’s effectively multi-pass: we process in batches and rebuild the index between batches, so later batches see entities created earlier. That keeps context manageable while still letting the graph converge over time.
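The batch loop with index rebuilds between batches can be sketched as follows, where `build_index` and `consolidate` are stand-ins for the real index construction and the LLM consolidation step:

```python
def build_graph(raw_files: list, batch_size: int, build_index, consolidate) -> None:
    """Process raw files in batches, rebuilding the entity index before
    each batch so later batches see entities created by earlier ones."""
    for i in range(0, len(raw_files), batch_size):
        index = build_index()            # entity names, aliases, key metadata
        batch = raw_files[i:i + batch_size]
        consolidate(batch, index)        # model decides: update a node or create one
```

This keeps per-call context bounded by the batch plus the (much smaller) index, while the graph converges across passes.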