AMA :)
The agent will update the graph.
If you have thousands of nodes in md, it means you have a highly non-trivial, large code base, and that's where lat will start saving you time: agents will navigate the code much faster, and you'll be reviewing semantic changes in lat in every diff, potentially suggesting that the agents alter the code or add more context to lat.
You still have to be engaged in maintaining your codebase, just at a higher level.
So the better question is why there isn't a bootstrap to get your LLM to scaffold it out and assist in detailing it.
Other than that, the goal of lat is to make the agent use it and update it, and it has tools to enforce that.
Give Claude sqlite/supabase MCP, GitHub CLI, Linear CLI, Chrome or launch.json and it can really autonomously solve this.
The pattern works.
But I keep catching myself spending more time on how to organize context than on what the agent is actually supposed to accomplish.
Feels like the whole space has that problem right now.
security.md is missing, apparently.
I saw this a couple of days ago and I've been working on figuring out what the right workflows will be with it.
It's a useful idea: the agents.md torrent of info gets replaced with a thinner shim that tells the agent how to get more data about the system, as well as how to update that.
I suspect there's ways to shrink that context even more.
> I suspect there's ways to shrink that context even more.
Yeah, I'm experimenting with some ideas on that, like adding a `lat agent` command to act as a subagent that searches through lat and summarizes related knowledge without polluting the parent agent's context.
- lat.md # high-level description of the project
- frontend/lat.md # frontend-related knowledge
- backend/lat.md # details about your backend

The more general point being that we need to be methodical about the way we manage agent context. If lat.md shows a 10% broad improvement in agent performance in my repo, then I would certainly push for adoption. Until then, vibes aren't enough.
I'm very early into this and still need to build a proper harness, but I can sometimes see lat allowing for up to 2x faster coding sessions. The main benefit to me isn't speed, though; it's that I can now review diffs faster and stay more engaged with the agent.
The one thing I saw in the README is that lat has a format for source files to link back to the lat.md markdown, but I don't see why you couldn't just define an "// obs:wikilink" sort of format in your AGENTS.md.
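To make that concrete, here is a minimal sketch of such a roll-your-own convention. The `// obs:wikilink` marker name comes from the comment above; the `[[...]]` target syntax and the extractor itself are hypothetical, not anything lat or Obsidian ships:

```python
import re

# Hypothetical convention: source comments like
#   // obs:wikilink [[frontend/lat.md#Auth flow]]
# point back at a section of the knowledge markdown.
WIKILINK = re.compile(r"//\s*obs:wikilink\s*\[\[([^\]]+)\]\]")

def extract_wikilinks(source: str) -> list[str]:
    """Collect the [[...]] targets from obs:wikilink comments."""
    return WIKILINK.findall(source)
```

A CI step could run this over the tree and fail on targets that don't resolve to a real markdown heading, which gets you most of the backlink checking without a dedicated tool.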
Unlike Obsidian, lat allows markdown files to link into functions/structs/classes/etc. too.
This saves agents time on grepping, but it also allows you to build better workflows with tests.
Test cases can be described as sections in `lat.md/` and marked with `require-code-mention: true`. Each spec then must be referenced by a `// @lat:` comment in test code. `lat check` flags any spec without a backlink, so you can review and maintain test coverage from the knowledge graph.
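As a rough illustration of the kind of check described above (the markdown layout, the `require-code-mention: true` marker placement, and the function below are assumptions for the sketch, not lat's actual implementation of `lat check`):

```python
import re

def unreferenced_specs(spec_md: str, test_sources: list[str]) -> list[str]:
    """Return section titles marked `require-code-mention: true` that no
    test source references via an `@lat:` comment.

    Assumed format: each spec is a markdown heading, with the marker
    line somewhere in its section body.
    """
    specs = []
    current = None
    for line in spec_md.splitlines():
        heading = re.match(r"^#+\s+(.*)", line)
        if heading:
            current = heading.group(1).strip()
        elif current and "require-code-mention: true" in line:
            specs.append(current)
            current = None  # one marker per section
    referenced = set()
    for src in test_sources:
        for m in re.finditer(r"@lat:\s*(.+)", src):
            referenced.add(m.group(1).strip())
    return [s for s in specs if s not in referenced]
```

Anything the function returns is a spec with no backlink, i.e. what a check like this would flag for review.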
https://news.ycombinator.com/item?id=47543324
What's the point of markdown? There's nothing useful you can do with it other than handing it over to an LLM and getting some probabilistic response.
if you mean this paragraph - imo that's still too hand-wavy compared to enforcement through generative tests for spec conformance.
So give your agent a whole Obsidian.
I am skeptical of how that helps. Can't agents just grep in one big file, if reading the entire file is the problem?
https://docs.python.org/3/library/doctest.html
> To check that a module’s docstrings are up-to-date by verifying that all interactive examples still work as documented. To perform regression testing by verifying that interactive examples from a test file or a test object work as expected. To write tutorial documentation for a package, liberally illustrated with input-output examples. Depending on whether the examples or the expository text are emphasized, this has the flavor of “literate testing” or “executable documentation”.
Seems pretty related to me.
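For anyone who hasn't used it, here's a tiny example of the doctest pattern the quote describes. The function and its expected outputs are illustrative, not from lat; the point is that the documentation's examples double as a regression test:

```python
import doctest

def slugify(title):
    """Lowercase a title and join its words with hyphens.

    >>> slugify("Hello World")
    'hello-world'
    >>> slugify("Executable Documentation")
    'executable-documentation'
    """
    return "-".join(title.lower().split())

if __name__ == "__main__":
    # Re-runs the interactive examples in the docstring and reports
    # any whose actual output no longer matches the documented output.
    doctest.testmod()
```

If the implementation drifts from the docstring, `python -m doctest` on the file fails, which is exactly the "docs that enforce themselves" property being discussed.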