mmastrac 14 minutes ago [-]
I definitely agree with the need for this. There's just too much to put into the agents file to keep from killing your context window right off the bat. Knowledge compression is going to be key.
I saw this a couple of days ago and I've been working on figuring out what the right workflows will be with it.
It's a useful idea: the agents.md torrent of info gets replaced with a thinner shim that tells the agent how to get more data about the system, as well as how to update that.
I suspect there's ways to shrink that context even more.
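The thin-shim idea above might look something like this as a minimal AGENTS.md (section names and file paths here are illustrative, not from the tool itself):

```markdown
# AGENTS.md (thin shim, illustrative)

Do not load everything up front. Start here and fetch on demand:

- Architecture overview: read `docs/knowledge/overview.md`
- Per-module notes: each source folder has a `NOTES.md`
- Looking something up: grep `docs/knowledge/` before reading whole files
- Updating knowledge: edit the relevant `docs/knowledge/*.md` and include
  the change in your PR so it goes through normal review
```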
ssyhape 3 hours ago [-]
Neat idea. The biggest problem I've had with code knowledge graphs is they go stale immediately -- someone renames a package and nobody updates the graph. Having it as Markdown in the repo is clever because it just goes through normal PR review like everything else, and you get git blame for free. My concern is scale though. Once you have thousands of nodes the Markdown files themselves become a mess to navigate, and at that point you're basically recreating a database with extra steps. Would love to see how this compares to just pointing an agent at LSP output.
ossianericson 6 minutes ago [-]
I would say: treat your Markdown as the authoritative source. It doesn't get updated automatically, but that's a deliberate choice. It takes knowledge of the domain, but deep, specific knowledge is worth far more than automated updates. I use AI to generate the initial MD and then edit it myself. Sure, it doesn't get auto-updated, but I would never trust advice updated on the fly from AI output off the internet.
cyanydeez 3 hours ago [-]
We all know this isn't for humans. It's for LLMs.
So better question is why there isn't a bootstrap to get your LLM to scaffold it out and assist in detailing it.
drooby 49 minutes ago [-]
GraphRAG is for LLMs... Markdown is for humans. Humans that exist in the meantime.
stingraycharles 2 hours ago [-]
You’re replying to an LLM, too, fwiw.
touristtam 9 minutes ago [-]
At that point why not have an obsidian vault in your repo and get the Agent to write to it?
reactordev 1 hour ago [-]
I found that having smaller, structured Markdown files in each folder, explaining the space and the classes within it, keeps Claude and Codex grounded even in a 10M+ LOC C/C++ codebase.
Yokohiii 2 hours ago [-]
> "chalk": "^5.6.2",
security.md is missing, apparently.
touristtam 8 minutes ago [-]
Good catch. Makes me wonder if we could feed the agent a repository of known vulnerabilities and security best practices to check against, and get rid of most deps. Just asking _out loud_, so to speak.
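A minimal sketch of that kind of dependency check, assuming a locally synced advisory list. The flagged versions below are purely illustrative (keyed to the `chalk` line quoted above); a real check would pull from a feed such as osv.dev and handle semver ranges properly rather than stripping `^`/`~`:

```python
import json

# Hypothetical local advisory list. In practice you would sync this from a
# source like the OSV database or a security best-practices repo.
ADVISORIES = {
    ("npm", "chalk"): ["5.6.2"],  # illustrative entry only
}

def flagged_deps(package_json: str, ecosystem: str = "npm"):
    """Return (name, version) pairs whose pinned version is in the advisory list."""
    deps = json.loads(package_json).get("dependencies", {})
    hits = []
    for name, spec in deps.items():
        version = spec.lstrip("^~")  # crude: ignores real range semantics
        if version in ADVISORIES.get((ecosystem, name), []):
            hits.append((name, version))
    return hits

print(flagged_deps('{"dependencies": {"chalk": "^5.6.2", "left-pad": "^1.3.0"}}'))
# prints [('chalk', '5.6.2')]
```

An agent could run something like this as a gate before suggesting or keeping a dependency.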
nimonian 3 hours ago [-]
I have a vitepress package in most of my repos. It is a knowledge graph that also just happens to produce neat-looking docs for humans when served over HTTP. Agents are very happy to read the raw .md.
jatins 2 hours ago [-]
tl;dr: One file, bad (gets too big for context)
So give your agent a whole Obsidian vault.
I am skeptical how that helps. Agents can just grep within one big file if reading the entire file is the problem.
iddan 2 hours ago [-]
So we are reinventing the docs/*/*.md directory? /s
I think this is a good idea, I just don't really get why you would need a tool around it.
tonyarkles 11 minutes ago [-]
One of the things that I've been chewing on lately is the sync problem: having a CI job that identifies places where the docs have drifted from the implementation.