The prompt might also be better if it encouraged variety in diagrams. For some things, a flow chart would fit better than a sequence diagram (e.g., a durable state-machine workflow written using AWS Step Functions).
I don’t think the outright dismissal of AI is smart. (And, OP, I don’t mean to imply that you are doing that. I mean this generally.)
I also suspect people who level these criticisms have never really used a frontier LLM.
Feeding in a whole codebase that I’m familiar with, and hearing the LLM give good answers about its purpose and implementation from a completely cold read is very impressive.
Even if the LLM never writes a line of code - this is still valuable, because helping humans understand software faster means you can help humans write software faster.
So, I think the claims of improvement in productivity and regression in productivity can be true at the same time (and it's not just that people who don't find using LLMs productive are just prompting them wrong).
I think most can be gained by learning in which areas LLMs can give large productivity boosts and where it's better to avoid using them. Of course, this is a continuous process, given that LLMs are still getting better.
Personally, I am quite happy with LLMs. They cannot replace me, but they can do a chunk of the boring/repetitive work (e.g. boilerplate), so as a result I can focus on the interesting problems. As long as we don't have human-like performance (and I don't feel like we are close yet), LLMs make programming more interesting.
They are also a great learning aid. E.g., this morning I wanted to make a 3D model for something I needed, but I don't know OpenSCAD. I iteratively made the design with Claude. At some point the problem becomes too difficult for Claude, but with the code generated at that point, I have learned enough about OpenSCAD that I can fix the more difficult parts of the project. The project would have taken me a few hours (to learn the language, etc.), but now I was done in 30 minutes and learned some OpenSCAD in a pleasant way.
There is also the frontend, and those codebases don't need to be very old at all before AI falls down. Throw in NPM packages and clashing styles in a codebase, and AI has not been very helpful to me at all.
Generally speaking, while AI is a fine enhancement to autocomplete, I haven't seen it do anything more serious in a mature codebase. The moment business rules and tech debt sneak in in any capacity, AI becomes so unreliable that it's faster to just write it yourself. If I can't trust the AI to automatically generate a list of exports in an index.ts file, what can I trust it for?
Things have changed a lot in the past six weeks.
Gemini 2.5 Pro accepts a million tokens and can "reason" with them, which means you can feed it hundreds of thousands of lines of code and it has a surprisingly good chance of figuring things out.
OpenAI released their first million token models with the GPT 4.1 series.
OpenAI o3 and o4-mini are both very strong reasoning code models with 200,000 token input limits.
These models are all new within the last six weeks. They're very, very good at working with large amounts of crufty undocumented code.
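To make that concrete, here is a minimal sketch of what feeding a large codebase to one of these models can look like, assuming the google-genai Python client and a GEMINI_API_KEY environment variable (illustrative only, not this tool's actual pipeline):

import os
import pathlib
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# Concatenate every Python file; a 1M-token window fits a surprisingly large repo.
code = "\n\n".join(
    f"--- {path} ---\n{path.read_text(errors='ignore')}"
    for path in pathlib.Path(".").rglob("*.py")
)

response = client.models.generate_content(
    model="gemini-2.5-pro-preview-03-25",
    contents="Explain this codebase's architecture and how its main modules interact:\n\n" + code,
)
print(response.text)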
Maybe in a generation or two codebases will become more uniform and predictable if fewer humans write them by hand. Same with self-driving cars: if there were no human drivers out there, the problem would become trivial to conquer.
They still make mistakes, and yeah they're still (mostly) next token predicting machines under the hood, but if your mental model is "they can't actually predict through how some code will execute" you may need to update that.
Cold read ability for this particular tool is still an open question. As others have mentioned, a lot of the example tutorials are for very popular codebases that are probably well-represented in the language model's training data. I'm personally going to test it on my private, undocumented repos.
IMHO, AI text additions are generally not valuable; until proven wrong, I assume AI-generated text provides little to no value.
I have seen so many startups fold after they made some AI product that appeared impressive on the surface but provided no substantial value.
Now, I will be impressed by the AI that can remove code without affecting the product.
Current AIs can already do this decently, with the usual caveats about possible mistakes/oversights.
Honestly, I wonder if I'm living in some parallel universe, because my experience is that "most engineers" are far from that position. The reactions I'm seeing are either "AI is the future" or "I have serious objections to and/or problems with AI".
If you're calling the latter group "the outright dismissal of AI", I would disagree. If I had to call it the outright dismissal of anything, it would be of AI hype.
> I also suspect people who level these criticisms have never really used a frontier LLM.
It's possible. At my workplace, we did a trial of an LLM-based bot that would generate summaries for our GitHub PRs. I have no idea whether it's a "frontier" LLM or not, but I came out of that trial equally impressed, disappointed, and terrified.
Impressed, because its summaries got so many details right. I could immediately see the use for a tool like that: even when the PR author provides a summary of the PR, it's often hard to figure out where to start looking at the PR and in which order to go through changes. The bulleted list of changes from the bot's summary was incredibly useful, especially because it was almost always correct.
Disappointed, because it would often get the most important thing wrong. For the very first PR that I made, it got the whole list of changes right, but the explanation of what the PR did was the opposite of the truth. I made a change to make certain behavior disabled by default and added an option to enable it for testing purposes, and the bot claimed that the behavior was impossible before this change and the PR made it possible if you used this option.
Terrified, because I can see how alluring it is for people to think that they can replace critical thinking with AI. Maybe it's my borderline burnout speaking, but I can easily imagine the future where the pressure from above to be more "efficient" and to reduce costs brings us to the point where we start trusting faulty AI and the small mistakes start accumulating to the point where great damage is done to millions of people.
> Even if the LLM never writes a line of code - this is still valuable, because helping humans understand software faster means you can help humans write software faster.
I have my doubts about this. Yes, if we get an AI that is reliable and doesn't make these mistakes, it can help us understand software faster, as long as we're willing to make the effort to actually understand it, rather than delegating to the AI's understanding.
What I mean by that is that there are different levels of understanding. How deep do you dive before you decide it's "deep enough" and trust what the AI said? This is even more important if you start also using the AI to write the code and not just read it. Now you have even less motivation to understand the code, because you don't have to learn something that you will use to write your own code.
I'll keep learning how to use LLMs, because it's necessary, but I'm very worried about what we seem to want from them. I can't think of any previous technological advance that aimed to replace human critical thinking and creativity. Why are we even pursuing efficiency if it isn't to give us more time and freedom to be creative?
Gemini 2.5 Flash Preview 04-17, another powerful model, has limits of 10 and 500.
OpenAI also allows you to use their API for free if you agree to share your prompts and completions with them.
I'll just wait for a winner to shake out and learn that one. I've gotten tired of trying AIs only to get slop.
Sometimes it explains things as if I am a child, and sometimes it doesn't explain things well enough. I don't think fixing this with a simple prompt change will work: it may fix one part and make things worse in another. This is a problem I have with LLMs: you can fine-tune the prompt for a specific case, but I find it difficult to write a universally working prompt. The problem seems to be that the LLM "does not understand my intent"; it can't deduce what I need and "proactively" help. It follows the requirements in the prompt, but the prompt has to (and can't) handle every situation. I am getting tired of LLMs.
It sounds like the tool (as it's currently set up) may not actually be that effective at writing tutorial-style content in particular. Tutorials [1] are usually heavily action-oriented and take you from a specific start point to a specific end point to help you get hands-on experience in some skill. Some technical writers argue that there should be no theory whatsoever in tutorials. However, it's probably easy to tweak the prompts to get more action-oriented content with less conceptual explanation (and exclamation marks).
Ensure the tone is welcoming and easy for a newcomer to understand{tone_note}.
- Output only the Markdown content for this chapter.
Now, directly provide a super beginner-friendly Markdown output (DON'T need ```markdown``` tags)
So just a change here might do the trick if you’re interested.
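For example, the {tone_note} placeholder in the prompt lines above could be driven by an audience setting instead of a single hard-coded note (hypothetical preset values, just to sketch the idea):

# Hypothetical audience presets feeding the existing {tone_note} placeholder.
TONE_NOTES = {
    "beginner": " Explain every concept from scratch and avoid jargon.",
    "practitioner": " Assume a working developer; keep it action-oriented and skip basic theory.",
    "expert": " Assume deep familiarity; focus on design decisions and trade-offs.",
}

audience = "practitioner"
tone_note = TONE_NOTES[audience]
chapter_instruction = f"Write the Markdown for this chapter.{tone_note}"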
But I wonder how Gemini would manage different levels. In my experience (mostly edtech, and not in English), it's really hard to tune the tone of the answer properly and not just get a black-and-white result (5-year-old vs. expert talk). Does anyone have advice on that?
"Write simple, rigorous statements, starting from first principles, and making sure to take things to their logical conclusion. Write in straightforward prose, no bullet points and summaries. Avoid truisms and overly high-level statements. (Optionally) Assume that the reader {now put your original prompt whatever you had e.g 5 yo}"
Sometimes I write a few more lines with the same meaning as above, and sometimes fewer; they all work more or less OK. I randomly get better results with small tweaks, but nothing to make a pattern out of -- a useless endeavour anyway, since these models change in minute ways every release, and in neural nets the blast radius of a small change is huge.
This really shows that the simple node graph, shared storage, and utility patterns you have defined in your PocketFlow framework are useful for helping the AI translate your documented design into (mostly) working code.
Impressive project!
See design doc https://github.com/The-Pocket/Tutorial-Codebase-Knowledge/bl...
I changed it to use this line:
api_key=os.getenv("GEMINI_API_KEY", "your-api_key")
instead of the default project/location option. And I changed it to use a different model:
model = os.getenv("GEMINI_MODEL", "gemini-2.5-pro-preview-03-25")
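Putting both tweaks together, the modified helper ends up looking roughly like this (a sketch assuming the google-genai client; not necessarily the repo's exact call_llm):

import os
from google import genai

def call_llm(prompt: str) -> str:
    # API key instead of the default Vertex project/location setup.
    client = genai.Client(api_key=os.getenv("GEMINI_API_KEY", "your-api_key"))
    model = os.getenv("GEMINI_MODEL", "gemini-2.5-pro-preview-03-25")
    response = client.models.generate_content(model=model, contents=[prompt])
    return response.text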
I used the preview model because I got rate limited and the error message suggested it. I used this on a few projects from my employer:
- https://github.com/prime-framework/prime-mvc a largish open source MVC Java framework my company uses. I'm not overly familiar with this, though I've read a lot of code written in this framework.
- https://github.com/FusionAuth/fusionauth-quickstart-ruby-on-... a smaller example application I reviewed and am quite familiar with.
- https://github.com/fusionauth/fusionauth-jwt a JWT Java library that I've used but not contributed to.
Overall thoughts:
Lots of exclamation points.
Thorough overview, including some things that were not application specific (Rails routing).
Great analogies. Seems to lean on them pretty heavily.
Didn't see any inaccuracies in the tutorials I reviewed.
Pretty amazing overall!
https://github.com/mooreds/prime-mvc-tutorial
https://github.com/mooreds/railsquickstart-tutorial
https://github.com/mooreds/fusionauth-jwt-tutorial/
Other than renaming the index.md file to README.md and modifying it slightly, I made no changes.
Edit: added note that there are examples in the original link.
I bet in the next few months we'll be getting dynamic, personalized documentation for every library!! Good times
Like at least one other person in the comments mentioned, I would like a slightly different tone.
Perhaps a good feature would be a "style template" that can be chosen to match your preferred writing style.
I may submit a PR though not if it takes a lot of time.
The tutorial on requests looks uncanny for something generated with no prior context. The use cases and examples it gives are too specific. It makes up terminology for concepts that are not mentioned once in the repository, like "functional api" and "hooks checkpoints". There must be thousands of tutorials on requests online that every AI has already been trained on. How do we know that it is not using them?
from ollama import chat, ChatResponse

def call_llm(prompt, use_cache: bool = True, model="phi4") -> str:
    # Send the prompt to a locally running Ollama model and return the text reply.
    response: ChatResponse = chat(
        model=model,
        messages=[{'role': 'user', 'content': prompt}],
    )
    return response.message.content
I'd love the ability to run the LLM locally, as that would make it easier to run on non public code.
I did however archive the wiki that it generated for the project I work on: https://web.archive.org/web/20240815184418/wiki.mutable.ai/g...
(The images aren't working. I believe those were auto-generated class inheritance or dependency diagrams.)
* The first paragraph is pretty good.
* The second paragraph is incorrect to call pw_rpc the "core" of Pigweed. That implies that you must always use pw_rpc and that all other modules depend on it, which is not true.
* The subsequent descriptions of modules all seemed decent, IIRC.
* The big issue is that the wiki is just a grab bag summary of different parts of the codebase. It doesn't feel coherent. And it doesn't mention the other 100+ modules that the Pigweed codebase contains.
When working on a big codebase, I imagine that tools like mutable.ai and Pocket Flow will need specific instruction on what aspects of the codebase to document.
https://news.ycombinator.com/item?id=42542512
The latter is what this thread claims ^
I think it should be possible to extract more useful usage patterns by poking into related unit tests. How to use something is what matters most to tutorial readers.
You can tell that because simonw writes quite heavily documented code and the logic is pretty straightforward, it helps the model a lot!
https://github.com/Florents-Tselai/Tutorial-Codebase-Knowled...
https://github.com/Florents-Tselai/Tutorial-Codebase-Knowled...
An LLM can't magically figure out your motivation behind doing something a certain way.
Frankly, most documentation is useless fluff. LLMs will be able to write a ton of that for sure :-)
It’s a pretty foolproof way for smart political operators to get out of a relatively dreary - but high leverage - task.
AI doesn’t complain. It just writes it. Makes the whole task a lot faster when a human is a reviewer for correctness instead of an author and reviewer.
I can see some fine-tuning after generation being required, but assuming you know your own codebase, that's not an issue anyway.
You built it in one afternoon? I need to figure out these mythical abilities.
I thought about this idea a few weeks back but could not figure out how to implement it.
Amazing job OP
* Previously, it was simply infeasible for most codebases to get a decent tutorial for one reason or another. E.g. the codebase is someone's side project and they don't have the time or energy to maintain docs, let alone a tutorial, which is widely regarded as one of the most labor-intensive types of docs.
* It's always been hard to persuade businesses to hire more technical writers because it's perennially hard to connect our work to the bottom or top line.
* We may actually see more demand for technical writers because it's now more feasible (and expected) for software projects of all types to have decent docs. The key future skill would be knowing how to orchestrate ML tools to produce (and update) docs.
(But I'm also under no delusion: it's definitely possible for TWs to go the way of the dodo bird and animatronics professionals.)
I think I have a very good way to evaluate this "turn GitHub codebases into easy tutorials" tool but it'll take me a few days to write up. I'll post my first impressions to https://technicalwriting.dev
P.S. there has been a flurry of recent YC startups focused on automating docs. I think it's a tough space. The market is very fragmented. Because docs are such a widespread and common need I imagine that a lot of the best practices will get commoditized and open sourced (exactly like Pocket Flow is doing here)
Put in the Postgres or Redis codebase, get a good understanding, and get going on contributing.
The burden of understanding still lies with the engineers. All you would get is a good (partially inaccurate in places) overview of where to look.
Do you have examples of LLMs running tutorials you can share?
You might get two or three tutorials built for yourself inside the free 25/day limit, depending on how many chapters it needs.
I think this could be solved with an “assume the reader knows …” part of the prompt.
Definitely looks like ELI5 writing there, but many technical documents assume too much knowledge (especially implicit knowledge of the context) so even though I’m not a fan of this section either, I’m not so quick to dismiss it as having no value.
I haven't used it, but it looks like it's in the same space and I've been curious about it for a while.
I've tried my own homebrew solutions: creating embedding databases by having something like aider or simonw's llm produce an ingest JSON for every function, then using that as RAG in Qdrant to produce an architecture document, then using that to do contextual inline function commenting and generate a Doxygen site, then using all of that once again as an MCP server with Playwright to hook it up through Roo.
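The embedding/RAG leg of that pipeline boils down to something like this (a rough sketch with hypothetical function summaries, using sentence-transformers and qdrant-client):

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings
client = QdrantClient(":memory:")  # or point at a real Qdrant instance

client.recreate_collection(
    collection_name="functions",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

# Hypothetical ingest JSON: one record per function in the codebase.
functions = [
    {"name": "parse_config", "summary": "Loads and validates the YAML config file."},
    {"name": "build_graph", "summary": "Wires parsed nodes into the execution graph."},
]
client.upsert(
    collection_name="functions",
    points=[
        PointStruct(id=i, vector=embedder.encode(f["summary"]).tolist(), payload=f)
        for i, f in enumerate(functions)
    ],
)

# Pull the most relevant functions back out when drafting the architecture doc.
hits = client.search(
    collection_name="functions",
    query_vector=embedder.encode("How is configuration loaded?").tolist(),
    limit=5,
)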
It's a weird pipeline and it's been ok, not great but ok.
I'm looking into Perplexica as part of the chain, mostly as a negation tool.
One thing to note is that the tutorial generation depends largely on Gemini 2.5 Pro. Its code understanding ability is very good, combined with its large 1M context window for a holistic understanding of the code. This leads to very satisfactory tutorial results.
However, Gemini 2.5 Pro was released only late last month. Komment.ai launched earlier this year, and I don't think the models available at that time could generate results of this quality.
I haven't switched back. At least for my use cases it's been meeting my expectations.
I haven't tried Microsoft's new 1.58 bit model but it may be a great swap out for sentencellm, the legendary all-MiniLM-L6-v2.
I've found that when I'm unfamiliar with the knowledge domain I'm mostly using AI, but as I dive in, the ratio of AI to human shifts to the point where AI is at 0 and it's all human.
Basically, AI wins on day 1 but isn't any better by day 50. If this can change, then it's the next step.
DEFAULT_INCLUDE_PATTERNS = {
    ".py", ".js", ".jsx", ".ts", ".tsx", ".go", ".java", ".pyi", ".pyx",
    ".c", ".cc", ".cpp", ".h", ".md", ".rst", "Dockerfile", "Makefile",
    ".yaml", ".yml",
}

DEFAULT_EXCLUDE_PATTERNS = {
    "test", "tests/", "docs/", "examples/", "v1/", "dist/", "build/",
    "experimental/", "deprecated/", "legacy/", ".git/", ".github/",
    ".next/", ".vscode/", "obj/", "bin/", "node_modules/", ".log",
}
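Presumably those defaults are applied with a simple match along these lines (a hypothetical sketch of the filtering, not the project's actual helper):

def should_include(path: str) -> bool:
    # Exclusions win: skip tests, docs, build output, vendored deps, etc.
    if any(pattern in path for pattern in DEFAULT_EXCLUDE_PATTERNS):
        return False
    # Keep files whose name or extension matches one of the include patterns.
    return any(path.endswith(pattern) for pattern in DEFAULT_INCLUDE_PATTERNS)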
Looks inside
REST API calls
The hype is that AI isn’t a tool but the developer.
But that's what they sell: that the AI could do by itself what the author did with AI.
The question is whether it is worth putting all that money and energy into AI. MS sacrificed its CO2 goals for email summaries and better autocomplete, not to mention all the useless things we do with AI.
Can you give an example of what you meant here? The author did use AI. What does "AI coming up with that" mean?
In a few years we will see complaints that it wasn't the AI that built the power station and the datacenter, so it doesn't count either.
They push AI into everything like it's the ultimate solution, but it is not; instead, it has serious limitations.
The AI companies sell it as if the AI could do it by itself and developers are obsolete, but in reality it's a tool that still needs developers to make something useful.
For instance, to me AI is useful because I don't have to write boilerplate code, but that's rarely the case. For other things it's still useful for writing code, but I am not faster, because the time I save writing the code I spend fixing the prompt and auditing and fixing the code.
Thanks buddy! This will be very helpful!!
It seems a trifle... overexcited at times.
I wonder why all examples are from projects with great docs already so it doesn't even need to read the actual code.
True
> It's more like you wrote a prompt.
False
> I wonder why all examples are from projects with great docs already so it doesn't even need to read the actual code.
False.
This: https://github.com/browser-use/browser-use/tree/main/browser...
Became this: https://the-pocket.github.io/Tutorial-Codebase-Knowledge/Bro...
Granted, this example (and others) has plenty of inline documentation. And public documentation is likely in the training data for LLMs.
But, this is more than just a prompt. The tool generates really nicely structured and readable tutorials that let you understand codebases at a conceptual level easier than reading docstrings and code.
Even if it's only useful for public repos with documentation, that's still useful, and flippant dismissals are counterproductive.
I am keen to try this with one of my own (private, badly documented) codebases and see how it fares. I've actually found LLMs quite useful at explaining code, so I have high hopes.
I think Gemini 2.5 Pro is doing a lot of the heavy lifting here. I have tried this sort of thing before (documentation, not tutorials, granted) and it wasn't anywhere near this good.
If you trained a model from scratch to do this I would say you "built an AI", but if you're just calling existing models in a loop then you didn't build an AI. You just wrote some prompts and loops and did some RAG. Which isn't building an AI and isn't particularly novel.
> look inside
> it’s a ChatGPT wrapper
With the rise of AI, understanding software will become relatively easy.
Do you have anything in mind? Are you familiar enough with any of those codebases to suggest something useful?
The task will be much more interesting if there is not a good existing tutorial that the LLM may have trained on.
OS kernel: tutorial on how to write a driver?
OpenZFS: ?
https://github.com/openzfs/zfs/graphs/contributors
I would have preferred to see what would have been generated without my guidance, but since you asked:
* Explanations of how each sub-component is organized and works would be useful.
* Explanations of the modern disk format (an updated ZFS disk format specification) would be useful.
* Explanations of how the more complex features are implemented (e.g. encryption, raid-z expansion, draid) would be interesting.
Basically, making guides that aid development by avoiding a need to read everything line by line would be useful (the ZFS disk format specification, while old, is an excellent example of this). I have spent years doing ZFS development, and there are parts of ZFS codebase that I do not yet understand. This is true for practically all contributors. Having guides that avoid the need for developers to learn the hard way would be useful. Certain historical bugs might have been avoided had we had such guides.
As for the others, LLVM could use improved documentation on how to make plugins. A guide to the various optimization passes would also be useful. Then there is the architecture in general, which would be nice to have documented. Documentation for various esoteric features of both FreeBSD and Linux would be useful. I could continue, but the whole point of having an LLM do this sort of work is to avoid needing myself or someone else to spend time thinking about these things.
# Use the Session as a context manager
with requests.Session() as s:
    s.get('https://httpbin.org/cookies/set/contextcookie/abc')
    response = s.get(url)  # ???
    print("Cookies sent within 'with' block:", response.json())
https://the-pocket.github.io/Tutorial-Codebase-Knowledge/Req...

I may have missed the error, but could you elaborate on where it is?
I tried this for some very small decompilation projects, and it was cute at best.
Then I sent it a boot loader. I should have posted it on a cereal box for better results.
Is someone going to suggest that I check the disassembly into GitHub and watch it make a tutorial?