The biggest problem we face right now is that the large majority of people are terrible writers and can't recognize why that output is awful. Right before ChatGPT arrived, it really felt like we were entering a new era where the craft of writing was surging in popularity and making a difference. That all feels lost now.
This kind of post gives me hope.
I have used AI to ask specific questions about a codebase, and it has been helpful in narrowing the search space. Think of AI as probable cause, not evidence: it speeds up getting to the truth, but it can't be trusted as the truth.
> I’ve tried it on one of my pet projects and it produced an entire wiki full of dev docs
Did it? No, it didn't. "Wiki" is not a synonym for "project documentation". (You could set up a wiki to manage the documentation for your project. But that's not what any of these things are about.)
These aren't wikis.
brickLock: The lock of the brick.
brickDrink: The drink of the brick.
brickWink: The wink of the brick.
...which is to say, definitions that just restate whatever's evident from the code or variable names themselves, and that make sense if you're already familiar with the thing being defined, but don't actually explain their purpose or provide context for how to use them (in other words, the main reasons to have documentation).

My role as a writer is then to (1) extract net-new information out of the team, (2) figure out how all of that new info fits together, (3) figure out the implications of that info for readers/users, and then (4) assemble it in an attractive manner.
An autogenerated code wiki (or a lazy human) can presumably do the fourth step, but it can't do the three steps preceding it, and without those you're just rearranging known data. There are times when that can be helpful, but more often it's just gloss.
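To make that concrete, here's a hypothetical TypeScript sketch (the identifiers and the finishImport() helper are invented for illustration, not taken from anyone's actual project) of the gap between a definition that restates its name and one that carries net-new context:

    /** The lock of the brick. */
    // ...restates the identifier; the reader learns nothing they didn't already know.
    let brickLock = false;

    /**
     * Guards mutation of the shared brick registry. Set while a batch import is
     * running so UI edits get queued instead of applied immediately; cleared by
     * finishImport(). If edits look stale, check that this was released.
     */
    // ...explains purpose, lifecycle, and when the reader should care.
    let brickRegistryLock = false;

The first comment is the kind of thing the autogenerated wikis tend to produce; writing the second requires someone to have done steps (1) through (3) first.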
It doesn't seem impossible for an LLM to go "hmmm, the way this repo passes configurations around isn't standard. I should focus more on that." But that's a level of understanding I don't think they currently have.
I think they do, at least in some cases, especially if it's something well represented in the dataset. I've sometimes been surprised by the insights it provides, and other times it's completely useless. That's one of the problems: it's unreliable, so you have to treat everything it tells you with doubt. Still, at times it makes very surprising and seemingly intelligent observations. It's worth at least considering them and thinking them through.
It's so odd and random it seems like there must be more to it.