It breaks projects into small step-by-step tasks with automated tests. There are other good ones too, like Git and Docker. It's pretty cool.
Something like LevelDB is relatively easy to write. Then you can build the rest of Redis on top of it.
I've found that LLMs are particularly bad at writing Zig because the language evolves quickly, so LLMs that are trained on Zig code from two years ago will write code that no longer compiles on modern Zig.
There seems to be a fair amount of stigma around using LLMs, and many people who use them are uncomfortable talking about it.
It's a weird world. Depending on who is at the wheel, whether an LLM is used _can_ make no difference.
But the problem is, you can have no idea what you're doing and still produce something that feels carefully hand-crafted - a really great project - yet hides gaps or outright lies about functionality, often to the surprise of the author. They weren't trying to mislead; they just didn't take the time to check whether it did everything the LLM said it did.
These seem to occur only in college assignment projects, and in the output of text generators trained on those.
I put such emojis at the beginning of big headings, because my eyes detect compact shapes and colors faster than entire words and sentences. This helps me (and hopefully others) locate the right section more easily.
In Slack, I put large emojis at the beginning of messages that need to stand out. These are few, and emojis work well in this capacity.
(Disclaimer: I may contain a large language model of some kind, but very definitely I cannot be reduced to it in any area of my activity.)
But the telltale signs are far more than just that. The whole document is exactly the kind of README produced by Claude.
(I actually do this in Slack messages and folks find it funny and annoying, but more funny than annoying)
In my experience, when you work with something like agentic development tools, you describe your goals and give it some constraints like “use modern zig”, “always run tests”… and when you ask it to write a README, it will usually reproduce those constraints more or less verbatim.
The same thing happens with the features section: it reads like instructions for an LLM.
I might be wrong but I spend way too much time using Claude, Gemini, Codex… and IMHO it’s pretty obvious.
But hey, I don’t think it’s a problem! I write a lot of code using LLMs, mostly for learning (and… ahem, some of it might end up in production), and I’ve always found them great tools for learning (provided you do the appropriate context engineering and make sure the agent has access to updated docs and so on). For example, I wanted to learn Rust, so I half-vibed a GPUI-based chat client for LLMs that works just fine, and surprisingly enough, I actually learned and even had some fun.
only if you want to refactor/rewrite a lot