Show HN: Vibe coding a bookshelf with Claude Code
236 points
7 hours ago
| 48 comments
| balajmarius.com
spicyusername
7 hours ago
[-]
These are the perfect size projects vibe coding is currently good for.

At some point you hit a project size that is too large or has too many interdependencies, and you have to be very careful about how you manage the context and should expect the llm to start generating too much code or subtle bugs.

Once you hit that size, in my opinion, it's usually best to drop back to brainstorming mode, only use the llm to help you with the design, and either write the code yourself, or write the skeleton of the code yourself and have the llm fill it in.

With too much code, llms just don't seem able yet to only add a few more lines of code, make use of existing code, or be clever and replace a few lines of code with a few more lines of code. They nearly always will add a bunch of new abstractions.

reply
cube2222
7 hours ago
[-]
I agree with you as far as project size for vibe-coding goes - as in often not even looking at the generated code.

But I have no issues with using Claude Code to write code in larger projects, including adapting to existing patterns, it’s just not vibe coding - I architect the modules, and I know more or less exactly what I want the end result to be. I review all code in detail to make sure it’s precisely what I want. You just have to write good instructions and manage the context well (give it sample code to reference, have agent.md files for guidance, etc.)
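For what it's worth, a minimal agent.md in that spirit might look like this (contents purely illustrative, not prescriptive):

```markdown
# agent.md
- Prefer extending existing modules over introducing new abstractions.
- Mirror the patterns in the referenced sample code when adding features.
- Keep diffs small; run the test suite before declaring a task done.
```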

reply
OptionOfT
4 hours ago
[-]
> I know more or less exactly what I want the end result to be

This is key.

And this is also why AI doesn't work that well for me. I don't know yet how I want it to work. Part of the work I do is discovering this, so it can be defined.

reply
xur17
4 hours ago
[-]
I've found this to be the case as well. My typical workflow is:

1. Have the ai come up with an implementation plan based on my requirements

2. Iterate on the implementation plan / tweak as needed, and write it to a markdown file

3. Have it implement the above plan based on the markdown file.

On projects where we split the task into well-defined, smaller tickets, this works pretty well. For larger stuff that is less well defined, I do feel like it's less efficient, but to be fair, I am also less efficient when building this stuff myself. For humans and robots alike, smaller, well-defined tickets are better for both development and code review.
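A plan file in that spirit might look something like this (entirely hypothetical contents - the structure is the point, not the specifics):

```markdown
# Plan: add CSV export

## Context
- Export logic lives in src/export/; reuse the existing Serializer.

## Steps
- [ ] 1. Add a to_csv() method to Serializer (no new abstractions).
- [ ] 2. Wire a GET /export.csv route to it.
- [ ] 3. Unit-test a round-trip against the existing fixtures.

## Non-goals
- No changes to the import path.
```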

reply
cube2222
3 hours ago
[-]
Yeah, this exactly. And if the AI wanders in confusion during #3, it means the plan isn’t well-defined enough.
reply
blks
1 hour ago
[-]
Sounds like so much work just not to write it yourself.
reply
pigpop
4 hours ago
[-]
Or you can apply software architecture methods that are designed to help humans with exactly the same type of problems.

Once your codebase exceeds a certain size, it becomes counter-productive to have code that is dependent on the implementation of other modules (tight coupling). In Claude Code terms this means your current architecture is forcing the model to read too many lines of code into its context which is degrading performance.

The solution is the same as it is for humans:

  "Program to an interface, not an implementation." --Design Patterns: Elements of Reusable Object-Oriented Software (1994)
You have to carefully draw boundaries around the distinct parts of your application and create simple interfaces for them that only expose the parts that other modules in your application need to use. Separate each interface definition into its own file and instruct Claude (or your human coworker) to only use the interface unless they're actually working on the internals of that module.

Suddenly, you've freed up large chunks of context and Claude is now able to continue making progress.

Of course, the project could continue to grow and the relatively small interface declarations could become too many to fit in context. At that point it would be worthwhile taking a look at the application to see if larger chunks of it could be separated from the rest. Managing the number and breadth of changes that Claude is tasked with making would also help since it's unlikely that every job requires touching dozens of different parts of the application so project management skills can get you even further.
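In Python terms, a toy sketch of the idea (module names in the comments are hypothetical - the point is that only the Protocol needs to sit in anyone's context):

```python
from typing import Protocol


# bookstore.py -- the interface file: small, stable, cheap to keep in context.
class BookStore(Protocol):
    """The only surface other modules (or Claude) need to read."""

    def add(self, title: str) -> None: ...

    def titles(self) -> list[str]: ...


# memory_store.py -- one implementation; callers never import its internals.
class InMemoryStore:
    def __init__(self) -> None:
        self._titles: list[str] = []

    def add(self, title: str) -> None:
        self._titles.append(title)

    def titles(self) -> list[str]:
        return list(self._titles)


# app.py -- programs to the interface, not the implementation.
def shelve(store: BookStore, title: str) -> list[str]:
    store.add(title)
    return store.titles()
```

Swapping InMemoryStore for a database-backed implementation then touches one file, and neither Claude nor a human coworker has to re-read the rest of the codebase.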

reply
tzs
3 hours ago
[-]
Is vibe architecting a thing too, or is architecting to make your vibe coder work better something that the human needs to know?
reply
pigpop
3 hours ago
[-]
Haha, actually yes. You can prompt them to be their own architect but I do find it works better when you help. You could probably get pretty far by prompting them to review their own code and suggest refactoring plans. That's essentially what Plan Mode is for in Claude Code.
reply
mellosouls
5 hours ago
[-]
Engineering code now isn't binary; it's a spectrum from vibe-coding through copilot-style (design and coding assistance) to your help-with-design-only to no-AI.

The capabilities now are strong enough to mix and match almost fully in the co-pilot range on substantial projects and repos.

reply
aurareturn
6 hours ago
[-]

  These are the perfect size projects vibe coding is currently good for.
So far... it's going to keep getting better to the point where all software is written this way.
reply
HarHarVeryFunny
5 hours ago
[-]
Sure, but that's basically the same as saying that we'll have human-equivalent AI one day (let's not call it AGI, since that means something different to everyone that uses it), and then everything that humans can do could then be done by AI (whether or not it will be, is another question).

So, yes, ONE DAY, AI will be doing all sorts of things (from POTUS and CEO on down), once it is capable of on-the-job learning, picking up new skills, and everything else that isn't just language model + agent + RAG. In the meantime, the core competence of an LLM is blinkers-on (context-on) execution - coding according to tasks (part of some plan) assigned to it by a human who, just like a lead assigning tasks to human team members, is aware of what it can and cannot do, and is capable of overseeing the project.

reply
__MatrixMan__
5 hours ago
[-]
It seems like it's approaching a horizontal asymptote to me, or is at the very least concave down. You might be describing a state 50 years from now.
reply
aurareturn
3 hours ago
[-]
reply
__MatrixMan__
2 hours ago
[-]
Improved benchmarks are undeniably an improvement, but the bottleneck isn't the models anymore, it's the context engineering necessary to harness them. The more time and effort we put into our benchmarking systems the better we're able to differentiate between models, but then when you take an allegedly smart one and try to do something real with it, it behaves like a dumb one again because you haven't put as much work into the harness for the actual task you've asked it to do as you did into the benchmark suite.

The knowledge necessary to do real work with these things is still mostly locked up in the humans that have traditionally done that work.

reply
anthonypasq
5 hours ago
[-]
Sonnet 3.7 was released 10 months ago! (the first model truly capable of any sort of reasonable agentic coding at all) and Opus 4.5 exists today.
reply
rabf
4 hours ago
[-]
To add to this: the tooling or `harness` around the models has vastly improved as well. You can get far better results with older or smaller models today than you could 10 months ago.
reply
croes
5 hours ago
[-]
Successfully building an IKEA shelf doesn’t make you a carpenter.
reply
exe34
4 hours ago
[-]
no, but I have furniture. it's important to keep sight of the end goal, unless the carpentry is purely a hobby.
reply
whattheheckheck
3 hours ago
[-]
What's the job title and education requirements for designing the supply chain and engineering of the ikea furniture?
reply
exe34
1 hour ago
[-]
I don't know, I don't work at IKEA. Sorry.
reply
rvz
5 hours ago
[-]
Air traffic control software is not going to be vibe-coded anytime soon and neither is the firmware controlling the plane.
reply
aurareturn
3 hours ago
[-]
Sure it will. But they will be tested far more stringently by both human experts and the smartest LLM models.
reply
A4ET8a8uTh0_v2
5 hours ago
[-]
I will be perfectly honest. Given what I am seeing, I fully expect someone to actually try just that.
reply
blks
1 hour ago
[-]
Considering how much work at Boeing is given to consultants and other third-party contractors (e.g. the famous MCAS), some piece of work, after moving through the bowels of multiple subcontractors, will end up in the hands of an under-qualified developer who will ask his favourite slop machine to generate code whose purpose he doesn't exactly understand.
reply
spzb
5 hours ago
[-]
I've got a bridge to sell you
reply
pranavm27
5 hours ago
[-]
Reminds me of Ken Miles' 7000 rpm quote. At what size do you think this happens? Whatever is the most relevant metric of size for this context.
reply
wonderwonder
3 hours ago
[-]
I think this limitation goes away as long as your code is modular. If the AI has to read the entire code base each time, sure, but if everything is designed well then it only needs to deal with a limited set of code each time, and it excels at that.
reply
solumunus
7 hours ago
[-]
> make use of existing code, or be clever and replace a few lines of code with a few more lines of code

You can be explicit about these things.

reply
auggierose
6 hours ago
[-]
Yes. It is called programming.
reply
solumunus
5 hours ago
[-]
Using agents is programming. Programming is done with the mind, the tapping of keys isn’t inherent to the process.
reply
batshit_beaver
5 hours ago
[-]
Unfortunately IDEs are not yet directly connected to our minds, so there's still that silly little step of encoding your ideas in a way that can be compiled into binary. Playing the broken telephone game with an LLM is not always the most efficient way of explaining things to a computer.
reply
spzb
5 hours ago
[-]
I have yet to see a vibe-coded success that isn't a small program that already exists in multiple forms in the training data. Let's see something ground-breaking. If AI coding is so great and is going to take us to 10x or 100x productivity, let's see it generate a new, highly efficient compression algorithm or a state-of-the-art travelling salesman solution.
reply
boplicity
5 hours ago
[-]
> Let's see something ground-breaking

Why? People don't ask hammers to do much more than bash nails into walls.

AI coding tools can be incredibly powerful -- but shouldn't that power be focused on what the tool is actually good at?

There are many, many times that AI coding tools can and should be used to create a "small program that already exists in multiple forms in the training data."

I do things like this very regularly for my small business. It's allowed me to do things that I simply would not have been able to do previously.

People keep asking AI coding tools to be something other than what they currently are. Sure, that would be cool. But they absolutely have increased my productivity 10x for exactly the type of work they're good at assisting with.

reply
Teknomadix
5 hours ago
[-]
>People don't ask hammers to do much more than bash in nails into walls.

“It resembles a normal hammer but is outfitted with an little motor and an flexible head part which moves back and forth in a hammering motion, sparing the user from moving his or her own hand to hammer something by their own force and by so making their job easier”

https://gremlins.fandom.com/wiki/Electric_Hammer

reply
pigpop
4 hours ago
[-]
Good reference and a funny scene but doesn't quite hit home because we have invented improved hammers in the form of pneumatic nail guns and even cordless nailers (some pneumatic and some motorized) which could truly be called an "electric hammer".

With this context the example may support the quote: nail guns do make driving nails much faster and easier, but that's all they do. You can't pull a nail with a nail gun, and you can't use it for any of the other things that a regular hammer can do. They do 10x your ability to drive nails, though.

On the other hand, LLMs are significantly more multi-purpose than a nail gun.

reply
ncallaway
5 hours ago
[-]
> People keep asking AI coding tools to be something other than what they currently are.

I think it's for a very reasonable reason: the AI coding tool salespeople are often selling the tools as something other than what they currently are.

I think you're right that if you calibrate your expectations to what the tools are capable of, there's definitely a lot of value there. It would be nice if the marketing around AI also did the same thing.

reply
onion2k
5 hours ago
[-]
AI sales seems to be very much aligned with productivity improvement - "do more of the same but faster" or "do the same with fewer people". No one is selling "do more".
reply
BeetleB
3 hours ago
[-]
> I think it's for a very reasonable reason: the AI coding tool salespeople are often selling the tools as something other than what they currently are.

And if this submission was an AI salesperson trying to sell something, the comment/concern would be pertinent. It is otherwise irrelevant here.

reply
Arisaka1
20 minutes ago
[-]
>Why?

Because I keep asking myself: if AI is here and our output is supercharged, why do I keep seeing more of the same products, just with an "AI" sticker slapped on top? From a group of technologists like HN and the startup world, who live on the edge of evolution and revolution, maybe my expectations were a bit too high.

All I see is the equivalent of "look how fast my new car got me to the supermarket, when I'm not too demanding about which supermarket I end up at, and all I want is milk and eggs". Which is 100% fine, but at the end of the day I eat the same omelette as always. In this metaphor, I don't feel the slightest bit behind, or any sense of FOMO, if I cook my omelette slowly. I guess I have more time for my kids if I see the culinary arts as just a job. And it's not like restaurants suddenly get all their tables booked faster just because everyone cooks omelettes faster.

>It's allowed me to do things that I simply would not have been able to do previously.

You're not the one doing them. Me barking orders at John Carmack himself doesn't make me a Quake co-creator, and even if I micromanage his output like the world's most toxic micromanager who knows better, I'm still not Carmack.

On top of that, you would have been able to do those things previously, if you had cared enough to upskill to the point where token feeding isn't needed for you to feel productive. Tons of programmers broke barriers and solved problems that hadn't been solved by anyone in their companies before.

I don't see why "I previously couldn't do this" is a bragging point. The LLMs that you're using were trained on the same Google results you could have gotten if you had searched.

reply
BobbyJo
5 hours ago
[-]
Yes! I can't tell you the number of times I thought to myself "If only there was a way for this problem to be solved once instead of being solved over and over again". If that is the only thing AI is good at, then it's still a big step up for software IMO.
reply
fiyec30375
4 hours ago
[-]
It's true. Why should everyone look up the same API docs and make the same mistakes when AI can write it for you instantly and correctly?
reply
blauditore
5 hours ago
[-]
Because that's the vision of many companies trying to sell AI. Saying that what it can do now is actually already good enough might be true, but it's also moving the goalposts compared to what was promised (or feared, depending who you're asking).
reply
simonw
5 hours ago
[-]
One of the many important skills needed to navigate our weird new LLM landscape is ignoring what the salespeople say and listening to the non-incentivized practitioners instead.
reply
darkerside
5 hours ago
[-]
Can we get specific? What company and salesperson made what claim?

Let's not disregard interesting achievements because they are not something else.

reply
boplicity
5 hours ago
[-]
To be clear, I see a lot of "magical thinking" among people who promote AI. They imagine a "perfect" AI tool that can basically do everything better than a human can.

Maybe this is possible. Maybe not.

However, it's a fantasy. Granted, it is a compelling fantasy. But it's not one based on reality.

A good example:

"AI will probably be smarter than any single human next year. By 2029, AI is probably smarter than all humans combined.” -- Elon Musk

This is, of course, ridiculous. But, why should we let reality get in the way of a good fantasy?

reply
falcor84
4 hours ago
[-]
> AI will probably be smarter than any single human next year.

Arguably that's already so. There's no clear single dimension for "smart"; even within exact sciences, I wouldn't know how to judge e.g. "Who was smarter, Einstein or Von Neumann?". But for any particular "smarts competition", especially if it's time limited, I'd expect Claude 4.5 Opus and Gemini 3 Pro to get higher scores than any single human.

reply
spzb
5 hours ago
[-]
> Why? People don't ask hammers to do much more than bash in nails into walls.

No one is propping up a multi-billion dollar tech bubble by promising hammers that do more than bash nails. As a point of comparison that makes no sense.

reply
pigpop
4 hours ago
[-]
The software development market is measured in tens to hundreds of billions of dollars, depending on which parts you're looking at, so inventing a better hammer (development tool) can be expected to drive billions of dollars of value. How many billions depends on how good a tool it turns out to be in the end. And that's only counting software; it's also directly applicable to all media (image, video, audio, text) and some scientific domains (genetics, medicine, materials, etc.)
reply
falcor84
4 hours ago
[-]
That's nitpicking; in this manner you can dismiss any analogy, by finding an aspect on which it's different from the original comparandum.
reply
anon7000
5 hours ago
[-]
You’re right, but at the same time, 99% of software people need has already been done in some form. This gets back to the article on “perfect software” [1] posted last week. This bookshelf is perfect for the guy who wrote it and there isn’t anything exactly like it out there. The common tools on the App Store (goodreads) don’t fit his needs. But he was able to create a piece of “perfect software” that exactly meets his own goals and his own design preferences. And it was very easy to accomplish with LLMs, just by putting together pieces of things that have been done before.

This is still pretty great!

1: https://outofdesk.netlify.app/blog/perfect-software

reply
pigpop
4 hours ago
[-]
Yes, that's an excellent framing of where we're at and the role that LLM-generated software is excelling in. Custom software has been out of reach for many people who would benefit from it, since it requires either a lot of money to pay someone to build it or a lot of time to learn how to build it yourself and execute on that process. Right now you can essentially use services like Claude as a custom software "app store" (really a service): you say "I'd like an app that does X" and, depending on the scope, get that app as a Claude Artifact in a few minutes - or, if you're familiar with software development and build/deployment processes, in a few hours to days as a more traditional software artifact you can host somewhere or install locally. Google is working hard to make this even more achievable for non-developers with Google AI Studio https://aistudio.google.com/ and Firebase Studio https://firebase.studio/
reply
josu
4 hours ago
[-]
I was able to take the lead in these two competitions using LLM agents, with no prior Rust or C++ knowledge. They both have real-world applications.

- https://highload.fun/tasks/15/leaderboard

- https://highload.fun/tasks/24/leaderboard

In both cases my score showed other players that there were better solutions and pushed them to improve their scores as well.

reply
jungturk
5 hours ago
[-]
Much of the coding we do is repetitive and exists in the training data, so I think it's pretty great if AI can eliminate that toil and liberate the meat to focus on the creative work.
reply
SOLAR_FIELDS
5 hours ago
[-]
There’s a reason they call working at Google “shuffling protobufs” for the vast majority of engineers. Most software work isn’t innovative compression algorithms. It’s moving data around, which is a well understood problem
reply
skrotumnisse
5 hours ago
[-]
I find this type of comment depressing. This is a time for exploration and learning new things, and this is a perfect way to do so. It's a small project that solves the problem. Better to spend the time vibe coding it than evaluating existing alternatives.
reply
fny
4 hours ago
[-]
I have worked on out-of-sample problems, and AI absolutely struggles, but it dramatically accelerates the research process. Testing ideas is cheap, support tools are quick to write, and the LLM is a tremendous research tool in itself.

More generally, I do think LLMs grant 10x+ performance for most common work: most of what people do manually is in the training data (which is why there's so much of it in the first place.) 10x+ in those domains can in theory free up more brain space to solve the problems you're talking about.

My advice to you is to tone down the cynicism, and see how it could help you. I'll admit, AI makes me incredibly anxious about my future, but it's still fun to use.

reply
lowkey_
1 hour ago
[-]
You're just asking for the opposite of what AI does.

90-99% of an engineer's work isn't entirely novel coding that has never existed before, so by succeeding at what "already exists", it can take us to 10x-100x productivity.

The automation of all that work is groundbreaking in and of itself.

I think that, for a while into the future at least, humans will be relegated to generating that groundbreaking work, and the AI will increasingly handle the rest.

reply
falcor84
4 hours ago
[-]
I have yet to see an "AI doesn't impress me" comment that added anything to the discussion. Yes, there's always going to be a state of the art, and things that are as of yet beyond it.
reply
MontyCarloHall
5 hours ago
[-]
Forget utterly groundbreaking things, I want to hear maintainers of complex, actively developed, and widely used open-source projects (e.g. ffmpeg, curl, openssh, sqlite) start touting a massive uptick in positive contributions, pointing to a concrete influx of high-quality AI-assisted commits. If AI is indeed a 10x force multiplier, shouldn't these projects have seen 10 years' worth of development in the last year?

Don't get me wrong, AI is at least as game-changing for programming as StackOverflow and Google were back in the day. Being able to not only look up but automatically integrate things into your codebase that already exist in some form in the training data is incredibly useful. I use it every day, and it's saved me hours of work for certain specific tasks [0]. For tasks like that, it is indeed a 10x productivity multiplier. But since these tasks only comprise a small fraction of the full software development process, the rest of which cannot be so easily automated, AI is not the overall 10x force multiplier that some claim.

[0] https://news.ycombinator.com/item?id=45511128

reply
simonw
5 hours ago
[-]
> I want to hear maintainers of complex, actively developed, and widely used open-source projects (e.g. ffmpeg, curl, openssh, sqlite) start touting a massive uptick in positive contributions

That's obviously not going to happen, because AI tools can't solve for taste. Just because a developer can churn out working code with an LLM doesn't mean they have the skills to figure out what the right working code to contribute to a project is, and how to do so in a way that makes the maintainers lives easier and not harder.

That skill will remain rare.

(Also SQLite famously refuses to accept external contributions, but that's a different issue.)

reply
SQLite
2 hours ago
[-]
No, Simon, we don't "refuse". We are just very selective and there is a lot of paperwork involved to confirm the contribution is in the public domain and does not contaminate the SQLite core with licensed code. Please put the false narrative that "SQLite refuses outside contributions" to rest. The bar is high to get there, but the SQLite code base does contain contributed code.
reply
mtlynch
47 minutes ago
[-]
Dr. Hipp, I love SQLite but also had simonw's misapprehension that the project did not accept contributions. The SQLite copyright page says:

> Contributed Code

> In order to keep SQLite completely free and unencumbered by copyright, the project does not accept patches. If you would like to suggest a change and you include a patch as a proof-of-concept, that would be great. However, please do not be offended if we rewrite your patch from scratch.

I realize that the section, "Open-Source, not Open-Contribution" says that the project accepts contributions, but I'm having trouble understanding how that section and the "Contributed Code" section can both be accurate. Is there a distinction between accepting a "patch" vs. accepting a "contribution?"

If you're planning to update this page to reduce confusion of the contribution policy, I humbly suggest a rewrite of this sentence to eliminate the single and double negatives, which make it harder to understand:

> In order to keep SQLite in the public domain and ensure that the code does not become contaminated with proprietary or licensed content, the project does not accept patches from people who have not submitted an affidavit dedicating their contribution into the public domain.

Could be rewritten as:

> In order to keep SQLite in the public domain and prevent contamination of the code from proprietary or licensed content, the project only accepts patches from people who have submitted an affidavit dedicating their contribution into the public domain.

[0] https://sqlite.org/copyright.html

reply
simonw
38 minutes ago
[-]
Yes, that "does not accept patches" line must have been where I picked up my incorrect mental model.
reply
simonw
1 hour ago
[-]
Thanks for the correction, and sorry for getting that wrong. I genuinely didn't know that.

Found that paperwork here: https://www.sqlite.org/copyright-release.html

I will make sure not to spread that misinformation further in the future!

Update: I had a look in fossil and counted 38 contributors:

  brew install fossil
  fossil clone https://www.sqlite.org/src sqlite.fossil
  fossil sql -R sqlite.fossil "
    SELECT user, COUNT(*) as commits
    FROM event WHERE type='ci'
    GROUP BY user ORDER BY commits DESC
  "
Blogged about this (since it feels important to help spread the correction about this): https://simonwillison.net/2025/Dec/29/copyright-release/
reply
zwnow
5 hours ago
[-]
> Being able to not only look up but automatically integrate things into your codebase that already exist in some form in the training data is incredibly useful.

Until it decides to include code it gathered from a 15-year-old Stack Overflow post, probably introducing security issues, or makes up libraries on the fly, or even worse, tries to make you install libs that were part of a data poisoning attack.

reply
MontyCarloHall
5 hours ago
[-]
It's no different from supervising a naïve junior engineer who also copy/pastes from 15 year old SO posts (a tale as old as time): you need to carefully review and actually grok the code the junior/AI writes. Sometimes this ends up taking longer than writing it yourself, sometimes it doesn't. As with all decisions in delegating work, the trick is knowing ahead of time whether this will be the case.
reply
spzb
5 hours ago
[-]
Naive junior engineers eventually learn and become competent senior engineers. LLMs forget everything they "learn" as soon as the context window gets too big.
reply
MontyCarloHall
5 hours ago
[-]
Very true! I liken AI to having an endless supply of newly hired interns with near-infinite knowledge but intern-level skills.
reply
cheevly
3 hours ago
[-]
There are like a dozen well-established ways to overcome this. Learn how to use the basic tools and patterns my dude.
reply
zwnow
4 hours ago
[-]
I have yet to see a junior trying to install random/non-existent libs.
reply
pigpop
3 hours ago
[-]
If you forced them to try it from memory without giving them access to the web you sure would.
reply
anthonypasq
5 hours ago
[-]
The creator of Claude Code said on Twitter he hasn't opened an IDE in a month and has merged 200 PRs.
reply
MontyCarloHall
5 hours ago
[-]
Might the creator of Claude Code have some … incentives … to develop like that, or at least claim that he does?

As someone who frequently uses Claude Code, I cannot say that a year's worth of features/improvements have been added in the last month. It bears repeating: if AI is truly a 10x force multiplier, you should expect to see a ~year's worth of progress in a month.

reply
simonw
4 hours ago
[-]
Opus 4.5 is just a few days over a month old. Boris would have had access to that for a while before its release though.
reply
shimman
3 hours ago
[-]
Boris is someone that is employed by Anthropic and has a massive stake in them going public, standing to make millions.

They are by definition a biased source and should be treated as one.

reply
simonw
2 hours ago
[-]
Nobody here claimed that Boris wasn't a biased source.

I do however think he is not an actively dishonest source. When he says "In the last thirty days, I landed 259 PRs -- 497 commits, 40k lines added, 38k lines removed. Every single line was written by Claude Code + Opus 4.5." I believe he is telling the truth.

That's what dogfooding your own product looks like!

https://twitter.com/bcherny/status/2004887829252317325

reply
spzb
5 hours ago
[-]
curl in particular is being plagued by AI-slop security reports which are actively slowing development by forcing the maintainers to triage crap when they could be working on new features (or, you know, enjoying their lives) eg https://www.theregister.com/2025/07/15/curl_creator_mulls_ni...
reply
leleat
5 hours ago
[-]
On the other hand, we had this story[^1], where the maintainer of curl mentions a bunch of actually useful reports by someone using AI tools.

[^1]: https://news.ycombinator.com/item?id=45449348

reply
pranavm27
5 hours ago
[-]
It's good that way, right? Let me as a human do the interesting thinking my brain is meant for, while you, AI, do what your chips were built for.

I am happy as is, tbh - not even looking for AGI and all. Just that the LLM be close enough to my thinking scale that it doesn't feel like "why am I talking with this robot".

reply
SJMG
5 hours ago
[-]
https://thenewstack.io/how-deepminds-alphatensor-ai-devised-...

Not either of the species of algorithms you've described, but still an advance.

reply
spzb
5 hours ago
[-]
That's about as far removed from vibe coding as you can get. It's the result of an algorithm developed for a specific purpose by researchers at one of the most advanced machine learning companies.
reply
fiyec30375
4 hours ago
[-]
Who really cares? The goalpost of "AI is useless because I can't vibe code novel discoveries" is a strawman. AI and vibe coding are transformational. So are AI-enhanced efforts to solve longstanding, difficult scientific problems. If cancer is cured with AI assistance, does it really matter if it was vibe-cured or state-of-the-art-lab-cured?
reply
spzb
3 hours ago
[-]
Ironic to call it a strawman whilst making a strawman yourself. I never said AI was useless, I said vibe coding hasn't produced anything novel.
reply
blks
1 hour ago
[-]
It will not generate anything novel because it can’t.
reply
bdcravens
2 hours ago
[-]
10x productivity can also mean allowing a single staff member to do the boring work of 10 humans.
reply
zellyn
5 hours ago
[-]
trifling.org is an entire Python coding site, offline first (localstorage after first load), with docs, turtle graphics, canvas, and avatar editor, vibe coded from start to finish, with all conversations in the GitHub repo here: https://github.com/zellyn/trifling/tree/main/docs/sessions

This is going to destroy my home network, since I never moved it off the little Lenovo box sitting in my laundry room beside the Eero waypoint, but I’m out of town for three days, so

Granted, the seed of the idea was someone posting about how they wired Pyodide to Ace in 400 lines of JavaScript, so I can't truly argue it's non-trivial.

As a light troll to hackernews, only AI-written contributions are accepted

[Edit: the true inception of this project was my kid learning Python at school and trinket.io inexplicably putting Python 3 but not 2 behind the paywall. Alas, Securely will not let him and his classmates actually access it ]

reply
plaidfuji
4 hours ago
[-]
… or, let’s see humans who are now 10-100x more productive (due to automation of mundane tasks that are already part of the training data) do the things you’re asking for.
reply
Forgeties79
5 hours ago
[-]
And to add to this, for some reason people really bristle if you say that many LLMs are just search with extra steps. This feels like an extension of that. It's just reinventing the wheel over and over again based on a third party's educated guess (admittedly often a solid approximation, but still not exact) of what a wheel may be. It all seems like a rather circuitous way to accomplish things, unless your goal isn't to build a wheel but rather to tinker and experiment with the concept of a wheel and learn something in the process. Totally valid, but I'm pretty sure that's not what OpenAI et al are pitching lol
reply
belter
4 hours ago
[-]
Every two months, I run a very simple experiment to decide whether I should stop shorting NVDA....Think of it as my personal Pelican on a Bike test. :-)

Here is how it works: I take the latest state of the art model, usually one of the two or three currently being hyped....and ask it to create a short document that teaches Java, Python, or Rust, in 30 to 60 min, complete with code examples. Then I ask the same model to review its own produced artifact, for correctness and best practices.

What happens next is remarkably consistent. The model produces a glowing review, confidently declaring the document “production ready”… while the code either does not compile, contains obvious bugs, or relies on outright bad practices.

When I point this out, the model apologizes profusely and generates a “fixed” version which still contains errors. I rinse and repeat until I give up.

This is still true today, including with models like Opus 4.5 and ChatGPT 5.2. So whenever I read comments about these models being historic breakthroughs, I can’t help but imagine they are mostly coming from teams proudly generating technical debt at 100× the usual speed.

Things go even worse when you ask the model to review a cloud architecture....

reply
gjimmel
4 hours ago
[-]
Ok, but if you wrote some massive corpus of code with no testing it probably would not compile either.

I think if you want to make this a useful experiment you should use one of the coding assistants that can test and iterate on its code, not some chatbot which is optimized to impress nontechnical people while being as cheap as possible to run.

reply
belter
4 hours ago
[-]
>> Chatbot which is optimized to impress nontechnical people

Is that how we call Opus 4.5 now? :-)

reply
rabf
3 hours ago
[-]
That depends a lot on the system prompt and the tooling available to the model. Are you trying this in Claude Code or Factory.ai, or are you using a chat interface? The difference in the outcome can be large.
reply
belter
18 minutes ago
[-]
Random anecdotes from the Internet say no:

"I paid for the $100 Claude Max plan so you don't have to - an honest review" -https://www.reddit.com/r/ClaudeAI/comments/1l5h2ds/i_paid_fo...

reply
pigpop
3 hours ago
[-]
I'm sorry but I don't quite believe you because I've done exactly this for learning much more complicated topics. For fun I've been learning about video game programming in the Odin programming language using a Claude project where I have Opus 4.5 write tutorials, including working code examples that are designed to be integrated with each other into a larger project. We've covered maze generation, Delaunay triangulation, MSTs, state machines, rendering via Raylib and RayGUI, and tweening for animations. All of those worked quite well with only very minor corrections which Opus was also very helpful for diagnosing and fixing. I also had it produce a full tutorial on implementing a relational database in Odin but I haven't had time to work my way through all of it yet. This is all with a somewhat niche language like Odin that I wouldn't expect there to be a lot of training data for so you'll excuse my incredulity that you couldn't get usable introductory code for much more commonly used languages like Java and Python.

I'm wondering if your test includes allowing the models to run their code in order to validate it and then fix it using the error output? Would you be willing to share the prompts and maybe some examples of the errors?

I haven't had many problems working in Claude Code even with full on "vibe coding". One notable recent exception was in writing integration tests for a p2p app that uses WebRTC, XTerm.js, and Yjs where it ran into some difficulty creating a testing framework that involved a headless client and a local MQTT broker where we had to fork a few child processes to test communication between them. Opus got bogged down working on its own so I stepped in and got things set up correctly (while chatting with Opus through the web interface instead of CC). The problem seemed to be due to overfilling the context since the test suite files were too long so I could have probably avoided the manual work by just having Opus break those up first.

reply
cheevly
4 hours ago
[-]
You clearly live in a different reality than me entirely. Complete opposite experience.
reply
belter
4 hours ago
[-]
Would you kindly please detail how your experience is different? Complete opposite experience without details does not really say much.
reply
wonderwonder
3 hours ago
[-]
Microsoft is currently hiring engineers to rewrite their entire codebase in Rust via vibecoding. Something to the tune of a million lines of code per developer per month.
reply
cube2222
5 hours ago
[-]
> If AI coding is so great and is going to take us to 10x or 100x productivity

That seems to be a strawman here, no? Sure, there exist people/companies claiming 10x-100x productivity improvements. I agree it's bullshit.

But the article doesn't seem to be claiming anything like this - it's showing the use of vibe-coding for a small personalized side-project, something that's completely valid, sensible, and a perfect use-case for vibe-coding.

reply
dboreham
5 hours ago
[-]
This comment is wrong in two ways:

1. Current LLMs do much better than produce "small programs that already exist in multiple forms in the training data". Of course the knowledge they use does need to exist somewhere in training data, but they operate at a higher level of abstraction than simply spitting out programs they've already seen whole cloth. Way higher.

2. Inventing a new compression algorithm is beyond the expectations of all but the most wild-eyed LLM proponents, today.

reply
rabf
3 hours ago
[-]
"the knowledge they use does need to exist somewhere in training data" - I'm not too sure about that. The current coding environments for AI give the models a lot of reasoning power, with tooling to test, iterate, and search the web. They frequently look at the results of their code runs now and try different approaches to get the desired result. It's common for them to write their own tests unprompted and re-evaluate accordingly.
reply
blauditore
5 hours ago
[-]
2. is not really true. There are famous people claiming that AI will fix climate change, so we as humans should stop bothering.
reply
rvz
5 hours ago
[-]
> let's see it generate a new, highly efficient compression algorithm or a state-of-art travelling salesman solution.

This is the "promise" that was being sold here, and in reality we haven't yet seen anything innovative, or even a sophisticated, original, groundbreaking discovery, from an LLM, with most of the claims being faked or unverified.

Most of the 'vibe-coding' uses here are quite frankly performative or used for someone's blog for 'content'.

reply
Tiberium
6 hours ago
[-]
> What I needed was not a better app, but a way to tolerate imperfection without the whole system falling apart.

> Claude did not invent that idea. It executed it.

> Claude handled implementation. I handled taste.

This style of writing always gets me now :)

reply
AtreidesTyrant
6 hours ago
[-]
This style of writing isn't human. It's AI.

^^ These dramatic statements are almost always AI-influenced; I seem to always see them in people's emails now as well. "We didn't reinvent the wheel. We are the wheel."

reply
TheChelsUK
5 hours ago
[-]
You are absolutely right. It's not X that's the giveaway, it's A, and B — moreover it's C that is the clincher.
reply
skybrian
5 hours ago
[-]
AI is popularizing a writing style that has been common in advertising for quite some time. For example, Apple uses it a lot. Now everyone can imitate advertising copy.
reply
IshKebab
4 hours ago
[-]
It's not that it didn't exist before. It's that it wasn't overused.

Heh.

reply
7moritz7
3 hours ago
[-]
Neurodivergent people tend to write like this too, there was a study about it.
reply
Tiberium
6 hours ago
[-]
Yeah, I know, I just feel like at this point it's useless to call that out.
reply
delichon
6 hours ago
[-]
We can learn useful rhetorical techniques from AI that can help us clearly communicate. We should separate those babies from the bathwater.
reply
estearum
6 hours ago
[-]
So far, any "useful rhetorical technique" one could've learned from AI has become a dead giveaway of AI slop (lazy writing and lazy thinking).

Seriously: what tool do you want to use that's immediately available to the absolute lowest common denominator "writers" on the Internet?

"It's not X, it's Y" literally makes my stomach churn from seeing so much of it on LinkedIn.

reply
skydhash
5 hours ago
[-]
You can also just find a book on writing. I recommend "On Writing Well" by William Zinsser. Dense and quite informative.
reply
F7F7F7
6 hours ago
[-]
Rhetorical techniques that are so easily identifiable as AI now.
reply
RickS
3 hours ago
[-]
Others, please chime in, I want to take a sort of poll here:

I usually grimace at "GPT smell". The lines you quoted stood out to me as well, but I interpreted them as "early career blogger smell". It's similar, but it didn't come off as AI. I think because it avoided grandiose words, and because it reused the exact same phrase format (like a human tic) rather than randomly sampling the format category (like an AI). This is what human text looks like when it's coming from someone who is earnestly trying to have a punchy, intentional writing style, but who has not yet developed an editor's eye (or asked others to proofread), which would help smooth out behaviors that seem additive in isolation but amateur in aggregate.

Did others share the impression that it's a human doing the same classic tricks that AI is trained to copy, or does anything in this category immediately betray AI usage?

reply
nyyp
2 hours ago
[-]
This post felt AI-touched to me, but the usage falls on a spectrum. You can write the whole post yourself, have an LLM write the whole post, or - what I suspect is the case here - have the LLM "polish" your first draft.

Many weaker or non-native writers might use AI for that "editor's eye" without realizing that they are being driven to sound identical to every other blog post these days. And while I'm certainly growing tired of constantly reading the same LLM style, it's hard to fault someone for wanting to polish what they publish.

reply
pranavm27
5 hours ago
[-]
I feel ya, but it didn't get me on this one for some reason. It gets me a lot on LinkedIn, though - so much so that I lost control and blasted off a post about it yesterday.

I think there are some kind of value-vibe dynamics at play in what makes the brain conscious of whether something was written with AI or not.

reply
Terretta
4 hours ago
[-]
“That was enough.”
reply
disconcision
3 hours ago
[-]
I actually did something similar recently for my website* and it was actually a Claude failure case; helped a little but ultimately cost me more time than doing it entirely myself. For my version I wanted the actual book spine images; since there's no database for these, I took pictures of my shelves (milkcrates) and wanted clickable regions. I also wanted the spines to link to a reasonable webpage for the book, falling back to goodreads if nothing else was available.

It was very surprising to me that claude wasn't very good at even the latter. It took several rounds of prompt refinement, including very detailed instructions, to get even 75% non-broken links, from ~30% on first attempt. It really liked to hallucinate goodreads URLs with the book title in them but an invalid ID (the title part of URLs is ignored by goodreads).

The former was less surprising... It attempted a very rough manual strategy for generating superimposed SVG outlines on the books, which often started okay but by the right side of the image often didn't even intersect the actual spines. I tried to verbally coax it into using a third-party segmenter, but with no luck. We eventually found a binary-search-style iterative cropping strategy that worked well but was taking 6 minutes and nearly a couple dollars per spine, so I killed that and just did the SVG outlines in Figma myself.

* https://andrewblinn.com scroll down to find the bookcase and drag down on it to zoom and unlock the books

reply
cdcarter
3 hours ago
[-]
I vibe coded an app to track my movie watches, at https://moviesonthe.computer.

I really enjoy the ability to get started quickly with a known idea like “make a single user letterboxd clone” with a system prompt that explains my preferred tech stack. From there it’s relatively easy to start going in and being the tastemaker for the project.

I think people being able to build their own bespoke apps is a huge super power. Unfortunately I don’t think the tools today do a good job of teaching you how to think if you aren’t already a software engineer. Sonnet rarely grasps for an abstraction.

reply
pixelmonkey
3 hours ago
[-]
That's cool. I've wanted to track my movie watch history for awhile but just couldn't find the right software to do it. I really dislike Letterboxd. I also agree with your point that having a software engineering (programming) background makes projects like these a bit easier to prompt for in a direct way.
reply
nithinbekal
5 hours ago
[-]
> Four hundred and sixty books is not a scale problem. Knowing when to delete working code is not something an AI can decide for you.

This is such a key thing I remind myself when I build apps like this for myself. I have a similar app that has a page with 900-odd ratings, and another with 550 owned books. I decided that I won't bother with infinite scroll or complex search and filtering until my browser can no longer handle rendering that data. "Find in page" works well enough for me for now.

reply
nindalf
6 hours ago
[-]
This is such a coincidence. I had the same idea a few days ago and also vibe coded a library using Claude. https://nindalf.com/books. The original version of this was meant to encourage me to read more, and I'm pleased to say it succeeded. I hit my goal for the year after a couple of lean years. I also like looking at my highlights and notes and this UI makes it easier to read them.

My experience with Claude was mostly very good. Certainly the UI is far better than what I'd come up with myself. The backend is close to what I'd write myself. When I'm unhappy I'm able to explain the shortcomings and it's able to mostly fix itself. This sort of small-scale, self-contained project was made possible thanks to Claude.

Other times it just couldn't. For validating the start and end dates, it decided on z.string().or(z.date()).optional().transform((val) => (val ? new Date(val) : undefined)). It looked way too complex. I asked if it could be simplified; Claude said no. I suggested z.date().optional(). Claude patiently explained this was impossible. I tried it anyway, and it worked. Claude said "you're absolutely right!". But this behaviour was the exception rather than the rule.

reply
quinnjh
2 hours ago
[-]
I've been iterating on something very similar as well :D Started in September, giving it 30-60 mins here and there. I ended up with rows instead of a horizontal scroll. There definitely have been a handful of times Claude made terrible decisions and described them as brilliant, but with some very heavy guidance and worktrees it still feels quicker than if I wrote it out myself.

Cool to check out your version as well, thanks for sharing.

reply
tharos47
5 hours ago
[-]
Do you have the code for your book library? I wanted to do something similar to help me remember the books I read in a year too.
reply
nindalf
4 hours ago
[-]
Which part? I have a python code base (https://github.com/nindalf/kindle-highlight-sync) that scrapes read.amazon.com for my book highlights. It then exports the data into markdown files that are imported by my website.
reply
m-hodges
6 hours ago
[-]
> I decided that 90 percent accuracy was enough.

So many systems are fault-tolerant, and it’s great to remember that in a world where LLMs introduce new faults. Kudos to OP for this mindset; more anti-AI posters would benefit from sitting with the idea from time to time.

reply
JKCalhoun
6 hours ago
[-]
Agree. We've all had occasional hilarious results when interacting with an LLM. If 90% of the interactions produce positive results… that's an improvement over what I've come to expect plowing through Google search results.
reply
TheGoodBarn
53 minutes ago
[-]
An aside, I love the animation at the end of your website when you scroll at the bottom and the blue expands out. Awesome touch
reply
felixding
7 hours ago
[-]
Neat. I also used to make a simple "bookshelf" web page each year for the books I read, but mine were fully static HTML and nowhere near as fancy as this.

Side note: I once wrote about recreating Delicious Library: https://dingyu.me/blog/recreating-delicious-library-in-2025

reply
dewey
6 hours ago
[-]
I was about to post something about Delicious Library. That's one of my earlier Mac user memories and it always gave me joy to import / organize my books in there even if there's no real reason to do it.
reply
_august
1 hour ago
[-]
Same, I remember being in awe of how well-designed this app was.
reply
pixelmonkey
5 hours ago
[-]
Wow, this is cool. I had COMPLETELY forgotten about Delicious Library. That is such a nice look-and-feel for this sort of app.
reply
yoz-y
5 hours ago
[-]
At some point I also made a virtual bookshelf but for a different reason: I found that I often didn’t remember what I read about. So I started taking notes while reading and also making pixel art covers for the books I’ve written about. I feel that writing down ideas makes it easier to revisit them.

https://yozy.net/books/

reply
RickS
3 hours ago
[-]
I love that the OP made this app for themselves, enjoyed the result, and shared the journey, so that we could all have this convo. With that said, I feel that you've done what this "should have been" IMO. It doesn't have flashy spring motion scroll, but it does interactively capture what I'd care about, which is recording whether I've read the book, what I found relevant/important/memorable, and how it connects to other things I've read/thought about.
reply
tahirk99
6 hours ago
[-]
The size boundary point is real. Once projects get past a few thousand lines, you stop vibe coding and start managing intent and context again. At that stage the LLM becomes more of a fast junior than a magic wand.
reply
pokemyiout
1 hour ago
[-]
I like the scroll animation!

I've actually been working on something similar since I also had this pain (plus I'm too cheap to buy all the books I'm reading)

To solve the issue of having to take the photos myself, I scrape places like eBay to get accurate spine images. Obviously this isn't 100% accurate, but like you I also decided 90% accuracy is good enough.

https://www.tinyshelf.me/maler/fav

reply
wek
6 hours ago
[-]
"The gap between intention and execution was small, but it was enough to keep the project permanently parked in the someday pile." Well said!

This is my experience with agents, particularly Claude Code. It supplies sufficient activation energy to get me over the hump. It makes each next step easy enough that I take it.

reply
butlike
6 hours ago
[-]
It's nice that the project probably helps cut down on accidentally re-buying already owned books. I would hope the project doesn't remove the joy of randomly rediscovering joyous books in your own collection from time to time.
reply
oleggromov
4 hours ago
[-]
Good project, nice write-up.

However, to me as a person with an anti-library as well, this kind of defies the purpose of having it in the first place. I can't say I browse my books too often, but when I want to find something, I'd rather browse physical things on a shelf than some out-of-date UI with fetched thumbnails. Of course the organization happens in physical space too: this is why we have shelves, labels and such.

Obviously no judgement or criticism for the author, just sharing thoughts.

reply
cube2222
6 hours ago
[-]
That’s really cool, and a great use-case for vibe coding!

I’ve been vibe-coding a personalized outliner app in Rust based on gpui and CRDTs (loro.dev) over the last couple days - something just for me, and in a big part just to explore the problem space - and so far it’s been very nice and fun.

Especially exploring multiple approaches, because exploring an approach just means leaving the laptop working unattended for an hour and then seeing the result.

Often I would have it write up a design doc with todos for a feature I wanted based on its exploration, and then just run a bash for loop that launches Claude with "work on phase $i" (with some extra boilerplate instructions), which keeps it occupied for a while.

reply
samwho
6 hours ago
[-]
I love this, the end result looks so good.

Something you don’t really mention in the post is why do this? Do you have an end goal or utility in mind for the book shelf? Is it literally just to track ownership? What do you do with that information?

reply
balajmarius
6 hours ago
[-]
Thanks! Honestly, there’s no big utility behind it. I didn’t build it to optimize anything or track data, it just felt good to make.

I want my website to slowly become a collection of things I do and like, and this bookshelf is just one of those pieces.

reply
AtreidesTyrant
6 hours ago
[-]
I like that it's fun, and that is what AI vibe coding should be.
reply
monerozcash
6 hours ago
[-]
It's a shame the blog post had to be written by AI too. If you're going to use AI to rewrite your text, you could at least ask it to keep the changes minimal.
reply
ionicabizau
6 hours ago
[-]
Wonderful project, Marius! :) I shared it with my brother who has a lot of books and tracks them in his own little app. Keep up the great work! So happy to see you around!
reply
shimman
3 hours ago
[-]
Why wouldn't the author share code? Or did I miss it?
reply
bhouston
7 hours ago
[-]
Good job!

I wonder if you could develop this as an add on to Hardcover.app - you could fetch people's books, images, and display the bookshelf.

All the data seems to be there:

https://hardcover.app/@BenHouston3D/books/read?order=owner_l...

reply
aboardRat4
4 hours ago
[-]
I found a printable book shelf poster generator (which is itself a fork).

https://gitlab.com/Lockywolf/bookshelf

reply
pranavm27
5 hours ago
[-]
Love it. The only issue on my mobile is that the scroll surfaces overlap; I would want the shelf to fit perfectly on my screen before I start scrolling.

I vibe coded a library last month for my website, however it's much simpler and has an Antilibrary section for all the stuff I have not read.

reply
kgthegreat
7 hours ago
[-]
While taste is currently the thing we're banking on humans continuing to do, I don't think that will last for long.
reply
neogodless
7 hours ago
[-]
It is easy to defer to the "taste" of the mathematically mixed up assessment of "all internet-recorded human taste" if you prefer. And many will choose that. But many others will choose to remain in charge of their own taste, as best they can, and request that the machines still produce output per their bidding.
reply
MattRix
6 hours ago
[-]
Well no, you just need to tune the taste of the model to produce things that humans find appealing. This has already happened with the image generation models. I don’t see any reason it can’t happen with these code generation models too.

The whole thing feels a bit like god-of-the-gaps situation, where we keep trying to squeeze humanity into whatever remaining little gaps the current generation of AI hasn’t mastered yet.

reply
micromacrofoot
7 hours ago
[-]
yeah there's nothing special about taste most of the time, very few people even have a decent sense of it anyway

you can tell by how many people earnestly share AI generated images, many are completely tasteless but people don't care

reply
wiseowise
7 hours ago
[-]
The author has great taste; you can see it just by visiting their website.
reply
dawnerd
5 hours ago
[-]
I’d love to see how much it cost though, particularly around the images.
reply
zittur
6 hours ago
[-]
I really love how the bookshelf display looks. Most sites just use a standard grid for books, which can feel a bit cookie-cutter. The way you’ve mixed in stacked and bookend-style arrangements is a breath of fresh air, it really stands out.
reply
kgthegreat
7 hours ago
[-]
I wrote about how I think about the separation of intent and execution here : https://bikeshedding.substack.com/p/the-agency-continuum
reply
theturtletalks
2 hours ago
[-]
Will people use SaaS in the future or roll their own?

I’ve been working on opensource.builders, and what I keep seeing is that proprietary software has incentives that make personal software appealing. Commercial software is built for many different use cases, and as it grows it wants to capture more of the market. But with each new use case, new complexity is introduced. At some point, users are “programming”; it’s just that the program is their config. With AI, why not program something that works exactly for your business?

That’s why so many people are getting into vibe coding. They’ve gone through the nightmare of using software where they only need one feature, or tools that have slowly gotten enshittified, and they realize it might actually be easier to build something that fits them instead of wrestling with layers of complexity that exist for everyone else’s edge cases.

reply
walthamstow
5 hours ago
[-]
> I started asking for things I did not need ... Knowing when to delete working code is not something an AI can decide for you.

Very relatable!

reply
ear7h
6 hours ago
[-]
> I own more books than I can read.

> I started asking for things I did not need.

For a community that prides itself on depth of conversation, ideas, etc., I'm surprised to see so much praise for a post like this. I'll be the skeptic. What does it bring to you to vibe code your vibe shelf?

To me, this project perfectly encapsulates the uselessness of AI. Small projects like this are a good learning or relearning experience, and by outsourcing your thinking to AI you deprive yourself of any learning, ownership, or the self-fulfillment that comes with it. Unless, of course, you think engaging in "tedious" activities with things you enjoy has zero value, as if getting lost in the weeds isn't the whole point. Perhaps in one of those books you didn't read, you missed a lesson about the journey being more important than the destination, but idk, I'm more of a film person.

The only piece of wisdom here is the final sentence:

> Taste still does not [get cheaper].

Though, only in irony.

reply
NewsaHackO
1 hour ago
[-]
I don't think you fully understood the purpose of the project. He wanted an end product (the bookshelf app) that he had been putting off due to the time commitment. He did not say he wanted to learn about how to program in general, nor did he even say he liked programming. People care about results and the end product. If you like to program as a hobby, LLMs in no way stop you from doing this. At the end of the day, people with your viewpoint fall short of convincing people against using AI because you are being extremely condescending to the desires of regular people. Also, it is quite ironic that you attempted to make a point about him not reading all 500 books on his bookshelf, yet you don't seem to have read (or understood) the opening section of the post.
reply
phyzix5761
4 hours ago
[-]
How do you search if a book is in there? They don't seem to be in any predictable order.
reply
vtemian
7 hours ago
[-]
This nails what vibe coding actually is. The model handles execution, but intent and taste stay human. That’s where the real leverage is.
reply
godber
5 hours ago
[-]
“the cost of trying ideas had collapsed”

This is a critical observation of the vibocene.

reply
stevesearer
7 hours ago
[-]
Great project!

Vibe coding has really helped me explore skills outside of my comfort zone which can then be applied in combination with other existing skills or interests in new ways.

In the case of your project, I imagine that now that you can gather data such as books from an image of a bookshelf, you can do something similar in infinite other ways.

reply
pixelmonkey
6 hours ago
[-]
Digitizing my physical bookshelf was one of the first fun “vibe coding” projects I did with ChatGPT4o in 2024.

First, I took photographs of all my physical books simply by photographing the bookshelves such that the book spines were visible.

Then passed the photographs with a prompt akin to, "These are photographs of bookshelves. Create a table of book title and book author based on the spines of the books in these photographed shelves." ChatGPT4’s vision model handled this no problem with pretty high accuracy.

I then vibe-coded a Python program with ChatGPT4 to use the Google Books API (an API key for that is free) to generate a table, and then a CSV, of: book title, book author, and isbn13. Google Books API lets you look up an ISBN based on other metadata like title and author easily.

Finally, I uploaded the enriched CSV into a free account of https://libib.com. This is a free SaaS that creates a digital bookshelf and it can import books en masse if you have their ISBNs. You can see the result of this here for my bookshelf:

https://www.libib.com/u/freenode-fr33n0d3

There are some nice titles in there for HN readers! My admin app for Libib (the one at https://libib.com) is more full-featured than the above public website showcases. It's basically software for running small lending libraries. But, in my case, the “lending library” is just my office’s physical bookshelf.

I also added a Libib collection there that is a sync of my Goodreads history, since I read way more Kindle books than physical books these days. That was a similarly vibe-coded project. But easier since Goodreads can export your book collection, including isbn13, to a file.

As for my actual physical bookshelf, it is more a collection of books I either prefer in print, or that are old, or out-of-print, or pre-digital & never-digitized.

I liked the Libib software so much I end up donating to it every year. I originally discovered it because it is used for Recurse Center’s lending library in the Recurse Center space in Brooklyn, NY (https://recurse.com).

Also, Libib has Android, iOS, and iPadOS apps -- these are very basic, but they do allow you to add new books simply by scanning their ISBN barcodes, which is quite handy when I pick up new items.

I did enjoy reading the OP writeup, it’s a fun idea to vibe-code the actual digital bookshelf app, as well!

reply
stanrunge
6 hours ago
[-]
Very, very cool. It's surprisingly difficult to find applications for organizing reading material, and for actually reading it. My current "good enough" solution is just Apple Books, but I've been meaning to make a similar application for this :)
reply
TheChelsUK
5 hours ago
[-]
Please add an rss feed to your website/writing
reply
stevesearer
7 hours ago
[-]
Sometimes when I’m vibe coding I feel like Ender from Ender’s Game: even though I’m making a stupid web app, I’m somehow actually winning a battle across the universe.
reply
Shadowmist
7 hours ago
[-]
Nobody tell him
reply
mihaibalint
7 hours ago
[-]
I love the fact that the browser search functionality works in the bookshelf.
reply
necromanc
7 hours ago
[-]
This is a brilliant project—small, practical, and high-leverage.
reply
damnitbuilds
5 hours ago
[-]
I am glad more people think book spines are important.

I wish book archive sites like archive.org scanned and stored the book spines as well as the covers, but AFAICT none do.

reply
asasidh
6 hours ago
[-]
“Claude handled implementation. I handled taste.”

This is the right mindset.

reply
ikamm
6 hours ago
[-]
This is an LLM-ism.
reply
troupo
7 hours ago
[-]
Speaking of SerpAPI: Why we’re taking legal action against SerpApi’s unlawful scraping https://news.ycombinator.com/item?id=46329109

SerpAPI provides very valuable programmatic access to search that Google is hell-bent on never properly providing.

reply
kaizenb
4 hours ago
[-]
"Execution keeps getting cheaper. Taste still does not." yes yes yes!
reply
guluarte
5 hours ago
[-]
One thing I use the models for is shopping: I do my shopping list in a .txt, copy it and send it to GPT/Claude, and tell it to organize by shelves. Get out of the store in less than 10 minutes lol
reply
kingkongjaffa
7 hours ago
[-]
This is lovely. Claude Code is a great tool for creating software for a user base of one: personal software that runs locally (or on your own website, in your case) and works exactly how you want, without doing anything you don't want.

One-off scripts and single page html/css/js apps that run locally are fantastically accessible now too.

As someone who doesn't code for a living, but can write code, I would often go on hours/day long side quests writing these kind of apps for work and for my personal life. I know the structure and architecture but lack the fluency for speedy execution since I'm not writing code everyday. Claude code fills that speed gap and turned my days/hours long side quests into minutes for trivial stuff, and hours for genuinely powerful stuff at home and at work.

reply
xnx
6 hours ago
[-]
Sounds like the author did things the hard way, when he probably could've uploaded a few seconds of video to Gemini and said "make a virtual bookshelf app": https://fedi.simonwillison.net/@simon/111971103847972384
reply