> This isn’t about one person copying one idea. It’s about the fundamental economics of software changing.
That "this isn't x, it's y" really is a strong tell
Which means people either can't tell, or don't mind.
I am sure that whatever work went into actually trying to implement it was crucial for instructing Claude what to do. System design doesn't come by itself.
I am also building some agents. It is almost hands off at this point.
> provides none
I'm pro LLM/AI, but most of the hype is just pure vibes. There's no evidence, only anecdotes.
All the hype-men I follow either have a stake in it (they work for an LLM provider or have an AI startup) or post billions of examples and zero revenue.
> Stack Overflow, the site that defined a generation of software development, received 3,710 questions last month. That’s barely above the 3,749 it got in its first month of existence. The entire knowledge-sharing infrastructure we built our careers on is collapsing because people don’t need to ask anymore.
"Because people don't need to ask anymore."?!
Yeah, I wouldn't call it exaggerating; I think I would call it a fundamental misunderstanding.
I wanted to comment on the code examples he shared, but they're all closed source. Which is a decision, given the premise of the whole article, err, I mean ad, that implementations are free these days.
It's just that they're asking wherever they expect to get a better answer faster than on SO.
But for code where the hard part isn't making separately designed pieces work together but getting the actual algorithm right, that's where I find LLMs still really fail. Finding the trick that takes your approach from quadratic to N log N, or even just understanding what you mean after you've found the trick yourself: I've had little luck there with LLMs.
I think this is mostly great, because it's the hard stuff that I have always found fun. Properly architecting these CRUD apps, and learning which of the infinite ways to do it are better, was fun as a matter of craftsmanship. But that hits at a different level from implementing a cool new algorithm.
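To make concrete the kind of trick I mean (a hypothetical example, not from the article): checking whether any pair in a list sums to a target is O(n²) with nested loops, but drops to O(n log n) if you sort once and walk two pointers inward.

    # Hypothetical illustration: the same problem solved two ways.
    def pair_sum_quadratic(nums, target):
        # O(n^2): compare every pair.
        for i in range(len(nums)):
            for j in range(i + 1, len(nums)):
                if nums[i] + nums[j] == target:
                    return True
        return False

    def pair_sum_nlogn(nums, target):
        # O(n log n): sort once, then move two pointers inward.
        nums = sorted(nums)
        lo, hi = 0, len(nums) - 1
        while lo < hi:
            s = nums[lo] + nums[hi]
            if s == target:
                return True
            if s < target:
                lo += 1
            else:
                hi -= 1
        return False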
That's the execution part of creating a successful business and it's still entirely missing.
People still argue that distribution is the real bottleneck now. But when the product itself is trivial to build and change, the old dynamics break down. Historically, sales was hard because you had to design and refine a sales motion around a product that evolved slowly and carried real technical risk. You couldn’t afford to pour resources into distribution before the product stabilized, because getting it wrong was expensive.
That constraint is gone. The assumptions and equations we relied on to understand SaaS no longer apply—and the industry hasn’t fully internalized what that means yet.
But nonfunctional requirements such as reliability, performance, and security are still extremely hard to get right, because they require not just code but many correct organizational decisions.
Since customers associate these nonfunctional requirements with a brand, I don't see big SaaS players having a problem.
For new brands, it's as hard as ever to establish trust. Maybe coding is a bit faster due to AI, but I'm not yet convinced that vibe coders are the people on top of whom you can build a resilient organization that achieves excellence in nonfunctional requirements.
Brand means almost nothing when a competitor can price the software 90% cheaper, which is what we are going to see.
Even on a technical level, the interfaces with country-specific legacy software used all over the place are so badly documented that the AI won't help you shortcut these kinds of integrations. There aren't 10k Stack Overflow posts about each piece of niche software to train from.
I think developers who have an inclination towards UI/UX and a good grip on the technical side are particularly well positioned right now.
While your statement is true, this is actually a very minor reason why sales is hard.
My bet: front end devs who need mocks to build something that looks nice get crowded out by UX designers with taste as code generation moves further into "good enough" territory.
Then those designers get crowded out as taste generation moves into "good enough" territory.
My decades of experience suggest that the opposite will happen. People will realize that the software industry is 100% moat and 0% castle.
People will build great software that nobody will use while a few companies will continue to dominate with vaporware.
That makes no sense. "Dominate" implies people use or buy your software. If you produce nothing ("vaporware"), how can you dominate?
Except for the token cost maybe.
A lot of the cost of mature SaaS products comes from security, scaling, expensive sales teams, etc. For me, if I have something sandboxed, not available to the public, and just powerful enough to serve only _me_ as a customer, then I don't need to pay those extra costs and I can build something a lot simpler, while still maintaining the core feature that I need.
We already know the hard part of software engineering is designing and implementing code that is maintainable.
Can LLMs reliably create software and maintain it transparently without introducing regressions? How do people with no knowledge of software guide LLMs to build a quality test suite that prevents regressions?
Or is the expectation that every new major release is effectively a rewrite from scratch? Don't they have to maintain consistency with the UI, the database, and other existing artifacts?
Writing a formbuilder and saying you've replicated Typeform is like finishing a todo app and saying you've replicated Jira. Yes, in a way I guess...but there is way more to the product and that's usually where the hard parts are.
While I wouldn't say execution is necessarily "cheap" for everything, ChatGPT and Gemini recently helped me build out a little Spotify playlist generator [1] that scans my top 100 artists from the last 12 months, then generates a playlist from the bottom 50% of their songs by popularity, with an option for 1 or 2 songs per artist.
Sadly the Spotify API limits will never allow me to offer it to more than 25 people at a time, but I get so bored of their algorithm playing me the same top songs from artists that it's a fun way for me to explore "lesser lights", and something I'd absolutely never have been able to build before, let alone spin up in a couple of evenings.
It's quite liberating as a non-dev suddenly having these new tools available that's for sure.
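For anyone curious, here's a rough sketch of what a generator like this could look like using the spotipy library. It's not the actual code, and the scopes, time range, and playlist name are just assumptions:

    # Rough sketch only, not the actual implementation; scopes, time range,
    # and playlist name are assumptions.
    import spotipy
    from spotipy.oauth2 import SpotifyOAuth

    sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
        scope="user-top-read playlist-modify-private"))

    # Top artists come back at most 50 per call, so page twice for ~100.
    artists = []
    for offset in (0, 50):
        page = sp.current_user_top_artists(limit=50, offset=offset,
                                           time_range="long_term")
        artists.extend(page["items"])

    # Keep the less popular half of each artist's top tracks, then take 1-2.
    songs_per_artist = 2
    track_uris = []
    for artist in artists:
        tracks = sp.artist_top_tracks(artist["id"])["tracks"]
        if not tracks:
            continue
        tracks.sort(key=lambda t: t["popularity"])   # least popular first
        bottom_half = tracks[: max(1, len(tracks) // 2)]
        track_uris.extend(t["uri"] for t in bottom_half[:songs_per_artist])

    # Create a private playlist and add tracks in batches of 100 (API limit).
    user_id = sp.current_user()["id"]
    playlist = sp.user_playlist_create(user_id, "Lesser Lights", public=False)
    for i in range(0, len(track_uris), 100):
        sp.playlist_add_items(playlist["id"], track_uris[i:i + 100])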
It's way past the point of "just" doing MVPs or simple proofs of concept. I'm talking about user auth, dynamic input parsing, calendar views, tags, projects, history of events and more, given a few prompts.
Great ideas are rare.
"AI startups say the promise of turning dazzling models into useful products is harder than anyone expected":
https://www.wired.com/story/artificial-intelligence-startups...
This is not new. There is tech that enables new possibilities, but it's not a f---ing magic wand.
I was just coding a personal website the other day while waiting for our number to be called at the DMV. I couldn’t really review the code but it did give me a chance to test on mobile.
This is without doing anything special, just using one instance of Claude Opus 4.5 and exe.dev.
Ironically, a lot of the monotonous work you were forced to do helped you immerse yourself in the problem domain and equipped you for the hard parts. This isn't just about AI, btw; in general, when people automate away the easy parts, the hard parts suddenly seem more difficult because there's no ramp-up.
While I know AI coding is helpful in some ways, the mode of work where you keep getting distracted while the agent works is much less productive than just grinding on the problem.
I mean AI also helps you stay in the zone, but this 'casual' approach to work ultimately results in things not getting done, in my personal experience.
- Driftless sounds like it might be better as a Claude Code skill or hook
- Deploycast is an LLM summarization service
- Triage also seems like it might be more effective inside CC as a skill or hook
In other words all these projects are tooling around LLM API calls.
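To put that concretely, the core of a "summarization service" of this sort is essentially one API call. This is a generic sketch; the model name and prompt are placeholders, not taken from any of those projects:

    # Generic sketch of "tooling around an LLM API call"; model and prompt
    # are placeholders, not taken from Deploycast or the other projects.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize(text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Summarize the following text in a few bullet points."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content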
> What was valuable was the commitment. The grit. The planning, the technical prowess, the unwavering ability to think night and day about a product, a problem space, incessantly obsessing, unsatisfied until you had some semblance of a working solution. It took hustle, brain power, studying, iteration, failures.
That isn't going to go away. Here's another idea: a discussion tool for audio workflows. Pre-LLMs the difficult part of something like this was never code generation.
Treat it rhetorically.
There can be no question that the cost coefficients of ideas vs. execution have changed with LLMs.
There's a hilarious thread on Twitter where someone "built a browser" using an LLM feedback loop, and it just pasted together a bunch of Servo components, some random other libraries, and tens of thousands of lines of spaghetti glue to make something that can render a webpage in somewhere between a few seconds and a minute.
This will eventually get better once they learn how to _actually_ think and reason like us - and I don't believe by any means that they do - but I still think that's a few years out. We're still at what is clearly a strongly-directed random search stage.
The industry is going through a mass psychosis event right now thinking that things are ready for AI loops to just write everything, when the only real way for them to accomplish anything is by just burning tokens over and over until they finally stumble across something that works.
I'm not arguing that it won't ever happen. I think the true endgame of this work is that we'll have personal agents that just do stuff for us, and the vast majority of the value of the entire software industry will collapse as we all return to writing code as a fun little hobby, like those folks who spend hours making bespoke furniture. I, for one, look forward to this.
I just hope that we retain some version of autonomy and privacy, because no one wants the tech giants listening in on every single word you utter just because your agent heard it. No one wants it, but only some, not many, care.
Agents deployed locally should be the goal.
LLMs make it a lot easier to build MVPs, but the hard work of VALIDATING problems and their solutions, which IMO was always >80% of the work for a successful founder, is harder than ever. With AI we now get 100 almost-useful solutions for every real problem.
Nothing replaces making simple UX instead of complicated kitchen sink products.
It’s easy to make stuff. It’s harder to make stuff people want.
I am thankful for the increase in product velocity and I also recognize that a lot of stuff people make isn’t what people want.
Product sense and intuition still matter.
And some folks especially keep claiming that one just needs to get better at prompting and write a detailed spec.
Wanna know what a detailed spec is called? An unambiguous one? It's called code.
LLMs still feel like a very roundabout way of reinventing code. But instead of just a new language, it's a language that nondeterministically creates "code", or a resemblance thereof.
And I am aware that this is currently not a popular opinion on HN, so keep the downvotes coming.
If you use LLMs outside the popular GitHub languages, they will fail hard on you. It's glorified text completion, that's what it is.
Have they iterated on user feedback? Have they fixed obscure issues? Made any major changes after the initial version?
More importantly, can the author claim with a straight face that they no longer need to read or understand the code that has been produced?
Just another one of those “look, I built a greenfield pet project over the weekend, software engineering is dead.”
There is also the matter of having ideas that are good and knowing how to make them into good software, not something that simply "technically works". LLMs are not enough to overcome this barrier, and the author's examples seem to prove the point. The "working products with test suites, documentation, and polish" that are just another batch of LLM front-ends are frankly unimpressive. Is this the best that AI can offer?
> easily
reproducible now?
I mean, sometimes the hard work is creating object number 1. There are a crapload of inventions that we look back on and go "why did it take so long for us to make the first one", then after that whatever object/idea it was explodes over the planet because of the ease of implementation and the useful practical application.
I think this statement is marred by our modern sensibilities that say everything must be profitable or it's a bad idea.
Yes.
> LLMs don't change the equation.
No. They make more things easily replicable.
"We made this and all it took was 500 juniors working for a year" used to be a reasonable business moat of effort. Now it's not.