What is going on right now? (catskull.net)
299 points | 13 hours ago | 56 comments | HN
aeon_ai
7 hours ago
[-]
AI is a change management problem.

Using it well requires a competent team, working together with trust and transparency, to build processes designed to effectively balance human guidance/expertise with what LLMs are good at. Small teams are doing very big things with it.

Most organizations, especially large organizations, are so far away from a healthy culture that AI is amplifying the impact of that toxicity.

Executives who interpret "Story Points" as "how much time is that going to take" are asking why everything isn't half a point now. They're so far removed from the process of building maintainable and effective software that they're simply looking for AI to serve as a simple pass through to the bottom line.

The recent study showing that 95% of AI pilots failed to deliver ROI is a case study in the ineffectiveness of modern management to actually do their jobs.

reply
grey-area
6 hours ago
[-]
Or maybe it's just not as good as it's been sold to be. I haven't seen any small teams doing very big things with it, which ones are you thinking of?
reply
sim7c00
6 hours ago
[-]
You're not wrong. The only 'sane' approach I've seen with vibe coding is making a PoC to see if some concept works, then rewriting it entirely to make sure it's sound.

Besides just weird or broken code, anything exposed to user input is usually severely lacking sanity checks etc.

LLMs are not useless for coding, but IMHO letting LLMs do the coding will not yield production-grade code.

reply
bbarnett
6 hours ago
[-]
Koko the gorilla understood language, but most others of her ilk simply make signs because a thing will happen.

Move hand this way and a human will give a banana.

LLMs have no understanding at all of the underlying language, they've just seen that a billion times a task looks like such and such, so have these tokens after them.

reply
jedwards1211
47 minutes ago
[-]
There’s been a lot of criticism that Koko’s language abilities were overblown and her expressions were overinterpreted as well.
reply
SirHumphrey
6 hours ago
[-]
What does it matter if they have understanding of the underlying language or not? Heck, do humans even have "understanding of the underlying language"? What does that even mean?

It's a model. It either predicts usefully or not. How it works is mostly irrelevant.

reply
ryandrake
4 hours ago
[-]
I think that more often than we'd like to admit, we humans are also just not thinking that much about or understanding what we are communicating, and just outputting the statistically most likely next word over and over.
reply
shagmin
2 hours ago
[-]
Defining what that means exactly is one endeavor. But it does matter to the "how": whatever understanding turns out to mean, lacking it implies a drastically limited set of capabilities - a ceiling - compared to having it.
reply
sim7c00
4 hours ago
[-]
Interesting take. I don't know a lot about grammar, yet in my own language I can speak fairly OK...

All I know about these LLMs is that even if they understand language or can create it, they know nothing of the subjects they speak of.

Copilot told me to cast an int to a str to get rid of an error.

Thanks, Copilot; it was kernel code.

Glad I didn't do it :/. I just closed the browser and opened the man pages. I get nowhere with these things. It feels like you need to understand so much that it's likely less typing to just write the code. Code is concise and clear after all, mostly unambiguous. Language, on the other hand...

I do like it as a bit of a glorified Google, but looking at the code it outputs, my confidence in its findings lessens with every prompt.

reply
anuramat
4 hours ago
[-]
Nobody knows what intelligence is, yet somehow everyone has a strong opinion on what it isn't; after all, how could piecewise affine transformations/Markov chains/differential equations EVER do X?
reply
Piskvorrr
5 hours ago
[-]
In which case...what good is a model that predicts semi-randomly? Oh.

("But it works - when it works" is a tautology, not a useful model)

reply
anuramat
5 hours ago
[-]
What does "semi-random" even mean? Are humans not "semi-random" in the same sense?
reply
A4ET8a8uTh0_v2
6 hours ago
[-]
POC approach seems to work for me lately. It still takes effort to convince manager that it makes sense to devote time to polishing it afterwards, but some of the initial reticence is mitigated.

edit: Not a programmer. Just a guy who needs some stuff done for some of the things I need to work on.

reply
michaeldoron
5 hours ago
[-]
A team of 9 people made Base44, a product for vibe-coding apps, and sold it for $80M within 6 months.

https://techcrunch.com/2025/06/18/6-month-old-solo-owned-vib...

reply
piva00
5 hours ago
[-]
That's just an example of surfing on the incestuous hype, they created a vibe-coded tool that was bought by Wix to help vibe-code other stuff.

Is there any example of successful companies created mostly/entirely by "vibe coding" that isn't itself a company in the AI hype? I haven't seen any, all examples so far are similar to yours.

reply
bubblyworld
6 hours ago
[-]
As always, two things can be true. Ignore both the hucksters and the people loudly denigrating everything LLM-related, and somewhere in between you find the reality.

I'm in a tiny team of 3 writing b2b software in the energy space and claude code is a godsend for the fiddly-but-brain-dead parts of the job (config stuff, managing cloud infra, one-and-done scripts, little single page dashboards, etc).

We've had much less success with the more complex things like maintaining various linear programming/neural net models we've written. It's really good at breaking stuff in subtle ways (like removing L2 regularisation from a VAE while visually it still looks like it's implemented). But personally I still think the juice is worth the squeeze, mainly I find it saves me mental energy I can use elsewhere.
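
To make the "subtle break" concrete, here is a minimal sketch (hypothetical names, PyTorch-style, not our actual model) of the kind of edit we've caught: the L2 term is still computed, so the function reads as if regularisation is in place, but it has been quietly dropped from the returned loss.

    import torch

    def vae_loss(recon_x, x, mu, logvar, model, l2_weight=1e-4):
        # Reconstruction + KL divergence + L2 weight penalty.
        recon = torch.nn.functional.mse_loss(recon_x, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        # The L2 term is still computed, so everything *looks* implemented...
        l2 = sum(p.pow(2).sum() for p in model.parameters())
        # ...but the edit silently dropped it from the total.
        return recon + kl  # was: recon + kl + l2_weight * l2

A diff like that sails through a skim that only checks whether all the pieces are present.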

reply
datadrivenangel
6 hours ago
[-]
I've seen small teams of a few people write non-trivial software services with AI that are useful enough to get users and potentially viable as a business.

We'll see how well they scale.

reply
deepburner
6 hours ago
[-]
I'm rather tired of this AI apologism bit where every downside is explained away as "it would've happened anyways". AI destroying people's brains and causing psychosis? They would've gone psychotic anyways! AI causing company culture problems? The company was toxic anyways!

Instruments are not as inculpable as you think they are.

reply
anuramat
4 hours ago
[-]
What's your point? We should be more careful with it? This is the "trial and error" part
reply
dingdingdang
6 hours ago
[-]
This, so many times over: using/introducing AI in an already managerially dysfunctional organisation is like giving automatic weapons to a band of Vikings - it will with utmost certitude result in a quickening of their demise.

A demise that, in the case of a modern dysfunctional organisation, would otherwise often be arriving a few years later as a result of complete and utter bureaucratic failure.

My experience is that all attempts to elevate technology to a "pivotal force" for the worse always miss the underlying social and moral failure of the majority (or a small, but important, managerial minority) to act for the common good rather than egotistic self-interest.

reply
TremendousJudge
4 hours ago
[-]
> Executives who interpret "Story Points" as "how much time is that going to take"

Aside, but I have yet to meet a single person (dev, QA, PM, exec) who doesn't do this.

reply
Quarrelsome
3 hours ago
[-]
So nobody understands why we use "story points" instead of time estimates? I feel like some people do appreciate that it's not about the number of points but the quantitative difference between the items up for work.
reply
davedx
6 hours ago
[-]
I saw that study; it was indeed about pilots. When do you ever expect a pilot to immediately start driving big revenue increases? The whole thing is a strawman.
reply
xnorswap
6 hours ago
[-]
I won't say too much, but I recently had an experience where it was clear that when talking with a colleague, I was getting back ChatGPT output. I felt sick, like this just isn't how it should be. I'd rather have been ignored.

It didn't help that the LLM was confidently incorrect.

The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.

In the human world, you can with legacy stuff get in a situation where "everyone knows" that the foo setting is actually the setting for Frob, but with an LLM it'll happily try to configure Frob or worse, try to implement Foo from scratch.
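
A toy illustration of what I mean, with made-up names (nothing from a real codebase):

    # Legacy config where the names lie: "everyone knows" the foo_* keys
    # really control Frob.
    CONFIG = {
        "foo_timeout": 30,  # actually Frob's connection timeout
        "foo_retries": 5,   # actually how many times Frob reconnects
    }

    class Frob:
        def __init__(self, timeout: int, retries: int) -> None:
            self.timeout = timeout
            self.retries = retries

    def start_frob() -> Frob:
        # A human maintainer knows to read the foo_* keys here; an LLM that
        # takes the names at face value will wire them into Foo instead, or
        # happily implement a Foo from scratch that nothing else uses.
        return Frob(timeout=CONFIG["foo_timeout"], retries=CONFIG["foo_retries"])

The tribal knowledge lives in people's heads, not in anything the model can see.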

I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason through their faulty reasoning.

With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes ( no matter how apologetic it gets ).

reply
clickety_clack
5 hours ago
[-]
Ugh. I worked with a PM who used AI to generate PRDs. Pretty often we'd get to a spot where we were like "what do you mean by this", and he'd respond that he didn't know; the AI wrote it. It's like he just stopped trying to actually communicate an idea and replaced it with performative document creation. The effect was to basically push his job of understanding requirements down to me, and I didn't really want to interact with someone who couldn't be bothered to figure out his own thoughts before trying to put me to work implementing them, so I left the team.
reply
nradov
2 hours ago
[-]
Well that's when you escalate the concern (tactfully and confidentially) to your resource manager and/or the Product Manager's resource manager. And if they don't take corrective action then it's time to look for a new job.
reply
clickety_clack
41 minutes ago
[-]
If I was stuck there I probably would have pushed it, but I had better options than setting out on an odyssey to reform a product team.

It got me thinking that in general, people with options will probably sort themselves out of those situations and into organizations with like-minded people who use AI as a tool to multiply their impact (and I flatter myself to think that it will be high ability people who have those options), leaving those more reliant on AI to operate at the limit of what they get from OpenAI et al.

reply
siva7
5 hours ago
[-]
What the heck, the universal job description of a PM is to genuinely understand the requirements of their product. I'm always baffled how such people stay in those roles without getting fired.
reply
BitwiseFool
6 hours ago
[-]
>"It didn't help that the LLM was confidently incorrect."

Has anyone else ever dealt with a somewhat charismatic know-it-all who knows just enough to give authoritative answers? LLM output often reminds me of such people.

reply
SamBam
5 hours ago
[-]
That’s a great question — and one that highlights a subtle misconception about how LLMs actually work.

At first glance, it’s easy to compare them to a charismatic “know-it-all” who sounds confident while being only half-right. After all, both can produce fluent, authoritative-sounding answers that sometimes miss the mark. But here’s where the comparison falls short — and where LLMs really shine:

(...ok ok, I can't go on.)

reply
ryandrake
5 hours ago
[-]
Most of the most charismatic, confident know-it-alls I have ever met have been in the tech industry. And not just the usual suspects (founders, managers, thought leaders, architects) but regular rank-and-file engineers. The whole industry is infested with know-it-alls. Hell, HN is infested with know-it-alls. So it's no surprise that one of the biggest products of the decade is an Automated Know-It-All machine.
reply
flatb
3 hours ago
[-]
Thereby self correcting perhaps.
reply
mwigdahl
4 hours ago
[-]
Perfect! You really got to the core of the matter! The only thing I noticed is that your use of the em-dash needs to not be bracketed with spaces on either end. LLMs—as recommended by most common style guides—stick to the integrated style that treats the em-dash as part of the surrounding words.
reply
matt_kantor
3 hours ago
[-]
It bums me out that LLMs are ruining em dashes. I like em dashes and have used them for decades, but now I worry that when I do people will assume my writing is LLM output.

What's next—the interrobang‽

reply
lcnPylGDnU4H9OF
1 hour ago
[-]
I'm hoping it's not the semi-colon; I use that a lot.
reply
mvdtnz
1 hour ago
[-]
This isn't funny or clever. Stop it.
reply
bigfishrunning
6 hours ago
[-]
If those people are wrong enough times, they are either removed from the organization or they scare anyone competent away from the organization, which then dies. LLMs seem to be getting a managerial pass (because the cost is subsidized by mountains of VC money and thus very low (for now)) so only the latter outcome is likely.
reply
DamnInteresting
5 hours ago
[-]
Colloquially known as "bullshitters."[1]

[1] https://dictionary.cambridge.org/us/dictionary/english/bulls...

reply
XxiXx
5 hours ago
[-]
There's even a name for such a person: Manager
reply
SoftTalker
5 hours ago
[-]
Yes, they have been around forever, they are known as bullshitters.

The bullshitter doesn't care whether what he says is correct or not, as long as it's convincing.

https://en.wikipedia.org/wiki/On_Bullshit

reply
fluoridation
5 hours ago
[-]
I'm pretty sure I'm that guy on some topics.
reply
BitwiseFool
8 minutes ago
[-]
>"I'm pretty sure I'm that guy on some topics."

The use of 'pretty sure' disqualifies you. I appreciate your humility.

reply
pmarreck
5 hours ago
[-]
Sounds like every product manager I've ever had, lol (sorry PM's!)
reply
duxup
3 hours ago
[-]
Agreed on bad human code > bad LLM code.

Bad human code is at least more understandable to me in what it was trying to do. There's a goal you can figure out, and you can fix it. It generally operates within the context of the larger code to some extent.

Bad LLM code can be broken from start to finish in ways that make zero sense. Even worse when it re-invents the wheel and replaces massive amounts of code. Humans aren't likely to just make up a function or methods that don't exist and deploy it. That's not the best example, as you'd likely find that out fast, but it's the kind of screw-up that indicates the entire chunk of LLM code you're examining may in fact be fundamentally flawed beyond normal experience. In some cases you almost need to re-learn the entire codebase to truly realize "oh, this is THAT bad and none of this code is of any value".

reply
ryandvm
4 hours ago
[-]
I had an experience earlier this week that was kind of surreal.

I'm working with a fairly arcane technical spec that I don't really understand so well so I ask Claude to evaluate one of our internal proposals on this spec for conformance. It highlights a bunch of mistakes in our internal proposal.

I send those off to someone in our company that's supposed to be an authority on the arcane spec with the warning that it was LLM generated so it might be nonsense.

He feeds my message to his LLM and asks it to evaluate the criticisms. He then messages me back with the response from his LLM and asks me what I think.

We are functionally administrative assistants for our AIs.

If this is the future of software development, I don't like it.

reply
xeonmc
4 hours ago
[-]
In your specific case, I think it’s likely an intentionally pointed response to your use of LLM.
reply
ryandvm
1 hour ago
[-]
I'm certain it wasn't in this particular case, but yeah, that's definitely going to happen as we all become more annoyed by people shoveling AI-generated crap in our faces and asking us to think about it for them.
reply
euroderf
4 hours ago
[-]
> "everyone knows" that the foo setting is actually the setting for Frob, but with an LLM it'll happily try to configure Frob or worse, try to implement Foo from scratch.

Making the implicit explicit is a task for your documentation team, who should also be helping prep inputs for your LLMs. If foo and Frob are the same, have the common decency to tell the LLM...

reply
esoterae
2 hours ago
[-]
The driver of this output is a uniform lack of comprehension all the way down.
reply
chasd00
1 hour ago
[-]
I gave a PPT of 4-5 slides laying out an approach to implementing a business requirement to a very junior dev. I wanted to make sure they understood what was going on, so I asked them to review the slides and then explain them back to me as if I were seeing them for the first time. What I got back was the typical overly verbose and articulate review from ChatGPT or some other LLM. I thought it was pretty funny that they thought it would work, let alone be acceptable to do that. When I called them and asked, "now do it for real", I ended up answering a dozen questions but hung up knowing they actually did understand the approach.
reply
clutchdude
5 hours ago
[-]
I'm seeing the worst of both worlds, where a human support engineer just blindly copies and pastes whatever the internal LLM spits out.
reply
jjice
5 hours ago
[-]
It's so upsetting to see people take the powerful tool that is an LLM and pretend like it's a solution for everything. It's not. They're awesome at a lot of things, but they need a user that has context and knowledge to know when to apply or direct it in a different way.

The amount of absolutely shit LLM code I've reviewed at work is so sad, especially because I know the LLM could've written much better code if the prompter did a better job. The user needs to know when the solution is viable for an LLM to do or not, and a user will often need to make some manual changes anyway. When we pretend an LLM can do it all, it creates slop.

I just had a coworker a few weeks ago produce a simple function that wrapped a DB query (normal so far), but with 250 lines of tests for it. All the code was clearly LLM generated (the comments explaining the most mundane of code were the biggest giveaway). The tests tested nothing. They mocked the ORM and then tested the return of the mock. We were testing that the mocking framework worked? I told him that I didn't think the tests added much value since the function was so simple and that we could remove them. He said he thought they provided value, with no explanation, and merged the code.
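
For what it's worth, the pattern looks roughly like this (a minimal sketch with hypothetical names, not the actual code): the ORM is mocked, so the only thing the assertion checks is that MagicMock echoes back the value it was just configured with.

    from unittest.mock import MagicMock
    import unittest

    def get_active_users(orm):
        # Thin wrapper around an ORM query (hypothetical API).
        return orm.query("User").filter(active=True).all()

    class TestGetActiveUsers(unittest.TestCase):
        def test_returns_users(self):
            orm = MagicMock()
            orm.query.return_value.filter.return_value.all.return_value = ["alice"]
            # This only verifies that the mock returns what we told it to
            # return; no real query logic is ever exercised.
            self.assertEqual(get_active_users(orm), ["alice"])

    if __name__ == "__main__":
        unittest.main()

Multiply that by 250 lines and you get coverage numbers that mean nothing.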

Now fast forward to the other day and I run into the rest of the code again and now it's sinking in how bad the other LLM code was. Not that it's wrong, but it's poorly designed and full of bloat.

I have no issue with the LLM - they can do some incredible things and they're a powerful tool in the tool belt, but they are to be used in conjunction with a human that knows what they're doing (at least in the context of programming).

Kind of a rant, but I absolutely see a future where some code bases are well maintained and properly built, while others have tacked on years of vibe-coded trash that now only an LLM can even understand. And the thing that will decide which direction a code base goes in will be the engineers involved.

reply
chasd00
1 hour ago
[-]
> Kind of a rant, but I absolutely see a future where some code bases are well maintained and properly built, while others have tacked on years of vibe-coded trash that now only an LLM can even understand.

This is offshoring all over again. At first, every dev in the US was going to be out of a job because of how expensive they were compared to offshore devs. Then the results started coming back, and there was some very good work done offshore, but there were tons and tons of stuff that had to be unwound and fixed by onshore teams. Entire companies/careers were dedicated to just fixing stuff coming back from offshore dev teams. In the end, it took a mix of both to realize more value per dev $.

reply
SoftTalker
5 hours ago
[-]
> I absolutely see a future where some code bases are well maintained and properly built, while others have tacked on years of vibe-coded trash

Technical debt at a payday loan interest rate.

reply
pmarreck
5 hours ago
[-]
This is why [former Codeium] Windsurf's name is so genius.

Windsurfing (the real activity) requires multiple understandings:

1) How to sail in the first place

2) How to balance on the windsurfer while the wind is blowing on you

If you can do both of those things, you can go VERY fast and it is VERY fun.

The analogy to the first thing is "understanding software engineering" (to some extent). The analogy to the second thing is "understanding good prompting while the heat of deadlines is on you". Without both, you are just creating slop (falling in the water repeatedly and NOT going faster than either surfing or sailing alone). Junior devs that are leaning too hard on LLM assistance right off the bat are basically falling in the water repeatedly (and worse, without realizing it).

I would at minimum have a policy of "if you do not completely understand the code written by an LLM, you will not commit it." (This would be right after "you will not commit code without it being tested and the tests all passing.")

reply
siva7
5 hours ago
[-]
That's why some teams have the rule that the PR author isn't allowed to merge; only one of the approvers can.
reply
whalesalad
6 hours ago
[-]
A Notion comment on a story the other day started with "you're absolutely right", and that is when I had to take a moment outside for myself.
reply
gjsman-1000
5 hours ago
[-]
I swear that in 3 years, managers are going to realize this constant affirmation… causes staff to lose mental tolerance for anything not clappy-happy. Same with schools.
reply
wiseowise
5 hours ago
[-]
It already is. Anything that isn't cheerful, fake-salesman speak is interpreted as hostile.
reply
slipperydippery
5 hours ago
[-]
Yeah, the safest professional tone to adopt is now something like what you'd use talking to someone else's very stupid dog rather than to a co-worker you respect. It's gross.
reply
HankStallone
6 hours ago
[-]
It's annoying when it apologizes for a "misunderstanding" when it was just plain wrong about something. What would be wrong with it just saying, "I was wrong because LLMs are what they are, and sometimes we get things very wrong"?

Kinda funny example: The other day I asked Grok what a "grandparent" comment is on HN. It said it's the "initial comment" in a thread. Not coincidentally, that was the same answer I found in a reddit post that was the first result when I searched for the same thing on DuckDuckGo, but I was pretty sure that was wrong.

So I gave Grok an example: "If A is the initial comment, and B is a reply to A, and C a reply to B, and D a reply to C, and E a reply to D, which is the grandparent of C?" Then it got it right without any trouble. So then I asked: But you just said it's the initial comment, which is A. What's the deal? And then it went into the usual song and dance about how it misunderstood and was super-sorry, and then ran through the whole explanation again of how it's really C and I was very smart for catching that.

I'd rather it just said, "Oops, I got it wrong the first time because I crapped out the first thing that matched in my training data, and that happened to be bad data. That's just how I work; don't take anything for granted."

reply
redshirtrob
5 hours ago
[-]
Ummm, are you saying that C is the grandparent of C, or do you have a typo in your example? Sure, the initial comment is not necessarily the grandparent, but in your ABCDE example, A is the grandparent of C, and C is the grandparent of E.

Maybe I'm just misreading your comment, but it has me confused enough to reset my password, login, and make this child comment.

reply
HankStallone
3 hours ago
[-]
Yes, it was a typo; I meant to say I asked it the grandparent of E. Thanks for catching that.
reply
SideburnsOfDoom
3 hours ago
[-]
> I'd rather it just said ...

Yes, but why would it? "Oops, I got it wrong the first time because I crapped out the first thing that matched in my training data" isn't in the training data. Yet.

So it can't come out of the LLM: there's no actual introspection going on in any of these rounds, just use of the training data.

reply
ivanjermakov
6 hours ago
[-]
> I was getting back chat GPT output

I would ask them for an apple pie recipe and report to HR

reply
japhyr
5 hours ago
[-]
I get that this is a joke, but the bigger issue is that there's no easy fix for this because other humans are using AI tools in a way that destroys their ability to meaningfully work on a team with competent people.

There are a lot of people reading replies from more knowledgeable teammates, feeding those replies into LLMs, and pasting the response back to their teammates. It plays out in public on open source issue threads.

It's a big mess, and it's wasting so much of everyone's time.

reply
ivanjermakov
5 hours ago
[-]
As with every other problem with no easy fix, if it is important, it should be regulated. It should not be hard for a company to prohibit LLM-assisted communication if management believes that it is inherently destructive (e.g. feeding generated messages into message summarizers).
reply
chasd00
1 hour ago
[-]
> I would ask them for an apple pie recipe and report to HR

I do this sometimes, except I reply asking them to rephrase their comment in the form of a poem. Then I screenshot the response and add it as an attachment before the actual human deletes the comment.

reply
wildzzz
2 hours ago
[-]
I had a QA inspector asking me a question on Teams about some procedure steps before we ran a test. I answered, and he replied back with a message absolutely dripping in AI slop. I was expecting "ok thanks I'll tell them" and instead got back "Thank you. I really appreciate your response. I'll let them know and I'm sure they will feel relieved to know your opinion." Like wtf is that. I had to make sure I was talking to the right guy. This guy definitely doesn't talk like that in person. It's not my opinion, and I highly doubt anyone was worried to the point they'd feel relief to hear my clarification.
reply
ivanjermakov
2 hours ago
[-]
The fun part is that it's immediately obvious to everyone who has worked with LLMs. I wonder what future "enhancements" big tech will come up with to make slop speech less robotic/recognizable.

And it's unfortunate that many people will start assuming long texts are generated by default. Related XKCD: https://xkcd.com/3126/

reply
seethishat
5 hours ago
[-]
Ignorant confidence is the best kind of confidence ;)
reply
ChrisMarshallNY
6 hours ago
[-]
> So what good are these tools? Do they have any value whatsoever?

In my case, yes, but I think I use it differently from the way that most do.

First, for context, I'm a pretty senior developer, and I've been doing code since 1983, but I'm currently retired, and most of my work is alone (I've found that most young folks don't want to work with people my age, so I've calibrated my workflow to account for that).

I have tried a number of tools, and have settled on basically just using ChatGPT and Perplexity. I don't let them write code directly, but I often take the code they give me, and use it as a starting point for implementation. Sometimes, I use it wholesale, but usually, I do a lot of modification (often completely rewriting).

I have found that they can get into "death spirals," where their suggestions keep getting worse and worse. I've learned to just walk away, and try something else, when that happens. I shudder to think of junior engineers, implementing the code that comes from these.

The biggest value, to me, is that I can use them as an "instant turnaround" StackOverflow, without the demeaning sneers. I can ask the "stupidest" question; one that I could easily look up, myself, but it's faster to use ChatGPT, and I'll usually get a serviceable answer, almost instantly. That's extremely valuable, to me.

I recently spent a few weeks, learning about implementing PassKeys in iOS. I started off "cold," with very little knowledge of PKs, in general, and used what ChatGPT gave me (server and client) verbatim, then walked through the code, as I learned. That's usually how I learn new tech. It's messy, but I come out of it, with a really solid understanding. The code I have now, is almost unrecognizable, from what I started with.

reply
kelipso
3 hours ago
[-]
This is exactly how I use it. I’m almost done with a personal project where I didn’t even know how to begin to start, but now I have a decent handle on the codebase.
reply
DrillShopper
4 hours ago
[-]
Your use case is one that I have been gently leaning into for small projects at work and personal projects - the LLM is used to give me an initial idea and code. In a sense, the LLM writes "the one I throw away", and by exploring the limitations of what it produces I both get a better idea of the problem domain and more confidence in actually implementing something useful.
reply
franciscop
6 hours ago
[-]
This is probably why I love using Zed for my hobby dev: it doesn't try to be too clever about AI. It's still there, and when I do want some AI stuff it can be seamlessly prompted, but for normal day-to-day work the AI steps back and I can just code. In contrast, using AI at work with VSCode, I feel like the tools get too much in the way, particularly in 2 categories:

- Fragile interaction. There are popups in VSCode everywhere, and they are clickable. Too often I try to hover over a particular place and end up clicking on one of those. The AI autocomplete also feels way too intrusive: press the wrong key combination and BAM, I get a huge amount of code I didn't intend to get.

- Train of thought disruption. Since the AI's long auto-complete is sometimes useful (about a third of the time), I do end up reading it and getting distracted from my original "building up" thinking and switch to "explore" thinking, which kind of dismantles the abstraction castle.

I haven't seen either of those issues in Zed. It really brought back the joy of programming in my free time. I also think both of these issues are about the implementation more than the actual feature.

reply
anymouse123456
6 hours ago
[-]
AI has been great for UX prototypes.

Get something stood up quickly to react to.

It's not complete, it's not correct, it's not maintainable. But it's literal minutes to go from a blank page to seeing something clickable-ish.

We do that for a few rounds, set a direction and then throw it in the trash and start building.

In that sense, AI can be incredibly powerful, useful and has saved tons of time developing the wrong thing.

I can't see the future, but it's definitely not generating useful applications out of whole cloth at this point in time.

reply
criddell
6 hours ago
[-]
For me it's useful in those areas I don't venture into very often. For example I needed a powershell script recently that would create a little report of some registry settings. Claude banged out something that worked perfectly for me and saved me an hour of messing around.
reply
chasd00
1 hour ago
[-]
Yeah, they work pretty well for scripts. I use Claude to create scripts I need for CSV transforms and things: read all these files in a directory, merge them together on some key, convert this to that, and then output a CSV with the following header row. For things like that they work pretty well.
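
A rough sketch of the kind of one-off I mean, with hypothetical filenames, key, and columns (assumes pandas is available):

    import glob
    import pandas as pd

    # Read every CSV in a directory, merge them on a shared key, and write
    # a combined file with a fixed header order.
    frames = [pd.read_csv(path) for path in sorted(glob.glob("data/*.csv"))]

    merged = frames[0]
    for frame in frames[1:]:
        merged = merged.merge(frame, on="order_id", how="outer")

    merged.to_csv("combined.csv", columns=["order_id", "customer", "total"], index=False)

Claude tends to get something of this shape right on the first or second try, and it's exactly the code I'd never enjoy writing by hand.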
reply
humpty-d
6 hours ago
[-]
It’s useful for almost any one-off script I write. It can do the work much faster than me and produce nicer-looking output than I’d ever bother to spend time writing myself. It can also generate CLI args and docs I’d never "waste time" on myself, which I’d later waste even more time fumbling without.

They’re insanely useful. I don’t get why people pretend otherwise just because they aren't fulfilling the prophecies of blowhards and salesmen.

reply
SeasonalEnnui
6 hours ago
[-]
Yes, totally agree. The second thing I found it great for was explaining errors: it either finds the exact solution or sparks a thought that leads to the answer.
reply
mexicocitinluez
6 hours ago
[-]
It's the height of absurdity to me that this is possible and devs will still say outrageous shit like "These tools have no use"
reply
phba
4 hours ago
[-]
> We do that for a few rounds, set a direction and then throw it in the trash and start building.

Unfortunately PMs tend to forget the throw-it-in-the-trash part, so the prototype still ends up in prod.

But good for you, if you found a way to make it work.

reply
cck9672
6 hours ago
[-]
Can you elaborate on your process and tools here? This use case may actually be valuable for me and my team.
reply
waterproof
6 hours ago
[-]
Tools that can build you a quick clickable prototype are everywhere. Replit, claude code, cursor, ChatGPT Pro, v0.app, they're all totally capable.

From there it's the important part: discussing, documenting, and making sure you're on the same page about what to actually build. Ideally, get input from your actual customers on the mockup (or multiple mockups) so you know what resonates and what doesn't.

reply
stanrivers
7 hours ago
[-]
I’m scared for what happens ten years from now when none of the junior folk ever learned to write code themselves and now think they are senior engineers…
reply
bmurphy1976
6 hours ago
[-]
This trend started long before AI. Everybody needs 10+ years experience to get a job anywhere. As an industry we've been terrible at up-leveling the younger generations.

I've been fighting this battle for years in my org and every time we start to make progress we go through yet another crisis and have to let some of our junior staff go. Then when we need to hire again it's an emergency and we can only hire more senior staff because we need to get things done and nobody is there to fill the gaps.

It's been a vicious cycle to break.

reply
popcorncowboy
6 hours ago
[-]
I can second this cycle. Agentic code AI is an accelerant to this fire that sure looks like it's burning the bottom rungs of the ladder. Game theory suggests anyone already on the ladder needs to chop off as much of the bottom of the ladder as fast as possible. The cycle appears to only be getting.. vicious-er.
reply
jihadjihad
6 hours ago
[-]
Ten years? They'll be staff, obviously. Three years of experience is senior now, did you get that memo?
reply
kykat
6 hours ago
[-]
That's of course because nobody wants to hire juniors and every job posting wants seniors, so now everyone is "senior".
reply
ivanjermakov
5 hours ago
[-]
There will always be software written by people who know what they are doing. Unless LLM generated code is perfect, there will always be a demand for high quality code.
reply
discordance
5 hours ago
[-]
Vibe coding as a concept started out last December. That was only 9 months ago. I doubt any people will be writing or maintaining code in a couple of years.
reply
pydry
6 hours ago
[-]
Hopefully they'll all become plumbers or schoolteachers or something.

There's a glut of junior dev talent and not enough real problems out there which a junior can apply themselves to.

This means that most of them simply aren't able to get the kind of experience which will push them into the next skill bracket.

It used to be that you could use them to build cheap proofs of concept or self-contained scripts, but those are things AI doesn't actually suck too badly at. Even back then there were too many juniors and not enough roles, though.

reply
bigfishrunning
5 hours ago
[-]
There's a glut of "talent", but most of them are attracted by the inflated paycheck and aren't actually talented. In the past, they would either get promoted to middle management where they can't do any damage, or burn out and find another career. Now they can fake it long enough to sink your company, and then move on (with experience on their resume!) to their next victim. Things are gonna be really ugly in 10 years.
reply
slipperydippery
5 hours ago
[-]
I'm fairly confident there's going to be no shortage of work for programmers who actually halfway know what they're doing for the remainder of my career (another 20ish years, probably), at least. So that's nice.

Though cleaning up garbage fires isn't exactly fun. Gonna need to raise my rates.

reply
6LLvveMx2koXfwn
7 hours ago
[-]
Dude, you're basically describing my career - no LLMs necessary!
reply
RyanOD
6 hours ago
[-]
For me, AI is just a tool. I'm not a high-level developer, but when I'm coding a personal project and I'm stuck, I present my ideas to AI and ask it for feedback. Then I take that feedback and move forward. What I do NOT do is ask AI to write code for me. Again, these are my own projects, so I can develop them any way I like.

Having AI write code for me (other than maybe simple boilerplate stuff) goes entirely against why I write code in the first place which is the joy of problem solving, building things, and learning.

Edit: Typo

reply
busssard
6 hours ago
[-]
For my last project, I could not have finished it without AI doing the coding. It set up the entire repo for me; it wrote bad code, and the PoC worked. I don't have experience in Django, JS, or webdev. Now I have a working thing that I can slowly go through, improve, and understand.
reply
RyanOD
5 hours ago
[-]
That's one way. The approach I take is more of a "Hey, AI, here is what I'm trying to accomplish and here is the approach I plan to take. What do you think of this approach? Does it support my broader goals? Etc."

But much of what I spend my time on is already-solved problems (building retro video game clones), so AI has a LOT of high-quality content to draw upon. I'm not trying to create new, original products.

reply
donatj
5 hours ago
[-]
I was reviewing a coworker's code recently. It was this convoluted multidimensional array manipulation that was shuffling, sorting, and filtering all at the same time. It had a totally generic name like "prepareData". I asked for an explanation of what the function was doing, and he snapped back that I should ask an LLM instead of wasting his time.

It's been a couple weeks, but I am still irritated.

I am asking you, the person who supposedly wrote this, what it does. When you submit your code for review, that's among the most basic questions you should be prepared to answer.

reply
tossandthrow
5 hours ago
[-]
Ask the LLM, attribute the LLM's response to him, and just post the feedback you have after 20 messages of back and forth on the PR.

When he comes back and doesn't understand the feedback, you can conveniently ask him to ask an LLM and not waste your time.

reply
yanis_t
6 hours ago
[-]
Hypothetically, AI is going to move us from development to validation. Think about writing more unit tests, integration tests, e2e tests. Spend more time verifying, really carefully reading these pull requests.

Development is moving towards quality assurance, because that's what matters eventually: you have a product that works reliably and fast, and you can quickly get it to market. You don't really care how the code is written.

Of course some people will continue to write "better software" than AI (more readable, or more elegant), bringing to the table some diminishing marginal value that the market doesn't really care about.

I don't think AI is there yet, but realistically speaking, it's gonna get there in 5 to 10 years. Some of us will adjust, some not. The reaction is real.

reply
xtracto
5 hours ago
[-]
When LLMs write 100% of the code and we humans are only tasked with validating and verifying its function, programming languages won't be needed (prog langs are for people).

I wonder if at some point we will have an LLM that basically understands English and, say, Java bytecode or V8 bytecode. So in go English descriptions and comments, and out comes program bytecode implementing the required functionality.

Also, for LRMs... why use English for the reasoning part? Could there be a more succinct representation? Like Prolog?

reply
const_cast
5 hours ago
[-]
The next evolution is you don't need applications at all. Applications are for automation speed, nothing else.

Prior to computers, processes were completed by human to human communication. Hard to scale, impossible to automate. So then we had applications, which force fairly strict processes into a funnel.

But they're extremely restrictive and hard to make.

If you already have God AI, you just don't need an application. I don't go to an airlines website and book a flight. No, I ask my assistant to book me a flight, and then I have a flight.

The assistant might talk to the airline, or maybe hundreds of other AI. But it gets it done instantly, and I don't have to interface with a webpage. The AI have a standard language amongst themselves. It might be English, it might not be.

reply
rafterydj
2 hours ago
[-]
That sounds horrendously inefficient, no?

Is creating a giant wobbly world of unreliable AIs, all talking to each other in an effort to get their own tasks accomplished, really leaving us much better off than humans doing everything themselves?

Better yet, if you have an application that does exactly what you want, why would you (or an AI representing you) want to do anything other than use that application? Sure, you could execute this binary and get what you want, OR you could reach out to the AI-net and make some other AI do it from scratch every time, with inherently less reliable results.

reply
const_cast
1 hour ago
[-]
Sorry, I should have specified: this is assuming a world with perfect AIs.

The world right now is pretty strict just because of how software has to be, which has a lot of upsides and downsides. But there's some wobbliness because of bugs, which break contracts.

But I think that in the future you have AI which doesn't make mistakes, and you also have contracts.

Like the airline agent booking your flight (human or AI) has a contract - they can only do certain things. They can't sell you a ticket for one dollar. Before applications, we just wrote these contracts as processes, human processes. Humans often break processes. Perfect AI won't.

And to us, humans, this might even be completely transparent.

Like in the future I go to a website because I want to see fancy flight plans or whatever and choose something.

Okay, my AI goes to the airline and gets the data, then it arranges it into a UI on the fly. Maybe I can give it rules for how I typically like those UI presented.

So there's no application. It works like an executive assistant at a job. Like if I want market research, I don't use an application for that. I ask my executive assistant. And then, one week later, I have a presentation and report with that research.

That takes a week though, perfect AI can do it instantly.

And for companies, they don't make software or applications anymore. They make business processes, and they might have a formal way for specifying them. Which is similar to programming in a way. But it's much higher level. I identify the business flow and what my people (or AI) are allowed to do, and when, and why.

reply
calvinmorrison
5 hours ago
[-]
Disagree. Programming languages are useful for minimizing context for humans as well as AI. It's much easier to call preg_replace than to implement a regex engine.
reply
thomgo
6 hours ago
[-]
> Engineers are burning out. Orgs expect their senior engineering staff to be able to review and contribute to “vibe-coded” features that don’t work.

It’s not just engineering folks that are being asked to do more vibe-coding with AI. Designers, product folks, project managers, marketers, content writers are all being asked to vibe code prototypes, marketing websites, internal tools, repros, etc. I’m seeing it first hand at the company I work at and many others. Normal expectations of job responsibilities have been thrown out of the window.

Teams are stretched thin as a result, because every company is thinking that if you’re not sprinting towards AI you’ll be left behind. And the truth is that these folks actually deliver impact through their AI usage.

reply
slipperydippery
5 hours ago
[-]
My wife's seen this at multiple non-tech companies, and it's a disaster every single time. Most (like... 95+% of) folks can't use these things very well, being forced to use them kills morale because they're frustrating as fuck (and managers largely have no idea what they're actually capable of doing productively, so expectations are all over the place and often fantastical) and dealing with the output of co-workers who can't use them well is even more frustrating than the LLMs themselves are. It's sometimes coupled with attempts to realize the increased "efficiency" before it's even proven it exists, by firing significant chunks of staff at the same time as adopting "AI processes", further stressing out remaining employees and ruining ability to actually get things done.

It's trashing whole departments. The come-down from this high (which high is being experienced pretty much only by the C-suite and investors) is gonna be rough.

reply
karthikeayan
1 hour ago
[-]
> I can’t know for sure, but I’d be willing to put money down that my exact question and the commit were fed directly into an LLM which was then copy and pasted back to me. I’m not sure why, but I felt violated. It felt wrong.

I don't know where you're from, but I'm from India, exactly the same age as you, and dealing with exactly the same problem! It's really annoying!

reply
alexpotato
5 hours ago
[-]
Recent fascinating experience with hiring and AI.

- DevOps role

- Technical test involves logging into a server (via sadservers.com who are awesome)

- We tell the candidates: "The goal is to see if you can work through a problem on a Linux based system. It's expected you'll see some things you may never have seen before. Using Google and ChatGPT etc is fine if you get stuck. We just ask that you share your screen so we can see your search and thought processes."

- Candidate proceeds to just use ChatGPT for EVERY SINGLE STEP. "How do I list running processes?", "How do I see hidden files?", copy and pasted every error message into ChatGPT etc

Now, I had several thoughts about this:

1. Is this common?

2. As one coworker joked "I knew the robots were coming, just not today"

3. We got essentially zero signal on his Linux related debugging skills

4. What signal was he trying to send by doing this? e.g. I would assume he would realize "oh, they are hiring for people who are well versed in Linux and troubleshooting"

5. I know some people might say "well, he probably eventually got to the answer" but the point is that ChatGPT doesn't always have the answer.

reply
fidotron
6 hours ago
[-]
About 15 years ago I was introduced to an environment where approximately a hundred developers spent their lives coaxing a classic style expert system ( https://en.wikipedia.org/wiki/Expert_system ) into controlling a build process to adjust the output for thousands of different output targets. I famously described the whole process as "brain damaging", demonstrated why [1], and got promoted for it.

People that spend their lives trying to get the LLMs to actually write the code will find it initially exhilarating, but in the long run they will hate it, learn nothing, and end up doing something stupid like outputting thousands of different output targets when you only need about 30.

If you use them wisely though they really can act as multipliers. People persist in the madness because of the management dream of making all the humans replaceable.

[1] All that had happened was the devs had learned how to recognize very simple patterns in the esoteric error messages and how to correct them. It was nearly trivial to write a program that outperformed them at this.
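
To give a flavour of how nearly trivial that replacement program could be, a toy sketch (the error patterns and fixes are invented for illustration):

    import re

    # Map known error-message patterns to the corrections the devs were
    # applying by hand.
    FIXES = [
        (re.compile(r"unknown target '(?P<name>\w+)'"),
         lambda m: "add '{}' to the target list".format(m.group("name"))),
        (re.compile(r"missing flag (?P<flag>--[\w-]+)"),
         lambda m: "append {} to the build command".format(m.group("flag"))),
    ]

    def suggest_fix(error_line):
        for pattern, fix in FIXES:
            match = pattern.search(error_line)
            if match:
                return fix(match)
        return None

    print(suggest_fix("error: unknown target 'armv7_profile_12'"))

Once a table like that exists, the humans' role in the loop is gone.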

reply
mensetmanusman
6 hours ago
[-]
Not being a programmer, I have a question.

Can any program be broken down into functions, and functions of functions, with inputs and outputs, so that they can be verified to be working?

reply
jon-wood
6 hours ago
[-]
In theory, yeah. In many ways that's what test driven development is, you keep breaking down a problem into a function that you can write a unit test for, write the test, write the implementation, move on. In practice writing the functions and verifying their inputs and outputs isn't the hard bit.

The hard bit is knowing which functions to write, and what "valid" means for inputs and outputs. Sometimes you'll get a specification that tells you this, but the moment you try to implement it you'll find out that whoever was writing that spec didn't really think it through to its conclusion. There will be a host of edge cases that probably don't matter, and will probably never be hit in the real world anyway, but someone needs to make that call and decide what to do when (not if) they get hit anyway.
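
A toy example of that loop, assuming a deliberately trivial function (test first, then the smallest implementation that passes). Note it sidesteps exactly the hard part described above: deciding what "valid" should mean.

    import unittest

    def normalize_email(raw: str) -> str:
        # Smallest implementation that satisfies the tests below.
        return raw.strip().lower()

    class TestNormalizeEmail(unittest.TestCase):
        def test_strips_whitespace_and_lowercases(self):
            self.assertEqual(normalize_email("  Alice@Example.COM "), "alice@example.com")

        def test_already_normalized_input_is_unchanged(self):
            self.assertEqual(normalize_email("bob@example.com"), "bob@example.com")

    if __name__ == "__main__":
        unittest.main()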

reply
chasd00
1 hour ago
[-]
What you're describing is basically perfect unit testing. There was even a trend called TDD (test-driven development) that tried to make the tests the driving force behind building the software in the first place. It works, but it has to be perfect and, inevitably, your tests need testing, so you're back to square one. Regardless, it's tedious and time-consuming; shortcuts get taken, the value of the whole thing falls apart, and running the unit tests just becomes a ritual with no real meaning/impact.
reply
Arainach
6 hours ago
[-]
Not without extraordinary cost that no one (save NASA, perhaps) is willing to pay.

Even if you can formally verify individual methods, what you're actually looking for is whether we can verify systems. Because systems, even ones made up of pieces that are individually understood, have interactions and emergent behaviors which are not expected.

reply
cyberpunk
6 hours ago
[-]
Pretty much, yes. But what I think you're talking about (formal verification of code) is a bit of a dark art and barely makes it out of very specialised stuff like warhead guidance computers and maybe some medical stuff, etc.
reply
chasd00
1 hour ago
[-]
Also, if you're going to formally verify your code then the compiler better have been formally verified. If the compiler has been verified then the ASM better be formally verified and so on all the way down to the actual circuit and clock.

...then a bit flips because of a stray high energy particle or someone trips over the metaphorical power cord and it all crashes anyway.

reply
timschmidt
6 hours ago
[-]
Most people don't bother with formal verification because it costs extra labor and time. LLMs address both. I've been enjoying working with an LLM on Rust projects, especially for writing tests, which aren't the same as formal verification, but it's in the same ballpark.
reply
cryptonym
6 hours ago
[-]
Vibe-coding tests is nowhere near formal verification.
reply
drdrey
6 hours ago
[-]
not even close to being in the same ballpark
reply
pentamassiv
5 hours ago
[-]
In theory you cannot even say, for all programs and all inputs, whether the program will finish the calculation [0]. In practice you can often break it down, but the number of combinations of inputs is what makes it impossible to test everything. Most developers try to keep individual functions as small as possible to make them easier to understand. You can use math to do formal verification, but that gets difficult with real programs too.

[0] https://en.wikipedia.org/wiki/Halting_problem

reply
mkleczek
5 hours ago
[-]
No, it is not possible, not only in practice but - more importantly - in theory as well:

https://pron.github.io/posts/correctness-and-complexity

reply
Quarrelsome
3 hours ago
[-]
Functional code can be, because it intentionally always takes input and returns output. However, not all code is functional, and testing has a side effect of making change harder. So if you write a lot of half-useless tests that break when you change anything, you've just made your code harder to change. Even with an AI doing that automatically, the damage is contextual: which tests should be removed after a given change and which kept? It requires a decent amount of thought.

Outside of functional code, there's a lot out there which requires mutable state. This is much harder to test, which is why user interface testing on native apps is always more painful and most people still run manual QA or use an entirely different testing approach.

reply
taco_emoji
6 hours ago
[-]
Long story short: no.

Long story: yes, but it'd take centuries to verify all possible inputs, at least for any non-trivial programs.

reply
black_knight
6 hours ago
[-]
Proofs of correctness are a thing. If you prove something correct you don't have to test every input. It just takes a big effort to design the program this way, and it must be done from the beginning.
reply
yodsanklai
6 hours ago
[-]
There are many implications to this question! TL;DR: in theory yes, in practice no.

Can a function be "verified"? This can mean "tested", "reviewed", or "proved to be correct". What does correct even mean?

Functions in code are often more complex than just having input and output, unlike mathematical functions. Very often they have side effects, like sending packets on the network or modifying things in their environment. This makes it difficult to understand what they do in isolation.

Any non-trivial piece of software is almost impossible to fully understand or test. These things work empirically and require constant maintenance and tweaking.

reply
mfenniak
6 hours ago
[-]
Not really.

If a program is built with strong software architecture, then a lot of it will fit that definition. As an analogy, electricity in your home is delivered by electrical outlets that are standardized -- you can have high confidence that when you buy a new electrical appliance, it can plug into those outlets and work. But someone had to design that standard and apply it universally to the outlets and the appliances. Software architecture within a program is about creating those standards on how things work and applying them universally. If you do this well, then yes, you can have a lot of code that is testable and verifiable.

But you'll always have side-effects. Programs do things -- they create files, they open network connections, they communicate with other programs, they display things on the screen. Some of those side-effects create "state" -- once a file is created, it's still present. These things are much harder to test because they're not just a function with an input and an output -- their behavior changes between the first run and the second run.
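
A tiny illustration of that split, with made-up names:

    from pathlib import Path

    def render_report(settings: dict) -> str:
        # Pure: same input always gives the same output, trivial to verify.
        return "\n".join(f"{key}={value}" for key, value in sorted(settings.items()))

    def write_report(settings: dict, path: Path) -> None:
        # Side-effecting: it creates a file, so the second run starts from
        # different state than the first, and a test has to inspect the world.
        path.write_text(render_report(settings))

    if __name__ == "__main__":
        print(render_report({"retries": 3, "theme": "dark"}))
        write_report({"retries": 3, "theme": "dark"}, Path("report.txt"))

Good architecture pushes as much logic as possible into the first kind of function and keeps the second kind thin at the edges.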

reply
black_knight
6 hours ago
[-]
No. Not every program can be broken down so. If you want that kind of certainty, this consideration needs to be part of the development process from the very beginning. This is what functional programming is all about.
reply
hartator
1 hour ago
[-]
I think the key here is that no one wants to read someone else's ChatGPT output.

Even when it's been vetted by another engineer via actual tests, there is something in my flesh that does not want to read that output.

reply
therobots927
7 hours ago
[-]
LLMs are already unleashing more chaos within tech companies than a vigilante hacker ever could.
reply
KronisLV
5 hours ago
[-]
> Here’s an experiment for you: stop using “AI”. Try it for a day. For a week. For a month.

Sooner or later, this will be akin to asking people to stop being able to do their job, the same way I might not recall how to work with a Java enterprise codebase purely through the CLI because all of the run profiles in JetBrains IDEs have rotted my brain, except applied to developing any software at all.

Maybe not the best analogy because running things isn't too hard when you can reference a Dockerfile, but definitely stuff around running tests and passing the plethora of parameters that are needed to just stand the damned thing up correctly with the right profiles and so on.

That's also like asking people to stop using Google for a while, upon which I bet most of us are reliant to a pretty large degree. Or StackOverflow, for finding things that people wouldn't be able to discover themselves (whether due to a lack of skills, or enough time).

I think the reliance on AI tools will only go upwards.

Here's my own little rant about it from a bit back: https://blog.kronis.dev/blog/ai-artisans-and-brainrot

reply
roncesvalles
6 hours ago
[-]
I love how OP doesn't reach the most obvious conclusion that (record scratch, freeze frame) he just landed himself at a trash-tier company.

The only thing LLMs are doing here is surfacing the fact that your company hired really bad talent.

And no, basing your team in Arkansas or Tbilisi or whatever, not doing Leetcode interviews, and pinky-promising you're the Good Guys™ (unlike evil Meta and Google *hmmphh*) doesn't exempt you from the competitive forces of the labor market that drive SWE salaries well into the mid six figures, because tbh most people don't really mind grinding Leetcode, and most people don't really mind moving to the Bay Area, and most people don't really mind driving in to work 5 days a week, and they definitely don't give a shit whether their company's mission is to harvest every newborn's DNA to serve them retina-projected cigarette ads from the age of 10, or if it's singing privacy-respecting kumbaya around the FOSS bonfire.

You only get what you pay for, not a lick more.

LLMs are going to put these shitty companies in a really bad state while the spade-sellers laugh their way to the bank. I predict within 24 months you're going to see at least one company say they're enforcing a total ban on LLM-generated code within their codebase.

reply
stpedgwdgfhgdd
6 hours ago
[-]
An AI tool like CC requires lots of experience to be used effectively. With today's technology it also still requires a lot of old-fashioned coding experience. In the hands of a power user it can give a big productivity boost.

That said, I wonder how many dormant bugs get introduced at this moment by the less talented. Just a matter of time…

reply
pxtail
5 hours ago
[-]
I think what's going on is a paradigm shift toward treating code as a completely throwaway artifact: not meticulously analyzed, inspected, and manually reviewed, but completely replaceable without caring too much about the internals. If the outputs are correct for a set of inputs and perf is OK, it's passed as "good enough". This approach is still evolving, but some developers are already picking it up, and since students and new developers are learning to work like this, the future for the new generation of coders is inevitable: a new position is coming to life, the vibe coding engineer.
reply
SeasonalEnnui
6 hours ago
[-]
Good blog post, I recognise much of that.

The positions of both evangelists and luddites seem mad to me; there's too much emotion involved for what amounts to another tool in the toolbox that should only be used in appropriate situations.

reply
shusson
6 hours ago
[-]
The majority of software engineers today (mostly in big tech) are not interested in software engineering. They studied it to make money. This happened before LLMs. Add the fact that software development isn't deterministic, and you have a perfect storm of chaos.

But our discipline has been through similar disruptions in the past. I think give it a few years then maybe we’ll settle on something sane again.

I do think the shift is permanent though. Either you adapt to use these LLMs well or you will struggle to be competitive (in general)

reply
suddenlybananas
6 hours ago
[-]
>I do think the shift is permanent though. Either you adapt to use these LLMs well or you will struggle to be competitive (in general)

That certainly isn't true if what this article suggests is true.

reply
donperignon
6 hours ago
[-]
This is a symptom of a larger issue: some people are just proxies to other entities. Before it was Stack Overflow, now they channel the responses from the LLM. These people will be the ones losing their jobs; they are simply not adding any value. For me it's not that developers will suffer, but that this type of developer will be gone soon. And let me tell you, if I want to ask ChatGPT I go to OpenAI; I don't need anyone to type for me and act like a human while replying with a bullet-point list full of emojis.
reply
chiliada
6 hours ago
[-]
I have to admit it comes across as deeply ironic to see a SWE complain of feeling 'violated' when they reach out expecting human help/interaction and are instead met with some thoughtless automation which actually wastes more of their time. The very abominations SWEs have helped inflict on customers everywhere for decades. Maybe things are just coming full circle ;)
reply
chasd00
58 minutes ago
[-]
heh it is pretty funny, SWEs created their own "torment nexus".
reply
yencabulator
4 hours ago
[-]
If you, a human, copy-paste me an LLM response without any "cover letter" about the content, I will not talk to you after that and instead ask the LLM directly.

In my mind, anyone willing to opt out of providing value is welcome to do so.

reply
mikewarot
4 hours ago
[-]
>We can say our prayers that Moore’s Law will come back from the dead and save us.

Let's say that happens, how would it actually help? What's the mechanism?

reply
TuringNYC
5 hours ago
[-]
I read your post nodding in agreement at times. Rest assured, the market takes care of these things. In the medium term, productive uses allow winners to win in the marketplace. Unproductive uses cause losers to lose money and fall behind.
reply
1970-01-01
5 hours ago
[-]
LLMs are the duct tape of the world. A little here and there does help and works great as a short term solution, but using it everywhere or for a permanent fix is a recipe for disaster.
reply
patrickwalton
7 hours ago
[-]
I worked with an accounting firm that assigned me a new controller. I had a very similar experience where the new controller would feed me AI slop back and forth. I called them out and asked them to stop, and was treated to a brief period of what seemed to be terse, resentful responses before they went back to AI responses. We found another accounting firm after that.
reply
bfrog
6 hours ago
[-]
AI is hot garbage being used as if it's a hammer and everything is a nail. The sooner this house of cards collapses, the sooner real value can be gained from it by using it as a tool to assist where appropriate.

But we're in the mega hype phase still (1998/1999) and haven't quite crested over to the reality phase.

reply
danielbln
1 hour ago
[-]
If the dotcom bubble is the playbook, then we'll see a burst soon, and in 10-15 years a proliferation that absolutely dwarfs anything we're seeing today.
reply
yanis_t
5 hours ago
[-]
Here’s an experiment for you: stop using the word "AI".

Let's all just call them LLMs, which is what they are. It's just a tool that increases your productivity when used right.

reply
littlecranky67
6 hours ago
[-]
I feel like inventing my own programming language and having AI build a compiler for it, so that in every new project we have a new programming language that the AI doesn't know shit about.
reply
boesboes
6 hours ago
[-]
What is going on? We are blaming poor management and poor coaching/training on a tool again. It doesn't work, just as tools are never the answer to cultural problems. But blaming (or fixing) the tech is easy; that's why devops never became more than increasingly complex shell scripting instead of a real discussion about collaboration, shared goals, and culture.

But it's a natural part of the cycle, I think. Assembly language, compilers, scripting languages, application development frameworks... all led to a new generation of programmers who "don't understand anything!" and tools that are "just useful for the lazy!"

I call BS. This is 100% a culture and management problem. I'd even go so far as to say it is our responsibility as seniors to coach this new generation into producing quality and value with the tools they have. Don't get me wrong, I love shouting at clouds; I even mumble angrily at people in the streets sometimes, and managers are mostly idiots; but we are the only ones who can guide them to the light, so to speak.

Don't blame the tool, fix the people.

reply
dartharva
7 hours ago
[-]
What's going on is that your org's management is filled with morons. This is not in your control, and there is nothing you can do about it other than moving out.
reply
jcgrillo
3 hours ago
[-]
^^ This right here. The reason this is happening is your senior leadership has decided this is how they want it to be. Vote with your feet.
reply
SideburnsOfDoom
7 hours ago
[-]
Sadly, this time the nonsense is colonising orgs that were too sensible to fall for the last round of tech-scamming (Blockchain, Cryptocurrency, NFTs etc).

I don't want to move again and it's a terrible time to try, partly because of this nonsense. So here we are.

reply
OtherShrezzing
6 hours ago
[-]
I call this "flood the zone driven development".
reply
jmclnx
6 hours ago
[-]
To me, this whole vibe/LLM/AI thing looks like yet another method pushed by MBAs to speed up development, save money, and make things "easier", but that in reality slows things down to a crawl.

Over the decades I have seen many of these things; this iteration looks to me like a push on steroids. I stopped paying attention around the mid 90s and did things "my way". The sad thing is, it seems these days a developer cannot hide in the shadows.

reply
blibble
6 hours ago
[-]
> In a recent company town-hall, I watched as a team of junior engineers demoed their latest work

> Championing their “success”, a senior manager goaded them into bragging about their use of “AI” tools to which they responded “This is four thousand lines of code written by Claude”. Applause all around.

this is no different to applauding yourself for outsourcing your own role to a low cost location

wonder if they'll still be clapping come layoffs

reply
bfrog
6 hours ago
[-]
The question to ask is... why have the junior be a ChatGPT interface at all, if this is the case?
reply
jerf
6 hours ago
[-]
That's a question every current junior should be asking themselves.

If you want to be well-paid, you need to be able to distinguish yourself in some economically-useful manner from other people. That was true before AI and AI isn't going to make it go away. It may even in some sense sharpen it.

In another few years there's going to be large numbers of people who can be plopped down in front of a code base and just start firing prompts at an AI. If you're just another one of the crowd, you're going to get mediocre career results, potentially including not having a career at all.

However, here in 2025 I'm not sure what that "standing out in the crowd" will be. It could well be "exceptional skill in prompting". It could be that deeper understanding of what the code is really doing. It could be the ability to debug deeply yourself with an old-school debugger when something goes wrong and the AI just can't work. It could be non-coding skills entirely. In reality it'll be more than just one thing anyhow and the results will vary. I don't know what to tell you juniors except to keep your eyes peeled for whatever this will be, and when you think you have an idea, don't let the cognitively-lazy appeal of just letting the AI do everything stop you from pursuing it. I don't know specifically what this will be, but you don't have to be right the first time, you have time to get several licks at this.

But I do know that we aren't going to need very many people who are only capable of firing prompts at an AI and blindly saying "yes" to whatever it says, not because of the level of utility that may or may not have, but because that's not going to distinguish you at all.

If all you are is a proxy to AI, I don't need you. I've got an AI of my own, and I've got lower latency and higher bandwidth to it.

Correspondingly, if you detect that you are falling into the pattern of being on the junior programmer end of what this article is complaining about, where you interact with your coworkers as nothing but an AI proxy, you need to course correct and you need to course correct now. Unfortunately, again, I don't have a recipe for that correction. Ask me in 2030.

"Just a proxy to an AI" may lead to great things for the AI but it isn't going to lead you anywhere good!

reply
enraged_camel
6 hours ago
[-]
Why was the title changed, and the word "hell" removed?
reply
ifyoubuildit
6 hours ago
[-]
It's simple: it's just a multiplier, like power tools or heavy equipment. You can use a giant excavator to do a ton of work quickly. You can also knock the house over by accident.

People probably said the same things when the first steam shovels came around. I for one like things that make me have to shovel less shit. But you'd also have the same problems if you put every person in the company behind the controls of a steam shovel.

reply
mikewarot
4 hours ago
[-]
It's all about bullshitting.

So I'm reading the article which initially appears to be a story about an engineering organization and I'm wondering where they came up with someone silly enough to trust an LLM with people's lives.

Then it dawned on me that it wasn't an engineering organization at all, just a bunch of programmers with title inflation. It follows as a consequence that since they and their management don't respect the title and skillset of an Engineer, they likely wouldn't respect or appreciate their programming staff either.

The meta of this is that an audience comfortable with giving the title Engineer to programmers shouldn't really be surprised by this outcome.

>We can say our prayers that Moore’s Law will come back from the dead and save us.

I'm working on that very problem. I expect results within 2 years.

reply
grey-area
6 hours ago
[-]
The heart of the article is this conclusion, which I think is correct from first-hand experience with these tools and teams trying to use them:

So what good are these tools? Do they have any value whatsoever?

Objectively, it would seem the answer is no.

reply
thinkingtoilet
6 hours ago
[-]
The main benefit I've gotten from AI that I see no one talking about is that it dramatically lessens the mental energy required to work on a side project after a long day of work. I code during the day, so it's hard to find motivation to code at night. It's a lot easier to say "do this", have the AI generate shitty code, then say, "you duplicated X function, you overcomplicated Y, you have a bug at Z" and have it fix it. On a good day I get stuff done quicker; on an average day I don't think I do. However, I am getting more done because it takes a huge chunk out of the mental load for me and requires significantly less motivation to get something done on my side project. I think that is worth it to me. That said, I am just about to ban my junior engineers from using it at work because I think it is detrimental to their growth.
reply
Cyan488
5 hours ago
[-]
I agree with the side-project thing, where the code is only incidental to working on the real project. I recently wanted to organize thousands of photos my family had taken over decades and sprawled on a network drive, and in 5 minutes vibe-coded a script to recursively scan, de-dupe, rename with datetime and hash, and organize by camera from the EXIF data.

I could have written it myself in a few hours, with the Python standard docs open on one monitor and coding and debugging on the other etc, but my project was "organize my photos" not "write a photo organizing app". However, often I do side projects to improve my skills, and using an AI is antithetical to that goal.
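
For the curious, here's a minimal sketch of that kind of throwaway script, assuming Pillow is available for the EXIF reads; the paths, tags, and naming scheme are illustrative, not the exact thing I ran:

```python
import hashlib
import shutil
from pathlib import Path

from PIL import Image  # assumes Pillow is installed

SRC = Path("/mnt/photos_raw")      # illustrative paths
DST = Path("/mnt/photos_sorted")

def organize(src: Path, dst: Path) -> None:
    seen: set[str] = set()
    for p in src.rglob("*"):
        if p.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        digest = hashlib.sha256(p.read_bytes()).hexdigest()[:12]
        if digest in seen:          # de-dupe: skip exact byte-for-byte copies
            continue
        seen.add(digest)
        camera, taken = "unknown-camera", "unknown-date"
        try:
            exif = Image.open(p).getexif()
            camera = str(exif.get(0x0110) or camera).strip()  # EXIF Model tag
            dt = exif.get(0x0132)                             # EXIF DateTime tag
            if dt:
                taken = str(dt).replace(":", "-").replace(" ", "_")
        except OSError:
            pass                    # unreadable image: file it under "unknown"
        target = dst / camera / f"{taken}_{digest}{p.suffix.lower()}"
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(p, target)

if __name__ == "__main__":
    organize(SRC, DST)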

reply
nyargh
4 hours ago
[-]
I've found a lot of utility in this. Small throwaway utility apps where I just want to automate some dumb thing once or twice, and the task is just unusual enough that I can't grab something off the shelf.

I reached for Claude Code to just vibe out a basic Android UX to drive some REST APIs for an art project, as the existing web UI would be a PITA to use under the conditions I had. It worked well enough, and I could spend my time finishing other parts of the project. It would not have been worth the time to write the app myself; I would have just suffered with the mobile web UI instead. Would I have distributed that Android app? God no, but it did certainly solve the problem I had in that moment.

reply
piva00
5 hours ago
[-]
Very much the same for me. I use some LLMs to do small tasks at work where I know they can be useful, it's about 5-10% of my coding work which itself is about 20% of my time.

Outside of work though it's been great to have LLMs to dive into stuff I don't work with, which would take me months of learning to start from scratch. Mostly programming microcontrollers for toy projects, or helping some artists I know to bring their vision to life.

It's absurdly fun to get kickstarted into a new domain without having to learn the nitty-gritty first. I do eventually need to learn it; it just shortens the time until the project becomes fun to work on (aka: it barely works, but does something that can be expanded upon).

reply
potsandpans
6 hours ago
[-]
I don't think you understand what the word "objectively" means.
reply
DrillShopper
4 hours ago
[-]
Objectively is like literally - they both also mean their own opposites (subjectively and figuratively) due to how people actually use them in real life writing and communication as opposed to literature
reply
boesboes
6 hours ago
[-]
It's from the post. And I agree, the author has no clue what objectively means.

Just another old-man-shouting-at-cloud blog post. Your company culture sucks and the juniors need to be managed better. Don't blame the tools.

reply
catskull
6 hours ago
[-]
FWIW I’m 34 :)
reply
danielbln
5 hours ago
[-]
Take it from a 40 year old, to someone <30, 34 is old. To someone <20 you're basically walking dead.
reply
allknowingfrog
4 hours ago
[-]
I became an old man well before I reached my 30's. :)
reply
dlachausse
6 hours ago
[-]
AI tools absolutely can deliver value for certain users and use cases. The problem is that they’re not magic, they’re a tool and they have certain capabilities and limitations. A screwdriver isn’t a bad tool just because it sucks at opening beer bottles.
reply
ptx
6 hours ago
[-]
So what use cases are those?

It seems to me that the limitations of this particular tool make it suitable only in cases where it doesn't matter if the result is wrong and dangerous as long as it's convincing. This seems to be exclusively various forms of forgery and fraud, e.g. spam, phishing, cheating on homework, falsifying research data, lying about current events, etc.

reply
disgruntledphd2
4 hours ago
[-]
> So what use cases are those?

I think that as software/data people, we tend to underestimate the number of business processes that are repetitive but require natural language parsing to be done. Examples would include supply chain (basically run on excels and email). Traditionally, these were basically impossible to automate because reading free text emails and updating some system based on that was incredibly hard. LLMs make this much, much easier. This is a big opportunity for lots of companies in normal industries (there's lots of it in tech too).
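
As a rough illustration of that email-to-system-update idea (a sketch only, not anything from production; it assumes the official OpenAI Python SDK with JSON mode, and the field names are made up):

```python
import json
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EMAIL = """Hi, please push PO 4471 out to the 14th of March and bump the
widget qty from 500 to 650. Thanks, Dana"""

# Ask the model to emit only the fields we care about, as JSON.
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # any JSON-mode-capable model; illustrative choice
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": "Extract purchase-order changes as JSON with keys: "
                    "po_number, new_date, item, new_quantity. "
                    "Use null for anything not stated in the email."},
        {"role": "user", "content": EMAIL},
    ],
)

update = json.loads(resp.choices[0].message.content)
print(update)  # e.g. {"po_number": "4471", "new_date": ..., ...}
```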

More generally, LLMs are pretty good at document summarisation and question answering, so with some guardrails (proper context, maybe multiple LLM calls involved) this can save people a bunch of time.

Finally, they can be helpful for broad search queries, but this is much much trickier as you'd need to build decent context offline and use that, which (to put it mildly) is a non-trivial problem.

In the tech world, they are really helpful in writing one to throw away. If you have a few ideas, you can now spec them out and get sortof working code from an LLM which lowers the bar to getting feedback and seeing if the idea works. You really do have to throw it away though, which is now much, much cheaper with LLM technology.

I do think that if we could figure out context management better (which is basically decent internal search for a company) then there's a bunch of useful stuff that could be built, but context management is a really, really hard problem so that's not gonna happen any time soon.

reply
barbazoo
6 hours ago
[-]
Extracting structured data from unstructured text at runtime. Some models are really good at that and it’s immensely useful for many businesses.
reply
Piskvorrr
5 hours ago
[-]
Except when they "extract" something that wasn't in the source. And now what, assuming you can even detect the tainted data at all?

How do you fix that, when the process is literally "we throw an illegible blob at it and data comes out"? This is not even GIGO, this is "anything in, synthetic garbage out"

reply
barbazoo
4 hours ago
[-]
> Except when they "extract" something that wasn't in the source. And now what, assuming you can even detect the tainted data at all?

You gotta watch for that for sure, but no, that's not an issue we worry about anymore, at least not for how we're using it here. The text being extracted from is not a "BLOB"; it's plain text at that point, and of a certain, expected kind, which makes it easier. In general, the more isolated and specific the use case, the better the chances of the whole thing working end to end. Open-ended chat is just a disaster. Operating on a narrow set of expectations is much more successful.

reply
disgruntledphd2
4 hours ago
[-]
> Except when they "extract" something that wasn't in the source. And now what, assuming you can even detect the tainted data at all?

I mean, this is much less common than people make it out to be. Assuming the context is there, it's feasible to run a bunch of calls and take the majority vote. It's not trivial, but it's definitely doable.
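
To make the majority-vote idea concrete, here's a minimal sketch; `extract` stands in for whatever single-shot LLM extraction call you already have (hypothetical here):

```python
import json
from collections import Counter
from typing import Callable

def extract_with_vote(extract: Callable[[str], dict], email: str, n: int = 5) -> dict:
    """Run a single-shot LLM extraction n times and keep the most common answer.

    Serializing each result to canonical JSON makes the answers comparable,
    so identical extractions vote together.
    """
    answers = [json.dumps(extract(email), sort_keys=True) for _ in range(n)]
    winner, votes = Counter(answers).most_common(1)[0]
    if votes <= n // 2:
        raise ValueError("no clear majority - flag for human review")
    return json.loads(winner)
```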

reply
mooseling
3 hours ago
[-]
I started a new job recently, and used ChatGPT tons to learn how to use the new tools: python, opencv, fastapi. I had questions that were too complex for a web search, which ChatGPT answered very coherently! I found it a very good tool to use alongside web search, documentation, and trawling through Stack Overflow.
reply
dlachausse
6 hours ago
[-]
I personally use it as a starting point for research and for summarizing very long articles.

I'm a mostly self-taught hobbyist programmer, so take this with a grain of salt, but it's also been great for giving me a small snippet of code to use as a starting point for my projects. I wouldn't just check whatever it generates directly into version control without testing it and figuring out how it works first. It's not a replacement for my coding skills, but an augmentation of them.

reply
mexicocitinluez
6 hours ago
[-]
I need you to tell me how, when I just fed Claude a 40-page Medicare form and asked it to translate it into a print-friendly CSS version using Cottle for templating, that was "objectively" of no value to me.

What about 20 minutes ago when I threw a 20-line TypeScript error in and it explained it in English to me? What definition of "objective" would that fall under?

Or get this, I'm building off of an existing state machine library and asked it to find any potential performance issues and guess what? It actually did. What universe do you live in where that doesn't have objective value?

Am I going to need to just start sharing my Claude chat history to prove to people who live under a rock that a super-advanced pattern matcher that can compose results can be useful???

Go ahead, ask it to write some regex and then tell me how "objectively" useless it is?

reply
swader999
6 hours ago
[-]
And a slew of tests too...
reply
DrillShopper
4 hours ago
[-]
I'm glad you asked! It's a complex and multifaceted issue.
reply
NoMoreNicksLeft
6 hours ago
[-]
>Am I going to need to just start sharing my Claude chat history to prove to people

I think we'll all need at least 3 of your anecdotes before we change our minds and can blissfully ignore the slow-motion train wreck that we all see heading our way. Sometime later in your life when you're in the hospital hooked up to a machine the firmware of which was vibe-coded and the FDA oversight was ChatGPT-checked, you may have misgivings but try not to be alarmed, that's just the naysayers getting to you.

reply
anthonylevine
5 hours ago
[-]
>Sometime later in your life when you're in the hospital hooked up to a machine the firmware of which was vibe-coded and the FDA oversight was ChatGPT-checked, you may have misgivings but try not to be alarmed, that's just the naysayers getting to you.

This sentence is proof you guys are some of the most absurd people on this planet.

reply
tossandthrow
5 hours ago
[-]
If I go into a PR that requires a lot of feedback, I will usually stop after 5 - 10 pieces of feedback and let the author know that it is not ready for review, and needs to be further developed.

I am OK with the author using AI heavily. But if I continue to see slop, I will continue to review less and send it back.

In the end, if the engineer is fiddling around for too long, they don't get any work in, which is a performance issue.

I am always available to help a colleague understand the system and write code.

For me, the key is to not accept to review AI slop just like I do not accept reviewing other types of slop.

If something is recognized as slop, it is not ready to be reviewed.

This puts upward pressure on developers to deliver better code.

reply
PaulHoule
5 hours ago
[-]
It's kinda funny that my experience is the opposite but then again I'm senior and working at a different scale.

I don't think my LLM-assisted programming is much faster than my unassisted programming but I think I write higher quality code.

What I find is that my assistant sometimes sees a simple solution that I miss. Going back and forth with an assistant, I am likely to think things through in more detail than I would otherwise. I like to load projects I depend on into IntelliJ IDEA and use Junie to have a conversation with the code; it gives me a more thorough and complete understanding, more quickly, than I would get looking at it myself.

As a pro maintenance programmer there is always a certain amount of "let sleeping dogs lie", if something works but you don't understand it you may decide to leave it alone. With an assistant I feel empowered to put more effort into really understanding things that I can get away without understanding, fix minor problems I wouldn't have fixed otherwise, etc.

One bit of advice is that as the context grows, assistants seem to break bad. Often I ask Junie to write something, then give it some feedback about things it got wrong, and early in the session I'm blown away. You might think you should keep the session going so it will remember the history, but it actually can't recognize that some of that history is stale because it describes the way the code was before [1], and the result is that it eventually starts going in loops. It makes sense then to start a new session, and possibly feed it some documentation about the last session that tells it what it needs to know going forward.

[1] a huge problem with "the old AI", ordinary propositional logic sees the world from a "god's eye" point of view where everything happens at once, in our temporal world we need temporal if not bitemporal logic -- which is a whole can of worms. On top of that there is modal logic ("it is possible that X is true", "it is necessary that X is true") and modeling other people's belief "John thinks that Mary thinks that X is true") It's possible to create a logic which can solve specific problems but a general-purpose commonsense logic is still beyond state of the art.

reply
jackdoe
5 hours ago
[-]
> Here’s an experiment for you: stop using “AI”. Try it for a day. For a week. For a month.

I did that. Now I have at least 2-3 days a week where I don't use any AI, and it's much better.

There is something very strange about using AI a lot; it is affecting me a lot and I can't quite put my finger on it.

I wrote https://punkx.org/jackdoe/misery.html some time ago to try to explain it, but honestly I recently realized that I really hate reading tokens.

reply
rickreynoldssf
6 hours ago
[-]
I think a lot of the impressions of AI generating slop is a case of garbage in/garbage out. You need to learn HOW to ask for things. Just asking "write code to do X" is wrong in most cases. You have to provide some specifications and expectations just like working with a junior engineer. You also can't ask "write me a module that does X". You need to design the module yourself and maybe ask AI for help with each specific individual endpoint.

These juniors you're complaining about are going to get better at making these requests of AI and blow right past all the seniors yelling at clouds about AI.

reply
jon-wood
6 hours ago
[-]
How will these juniors get better at making those requests when it sounds like they're not interested in understanding what's happening and the implications of it? That requires a degree of introspection which doesn't appear to be taking place if they're just copy/pasting stuff back and forth to LLMs.
reply
NitpickLawyer
6 hours ago
[-]
> I think a lot of the impressions of AI generating slop is a case of garbage in/garbage out.

I've been coding for 25 years and what I feel reading posts & comments like in this thread is what I felt in the first few days of that black-blue/white-gold dress thing. I legitimately felt like half the people were trolling.

It's the same with LLM assisted coding. I can't possibly be getting such good results when all the rest are getting garbage, right? Impostor syndrome? Are they trolling?

But yeah, I agree fully with you. You need to actively try everything yourself, and this is what I recommend to my colleagues and friends. Try it out. See what works and what doesn't. Focus on what works, and put it in markdown files. Avoid what doesn't work today, but be ready because tomorrow it might work. Use flows. Use plan / act accordingly. Use the correct tools (context7 is a big one). Use search before planning. Search, write it to md files, add it in the repo. READ the plans carefully. Edit before you start a task. Edit, edit edit. Use git trees, use tools that you'd be using anyway in your pipelines. Pay attention to the output. Don't argue, go back to step1, plan better. See what works for context, what doesn't work. Add things, remove things. Have examples ready. Use examples properly. There's sooo much to learn here.

reply
thunderbong
6 hours ago
[-]
Reminds me of the early Google days when everyone was finding ways to search better!
reply
ElatedOwl
6 hours ago
[-]
This will get written off as victim blaming, but there’s some truth here.

I don’t use Claude code for everything. I’ve fallen off the bike enough times to know when I’ll be better off writing the changes myself. Even in these cases, though, I still plan with Claude, rubber duck, have it review, have it brainstorm ideas (“I need to do x, I’m thinking about doing it such and such way, can you brainstorm a few more options?”)

reply
ptsneves
6 hours ago
[-]
> These juniors you're complaining about are going to get better in making these requests of AI and blow right past all the seniors who yell at clouds running AI.

I agree with your comment up to a point, but this is pretty similar to pilots and autopilots. At the end of the day you still need a pilot to figure out non-standard issues and make judgement calls.

The junior blowing past everyone is only as good as the time it takes them to fix the issue that all the credits/prompts in the world can't solve. If the impact lasts long enough and costs enough, your vibe coders will have great instantaneous speed but never reach the end.

I am optimistic about AI usage as a tool to enhance productivity, but the workflows are still being worked out. It is currently neither "fire all the devs" nor "no LLMs allowed". It is definitely an exciting time to be a senior though :)

reply
tschellenbach
6 hours ago
[-]
AI is currently a very effective army of junior/senior engineers. It requires the oversight of a staff level engineer with some product understanding. When properly applied it can speed up development pace ~100-200x. Things that help achieve this outcome: staff engineer reviewing, typed language (it is so much better at Go compared to Python for instance), well structured project/code, solid testing best practices.

It sounds like the author's company doesn't have this setup well.

If you use AI wrong you get AI slop :)

reply
justlikereddit
6 hours ago
[-]
Young coders have finally simply adopted the mentality of the corporate and political leadership.

Congratulations everyone, you finally got what you deserve instead of what you need.

reply
ascendantlogic
6 hours ago
[-]
> Here’s the thing - we want to help. We want to build good things. Things that work well, that make people’s lives easier. We want to teach people how to do software engineering!

This is not what companies want. Companies want "value" that customers will pay for as quickly and cheaply as possible. As entities they don't care about craftsmanship or anything like that. Just deliver the value quickly and cheaply. Its this fundamental mismatch between what engineers want to do (build elegant, well functioning tools) and what businesses want to do (the bare minimum to get someone to give them as much money as possible) that is driving this sort of pulling-our-hair-out sentiment on the engineering side.

reply
Lauris100
6 hours ago
[-]
“The only way to go fast, is to go well.” Robert C. Martin

Maybe spaghetti code delivers value as quickly as possible in the short term, but there is a risk that it will catch up in the long term - hard to add features, slow iterations - ultimately losing customers, revenue and growth.

reply
ascendantlogic
6 hours ago
[-]
Anecdotally I'm already seeing this on a small scale. People who vibe coded a prototype to 1 mil ARR are realizing that the velocity came at the cost of immense technical debt. The code has reached a point where it is essentially unmaintainable and the interest payments on that technical debt are too expensive. I think there's going to be a lot of money to be made over the next few years un-fucking these sort of things so these new companies can continue to scale.
reply
_DeadFred_
6 hours ago
[-]
So basically the new version of the 1990's people's projects that grew to high ARR based on their random Visual Basic codebase? That's how software companies have been starting for 30 years.
reply
ascendantlogic
6 hours ago
[-]
Time is a flat circle and what is old is new again.
reply
busssard
6 hours ago
[-]
If I have 1 mil ARR, I can hire some devs to remake my product from scratch and use the vibe-coded version as a design mockup.

If I manage to vibe-code something alone that takes off, even without technical expertise, then you've validated the AI use case...

Before Claude I had to make a paper prototype or a Figma; now I can make slop that looks and somehow functions the way I want. I can run preliminary tests and even get to some proof of concept. In some cases, even $1 million in annual revenue...

reply
chasd00
49 minutes ago
[-]
> hire some devs

you're making the assumption that these devs you hire actually know what they're doing and aren't just a proxy back to an LLM.

reply
ascendantlogic
6 hours ago
[-]
Yes, this is exactly where AI shines: PoCs and validating ideas. The problems come when you're ready to scale. And the "I can hire some devs to remake my product from scratch" part is the exact money making scenario some of my consulting friends are starting to see take shape in the market.
reply
const_cast
5 hours ago
[-]
But people say this about technology in software engineering time and time again.

VB? VBA macros in Excel? Delphi? Uhh... Wordpress? Python as a language?

Well you see these are just for prototypes. These are just for making an MVP. They're not the real product.

But they are the real product. I've almost never seen these successfully used just for prototyping or MVPs. It always becomes the real codebase, and it's a hot fucking mess 99% of the time.

reply
disqard
5 hours ago
[-]
You're not wrong about that.

What ends up happening is that humans get "woven" into the architecture/processes, so that people with pagers keep that mess going even though it really should not be running at that scale.

"Throw one away" rarely happens.

reply
Workaccount2
5 hours ago
[-]
This is where the mismatch is: the future is not in scaled apps, the future is in everyone being able to make their own app.

You don't have to feature-pack if you are making a custom app for your custom use case, and LLMs are great with slim, narrow-purpose apps.

I don't think LLMs will replace developers, but I am almost certain they will radically change how end users use computers, even if the tech plateaus right now.

reply
ascendantlogic
3 hours ago
[-]
> the future is in everyone being able to make their own app.

Everyone can do their own plumbing and electrical work in their homes too. For some people it works out, for others it's still better to pay someone else to do it for them.

reply
Workaccount2
2 hours ago
[-]
I don't think basic software apps have anywhere near the risk profile of electrical or plumbing work.

I'm pretty comfortable letting my mom vibecode a plant watering tracker. Not so much wiring up a distribution box.

reply
occz
5 hours ago
[-]
I guess that depends on how you get that ARR-figure. If more than all of it goes to paying your AI bills, then you can't really afford that much engineering investment.
reply
mrkeen
5 hours ago
[-]
> if i have 1mil ARR, i can hire some devs to remake my product from scratch

This assumes a pool of available devs who haven't already drunk the Koolaid.

To put it another way: the 2nd wave of devs will also vibe code. Or 'focus on the happy path'. Or the 'MVP', whatever it's called these days.

From their point of view, it will be faster and cheaper to get v2 out sooner, and 'polish' it later.

Does anyone in charge actually know what 'building it right' actually means? Is it in their vocabulary to say those words?

reply
jcgrillo
5 hours ago
[-]
You would only be able to hire me to do that job if you gave me every last dollar of that ARR. And I still might turn you down tbh..
reply
Piskvorrr
5 hours ago
[-]
By then, the startup will have folded, and the C-levels will have moved on to the next Idée Du Jour.
reply
const_cast
5 hours ago
[-]
This is true, but what I've come to realize is companies only prioritize the short term, no matter what, no exceptions. They take everything on as debt.

They don't care about losing customers 10 years later because they're optimizing for next quarter. But they do that every quarter.

Does this eventually blow up? Uh, yeah, big time. Look at GE, Intel, Xerox, IBM, you name it.

But you can get shockingly far only thinking about tomorrow over and over again. Sometimes, like, 100 years far. Well by then we're all dead anyway so who cares.

reply
gjsman-1000
6 hours ago
[-]
Or, you can be like many modern CTOs: AI will likely get better and eventually be capable of cleaning up most of the mess it makes today. In which case, YOLO - either your startup dies, or AI is sufficiently advanced by the time it succeeds. The objections about quality only matter if you think it's going to plateau.
reply
Piskvorrr
5 hours ago
[-]
That is, literally, faith-based business management. "We suck, sure - but wait, a miracle will SURELY happen in version 5. Or 6. Or 789. It will happen eventually, have faith and shovel money our way."
reply
godshatter
3 hours ago
[-]
I suspect it's going to tank instead of getting better, no matter what they try to do with attention or agents or whatever, especially if it's training on AI-written code, of which there will be more and more as time goes on. I'm not an AI expert by any means, so take that with a grain of salt.
reply
SoftTalker
5 hours ago
[-]
If the AI gets that good, what value does your startup add?
reply
Quarrelsome
5 hours ago
[-]
The fundamental issue remains that there is no objective and reliable measure of developer productivity. So those who experience it (developers) and the business, who are isolated from it, end up with different perspectives. This, IMHO, is going to be the most important factor fueling "AI first" stories like these, which could dominate our industry over the coming decade.

I don't think the chasm is unbridgeable, because ultimately everybody wants the same thing (for the company to prosper), but each side fails to fully appreciate the perspective of the other. It's up to a healthy company organisation to productively address the conflict between the two perspectives. However, I have yet to encounter such a culture of mutual respect and resource allocation.

I fear that agentic AI could erase all the progress we've made on culture in the past 25 years (e.g. agile) and drag us back towards 80s tech culture.

reply
gjsman-1000
5 hours ago
[-]
Progress? Agile, and the aftermath (the MVP!), it’s how we got here in the first place!
reply
Quarrelsome
5 hours ago
[-]
Seems like you don't remember the 80s, 90s or even early 2000s. Agile was a movement specifically designed to help represent the interests of development in organisations. Obviously business corrupted it over time but the industry before it was considerably worse.

MVPs exist to force business into better defining their requirements. Prior to Agile we'd spend years building something and then deliver it, only for business to "change their mind", because they've now realised (now that they have it) that what they asked for was stupid.

reply
armada651
6 hours ago
[-]
While this is true, the push-pull between sales and engineering resulted in software that is built well enough to last without being over-engineered. However, if both sales and the engineers start chasing quick short-term gains over long-term viability, that'll result in a new wave of shitty low-quality software being released.

AI isn't good enough yet to generate the same quality of software as human engineers. But since AI is cheaper we'll gladly lower the quality bar so long as the user is still willing to put up with it. Soon all our digital products will be cheap AI slop that's barely fit for purpose, it's a future I dread.

reply
Workaccount2
5 hours ago
[-]
>AI isn't good enough yet to generate the same quality of software as human engineers

The software I have vibe-coded for myself totally obliterates anything available on the market. Imagine a program that operates without any lag or hiccups. Opens and runs instantly. A program that can run without an internet connection, without making an account, without somehow being 12GB in size, without a totally unintuitive UI, without having to pay $20/mo for static capabilities, without persistent bugs that are ignored for years, and with the ability to customize anything.

I know you are incredulous reading this, but hear me out.

Bespoke, narrow-scope custom software is incredibly powerful, and well within the wheelhouse of LLMs. Modern software is written to be the 110-tool Swiss Army knife feature pack to capture as large an audience as possible. But if I am only using 3 of those tools, an LLM can write a piece of software that is better for me in every single way. And that's exactly what my experience has been so far, and exactly the direction I see software moving in the future.

reply
DrillShopper
4 hours ago
[-]
I'll believe it when I see it with my own eyes, otherwise these words read more like sales copy than technological discovery.
reply
Workaccount2
3 hours ago
[-]
If you haven't seen an LLM output a functional 2K or even 5K LOC program at this point, you probably never will.

The problem space of average people problems that can be addressed with <5K LOC is massive. The only real barrier is having to go through an IDE, but that will almost certainly be solved in the near future, it already sort of is with Canvas features.

reply
gjsman-1000
6 hours ago
[-]
Well, in such a future, when people have been burned by countless vibecoded projects, congratulations - FAANG wins again! Who is going to risk one penny on your rapidly assembled startup?

Any startup that can come to the table saying “All human engineers; SOC 2 Type 2 certified; dedicated Q/A department” will inherit the earth.

reply
gjsman-1000
6 hours ago
[-]
Right; I discovered at the new company I joined, they want velocity more than anything. The sloppy code, risk of mistakes, it’s all priced in to the risk assessment of not gaining ground first. So… I’m shooting out AI-written code left and right and that’s what they want. My performance? Excellent. Will it be a problem in the future? Well, either the startup fails, or AI might be able to rewrite it in the future.

It’s not what I want… but at the same time, how many of our jobs do what we want? I could easily end up being the garbage man. I’m doing what I’m paid to do and I’m paid well to do it.

reply
twodave
6 hours ago
[-]
Ironically, this post reads like it was written with an LLM.
reply
arnorhs
6 hours ago
[-]
Not at all, imo
reply
wiseowise
5 hours ago
[-]
> I’m not sure why, but I felt violated. It felt wrong.

I usually avoid this, but what a snowflake.

So many people get their knickers in a twist over nothing.

Here’s a simple flowchart:

Junior gives you ai slop and ignores your remarks? a) ignore them b) send their manager a message

If neither works -> leave.

Also, get a life.

reply
josefritzishere
5 hours ago
[-]
I was recently told that a team intended to use automated AI to update user documentation in tandem with code updates. I am terrified of how badly that is going to go after seeing how badly AI writes code.
reply
popcorncowboy
6 hours ago
[-]
This ends in Idiocracy. The graybeards will phase out, the juniors will become staff level, except.. software will just be "more difficult". No-one really understands how it works, how could they? More importantly WHY should they? The Machine does the code. If The Machine gets it wrong it's not my fault.

The TRUE takeaway here is that as of about 12 months ago, spending time investing in becoming a god-mode dev is not the optimal path for the next phase of whatever we're moving into.

reply
ivanjermakov
5 hours ago
[-]
I'm afraid we're already in the phase where regular devs have no idea how things work under the hood. So many web devs fail the simple interview question "what happens when a user enters a URL and presses enter?" I would understand not knowing the details of the DNS protocol, but not understanding the basics of what the browser/OS/CPU is doing is just unprofessional.

And LLM assisted coding apparently makes this knowledge even less useful.

reply
const_cast
5 hours ago
[-]
Met a dev who couldn't understand the difference between git, the program, and github, the remote git frontend.

I explained it a few times. He just couldn't wrap his head around that there were files on his computer and also on a different computer over the internet.

Now, I will admit distributed VCS can be tricky if you've never seen it before. But I'm not kidding - he legitimately did not understand the division of local vs the internet. That was just a concept that he never considered before.

He also didn't know anything about filesystems but that's a different story.

reply
ryandrake
2 hours ago
[-]
This seems like a common theme around very young computer users: Applications and operating systems have, for over a decade, deliberately tried to blur the line between "files on your computer" and "files in the cloud". Photo apps present you a list of photos, and deliberately hide what filesystem those photos are actually on. "They're just your photos. Don't think too much about where they are!" The end result is that the median computer user has no idea that files exist in some physical space and there is a difference between local and remote storage.

My kid struggles with this. She can't intuitively grasp why one app needs the Internet and another app does not. I try to explain it but it all goes over her head. The idea that you need the Internet when your app needs to communicate with something outside of the phone is just foreign to her: In her mind, "the app" just exists, and there's no distinction between stuff on the phone and stuff on the network.

reply
probably_wrong
1 hour ago
[-]
I don't think I agree with calling that question "simple". I could probably speak non-stop for an entire hour before we even leave my local computer: electric impulses, the protocol between keyboard and PC, BIOS, interrupts, ASCII and Unicode, the OS, cache, types of local storage, CPU, GPU, window management, the TCP stack, encryption... It's hard to come up with a computer-related field that's not somehow involved in answering that one question.

If anything, I always consider it a good question to assert whether someone knows when to stop talking.

reply
phba
4 hours ago
[-]
"Low code quality keeps haunting our entire industry. That, and sloppy programmers who don't understand the frameworks they work within. They're like plumbers high on glue." (https://simple.wikiquote.org/wiki/Theo_de_Raadt)

This phase has been going on for decades now. It's a shame, really.

reply
jckahn
3 hours ago
[-]
> So many web devs fail on the simple interview question "what happens when user enters a url and presses enter?"

Is the answer you're looking for along the lines of "the browser makes a GET request to the specified URL," or something lower-level than that?

reply
ivanjermakov
2 hours ago
[-]
I think it's one of those intentionally vague questions that helps in probing the knowledge depth. Interviewees are typically free to describe the process with as much detail as they can.
reply
GeoAtreides
2 hours ago
[-]
I would be looking for a fractal answer and judge on the depth
reply
probably_wrong
6 hours ago
[-]
I think that's only true if you assume that the AI bubble will never burst.

Bitcoin didn't replace cash, Blockchain didn't replace databases and NoSQL didn't make SQL obsolete. And while I have been wrong before, I'm optimistic that AI will only replace programmers the same way copy-pasting from StackOverflow replaced programmers back in the day.

reply
1970-01-01
4 hours ago
[-]
We've already seen the plateau forming[1]. GPT-4.x vs GPT-5 isn't exactly a revolution. It will become much cheaper, much faster, but not much better.

[1] https://news.ycombinator.com/item?id=44979107

reply
rrrrrrrrrrrryan
3 hours ago
[-]
I think this is mostly true, but when it gets cheaper and faster, it will be able to complete much larger tasks unsupervised.

larger != more complex

The widespread adoption of cheap agentic AI will absolutely be an economic revolution. Millions of customer support jobs will be completely eliminated in the next few years, and that's just the beginning.

Soon it'll be easy to give an AI all the same things you give a new employee: an email address, a slack username, a web browser, access to the company intranet, a GitHub account, a virtual machine with mouse and keyboard control, etc. and you'll be able to swap it out one-for-one with pretty much any low-level employee.

reply
mexicocitinluez
6 hours ago
[-]
> So what good are these tools? Do they have any value whatsoever?

> Objectively, it would seem the answer is no. But at least they make a lot of money, right?

Wait, what? Does the author know what the word "objectively" means?

I'd kill for someone to tell me how feeding a pdf into Claude and asking it to provide a print-friendly version for a templating language has "objectively" no value?

What about yesterday when I asked Claude to write some reflection-heavy code for me to traverse a bunch of classes and register them in DI?

Or the hundreds (maybe thousands) of times I've thrown a TS error and it explained it in English to me?

I'm so over devs thinking they can categorically tell everyone else what is and isn't helpful in a field as big as this.

Also, and this really, really needs repeating: when you say "AI" and don't specify exactly what you mean, you sound like a moron. "AI", that insanely general phrase, happens to cover a wide, wide array of different things you personally use day to day. Anytime you do speech-to-text you're relying on "AI".

reply
morkalork
6 hours ago
[-]
I feel that even though I'm getting older, LLMs make me feel younger. There are things I learned in university 10 years ago that I only hazily remember, but I can easily interrogate an AI and refresh myself way faster than by opening old books. Just as a recall device that's been trained on every PowerPoint slide ever uploaded to a lecturer's website, it's useful.
reply
OutOfHere
6 hours ago
[-]
Although some of the author's concerns are valid, the author seems completely biased against LLMs, which makes their arguments trashworthy. The author is not seeking any sensible middle ground, only a luddite ground.
reply
bccdee
6 hours ago
[-]
The author is giving an account of his experience with LLMs. If those experiences were enough to thoroughly bias him against them, then that's hardly his fault. "Sensible middle ground" is what people appeal to when they are uncomfortable engaging with stark realities.

If someone told me that their Tesla's autopilot swerved them into a brick wall and they nearly died, I'm not going to say, "your newfound luddite bias is preventing you from seeking sensible middle ground. Surely there is no serious issue here." I'm going to say, "wow, that's fucked up. Maybe there's something deeply wrong with Tesla autopilot."

reply
mdale
6 hours ago
[-]
The poorly named "Autopilot" is a good analogy. The LLMs can definitely help with the drudgery of stop-and-go traffic with little risk; but take your eye off the road for one second when you're moving too fast and you're dead.
reply
OutOfHere
3 hours ago
[-]
It isn't, because no one is dying from not looking at the LLM's output in the next second. One is free to review it at one's own pace.
reply
orangecat
6 hours ago
[-]
> I'm going to say, "wow, that's fucked up. Maybe there's something deeply wrong with Tesla autopilot."

Sure, and that's very different from "the idea of self-driving cars is a giant scam that will never work".

reply
bccdee
5 hours ago
[-]
If, four years on, the primary thing a tool has done for me is waste my time, I think it's time to start looking at it through the lens of a scam. Even if it does have good use cases, that is not the main thing it does, at least not in the current market.
reply
mexicocitinluez
6 hours ago
[-]
> If someone told me that their Tesla's autopilot swerved them into a brick wall and they nearly died, I'm not going to say, "your newfound luddite bias is preventing you from seeking sensible middle ground. Surely there is no serious issue here." I'm going to say, "wow, that's fucked up. Maybe there's something deeply wrong with Tesla autopilot."

What a horrible metaphor for a tool that can translate pdfs to text. lol. The anti-AI arguments are just as, if not more, absurd than the "AI can do everything" arguments.

reply
OutOfHere
3 hours ago
[-]
Indeed. This post is just a congregation of AI-hating luddites.
reply
bccdee
5 hours ago
[-]
Per the original post, it's a tool that will waste massive amounts of your time on slop. Don't pretend that there are no negative consequences to the proliferation of AI tools.
reply
anthonylevine
5 hours ago
[-]
> Per the original post, it's a tool that will waste massive amounts of your time on slop.

I'm gonna need you to know that just because some random dev who wrote a blog said something doesn't make it true. You know that, right?

> Don't pretend that there are no negative consequences to the proliferation of AI tools.

Wait, what? Who said otherwise?

I love how you compare this tool to a Tesla's Autopilot mode, then get called out on it, and are like "Are you saying these tools are perfect" lol.

reply
bccdee
4 hours ago
[-]
> Wait, what? Who said otherwise?

> What a horrible metaphor for a tool that can translate pdfs to text. lol.

You didn't say otherwise explicitly, but you're definitely downplaying the issues discussed in the blog post.

> I'm gonna need you to know that just because some random dev who wrote a blog said something doesn't make it true.

That's not really a satisfying response. If you disagree with the post, you'll have to mount a more substantial rebuttal than, "well what if he's wrong, huh? Have you considered that?"

reply
OutOfHere
5 hours ago
[-]
The fact is that LLMs are extremely useful tools for code generation, but using them correctly for this purpose requires expertise. It's not for those with underdeveloped brains, and most definitely not for those who aren't even open to it due to a personal vendetta. Such people will rapidly find themselves without a job.
reply
citizenkeen
6 hours ago
[-]
This person understands the future of microwaves.
reply
esafak
6 hours ago
[-]
There is an attribution problem because we don't see the interaction the engineer had with the AI before the work was submitted. Maybe the prompt was good, and there was a back-and-forth as the user found mistakes and asked for corrections. Maybe not. We are left to guess. A user who does not steer the AI adds no value at best, and likely makes things worse by creating work for others. There is a product opportunity here for coding agents to check this work in with something like `git notes`. This helps users claim the value they add.
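
A rough sketch of what that could look like, assuming the agent simply shells out to git (`git notes` is a stock git feature; the transcript format here is made up):

```python
import subprocess

def attach_transcript(commit: str, transcript: str) -> None:
    """Attach the prompt/response transcript to a commit as a git note.

    Notes live under refs/notes/commits by default and show up via
    `git log --notes`, so reviewers can see how the change was steered
    without it polluting the diff itself.
    """
    subprocess.run(
        ["git", "notes", "add", "-f", "-m", transcript, commit],
        check=True,
    )

# e.g. attach_transcript("HEAD", "prompt: refactor X...\nmodel response: ...")
```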
reply