Amen to that. I am currently cc'd on a thread between two third parties, each hucking LLM-generated emails at each other that get longer and longer. I don't think either of them is reading or thinking about the responses they're writing at this point.
There might be other professions where people get more hung up on formalities, but my partner works in a non-tech field and it's the same way there. She's far more likely to get an email dashed off with a sentence fragment or two than a long formal message. She has also learned that short emails are more likely to be read and acted on.
Whoa whoa whoa hold your horses, code has a pretty important property that ordinary prose doesn’t have: it can make real things happen even if no one reads it (it’s executable).
I don’t want to read something that someone didn’t take the time to write. But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do). Really elegant code is cool to read, but many tools I use daily are closed source, so I have no idea if their code is elegant or not. I only care if it works.
If it's not worth reading something the writer didn't take the time to write, then by extension nobody has read the code.
Which means nobody understands it, beyond the external behaviour they've tested.
I'd have some issues with using such software, at least where reliability matters. Blackbox testing only gets you so far.
But I guess as opposed to other types of writing, developers _do_ read generated code. At least as soon as something goes wrong.
Source code is often written for other humans first and foremost.
I'd much rather wade through AI slop than minified code, which may have previously been AI slop.
But I think the larger point is that it's not always feasible for humans to understand every line of code that runs in their software.
LoongArch kernel, first paragraph, the lord Linus said, in all his wisdom:

/* Hardware capabilities */
unsigned int elf_hwcap __read_mostly;
EXPORT_SYMBOL_GPL(elf_hwcap);
What a world when we’re playing Would you rather with people’s property and information.
We tell stories of Therac-25, but 90% of software out there doesn't kill people. It annoys people and wastes time, yes, but reliability doesn't matter as much.
Email, internet networking, and operations on floating-point numbers are only somewhat reliable. No one is saying they will not use email because a message might not be delivered.
Reliability matters in lots of areas that aren't war. Ignoring obvious ones like medicine/healthcare and driving, I want my banking app to be reliable. If they charge me $100 instead of $1 because their LLM didn't realize their currency was stored in floating point dollars and not cents, then I may not die but I'd be pretty upset!
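The dollars-vs-cents failure mode above is easy to demonstrate; a minimal Python sketch (the charge amounts are made up for illustration):

```python
# Ten 10-cent charges, stored as floating-point dollars vs integer cents.
# Binary floats cannot represent 0.10 exactly, so repeated addition drifts.
charges_dollars = [0.10] * 10
charges_cents = [10] * 10

float_total = sum(charges_dollars)
cents_total = sum(charges_cents)

print(float_total == 1.0)    # False: the sum is 0.9999999999999999
print(cents_total == 100)    # True: integer cents are exact
```

This is why money is conventionally stored as integer minor units (or a decimal type), and why "the LLM's version seems to work" is not the same as "the representation is correct."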
As we give more and more autonomy to agents, that percentage may change. Just yesterday I was looking at hexapods, and the first thing the page tells you (with a disclaimer that it's for competitions only) is that it has a lot of space for a weapon install. I had to briefly look at the website to make sure I hadn't accidentally clicked on some satirical link.
I gotta disagree with you there! Code that isn't read doesn't do anything. Code must be read to be compiled, it must be read to be interpreted, etc.
I think this points to a difference in our understanding of what "read" means, perhaps? To expand my pithy "not gonna read if you didn't write" bit: the idea that code stands on its own is a lie. The world changes around code, and code must be changed to keep up with the world. Every "program" (is the git I run the same as the git you run?) is a living document that people maintain as need be. So when we extend the "not read / didn't write" point, it's not about using the program (which I guess is like taking the lessons from a book); it's about maintaining the program.
So I think it's possible that I could derive benefit from someone else reading an llm's text output (they get an idea) - but what we are trying to talk about is the work of maintaining a text.
But isn't this the distinction that language models are collapsing? There are 'prose' prompt collections that certainly make (programmatic) things happen, just as there is significant concern about the effect of LLM-generated prose on social media, influence campaigns, etc.
And to answer you more directly: generally, in my professional world, I don't use closed-source software often, for security reasons, and when I do, it's from major players with oodles more resources and capital expenditure than "some guy whose credit card paid for a Gemini subscription."
For example, I have a few letter generators on my website. The letters are often verified by a lawyer, but the generator could totally be vibe-coded. It's basically an HTML form that fills in the blanks in the template. Other tools are basically "take input, run calculation, show output". If I can plug in a well-tested calculation, AI could easily build the rest of the tool. I have been staunchly against using AI in my line of work, but this is an acceptable use of it.
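The fill-in-the-blanks pattern described is tiny; a hypothetical sketch (the template text and field names are invented, not the actual lawyer-verified letters):

```python
# A letter "generator" is little more than template substitution:
# the verified template stays fixed, only the blanks vary.
from string import Template

# Hypothetical template; the real one would be the verified letter text.
LETTER = Template(
    "Dear $landlord,\n\n"
    "I am writing regarding the deposit for $address, "
    "which was due back on $due_date.\n"
)

def generate_letter(landlord: str, address: str, due_date: str) -> str:
    # substitute() raises KeyError if a blank is left unfilled,
    # which is safer than silently shipping an incomplete letter.
    return LETTER.substitute(landlord=landlord, address=address, due_date=due_date)

print(generate_letter("Ms. Smith", "12 Example Road", "1 May"))
```

Since the hard part (the template wording) is human-verified, the surrounding form-and-substitution plumbing is exactly the kind of code that can be generated with little risk.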
How do you know it ever does the job?
I don't know either for most code that I use, but I do have reason to trust that the author does know. I don't really trust any code itself, only the people and processes (organizational, not computer) that generated it.
I have no reason to trust that AI-generated code is doing the correct thing. I know enough about the way code works to know that merely observing that it seems to work in a test case means absolutely nothing at all. Multiply that zero by a million more test cases and it's the same zero.
The only thing I trust is that someone actually understood a problem they were trying to solve, and cares about avoiding edge cases, and tries to develop logic to make unintended outcomes impossible etc...
It's not possible for an AI to do any of that, regardless of what the prompts are. But what they can do is emit stuff that some person once wrote which did exhibit these qualities, and so looks OK, and causes idiots to think they found the cheat code to life and, worse, foist that shit off on everyone else.
My mom does not have my awareness that any of this is going on. She's just out there in the world running into this crap blindly as an unwitting end-user who has no idea how badly she's being served these days when she uses basically any app or service. Thanks for that vibe coders of the world.
Because the part of the job it automates is simple, and can be tested. I cannot overstate how simple the tools I am thinking of are. Think tipping calculator. Neither new nor creative nor complex. The real value here is being familiar with the problem.
You are missing the point here. I am talking about people who were not served at all by software developers. The alternative is not craftsmanship, but at best duct taping wordpress plugins together.
That's how I perceive vibe programming. A small one-off job that it would literally take me longer to write than to have generated? Perfectly fine. For anything else, there are professionals who get it done.
It works, sure, but is it worth your time to use? I think a common blind spot for software engineers is understanding how hard it is to get people to use software they aren’t effectively forced to use (through work or in order to gain access to something or ‘network effects’ or whatever).
Most people’s time and attention is precious, their habits are ingrained, and they are fundamentally pretty lazy.
And people that don’t fall into the ‘most people’ I just described, probably won’t want to use software you had an LLM write up when they could have just done it themselves to meet their exact need. UNLESS it’s something very novel that came from a bit of innovation that LLMs are incapable of. But that bit isn’t what we are talking about here, I don’t think.
This is something I like about the LLM future. I get to spend my time with users, thinking about their needs and how the product itself could be improved. The AI can write all the CSS and SQL queries or whatever to actually implement those features.
If the interesting thing about software is the code itself - like the concepts and so on, then yeah do that yourself. I like working with CRDTs because they’re a fun little puzzle. But most code isn’t like that. Most code just needs to move some text from over here to over there. For code like that, it’s the user experience that’s interesting. I’m happy to offload the grunt work to Claude.
So we need to pay attention to every detail that doesn't have a single obviously correct answer, and keep the volume of code we're producing to a manageable enough level that we actually can pay attention to those details. In cases where one really is just literally moving data from here to there, then we should use reliable, deterministic code generation on top of a robust abstraction, e.g. Rust's serde, to take care of that gruntwork. Where that's not possible, there are details that need our attention. We shouldn't use unreliable statistical text generators to try to push past those details.
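A Python analogue of that serde-style approach (the `User` record is hypothetical, purely illustrative): a generic library handles the field-shuffling deterministically, so no hand-written or LLM-written copying code is needed.

```python
# "Moving data from here to there" via a robust abstraction:
# dataclasses + json do the gruntwork deterministically.
import json
from dataclasses import dataclass, asdict

@dataclass
class User:          # hypothetical record type
    name: str
    age: int

u = User("Ada", 36)
wire = json.dumps(asdict(u), sort_keys=True)  # deterministic serialization
print(wire)                                   # {"age": 36, "name": "Ada"}
assert User(**json.loads(wire)) == u          # round-trips exactly
```

The point is that this path has no details left for a statistical generator to get subtly wrong; the abstraction either works or fails loudly.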
I really, really wish that were the case. But look at the modern web. Look at iOS apps. Look at how long Discord takes to launch on a modern computer. Look how big and slow everything is. Most end-user applications released today do not pay attention to those small details. Definitely not in early versions of the software. And they're still successful. At least, successful enough.
I'd love a return to the "good old days" where we count bytes and make tight, fast software with tiny binaries that can perform well even on 20 year old computers. But I've been outvoted. There aren't enough skilled programmers who care about this stuff. So instead our super fast computers from the future run buggy junk.
Does Claude even make worse choices than many of the engineers at these companies? I've worked with several junior engineers who I'd trust a lot less with small details than I trust Claude. And that's Claude in 2026. What about Claude in 2031, or 2036? It's not that far away. Claude is getting better at software much faster than I am.
I don't think the modern software development world will make the sort of software that you and I would like to use. Who knows. Maybe LLMs will be what changes that.
The main issue is that we have a lot of good tech that is used incorrectly. Each component is sound, but the whole is complex and ungainly. They are code chimeras. Kinda like using a whole web browser to build a code editor, or using React as the view layer for a TUI, or adding a dependency just to check if a file is executable.
It's like the recently posted project: a Lisp where every function call spawns a Docker container.
Sure... to a point. But realistically, the "use an LLM to write it yourself" approach still entails costs, both up-front and on-going, even if the cost may be much less than in the past. There's still reason to use software that's provided "off the shelf", and to some extent there's reason to look at it from a "I don't care how you wrote it, as long as it works" mindset.
> came from a bit of innovation that LLMs are incapable of.
I think you're making an overly binary distinction on something that is more of a continuum, vis-a-vis "written by human vs written by LLM". There's a middle ground of "written by human and LLM together". I mean, the people building stuff using something like SpecKit or OpenSpec still spend a lot of time up-front defining the tech stack, requirements, features, guardrails, etc. of their project, and iterating on the generated code. Some probably even still hand tune some of the generated code. So should we reject their projects just because they used an LLM at all, or ?? I don't know. At least for me, that might be a step further than I'd go.
Absolutely, but I’d categorize that ‘bit’ as the innovation from the human. I guess it’s usually just ongoing validation that the software is headed down a path of usefulness which is hard to specify up-front and by definition something only the user (or a very good proxy) can do (and even they are usually bad at it).
Agreed.
I see a lot of these discussions where a person gets feelings/feels mad about something and suddenly a lot of black and white thinking starts happening. I guess that's just part of being human.
"One" is the operative word here, supposing this includes only humans and excludes AI agents. When code is executed, it does get read (by the computer). Making that happen is a conscious choice on the part of a human operator.
The same kind of conscious choice can feed writing to an LLM to see what it does in response. That is much the same kind of "execution", just non-deterministic (and, when given any tools beyond standard input and standard output, potentially dangerous in all the same ways, but worse because of the nondeterminism).
I wonder if this is a major differentiator between AI fans and detractors. I dislike and actively avoid anything closed source. I fully agree with the premise of the submission as well.
I got Claude to make a test suite the other day for a couple RFCs so I could check for spec compliance. It made a test runner and about 300 tests. And an html frontend to view the test results in a big table. Claude and I wrote 8500 lines of code in a day.
I don’t care how the test runner works, so long as it works. I really just care about the test results. Is it finding real bugs? Well, we went through the 60 or so failing tests. We changed 3 tests, because Claude had misunderstood the RFC. The rest were real bugs.
I’m sure the test runner would be more beautiful if I wrote it by hand. But I don’t care. I’ve written test runners before. They’re not interesting. I’m all for beautiful, artisanal code. I love programming. But sometimes I just want to get a job done. Sometimes the code isn’t for reading. It’s for running.
Huh this is a thought provoking question.
I think there's a few reasons. In a test suite:
- I don't care about performance.
- I don't care (as much) about reliability. My users aren't affected by crashes and other failures in my tests.
- I don't care (as much) about correctness. Erroneously failing tests will get human attention. Tests that erroneously pass are a bigger problem, but my test suite is not the last line of defence against bugs reaching users.
- If I had infinite time, I'd love every line of code to be a mathematically beautiful work of art. But I don't. Writing this test suite by hand would have taken me about 3 weeks. Instead, I did it in 1 day with Claude. This let me spend 14 productive days working on other things. I would rather have a good-enough test suite and 14 days of productive work than an excellent test suite and nothing else. I could spend those 14 days fixing all the bugs it found. Or writing more tests. Or getting Claude to write more tests. Or going outside with my friends. These are all better uses of my time.
If I was writing the core of a new game engine, the scheduler of an operating system or the data storage engine for a new database, then I would think hard about every line of code. But not all code is like that. We must adapt ourselves to the project at hand. Some lines of code matter a lot more than others. Our workflow should take that into account.
When your boss (assuming you have one) tells you to do something, do you just ignore it?
Although I love science, I'm much happier building programs. "Does the program do what the client expects with reasonable performance and safety? Yes? Ship it."
This is the literary equivalent of compiling and running the code.
Okay, but it is probably not going to be a tool that stays reliable or works as expected for very long, depending on how complex it is, how easily it can be understood, and how it handles updates to the libraries it uses.
Also, what is our trust with this “tool”? E.g. this is to be used in a brain surgery that you’ll undergo, would you still be fine with using something generated by AI?
Earlier you couldn’t even read something it generated, but we’ll trust a “tool” it created because we believe it works? Why do we believe it will work? Because a computer created it? That’s our own bias towards computing: we assume it is impartial, but this is a probabilistic model trained on data that is just as biased as we are.
I cannot imagine that you have not witnessed these models creating false information that you were able to identify. Seeing them fail at basic understanding, how then could we trust them with engineering tasks? Just because “it works”? What does that mean, and how can we be certain? QA, perhaps, but ask any engineer here whether companies give a single shit about QA while making them shove out so much slop, and the answer is going to be disappointing.
I don’t think we should trust these things even if we’re not developers. There isn’t anyone to hold accountable if (and when) things go wrong with their outputs.
All I have seen AI be extremely good at is deceiving people, and that is my true concern with generative technologies. Then I must ask, if we know that its only effective use case is deception, why then should I trust ANY tool it created?
Maybe the stakes are quite low; maybe it is just a video player that you use to watch your sword-and-sandal flicks. OK, sure, but maybe someone uses that same video player for an exoscope, and the data it presents to your neurosurgeon is incorrect, causing them to perform an action they otherwise would not have taken if given the correct information.
We should not be so laissez-faire with this technology.
DJing is an interesting example. Compared with, say, composition, beatmatching is relatively easy to learn, and it was solved by CD turntables that can beatmatch themselves, yet it has nothing to do with the taste you have to develop to be a good DJ.
In the arts the differentiators have always been technical skill, technical inventiveness, original imagination, and taste - the indefinable factor that makes one creative work more resonant than another.
AI automates some of those, often to a better-than-median extent. But so far taste remains elusive. It's the opposite of the "Throw everything in a bucket and fish out some interesting interpolation of it by poking around with some approximate sense of direction until you find something you like" that defines how LLMs work.
The definition of slop is poor taste. By that definition a lot of human work is also slop.
But that also means that in spite of the technical crudity, it's possible to produce interesting AI work if you have taste and a cultivated aesthetic, and aren't just telling the machine "make me something interesting based on this description."
At this point I'd settle if they bothered to read it themselves. There's a lot of stuff posted that feels to me like the author only skimmed it and expects the masses to read it in full.
I've actually started having a different view on this. After getting over the "glancing instead of reading LLM suggestions" phase, I started noticing that even for simple or boilerplate tasks, LLMs all too often produce quite wasteful results, regardless of the setting or your subscription. They are OK to get you going, but in recent weeks I haven't accepted one Claude, Devstral, or GPT suggestion verbatim. Nevertheless, I often throw them boilerplate tasks even though I now know that typically I'll end up coding six out of ten myself and only use the other four as skeletons. But just seeing the "naive" or "generic" implementation and deciding I don't like it is a plus, as it seems to compress the thinking time by a good part.
And what are you basing that claim on? What are your sources? Your arguments?
It is not about the author, and it is not about the effort. It is about the quality.
Honestly, I agree, but the rash of "check out my vibe-coded solution for $problem I have no expertise in, built in an afternoon" and the flurry of domain experts responding with "wtf, no one needs this" gives me a kind of schadenfreude, though I feel a little guilty for enjoying it.
People have been saying this about Show HNs since time immemorial. There have been an insane number of poorly thought out, poorly considered, often get-rich-quick creations, long before AI. Things where the submitter clearly doesn't understand the industry they're targeting, doesn't provide any sort of solution, etc. It's really strange if people actually think this is a new phenomenon.
Indeed, a recent video that I rather loved touches on this - https://www.youtube.com/watch?v=Km2bn0HvUwg
Its subject is "Everything was Already AI", the point being that everyone is quantizing and simplifying and reflecting everyone else and the consensus, in such a fashion that people acting like AI ruined everything...yeah, it was already ruined. We already have furry artists drawing furry art just like countless other furry artists, declaring it an outrage that someone used AI to draw furry art, and so on. As the video covers, the whole idea of genres is basically people just cloning each other.
Be right back, going to put on a cowboy hat and denim and sing in a drawl about pickups and exes.
I don't personally consume furry art but I am a fan of Studio Ghibli and the anime medium in general. And even within that medium, certain artists have a very different style than others. I can usually tell Makoto Shinkai's style vs Hayao Miyazaki's style vs Akira Toriyama's style. I don't think any of them ever claimed to have copied each other. But they have all worked thousands of hours to perfect their craft.
With AI, you get people like me, who can't draw stick figures, telling ChatGPT or Nano Banana to make an anime version of themselves, and voilà! You get something that could probably pass as Miyazaki's in a minute.
No artist has a claim or monopoly on a genre, but they do have a claim on their own art style. With AI being trained on artists' styles, the artists whose works literally trained the AIs are now being inundated with low effort copycats of their creations.
That being said, I wrote in another thread comment that AI is an accelerator of what already exists. In a codebase, if you have crappy code patterns, AI will just accelerate that.
In business, like you said, people who had crappy ideas have always been able to submit crappy business ideas. Only a few of them actually tried to execute on them. With AI, more of them can execute on them.
I think this "boringness" the article is talking about always existed. It just becomes more prevalent because AI lowers the barrier to entry.
I’ve been partaking in my fair share, but more and more I’m just feeling sad for my fellow coders ‘cause a lot of what I’m hearing is about bad local choices and burdensome tech stacks.
Sure, it’s kinda hilarious watching a bunch of fashion obsessed front-end devs discover bash, TDD, and that, like, specifications, like, can really be useful, you know, for building stuff or whatever.
But then I think about a version of me who came up a bit later, bit into some reasonable sounding orthodoxy about React or Node as my first production language and who would be having the same ‘profound’ revelations. I never would have learned better. I wouldn’t be as empowered from having these system programming concepts hammered into me. LLMs would be more ‘magic’, I’d extrapolate more readily…
I’ve found myself thinking a lot of thoughts tantamount to “why don’t you dummies just use Haskell, or Lisp, or OCaml, or F#, or Kotlin for that?!”, and from their PoV I’m seeing a broken ladder. A ladder that was orthodoxy and well-documented when I was coming up.
LLMs should ideally bring SICP and Knuth and emacs to the masses. Fingers crossed.
I feel like I can breeze past the easy, time consuming infrastructure phase of projects, and spend MUCH more time getting to high level interesting problems?
The most recent one I remember commenting on, the poor guy had a project that basically tried to "skip" IaC tools, and his tool basically went nuts in the console (or API, I don't remember) in one account, then exported it all to another account for reasons that didn't make any sense at all. These are already solved problems (in multiple ways) and it seemed like the person just didn't realize terraformer was already an existing, proven tool.
I am not trying to say these things don't allow you to prototype quickly or get tedious, easy stuff out of the way. I'm saying that if you try to solve a problem in a domain that you have no expertise in with these tools and show other experts your work, they may chuckle at what you tried to do because it sometimes does look very silly.
But AI does the code. Well... usually.
People call my project creative. Some are actually using it.
I feel many technical things aren't really technical things; they are simply problems where "have a web app" is part of the solution, but the real part of the solution is in the content and the interaction design, not in how you solved the challenge technically.
You might be on to something. Maybe it's self-selection (as in, people who want to engage deeply with a certain topic but lack domain expertise might be more likely to go for "vibecodable" solutions).
I made it because at that point in my career I simply didn't know that ansible existed, or cloud solutions that were very cheap to do the same thing. I spent a crazy amount of effort doing something that ansible probably could have done for me in an afternoon. That's what sometimes these projects feel like to me. It's kind of like a solution looking for a problem a lot of the time.
I just scanned through the front page of the show HN page and quickly eyeballed several of these type of things.
> I made it because at that point in my career I simply didn't know that ansible existed
Channeling Mark Twain: "Sorry for such a long letter; I didn't have the time to make it shorter."
I write software when the scripts are no longer suitable.
It's about being oblivious, I suppose. Not too different to claiming there will be no need to write new fiction when an LLM will write the work you want to read by request.
I was dabbling in consulting infrastructure for a bit, often prospects would come to me with stuff like this "well I'll just have AI do it" and my response has been "ok, do that, but do keep me in mind if that becomes very difficult a year or two down the road." I haven't yet followed up with any of them to see how they are doing, but some of the ideas I heard were just absolute insanity to me.
Every single time I have vibe coded a project I cared about, letting the AI rip with mild code review and rigorous testing has bitten me in the ass, without fail. It doesn't extend the code in the taste that I want, things are clearly spiraling out of control, etc. Just satisfying some specs at the time of creation isn't enough. These things evolve; they're living beings.
I started with chatgpt, I told it to make me a road map of game features.
Then I use that road map to guide my LLM (I use Codex 5.3), with the standing instruction: when working on tasks, if you learn anything that may be out of scope, add it to the road map.
There's a bit more to it than that, but so far I've got a playable game, and at some point the requirement of adding an admin dashboard for experiments got added to the road map, and that got implemented pretty well too.
At first I did review a lot of its code, but now I just let it rip and I've been happy with it thus far.
At work I use AI heavily but obviously since I'm responsible for whatever code I push I do actually review and test and understand, but mostly I just need to tweak some small things before it's good enough to ship.
When I'm working on a greenfield project that I intend to build out further (which is what I am currently doing), I find that there's not a lot of work that fits those criteria. I expect that can change drastically when you're working on something that is either more mature, or more narrowly scoped (and thus won't need to be extended too much, meaning poor architectural decisions are not a big issue).
Boring is supposed to be boring for the sake of learning. If you're bored, then you're not learning. Take a look back at your code in a week's time and see if you still understand what's going on. The top level, maybe, but the deep-down cogs of the engine of the application? I doubt it. Not to preach, but that's what I've discovered.
Unless you already have the knowledge, then fine: "here's my code, make it better." But if it's the 14th time you've written the ring buffer, why are you not using one of the previous thirteen versions? Are you saying that the vibed code is superior to your own?
Exactly this. Finding that annoying bug that took 15 browser tabs and digging deep into some library you're using, digging into where your code is not performant, looking for alternative algorithms or data structures to do something, this is where learning and experience happen. This is why you don't hire a new grad for a senior role, they have not had time to bang their heads on enough problems.
You get no sense of how or why when using AI to crank something out for you. Your boss doesn't care about either, he cares about shipping and profits, which is the true goal of AI. You are an increasingly unimportant cog in that process.
Since the issue was due to the intersection of k8s and effect, I don't think reading a bunch of docs would have really helped.
Of course I'm sure there's plenty of people who don't care about understanding the bugs and just want to fix things fast. But understanding these bugs helps me prompt/skill the LLM to prevent them in the future.
This is just it, you didn't learn anything here. In 3 months, you will only remember that AI fixed some issue for you. You will have none of the knowledge and experience that struggling and thinking and googling and trying things out until it works provides. You may as well have asked some other person to fix it, and they would at least have learned something.
Also, anyone could be plugged into your job when it works this way. All they need is someone who can type into a prompt. Which is much easier to find than someone who actually knows the what and how and why of the code. But hey, you fixed the bug, it ships, boss makes money, the company wins. But you sure don't...
That's not exclusive to AI. I've solved plenty of bugs pre-AI that I would go down similar rabbit holes to fix again without AI. I've spent days hunting down bugs like this in the past while only remembering that I spent days on the bug, not anything meaningful. It's not something I enjoy repeating.
> Also, anyone could be plugged into your job when it works this way. All they need is someone who can type into a prompt.
Maybe. The reality is that my coworkers of varying experience levels do attempt to vibe-code/debug and are never happy with the results. I don't know what they're prompting, but it just goes to show that it's not as easy as merely "typing into a prompt."
> you fixed the bug, it ships, boss makes money
Yeah, that's how it's always been, no? Boss doesn't care how it got fixed as long as it got fixed, and added points if it got fixed quickly so I can work on other features. I may not win long-term if I do use AI, but I certainly don't win short-term if I don't use AI, because I can't afford to spend days to fix a bug that AI can fix in an hour.
It's okay not to memorize everything involved in a software project. Sometimes what you want to learn or experiment with is elsewhere, and so you use the AI to handle the parts you're less interested in learning at a deep and intimate level. That's okay. This mentality that you absolutely have to work through manually implementing everything, every time, even when it's not related to what you're actually interested in, wanted to do, or your end-goal, just because it "builds character" is understandable, and it can increase your generality, but it's not mandatory.
Additionally, if you're not doing vibe coding, but sort of pair-programming with the AI in something like Zed, where the code is collaboratively edited and it's very code-forward — so it doesn't incentivize you to stay away from the code and ignore it, the way agents like Claude Code do — you can still learn a ton about the deep technical processes of your codebase, and how to implement algorithms, because you can look at what the agent is doing and go:
"Oh, it's having to use a very confusing architecture here to get around this limitation of my architecture elsewhere; it isn't going to understand that later, let alone me. Guess that architectural decision was bad."
"Oh, shit, we used this overcomplicated architecture / violated local reasoning / referential transparency / modularity / deep-narrow modules / single-concern principles, and now we can't make changes effectively, and I'm confused. I shouldn't do that in the future."
"Hmm, this algorithm is too slow for this use-case, even though it's theoretically better, let's try another one."
"After profiling the program, it's too slow here, here, and here — it looks like we should've added caching here, avoided doing that work at all there, and used a better algorithm there."
"Having described this code and seeing it written out, I see it's overcomplicated/not DRY enough, and thus difficult to modify/read, let's simplify/factor out."
"Interesting, I thought the technologies I chose would be able to do XYZ, but actually it turns out they're not as good at that as I thought / have other drawbacks / didn't pan out long term, and it's causing the AI to write reams of code to compensate, which is coming back to bite me in the ass, I now understand the tradeoffs of these technologies better."
Or even just things like
"Oh! I didn't know this language/framework/library could do that! Although I may not remember the precise syntax, that's a useful thing I'll file away for later."
"Oh, so that's what that looks like / that's how you do it. Got it. I'll look that up and read more about it, and save the bookmark."
> Unless you already have the knowledge, then fine. "here's my code make it better" but if it's the 14th time you've written the ring buffer, why are you not using one of the previous thirteen versions? Are you saying that the vibed code is superior to your own coding?
There are a lot of reasons one might not be able to, or want to, use existing dependencies.
I assume you use JavaScript? TypeScript or Go perhaps?
Pfft, amateur. I only code in Assembly. Must be boring for you using such a high-level language. How do you learn anything? I bet you don't even know what the cog of the engine is doing.
set assumption "wrong"
switch $assumption {
    wrong { puts "your assumption is wrong" }
    default { puts "perl, erlang, Tcl" }
}
For every person like you who puts actual thought into the project, and uses these tools as coding assistants, there are ~100 people who offload all of their thinking to the tool.
It's frightening how little collective thought is put into the ramifications of this trend not only on our industry, but on the world at large.
Who cares if some idiot makes some ai shit and doesn’t learn anything? That same person has had access to a real computer which they’ve wasted just as effectively until now.
Nobody cares, that’s the point. Yet they still share them on Show HN.
>AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.
The trigger for the post was about post-AI Show HN, not about whether vibe-coding is of value to vibe-coders, whatever their coding chops are. For Show HN posts, the sentence I quoted precisely describes the things that would be mind-numbingly boring to Show HN readers.
Pre-AI, what was impressive to Show HN readers was that you were able to actually implement all that you describe in that sentence by yourselves, and also have some biochemist commenting, "I'm working at a so-and-so research lab and this is exactly what I was looking for!"
Now the biochemist is out there vibe-coding their own solution, and now, there is no way for the HN reader to differentiate your "robust" entry from a completely vibe-code noobie entry, no matter how long you worked on the "important stuff".
Why? Because the barrier of entry has been completely obliterated. What we took for granted was that "knowing how to code" was a proxy filter for "thought and worked hard on the problem." And that filter allowed for high-quality posts.
That is why the observation that you can no longer guarantee, or have any quick way of telling, that the poster spent some time on the problem is a great observation.
The very value that you gain from vibe-coding is also the very thing that threatens to turn Show HN into a glorified Product Hunt cesspool.
"No one goes there any more, it's too crowded." etc etc
Like all we need to do is decouple “I made this” from “I can compose all parts in my mind”, which were never strongly coupled anyway. Is the thing that is being shown neat? Cool! Does it matter if it was a person or 20 people or a robot? I don’t think so, unless it’s special pleading for humans.
There is a proliferation of frameworks and libraries supplying all kinds of mundane needs that developers have; is it wrong for people Showing HN to use those? Do libraries and frameworks not lower the barrier to entry? There have been many cases of 'I threw this together over a weekend using XYZ and ABC', haven't there? What's interesting is how they understand the domain and how they address the problems posed by it - isn't it? Sure, the technical discussion can be interesting too but unless some deep technical problem is being solved, I don't care too much if they used Django or Flask, and which database backend they chose, unless these things have a significant impact on the problem space.
> the barrier of entry has been completely obliterated
I was very interested in 3D graphics programming back in the DOS days before GPUs were a commodity, and at that time I felt the same about hardware accelerated rendering - if no-one needs to think about rasterisation and clever optimisation algorithms, and it's easy to build a 3D engine, I thought, then everyone and their dog will make a game and we'll drown in crappy generic-looking rubbish. Turns out that lowering barriers to entry doesn't magically make everything easier, but does allow a lot more people to express their creativity who otherwise would lack the knowledge and time to do so. That's a good thing! Pre-made engines like Godot remove an absolute ton of the work that goes into making a game, and are a great benefit to the one-man-bands and time-strapped would-be game designers out there whose ideas would otherwise die in the dark.
I am having to repeat the beginning of my previous comment:
>>The trigger for the [original] post was about post-AI Show HN, not about whether vibe-coding is of value to vibe-coders.
The topic is: The drop in quality of post-AI Show HN. It is specifically about this community. Please read the context the OP has referenced in their own post:
Is Show HN Dead? No, but it's Drowning
https://www.arthurcnops.blog/death-of-show-hn/
Instead of addressing the specifics of that post, you ignore the points that were made there and prefer to talk about why vibe-coded solutions should be interesting to pre-AI programmers. Ok, let's go there.
>if no-one needs to think about rasterisation and clever optimisation algorithms, and it's easy to build a 3D engine, I thought, then everyone and their dog will make a game and we'll drown in crappy generic-looking rubbish. [Turns out that's not the case.]
Here in this context, you are confusing "easy" with "non-human". Specifically, when people here decry the banality and tediousness of perusing and reviewing vibe-coded solutions by "everyone and their dog", the emphasis is on "and their dog". Let's be clear: a non-deterministic, non-human entity coding something by approximating the intentions of a human is not the same thing as a human developing a 3D engine or SDK end-to-end with human intentionality, no matter how "easy" coding a 3D engine has become. So it is left to the HN reader to figure out what level of ownership the human poster has over their 90% vibe-coded solution. It's no surprise that HN readers, when alerted to the possibility via a Show HN post, would rather just vibe-code a solution themselves if they are interested in the problem space instead of engaging with the Show HN post itself. When hard-pressed, I can think of very few instances where programmers would not prefer to vibe-code their own solution instead of test-running and reviewing someone else's AI slop. Some of the casual statistics that the original posters have bothered to look at seem to bear this out.
You asserted that
> pre-AI, what was impressive to Show HN readers was that you were able to actually implement all that you describe in that sentence by yourselves...
and latterly
> ... HN readers, when alerted to the possibility via a Show HN post, would rather just vibe-code a solution themselves if they are interested in the problem space
and my point is that I disagree - the implementation of an idea in terms of the actual coding is far less interesting to me (and my assertion is: by extension, less interesting to the average reader) than the implementation in terms of the behaviour of the thing. Perhaps you're concerned about someone opening Claude Code and typing "Write me an application that does XYZ" but it's pretty obvious that so far that doesn't produce anything useful, and I think is more of a problem for sites like Stack Overflow where an answer is a small singular thing rather than an entire system.
There is a spectrum between 'writing it all yourself' and 'YOLO vibe-coding' and if you're only arguing about the latter end of the spectrum then, sure, those tend to suck, but I don't think we're really at risk of being drowned in those projects; that's a kind of slippery-slope argument. This is why I talked about 3D graphics; I earlier feared the 'YOLO 3D game' projects taking over, and that just hasn't happened. I believe we (humans) had similar discussions around the time that typewriters and the printing press were invented - 'if you're not handwriting your ideas then you're not really thinking!' but the ideas are the point, not the process of writing them down.
Also: Actually made the thing. I know how to code, but have 0 Show-HNs because my ideas and fun side-projects never get to that stage.
Now, "making the thing" is not a good proxy either
It’s all “I can’t think anymore” or “software bad now” followed by a critique of the industry circa 2015.
Most of the people making cool stuff with LLMs are making it, not writing blog posts hoping to be a thought leader.
The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
I remember in the early days of LLMs this was the joke meme. But, now seeing it happen in real life is more than just alarming. It's ridiculous. It's like the opposite of compressing a payload over the wire: We're taking our output, expanding it, transmitting it over the wire, and then compressing it again for input. Why do we do this?
I assume this is satire.
We do this because we were impressed that one time the stars aligned and the output was decent. So we write just one more prompt, bro, in the hope it'll be better than the last 10, which ended up a waste of time.
We do this because $boss has been successfully spitting out 7 PowerPoints a day with it, which nobody reads but makes them feel productive, therefore this must be the future, therefore AI use shall be mandated until team productivity improves.
I had a similar realization. My team was discussing whether we should hook our open-source codebases into an AI to generate documentation for other developers, and someone said "why can't they just generate documentation for it themselves with AI"? It's a good point: what value would our AI-generated documentation provide that theirs wouldn't?
If you aren't willing to put in the time to verify it works, then it is indeed no more useful than anyone else doing the same task on their own.
I started out with telling the AI common issues that people get wrong and gave it the code. Then I read (not skim, not speed, actually read and think) the entire thing and asked for changes. Then repeat the read everything, think, ask for changes loop until it’s correct which took about 10 iterations (most of a day).
I suspect the AI would have provided zero benefit to someone who is good at technical writing, but I am bad at writing long documents for humans so likely would just not have done it without the assistance.
It was especially unfortunate because to do its thing, the code required a third party's own personal user credentials including MFA, which is a complete non-starter in server-side code, but apparently the manager's LLM wasn't aware enough to know that.
Take for example (an extreme example) the paintbrush. Do you care where each bristle lands? No of course not. The bristles land randomly on the canvas, but it’s controlled chaos. The cumulative effect of many bristles landing on a canvas is a general feel or texture. This is an extreme example, but the more you learn about art the more you notice just how much art works via unintentional processes like this. This is why the Trickster Gods, Hermes for example, are both the Gods of art (lyre, communication, storytelling) and the Gods of randomness/fortune.
We used to assume that we could trust the creative to make their own decisions about how much randomness/automation was needed. The quality of the result was proof of the value of a process: when Max Ernst used frottage (rubbing paper over textured surfaces) to create interesting surrealist art, we retroactively re-evaluated frottage as a tool with artistic value, despite its randomness/unintentionality.
But now we’re in a time where people are doing the exact opposite: they find a creative result that they value, but they retroactively devalue it if it’s not created by a process that they consider artistic. Coincidentally, these same people think the most “artistic” process is the most intentional one. They’re rejecting any element of creativity that’s systemic, and therefore rejecting any element of creativity that has a complexity that rivals nature (nature being the most systemic and unintentional art.)
The end result is that the creative has to hide their process. They lie about how they make their art, and gatekeep the most valuable secrets. Their audiences become prey for creative predators. They idolize the art because they see it as something they can’t make, but the truth is there’s always a method by which the creative is cheating. It’s accessible to everyone.
Of course you do, that’s why there are so many different types and sizes of paintbrushes, so you can exert exactly as much fine control as you want/need. Learning the craft is learning to pick and use your tools to get the desired result. Being unable to microscopically predict where each bristle lands is not the same as not wanting to. Sometimes you’ll pick a more haphazard brush because the small amount of randomness is a feature (e.g. when emulating nature), and other times you’ll use a fine-grained tool, maybe even a toothpick instead of a brush, because you need it to be precise.
There are plenty of times when people will prefer the technically inferior or less aesthetically pleasing output because of the story accompanying it. Different people select different intentions to value: some select for the intention to create an accurate depiction of a beautiful landscape, some select for the intention to create a blurry smudge of a landscape.
I can appreciate the art piece made by someone who only has access to a pencil and their imagination more than one by someone who has access to Adobe CC and the internet, because it's not about the output to me, it's about the intention and the story.
Saying "I made this drawing" implies that you at least sat down and had the intention to draw the thing. Then revealing that you actually used AI to generate it changes the baseline assumption and forces people to re-evaluate it. So it's not "finding a creative result that they value, then retroactively devaluing it if it's not created by a process that they consider artistic."
There are a lot of reasons why the intention of the artist is a bad metric for artistic value, and there’s a ton of important literature about this.
The first obvious point is that the meaning of communication is defined by its endpoint. If I send a message that says “I love you” and somehow the message gets garbled in transmission and ends up reading “I hate you,” then the message that I’ve sent is “I hate you,” regardless of my intentions. You can take this a step further: if you want to write an essay attacking capitalism, but everyone who reads it comes out thinking more highly of capitalism, and your essay is successfully used for years to help defend capitalism from critiques, then what you’ve written is a defense of capitalism. This is the main gist behind what’s called Reader Response Theory: the meaning is generated by the reader (or between the reader and the text), not by the writer.
As a communications problem, this is even more relevant for art, because art is indirect communication by its very nature. Storytelling, for example, doesn’t ever actually try to communicate any single thing. The storyteller creates many fictional people, each of whom has their own messages to get across, and creates a web of relationships/events between them. It’s an ecosystem at heart. Without any clear, direct message, the margin for error rapidly increases. The artist obviously has to know that this is the case when they choose to make art. If they wanted to get across a single message or intention, then why did they choose a medium that’s so notoriously bad at getting across a single intention? Obviously some artists are just delusional and don’t accept the reality of their medium, but that doesn’t change the facts.
Imagine a hypothetical scenario where a storyteller writes a story with a narrator that clearly handholds the audience and explicitly says what the artist means, but the audience doesn’t agree with the narrator. In that case, how many readers will praise the storyteller for their interesting use of an unreliable narrator? Art functions this way on its own, and this is another reason why intentionality is a bad metric: the artist has to make the art work, and that functionality has properties of its own that supersede the artist’s intentions. This was the main argument of a historically important essay entitled The Intentional Fallacy, by Wimsatt and Beardsley: primarily, the story must work. The meaning comes secondarily, from trying to understand why it works. We forget this, but the art that we engage with is always art that has been pre-selected by the demands of the art form itself, which no single artist has control over. We engage with art through survivorship bias.
Where I think most people get tripped up is that one of the recent and most popular demands of art has been Conceptual Art, which focuses on the idea or intention rather than the object itself. This is an outgrowth of an individualistic art movement that, honestly, is popular because of political motives. The CIA straight up funded it. I’m not saying that’s bad. Honestly I love any government that funds the arts. I’m just saying it’s not the entirety of art and we can’t be subservient to it and the ideology it represents. You don’t need to justify your enjoyment of a blurry image because it has a story behind it. Moreover, it doesn’t make sense to ignore the image and argue that the story is the meaning or the value of the art. Art that uses backstories effectively can just be redefined as multimedia art that combines the art medium with storytelling, and now suddenly what you thought was the intention of the artist is just the quality of the output again
Sometimes you do, which is why there’s not only a single type of brush in a studio. You want something very controllable if you’re doing lineart with ink.
Even with digital painting, there’s a lot of fussing with the brush engine. There’s even a market for selling presets.
There’s no gatekeeping in the processes of these works, no secrecy, not even really whatever you’re talking about. These works would in fact be utterly diminished by being produced by an LLM because they’re trying to capture the stories of real, existing people who had real, painful experiences. I have no empathy with a machine but I have all the empathy of a man who loved a man whose family hated him so much when he died they wouldn’t even leave his lover with anything more than a box fan and so he decided to declare the box fan to be art.
While people do think like this, it misses the point.
Yes, all forms of art are FULL of randomness and people copying each other. The thing that makes art special is that it took people going out and living and having experiences to create it. They have to actively absorb prior art, learn about it, analyze it, generally be influenced by it. You have to seek out paints, clay, musical instruments, etc., and at least somewhat learn how to use them. It's not about being difficult to do (that's certainly impressive, but not part of the emotional takeaway); everyone's process is different, and their experiences go into what they create. When I see a photograph of a tree, I think: "Someone went to where that tree is!" and that's part of the feeling and excitement of a really artful photo.
Now, someone who has only ever heard the term "free jazz" can sit in their parents' basement and type out "make me a free jazz song" and shit out the result onto the internet. It's really not the same thing at all.
Rest of the world: "No, we're gatekeeping because we think the result isn't good."
If someone can cajole their LLM to emit something worthwhile, e.g. Terence Tao's LLM generated proofs, people will be happy to acknowledge it. Most people are incapable of that and no number of protestations of gatekeeping can cover up the unoriginality and poor quality of their LLM results.
Big opinions there. A large amount of art that you think comes from individual expression is often not. There are countless examples of artists that secretly used algorithmic processes. A great example is Vermeer: https://youtu.be/94pCNUu6qFY?si=M6UQ-XuHNtoj2-3a.
This is what I mean about how this individualistic philosophy of creativity actually just results in artistic gatekeeping and manipulation of the audience
It’s very common for artists to add on individual expression narratives at the end of the process just so they can market the art, and the reality is that the individualism was never there to begin with. It’s just manipulation and advertising, and it sucks because the success of advertising like this actually undermines the quality of the art world. Because audiences are so susceptible to advertising narratives, artists are forced to spend more time on advertising more than art
> Art for the most part has always been the expression of an individual, even art tightly bound to a cultural context.
This is also not true. This idea mostly comes from the Romantic period. Modern day versions of it are often really just referencing a single book from the 1930s called The Principles of Art by a guy named R.G. Collingwood. It’s a very recent way of seeing art. Historically, art was connected to religion, and therefore thought to be valuable because it was universal rather than individualistic and personal
> Historically, art was connected to religion, and therefore thought to be valuable because it was universal rather than individualistic and personal
If that were actually the case, we wouldn't be able to identify the style of individual artists and artisans, and yet we can of course, regardless of their intent. Giotto's only intent may have been to glorify god in his work, but of course, inevitably, his work is also a reflection of who he was.
This is precisely why AI art is so hideous and anti-humanistic: it can never be a singular reflection of the individual.
I don’t necessarily subscribe to their views, but I bring it up because you said art has always been this way, and it hasn’t.
First off -- are you an artist? As in, are you making your argument with skin in the game for something you _need_ to do, not just a pastime that makes dayjobs livable?
Not gatekeeping! Trying to see if you are formulating your position as a creator or a consumer.
If the latter, hate to say it, but your opinion is kind of irrelevant. Ultimately, only artists really understand what's involved in creating real art. Not what's good or bad, but what's at stake and how to tell if somebody's for real.
If you're a creator I'm a little puzzled. Are you really worried that AI is so freaking great that the horrible luddites at bandcamp et al are going to "gatekeep" us away from incredible AI art? This is NOT something that keeps me up at night.
Non-boring people are using AI to make things that are ... not boring.
It's a tool.
Other things we wouldn't say because they're ridiculous at face value:
"Cars make you run over people." "Buzzsaws make you cut your fingers off." "Propane torches make you explode."
An exercise left to the reader: is a non-participant in Show HN less boring than a participant with a vibe-coded project?
No one in their right mind would use one.
Using the wrong tool for the job results in disaster.
It's like watching a guy bang rocks together to "vibe build" a house. Good luck.
AI for actual prose writing, no question. Don't let a single word an LLM generates land in your document; even if you like it, kill it.
"Gatekeeping" became a trendy term for a while, but in the post-LLM world people are recognizing that "gatekeeping" is not the same as "having a set of standards or rules by which a community abides".
If you have a nice community where anyone can come in and do whatever they want, you no longer have a community, you have a garbage dump. A gate to keep out the people who arrive with bags of garbage is not a bad thing.
"Gatekeeping" is NOT when you require someone to be willing to learn a skill in order to join a community of people with that skill.
And in fact, saying "you are too stupid to learn that on your own, use an AI instead" is kind of gatekeeping on its own, because it implicitly creates a shrinking elite who actually have the knowledge (that is fed to the AI so it can be regurgitated for everyone else), shutting out the majority who are stuck in the "LLM slum".
We have always relied on superficial cues to tell us about some deeper quality (good faith, willingness to comply with code of conduct, and so on). This is useful and is a necessary shortcut, as if we had to assess everyone and everything from first principles every time things would grind to a halt. Once a cue becomes unviable, the “gate” is not eliminated (except if briefly); the cue is just replaced with something else that is more difficult to circumvent.
I think that brief time after Internet enabled global communication and before LLMs devalued communication signals was pretty cool; now it seems like there’s more and more closed, private or paid communities.
"the author (pilot?) hasn't generally thought too much about the problem space, and so there isn't really much of a discussion to be had. The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective."
I feel like I've been around these parts for a while, and that is not my experience of what Show HN was originally about, though I'm sure there was always an undercurrent of status hierarchy and approval-seeking, like you suggest.
What is your hoped for outcome here man? To come off like enough of a jerk or obtuse enough that people just abandon the thread and you can declare victory?
I don't dispute the quality decline on Show HN or the need for some kind of intervention, but this particular argument about how AI interacts with "Show HN" is in fact introducing a new and significant element of gatekeeping to it.
Show HN is not in fact a craftspersons forum! Craft can be one of the things it's about, but it's not the only thing.
And we are going to need more curation so goddamned badly...
For what it's worth, the unifying idea behind both is basically a "hazing ritual", or more neutrally phrased, skin in the game. It takes time and energy to look at things people produce. You should spend time and energy making sure I'm not looking at a pile of shit. Doesn't matter if it's a website or prose.
Obviously some people don't. And that's why the signal to noise ratio is becoming shit very quickly.
Wouldn't the masses of Show HN posts that have gotten no interest pre-AI refute that?
The single one I can think of is someone who (I quote) "accidentally created the fastest CSV parser ever using SIMD". This person had no interest in researching prior art themselves, and thus incorrectly claimed credit for "coming up" with this approach - and they didn't even do that.
It's not only the prose that's the problem if submitters are determined not to think.
> AI for actual prose writing, no question. Don't let a single word an LLM generates land in your document; even if you like it, kill it.
Why? What's the difference? I'm genuinely curious about your perspective on this. Lots of people can't articulate themselves well, especially if they don't natively speak the language they're writing in. I have my problems with LLM generated text, but you seem to be taking an extreme approach here.
I would argue good ideas are not so easy to find. It is harder than it seems to fit the market, and that is why most of apps fail. At the end of the day, everyone is blinded by hubris and ignorance... I do include myself in that.
The LLM helps me gather/scaffold my thoughts, but then I express them in my own voice
It's a fantastic editor!
What is that? Do you think PhDs have some special way of talking about things?
EDIT: also, just like creating AGENT.md files to help AI write code your way for your projects, etc. If you're going to be doing much writing, you should have your own prompt that can help with your voice and style. Don't be lazy, just because you're leaning on LLMs.
Maybe it will make them output better text, but it doesn’t make them better writers. That’d be like saying (to borrow the analogy from the post) that using an excavator makes you better at lifting weights. It doesn’t. You don’t improve, you don’t get better, it’s only the produced artefact which becomes superficially different.
> If you're going to be doing much writing, you should have your own prompt that can help with your voice and style.
The point of the article is the thinking. Style is something completely orthogonal. It’s irrelevant to the discussion.
AI is almost the exact opposite. It's verbose fluff that's only superficially structured well. It's worse than average
(waiting for someone to reply that I can tell the AI to be concise and meaningful)
"You're describing the default output, and you're right — it's bad. But that's like judging a programming language by its tutorial examples.
The actual skill is in the prompting, editing, and knowing when to throw the output away entirely. I use LLMs daily for technical writing and the first draft is almost never the final product. It's a starting point I can reshape faster than staring at a blank page.
The real problem isn't that AI can't produce concise, precise writing — it's that most people accept the first completion and hit send. That's a user problem, not a tool problem."
It's entirely involuntary; I am just unable to care. It's almost always justified because the text in question is always painfully bloated and repetitive.
The LLM-text you posted could have been (given I didn't read it carefully):
"Skill issue. Iterate on the output, never accept what you receive on the first pass"
Instead we get the standard:
- Agree with the user
- Lackluster simile
- Actual content
- Not X, Y. X, not Y.
Me too. I don't know about the eye movements, but there are probably dozens of us being unable to focus on a LLM-text: https://news.ycombinator.com/item?id=46630931
and that's because people have a weird sort of stylistic cargo-culting that they use to evaluate their writing, rather than asking "does this communicate my ideas efficiently?"
for example, young grad students will always write the most opaque and complicated science papers. from their novice perspective, EVERY paper they read is a little opaque and complicated so they try to emulate that in their writing.
office workers do the same thing. every email from corporate is bland and boring and uses far too many words to say nothing. you want your style to match theirs, so you dump it into an AI machine and you're thrilled that your writing has become just as vapid and verbose as your CEO.
LLMs and agents work the same way. They’re power tools. Skill and judgment determine whether you build more, or lose fingers faster.
If they don't care enough to improve themselves at the task in the first place then why would they improve at all? Osmosis?
If this worked, then letting a world-renowned author write all my letters for me would make me a better writer. Right?
Who cares if you're a "good writer"? Being "easy to understand" is the real achievement.
No one finds AI-assisted prose/code/ideas boring, per se. They find bad prose/code/ideas boring. "AI makes you boring" is this generation's version of complaining about typing or cellular phones. AI is just a tool; it's up to humans how to use it.
There was also a comment [1] here recently that "I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!"
Both of them reminded me of Picasso saying in 1968 that "Computers are useless. They can only give you answers."
Of course computers are useful. But he meant that they are useless for a creative. That's still true.
I review all the code Claude writes and I don't accept it unless I'm happy with it. My coworkers review it too, so there is real social pressure to make sure it doesn't suck. I still make all the important decisions (IO, consistency, style) - the difference is I can try it out 5 different ways and pick whichever one I like best, rather than spending hours on my first thought, realizing I should have done it differently once I can see the finished product, but shipping it anyways because the tickets must flow.
The vibe coding stuff still seems pretty niche to me though - AI is still too dumb to vibe code anything that has consequences, unless you can cheat with a massive externally defined test suite, or an oracle you know is correct
That has always been a problem in software shops. Now it might be even more frequent because of LLMs' ubiquity.
Maybe that's how it should be, maybe not. I don't really know. I was once told by people in the video game industry that games were usually buggy because they were short-lived. Not sure if I truly buy that, but if anything vibe coded becomes throwaway, I wouldn't be surprised.
Now all bad writing will look like something generated by an LLM, grammatically correct (hopefully!) but very generic, lacking all punch and personality.
The silver lining is that good authors could also use LLMs to hide their identity while making controversial opinions. In an internet that's increasingly deanonymized, a potentially new privacy enhancing technique for public discourse is a welcome addition.
I have a report that I made with AI on how customers leave our firm… The first pass looked great but was basically nonsense. After eight hours of iteration, the resulting report is better than I could’ve made on my own, by a lot. But it got there because I brought a lot of emotional energy to the AI party.
As workers, we need to develop instincts for “plausible but incomplete” and as managers we need to find filters that get rid of the low-effort crap.
> They are original to me, and that feels like an insightful moment, and thats about it.
The insight is that good ideas (whether wholly original or otherwise) are the result of many of these insightful moments over time, and when you bypass those insightful moments and the struggle of "recreating" old ideas, you're losing out on that process.
Boring thoughts always existed, but they generally stayed in your home or community. Then Facebook came along, and we were able to share them worldwide. And now AI makes it possible to quickly make and share your boring tools.
Real creativity is out there, and plenty of people are doing incredibly creative things with AI. But AI is not making people boring—that was a preexisting condition.
In an industry that does not crave bells and whistles, having the ability to refactor, or to bring old systems back up to speed, can make a whole lot of difference for an understaffed, underpaid, unamused, and otherwise cynical workforce, and I am all for it.
Now, these days, it’s basically enough to use agent programming to handle all the boring parts and deliver a finished project to the public.
LLMs have essentially broken the natural selection of pet projects and allow even bad or not very interesting ideas to survive, ideas that would never have been shown to anyone under the pre-agent development cycle.
So it’s not that LLMs make programming boring; they’ve allowed boring projects to survive. They’ve also boosted the production of non-boring ones, but those are just rarer in the overall volume of products.
I was literally just working on a directory of the most common tropes/tics/structures that LLMs use in their writing and thought it would be relevant to post here: https://tropes.fyi/
Very much inspired by Wikipedia's own efforts to curb AI contributions: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Lmk if you find it useful, will likely ShowHN it once polished.
This behavior has already been happening with Pangram Labs, which supposedly does have good AI detection.
The WIP features measure breadth and density of these tropes, and each trope has frequency thresholds. Also I don't use AI to identify AI writing to avoid accusatory hallucinations.
I do appreciate the feedback though and will take it into consideration.
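The breadth/density idea can be sketched with plain pattern matching, no model in the loop. The trope names, regexes, and thresholds below are hypothetical stand-ins for illustration, not the site's actual rules:

```python
import re

# Hypothetical trope rules: (pattern, frequency threshold).
# The real trope list and thresholds will differ; this is only a sketch.
TROPES = {
    "not_x_but_y": (re.compile(r"\bnot (just |only )?\w+[,;]? but\b", re.I), 2),
    "delve":       (re.compile(r"\bdelve\b", re.I), 1),
    "em_dash":     (re.compile(r"\u2014"), 3),
}

def flagged_tropes(text: str) -> dict[str, int]:
    """Return each trope whose occurrence count meets its threshold."""
    hits = {}
    for name, (pattern, threshold) in TROPES.items():
        count = len(pattern.findall(text))
        if count >= threshold:
            hits[name] = count
    return hits

sample = "It's not magic, but engineering; not hype, but craft. Let's delve in."
print(flagged_tropes(sample))  # flags "not_x_but_y" and "delve", not "em_dash"
```

Breadth would then be how many distinct tropes fire, and density the total hits normalized by word count.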
How then is it different from the Wikipedia page you linked?
As much as I'd like to know whether a text was written by a human or not, I'm saddened by the fact that some of these writing patterns have been poisoned by these tools. I enjoy, use, and find many of them to be an elegant way to get a point across. And I refuse to give up the em dash! So if that flags any of my writing—so be it.
Believe me I've had to adjust my writing a lot to avoid these tells, even academics I know are second guessing everything they've ever been taught. It's quite sad but I think it will result in a more personable internet as people try to distinguish themselves from the bots.
I applaud your optimism, but I think the internet is a lost cause. Humans who value communicating with other humans will need to retreat into niche communities with zero tolerance for bots. Filtering out bot content will likely continue to be impossible, but we'll eventually settle on a good way to determine if someone is human. I just hope we won't have to give up our privacy and anonymity for it.
It's not just AI generated -- it's human slop. And here's the kicker: people believe it works.
The reality is simple. People try to sound smart so they can sell something. They don't know how to write. Don't care. Don't have time. They use the tool at hand. They copy-paste.
The result? Devastating.
(this comment guaranteed written by a human)
I use AI, mainly Perplexity AI, as a replacement for search engines because they all suck right now.
AI makes homelab more fun and therefore I learn more. Homelab is my main hobby; using AI to ASSIST me with one thing or another always ends up mentioning something else that I'd never heard of before.
I wish I could clone myself due to the amount of topics and projects I have noted down.
AI is like money: money doesn't make you a bad or good person, it only enhances what you already are. AI doesn't automatically make things boring; the way you use it is what makes things boring, or more exciting, with you jumping from forum to forum, new topic to new topic.
But you could learn these new perspectives from AI too. It already has all the thoughts and perspectives from all humans ever written down.
At work, I still find people who try to put together a solution to a problem, without ever asking the AI if it's a good idea. One prompt could show them all the errors they're making and why they should choose something else. For some reason they don't think to ask this godlike brain for advice.
They’re solving small problems or problems that don’t really exist, usually in naive ways. The things being shown are ‘shallow’. And it’s patently obvious that the people behind them will likely not support them in any meaningful way as time goes on.
The rise of Vibe Coding is definitely a ‘cause’ of this, but there’s also a social thing going on - the ‘bar’ for what a Show HN ‘is’ is lower, even if they’re mostly still meeting the letter of the guidelines.
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (422 comments)
Users appear to be happy but it's early. And while we do scrub the writing of typical AI writing patterns there's no denying that it all probably sounds somewhat similar, even as we apply a unique style guide for each user.
I think this may be ok if the piece is actually insightful. Fingers crossed.
We have an "influencer" recommendation engine that is decent at finding such thoughtful folks, but it varies by industry. But yeah, it's not easy and I wish there were more thoughtful posters on LinkedIn.
Here's a guy who has had an online business dependent on ranking well in organic searches for ~20 years and has 2.5 million subs on YouTube.
Traffic to his site was fine to sustain his business this whole time up until about 2-3 years where AI took over search results and stopped ranking his site.
He used Google's AI to rewrite a bunch of his articles to make them more friendly towards what ranks nowadays and he went from being ghosted to being back on the top of the first page of results.
He told his story here https://www.youtube.com/watch?v=II2QF9JwtLc.
NOTE: I'd never seen him in my YouTube feed until the other day, but his story resonated a lot with me because I've had a technical blog for 11 years and was able to sustain an online business for a decade until the last 2 years or so. Traffic to my site nosedived, which took a very satisfying lifestyle business to almost $0. I haven't gone down the path of rewriting all of my posts with AI to remove my personality yet.
Search engines want you to remove your personal take on things and write in a very machine oriented / keyword stuffed way.
An app can be like a home-cooked meal: made by an amateur for a small group of people.[0] There is nothing boring about knocking together hyperlocal software to solve a super niche problem. I love Maggie Appleton's idea of barefoot developers building situated software with the help of AI.[1, 2] This could cause a cambrian explosion of interesting software. It's also an iteration of Steve Jobs' computer as a bicycle for the mind. AI-assisted development makes the bicycle a lot easier to operate.
[0] https://www.robinsloan.com/notes/home-cooked-app/
[1] https://maggieappleton.com/home-cooked-software
[2] https://gwern.net/doc/technology/2004-03-30-shirky-situateds...
When I write with AI assistance, I spend MORE time editing, questioning, and restructuring — not less. The AI gives me a faster first draft, but I'm pickier about the result because the baseline is higher. The thinking doesn't disappear; it shifts from "how do I phrase this" to "is this actually what I mean."
The real risk isn't that AI makes you boring — it's that lazy usage of AI makes lazy people more visibly lazy. The same person who would have written a generic email before AI now writes a generic AI email. The tool didn't change the person.
What I've noticed in practice: the people who produce the most interesting AI-assisted work are the ones who were already interesting thinkers. They use AI as a sparring partner, not a ghostwriter. They argue with it, redirect it, and use its output as raw material.
The boring output people complain about is a prompting problem, not an AI problem.
Imagine how dynamic the world was before radio, before tv, before movies, before the internet, before AI? I mean imagine a small town theater, musician, comedian, or anything else before we had all homogenized to mass culture? It's hard to know what it was like but I think it's what makes the great appeal of things like Burning Man or other contexts that encourage you to tune out the background and be in the moment.
Maybe the world wasn't so dynamic and maybe the gaps were filled by other cultural memes like religion. But I don't know that we'll ever really know what we've lost either.
How do we avoid group think in the AI age? The same way as in every other age. By making room for people to think and act different.
Your preference is no more substantial than people saying "I would never read a book on a screen! It's so much more interesting on paper"
There's nothing wrong with having pretentious standards, but don't confuse your personal aversion with some kind of moral or intellectual high ground.
But what I'm replying to, and the vast majority of the AI denial I see, is rooted in a superficial, defensive, almost aesthetic knee jerk rejection of unimportant aspects of human taste and preference.
At least this CEO gets it. Hopefully more will start to follow.
Despite the title I'm a little more optimistic about agentic coding overall (but only a little).
All projects require some combination of "big thinking" and tedious busywork. Too much busywork is bad, but reducing it to 0 doesn't necessarily help. I think AI can often reduce the tedious busywork part, but that's only net positive if there was an excess of it to begin with - so its value depends on the project / problem domain / etc.
[0]: https://www.da.vidbuchanan.co.uk/blog/boring-ai-problems.htm...
To take coding, to the extent that hand coding leads to creative thoughts, it is possible that some of those thoughts will be lost if I delegate this to agents. But it's also very possible that I now have the opportunity to think creatively on other aspects of my work.
We have to make strategic decisions on where we want our attention to linger, because those are the places where we likely experience inspiration. I do think this article is valuable in that we have to be conscious of this first before we can take agency.
This echoes the comments here about enjoying not writing boilerplate. The catch is that our minds are programmed to offload work when we can, and redirecting all the saved boilerplate time into going even deeper on the parts of the problem that benefit from original hard thinking is rare. It is much easier to get sucked into creating more boilerplate, and all the gamification of Claude Code and the incentives of service providers increase this.
I agree, but the very act of writing out your intention/problem/goal/whatever can crystallize your thinking. Obviously if you are relying on the output spat out by the LLM, you're gonna have a bad time. But IMO one of the great things about these tools is that, at their best, they can facilitate helpful "rubber duck" sessions that can indeed get you further on a problem by getting stuff out of your own head.
I largely agree that if someone put less work in making a thing, than it takes you to use it, it's probably not going to be useful. But I disagree with the premise that using LLMs will make you boring.
Consider the absurd version of the argument. Say you want to learn something you don't know: would using Google Search make you more boring? At some level, LLMs are like a curated Google Search. In fact if you use Deep Research et al, you can consume information that's more out of distribution than what you _would_ have consumed had you done only Google Searches.
Agree: if you use AI as a replacement for thinking, your output converges to the mean. Everything sounds the same because it's all drawn from the same distribution.
Disagree: if you use AI as a draft generator and then aggressively edit with your own voice and opinions, the output is better than what most people produce manually — because you're spending your cognitive budget on the high-value parts (ideas, structure, voice) instead of the low-value parts (typing, grammar, formatting).
The tool isn't the problem. Using it as a crutch instead of a scaffold is the problem.
I don't follow. If you have the ideas and a structure to give to an AI you already have a working draft. Just start revising that. What would an AI add other than turn into the replacement for thinking described in your negative example?
Online ecosystem decay is on the horizon.
AI is a bicycle, not a motorcycle.
That process is often long enough to think things through a bit and even have "so what are you working on?" conversations with a friend or colleague that shakes out the mediocre or bad, and either refines things or makes you toss the idea.
/me whispers They don't know they're boring
I've had to unfollow multiple people I've followed for donkey's years since they pivoted to full-time AI* (LLM) posting, as they just don't know what's interesting and what isn't any more.
A link to something you said you've vibe coded is now at about the level of a link to your dream journal, or Suno, in terms of clickability.
My take:
1. AI workflows are faster - saving people time
2. Faster workflows involve people using their brain less
3. Some people use their time savings to use their brain more, some don't
4. People who don't use their brain are boring
The end effect here is that people who use AI as a tool to help them think more will end up being more interesting, but those who use AI as a tool to help them think less will end up being more boring.
In my workflow the difference is whether I force a second pass where I delete half of the model output and rewrite the core argument in my own words. If I skip that pass, everything converges to the same polished-but-generic tone.
So for me AI is best as a draft generator and counterexample machine, not the final author.
It also seems like a natural result of all of the people challenging the usefulness of AI. It motivates people to show what they have done with it.
It stands to reason that the things that take less effort will arrive sooner, and be more numerous.
Much of that boringness, is people adjusting to what should be considered interesting to others. With a site like this, where user voting is supposed to facilitate visibility, I'm not even certain that the submitter should judge the worth of the submission to others. As long as they sincerely believe that what they have done might be of interest, it is perhaps sufficient. If people do not like it then it will be seen by few.
There is an increase in things demanding attention, and you could make the case that this dilutes the visibility such that better things do not get seen, but I think that is a problem of too many voices wishing to be heard. Judging them on their merits seems fairer than placing pressure on the ability to express. This exists across the internet in many forms. People want to be heard, but we can't listen to everyone. Discoverability is still the unsolved problem of the mass media age. Sites like HN and Reddit seem to be the least-worst solution so far. Much like Democracy vs Benevolent Dictatorship an incredibly diligent curator can provide a better experience, but at the cost of placing control somewhere and hoping for the best.
Ironically, good engineering is boring. In this context, I would hazard that interesting means risky.
This is reductive to the point of being incorrect. One of the misconceptions of working with agents is that the prompts are typically simple: it's more romantic to think that someone gave Claude Code "Create a fun Pokemon clone in the web browser, make no mistakes" and then just ship the one-shot output.
As some counterexamples, here are two sets of prompts I used for my projects which very much articulate an idea in the first prompt with very intentional constraints/specs, and then iterating on those results:
https://github.com/minimaxir/miditui/blob/main/agent_notes/P... (41 prompts)
https://github.com/minimaxir/ballin/blob/main/PROMPTS.md (14 prompts)
It's the iteration that is the true engineering work, as it requires enough knowledge to a) know what's wrong and b) know if the solution actually fixes it. Those projects are what I call super-Pareto: the first prompt got 95% of the work done... but 95% of the effort was spent afterwards improving it, with manual human testing being the bulk of that work rather than watching the agent-generated code.
If you want to build something beautiful, nothing is stopping you, except your own cynicism.
"AI doesn't build anything original". Then why aren't you proving everyone wrong? Go out there and have it build whatever you want.
AI has not yet rejected any of my prompts by saying I was being too creative. In fact, because I'm spending way less time on mundane tasks, I can focus way more time on creativity, performance, security, and the areas that I am embarrassed to have overlooked on previous projects.
That is for sure the word of the year, true or not. I agree with it, I think
it's at 10 now. note: the article does not say "taste" once
Then prove it. Otherwise, you're just assuming AI use must be good, and making up things to confirm your bias.
derivative work might be useful, but it's not interesting.
However, I've spent years sometimes thinking through interesting software architectures and technical approaches and designs for various things, including window managers, editors, game engines, programming languages, and so on, reading relevant books and guides and technical manuals, sketching out architecture diagrams in my notebooks and writing long handwritten design documents in markdown files or in messages to friends. I've even, in some cases, gotten as far as 10,000 lines or so of code sketching out some of the architectural approaches or things I want to try to get a better feel for the problem and the underlying technologies. But I've never had the energy to do the raw code shoveling and debug looping necessary to get out a prototype of my ideas — AI now makes that possible.
Once that prototype is out, I can look at it, inspect it from all angles, tweak it and understand the pros and cons, the limitations and blind spots of my idea, and iterate again. Also, through pair programming with the AI, I can learn about the technologies I'm using through demonstration and see what their limitations and affordances are by seeing what things are easy and concise for the AI to implement and what requires brute forcing it with hacks and huge reams of code and what's performant and what isn't, what leads to confusing architectures and what leads to clean architectures, and all of those things.
I'm still spending my time reading things like Game Engine Architecture, Computer Systems, A Philosophy of Software Design, Designing Data-Intensive Applications, Thinking in Systems, and Data-Oriented Design, plus articles on CSP, fibers, compilers, type systems, and ECS, writing down notes and ideas.
So really it seems more to me like boring people who aren't really deeply interested in a subject use AI to do all of the design and ideation for them. And so, of course, it ends up boring, and you're just seeing more of it because it lowered the barrier to entry. I think if you're an interesting person with strong opinions about what you want to build and how you want to build it, who is actually interested in exploring the literature with or without AI help and then pair programming with it in order to explore the problem space, it still ends up interesting.
Most of my recent AI projects have just been small tools for my own usage, but that's because I was kicking the tires. I have some bigger things planned, executing on ideas I have pages and pages about in my notebooks, dozens of them.
And that’s when it dawned on me just how much of AI hype has been around boring, seen-many-times-before, technologies.
This, for me, has been the biggest real problem with AI. It’s become so easy to churn out run-of-the-mill software that I just cannot filter any signal from all the noise of generic side-projects that clearly won’t be around in 6 months time.
Our attention is finite. Yet everyone seems to think their dull project is uniquely more interesting than the next person's dull project, even though those authors spent next to zero effort themselves in creating it.
It’s so dumb.
Otherwise, AI definitely impacts learning and thinking. See Anthropic's own paper: https://www.anthropic.com/research/AI-assistance-coding-skil...
This is repeated all the time now, but it's not true. It's not particularly difficult to pose a question to an LLM and to get it to genuinely evaluate the pros and cons of your ideas. I've used an LLM to convince myself that an idea I had was not very good.
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Thinking about a problem for a long period of time doesn't bring you any closer to understanding the solution. Expertise is highly overrated. The Wright Brothers didn't have physics degrees. They did not even graduate from high school, let alone attend college. Their process for developing the first airplanes was much closer to vibe coding from a shallow surface-level understanding than from deeply contemplating the problem.
That's the same approach as vibe coding. Not "asking Claude to make a CRUD app.", but using it to cheaply explore solution spaces that an expert's priors would tell you aren't worth trying. The wind tunnel didn't do the thinking for the Wrights, it just made thinking and iterating cheap. That's what LLMs do for code.
The blog post's argument is that deep immersion is what produces original ideas. But what history shows is that deeply immersed experts are often totally wrong and the outsiders who iterate cheaply and empirically take the prize. The irony here is that LLM haters feel it falls victim to the Einstellung effect [1]. But the exact opposite is true: LLMs make it so cheap to iterate on what we thought in the past were suboptimal/broken solutions, which makes it possible to cheaply discover the more efficient and simpler methods, which means humans uniquely fall victim to the Einstellung effect whereas LLMs don't.
The blog's actual point isn't some deference-to-credentials straw man you've invented; it's that stuff lazily hashed together that's got to "good enough" without effort is seldom as interesting as people's passion projects. And the Wright brothers' application of their hardware assembly skills and the scientific method to theory they'd gone to great lengths to have sent to Dayton, Ohio is pretty much the antithesis of getting to "good enough" without effort. Probably nobody around the turn of the century devoted more thinking and doing time to powered flight.
Using AI isn't necessary or sufficient for getting to "good enough" without much effort (and it's of course possible to expend lots of effort with AI), but it does act as a multiplier for creating passable stuff with little thought (to an even greater extent than templates and frameworks and stock photos). And sure, a founding tenet of online marketing (and YC) from long before Claude is that many small experiments to see if $market has any takers might be worth doing before investing thinking time in understanding and iterating, and some people have made millions from it, but that doesn't mean experiments stitched together in a weekend mostly from other people's parts aren't boring to look at or that their hit rate won't be low....
I would say this is a fine time to haul out:
Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon.
Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.
Lemma: contemporary implementations have already improved; they're just unevenly distributed.
I can never stop thinking these days about the XKCD whose punchline is the alarmingly brief window between "can do at all" and "can do with superhuman capacity."
I'm fully aware of the numerous dimensions along which the advancement from one state to the other, in any specific domain, is unpredictable, Hard, or less likely to be quick... but this is the rare case where, absent black swan externalities ending the game, line goes up.
They're token predictors. This is inherently a limited technology, which is optimized for making people feel good about interacting with it.
There may be future AI technologies which are not just token predictors, and will have different capabilities. Or maybe there won't be. But when we talk about AI these days, we're talking about a technology with a skill ceiling.
I think when people use AI to, e.g., compare Docker to k8s without ever having used k8s, that's how you get horrible articles that sound great but, to anyone with experience with both, are complete nonsense.
I've been ramping my use since the start of this month, and have already made serious progress on a number of projects, many of which have been on ice for years, and have also built out a stable of supporting tools. And I've found it generally exhilarating; making good progress where there was none before is pretty heady.
One of the more memorable experiences I had was a problem that I had dealt with for years and thought hard about, and in the end Claude resolved it cleanly with about 1/3 of a screen of code (I would share the link, but there's no "share" in the mobile app):
=============== I often use [snoop](https://github.com/alexmojaki/snoop) because I like the features and output formatting, but it doesn't support async. I have found [ASnooper](https://github.com/LL-Ling/ASnooper), but it only supports async. I'd like to add ASnooper's async feature to snoop.
I'm thinking of creating a project that imports both, and patches the relevant parts of snoop with the async functionality from ASnooper. I have some notes from looking into the snoop code:
- A `ConfiguredTracer` object is exposed in `builtins` (configuration.py, line 142).
- Whenever the `ConfiguredTracer` object is called with an async function, it raises (tracer.py, line 166).
- The trace event is passed to a formatter, and then a writer (tracer.py, lines 280-281).

... and in the ASnooper code:
- An `ASnooper` object is exposed (__init__.py).
- Whenever the `ASnooper` object is called on an async function, it creates formatter and writer objects (core.py, lines 69-75).

The approach I'm considering is to wrap `ConfiguredTracer.__call__` with a method that delegates async function tracing to `ASnooper`, and replace the `OutputFormatter` and `OutputWriter` classes with versions that delegate to `snoop`'s implementation. I've prepared a repo for the solution:
"""
.
./README.md
./pyproject.toml
./tests
./tests/__init__.py
./src
./src/async_patched_snoop
./src/async_patched_snoop/__init__.py
"""
I've also extracted excerpts from both projects, as unified diff contexts (attached). Any issues, questions or suggestions? Or a better approach that doesn't involve forking the targeted projects?
===============
I prompted Claude with this along with a file containing the diffs (constructed with the help of another prompted tool). Claude informed me of the issues in my approach and suggested a couple of alternate solutions, I selected the one I preferred, Claude asked for some more context and then did the implementation, and now I have async support in snoop. It probably took me over an hour to construct that initial prompt, and under 5 minutes of conversation to reach the completed solution. I really don't think that that's boring.
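The delegation approach from the prompt above can be sketched generically. The class names and internals here are made-up stand-ins, not snoop's or ASnooper's real APIs; the sketch only illustrates the pattern of wrapping a tracer's `__call__` so that async functions are routed to a second tracer:

```python
import asyncio
import inspect

# Hypothetical stand-in for a sync-only tracer like snoop's ConfiguredTracer.
class SyncTracer:
    def __call__(self, func):
        if inspect.iscoroutinefunction(func):
            raise TypeError("async functions are not supported")
        def wrapper(*args, **kwargs):
            print(f"sync trace: {func.__name__}")
            return func(*args, **kwargs)
        return wrapper

# Hypothetical stand-in for an async-capable tracer like ASnooper.
class AsyncTracer:
    def __call__(self, func):
        async def wrapper(*args, **kwargs):
            print(f"async trace: {func.__name__}")
            return await func(*args, **kwargs)
        return wrapper

def patch_async_support(sync_tracer_cls, async_tracer):
    """Wrap sync_tracer_cls.__call__ so async functions are delegated."""
    original_call = sync_tracer_cls.__call__
    def patched_call(self, func):
        if inspect.iscoroutinefunction(func):
            return async_tracer(func)  # delegate async tracing
        return original_call(self, func)  # keep sync behaviour intact
    sync_tracer_cls.__call__ = patched_call

patch_async_support(SyncTracer, AsyncTracer())
snoop = SyncTracer()

@snoop
async def fetch():
    return 42

print(asyncio.run(fetch()))
```

Since `__call__` is looked up on the class, patching it on the class affects every existing and future instance, which is what makes this kind of monkey-patching work without touching the library's source.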
Whether it is the guy reporting on his last year of agentic coding (did half-baked evals of 25 models that will be off the market in 2 years) or Steve Yegge smoking weed and gaslighting us with "Gas Town" or the self-appointed Marxist who rails against exploitation without clearly understanding what role Capitalism plays in all this, 99% of the "hot takes" you see about AI are by people who don't know anything valuable at all.
You could sit down with your agent and enjoy having a coding buddy, or you could spend all day absorbed in FOMO reading long breathless posts by people who know about as much as you do.
If you're going to accomplish something with AI assistants it's going to be on the strength of your product vision, domain expertise, knowledge of computing platforms, what good code looks like, what a good user experience feels like, insight into marketing, etc.
Bloggers are going to try to convince you there is some secret language to write your prompts in, or some model which is so much better than what you're using. This is all seductive because it obscures the fact that the "AI skills" will be obsolete in 15 minutes, while all of those other unique skills and attributes that make you you are the ones that AI can put on wheels.
Being 'anti AI' is just hot right now and lots of people are jumping on the bandwagon.
I'm sure some of them will actually hold out. Just like those people still buying vinyl because Spotify is 'not art' or whatever.
Have fun all, meanwhile I built 2 apps this weekend purely for myself. Would've taken me weeks a few years ago.
As someone who is fairly boring, conversing with AI models and thinking things through with them certainly decreased my blandness and made me tackle more interesting thoughts or projects. To have such a conversation partner at hand in the first place is already amazing - isn't it always said that you should surround yourself with people smarter than yourself to rise in ambition?
I actually have high hopes for AI. A good one, properly aligned, can definitely help with self-actualization and expression. Cynics will say that AI will all be tuned to keep us trapped in the slop zone, but when even mainstream labs like Anthropic speak a lot about AI for the betterment of humanity, I am still hopeful. (If you are a cynic who simply doesn't believe such statements by the firms, there's not much to say to convince you anyway.)
As determined by whom?
> conversing with AI models and thinking things through with them certainly decreased my blandness
Again, determined by whom?
I’m being genuine. Are those self-assessments? Because those specific judgements are something for other people to make.
Definitely, beyond a certain threshold it is for others to decide what is boring and what is not; I agree with that.
In any case, my simple point is that AI can definitely raise the floor, as the other comment more succinctly expressed. Irrelevant for people at the top, but good for the rest of us.
Yes, to an extent. You can, for example, evaluate if you’re sensitive or courageous or hard working. But some things do not concern only you, they necessitate another person, such as being interesting or friendly or generous.
A good heuristic might be “what could I not say about myself if I were the only living being on Earth?”. You can still be sensitive or hard working if you’re alone, but you can’t be friendly because there’s no one else to be friendly to.
Technically you could bore yourself, but in practice that’s something you do to other people. Furthermore, it is highly subjective, a D&D dungeon master will be unbearably boring to some, and infinitely interesting to others.
> I know I am unfortunately less ambitious, driven and outgoing than others
I disagree those automatically make someone boring.
I also disagree with LLMs improving your situation. For someone to find you interesting, they have to know what makes you tick. If what you have to share is limited by what everyone else can get (by querying an LLM), that is boring.
> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.
The author comes off as dismissive of the potential benefits of the interactions between users and LLMs rather than open-minded. This is a degree of myopia which causes me to retroactively question the rest of his conclusions.
There's an argument to be made that rubber-ducking, just having a mirror to help you navigate your thoughts, is ultimately more productive and provides more useful thinking than operating in a vacuum. LLMs are particularly good at telling you when your own ideas are unoriginal, because they are good at doing research (and also have the median of ideas already baked into their weights).
The author also strawmans usage of LLMs:
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Who says you aren't spending time thinking about a problem with LLMs? The same users that don't spend time thinking about problems before LLMs will not spend time thinking about problems after LLMs, and the inverse is similarly true.
I think everybody is bad at original thinking, because most thinking is not original. And that's something LLMs actually help with.