You're telling me I need to use 100% of my brain, reasoning power, and time to go over your code, but you didn't feel the need to hold yourself to the same standard?
It’s strange how AI style is so easy to spot. If LLMs just follow the style that they encountered most frequently during training, wouldn’t that mean that their style would be especially hard to spot?
First, LLM style didn't previously exist as such; it's a mix of several different styles, word choices, and phrases.
Second, the LLM has turned a slight plurality into 100% exclusivity.
Say there are 20 different ways to say the same thing. They're more or less evenly distributed, with one slightly more common than the rest. The LLM always picks that most common one. This means that
   situation before : 20 options,  5% frequency each
   situation now    :  1 option, 100% frequency
I think these two theories explain how an LLM can both sound bad and "be the most common style, how humans have always talked" (it isn't).
Also, if the second theory is true, that is, LLM style is not very frequent among humans, that means that if you see someone on the internet that talks like an LLM, he probably is one.
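Here's a toy sketch of that collapse in Go (all the numbers are made up for illustration, and "greedy decoding" is just a stand-in for whatever makes the model fixate on the modal phrasing): humans sample from the distribution of phrasings, while always picking the mode turns a ~6% plurality into 100%.

  package main

  import (
      "fmt"
      "math/rand"
  )

  // sample draws an index proportionally to the weights (the "humans" case).
  func sample(w []float64) int {
      total := 0.0
      for _, x := range w {
          total += x
      }
      r := rand.Float64() * total
      for i, x := range w {
          r -= x
          if r <= 0 {
              return i
          }
      }
      return len(w) - 1
  }

  // argmax always returns the single most likely index (the "greedy model" case).
  func argmax(w []float64) int {
      best := 0
      for i, x := range w {
          if x > w[best] {
              best = i
          }
      }
      return best
  }

  func main() {
      // 20 interchangeable phrasings; option 0 is only slightly more common.
      weights := make([]float64, 20)
      for i := range weights {
          weights[i] = 1.0
      }
      weights[0] = 1.2

      const n = 100000
      human, model := 0, 0
      for i := 0; i < n; i++ {
          if sample(weights) == 0 {
              human++
          }
          if argmax(weights) == 0 {
              model++
          }
      }
      fmt.Printf("humans pick phrasing 0:        %4.1f%%\n", 100*float64(human)/n)
      fmt.Printf("greedy model picks phrasing 0: %4.1f%%\n", 100*float64(model)/n)
  }

Run it and the first number hovers around 6% while the second sits at exactly 100%; a slight plurality in the data becomes an exclusive tic in the output.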
Or someone made a call that emoji-infested text is "friendlier" and tuned the model to be "friendlier."
(I once got that feedback from someone in management when writing a proposal...)
Later the cutest of the emojis made their way into templates used by bots and tools, and it exploded like colorful vomit confetti all over the internets.
When I see this emojiful text, my first association is not with an LLM, but with a lumberjack-bearded hipster wearing thick-framed fake glasses and tight garish clothes, rolling on a segway or an equivalent machine while sipping a soy latte.
Jk, your comments don't seem like AI to me at all. I don't see how that could even be suggested.
Glasses: check (I'm old)
Garish clothes: check
Segway: nope
So there's a 75% chance I am a Millennial hipster. Soy latte: sounds kinda nice.
It's not because they can't write PRs indistinguishable from humans, or can't write code without Emojis. It's because they don't want to freak out the general public so they have essentially poisoned the models to stave off regulation a little bit longer.
That's a lot of expensive work they're doing, and ignoring, if they're just later poisoning the models!
I'm like "Sure buddy, sure. And the nanobots are in all vaccines, right?"
But maybe it originated somewhere else... in JavaScript libraries?
Why do you think that? I try to stay involved in accessibility community (if that's what you mean by inclusive?) and I've not heard anyone advocate for emojis over text?
I say "meme" because I believe this is how the information spreads — I think people in that particular clique suggest it to each other and it becomes a form of in-group signalling rather than an earnest attempt to improve the accessibility of information.
I'm wary now of straying into argumentum ad ignorantiam territory, but I think my observation is consistent with yours insofar as the "inclusivity" community I'm referring to doesn't have much overlap with the accessibility community; the latter being more an applied science project, and the former being more about humanities and social theory.
But I agree excessive emojis, tables of things, and just being overly verbose are tells for me these days.
The "average" style, from the Unix manpages of the early 1970s through the Linux Documentation Project all the way to the latest super-hip JavaScript isEven emoji-vomit README, must still have been relatively tame, I assume.
God Bless.
They didn't learn how to write PRs. They "learned" how to write text.
Just like generic images coming out of OpenAI have the same style and yellow tint, so does text. It averages down to a basic tiktok/threads/whatever comment.
Plus whatever bias training sets and methodology introduced
So it's entirely possible that training in one area (eg: Reddit discourse) might influence other areas (such as PRs)
1. What is this change supposed to do?
2. Why is this change needed?
3. How was it tested?
4. Is there anything else reviewers should know?
5. Link to issue:
There's no "What changed?" because that's the diff. Explain your intent, why you think it's a good idea, how you know you accomplished your intent, and any future work needed or other concerns noticed while making the change. PR descriptions suffer from the same problem as code comments by beginners: they often just describe the "what" when that's obvious from the code, when the "why" is what's needed. So try very hard to avoid doing that.
i++; // increment i (by 1)
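For contrast, here's a hypothetical comment that earns its keep by recording the why rather than the what (all the specifics are invented):

  // Start at index 1: the export tool always emits a header row, and treating
  // it as data silently corrupted downstream reports. The diff only shows the
  // skip; this comment is the only place the reason survives.
  func processRows(rows []string, handle func(string)) {
      for i := 1; i < len(rows); i++ {
          handle(rows[i])
      }
  }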
Generally small startups after initial PMF. I have no idea how to run a big company, and pre-PMF I'm guilty of "all cowboy, all the time" - YMMV
Doing a blame on a file, or just looking at the diff of the pull request, gives you that. The why is lost very fast. After a few months, the people who made the change may no longer be at the company, so there's nobody to ask why something was done.
"Oh, they changed the algorithm to generate random numbers". I can see that in the code. "Why was it changed?" I have no clue if there's no extra information somewhere else, like a changelog, pull request description, or the commit messages.
But all this depends on the company and the size of the project. Your situation may be different.
What is unspoken here is that some open projects are using cost of submission AND cost of change / contrib as a kind of means of keeping review work down.
Nobody is correct here really. It's just that the bottlenecks have changed and we need to rethink everything.
Changing something small on a very large project is a good test. A user might simply want a new optional argument or something. Now they can do it and open a PR. But the process is geared towards people who know the project better; even if the contributor can run all the tests, it's still not trivial to fill out the PR for a trivial change.
We need to rethink this regime shift a bit.
  // loop over list and act on items
  for _, item := range items {
    item.act()
  }

Code review is one of the places where experience is transferred. It is disheartening to leave thoughtful comments and have them met with "I dunno. I just had [AI] do it."
If all you do is 'review' the output of your prompting before cutting a CR, I'd prefer you just send the prompt.
Almost nobody uses it for that today, unfortunately, and code reviews in both directions are probably where the vast majority of learning software development comes from. I learned nearly zilch in my first 5 years as a software dev at crappy startups, then I learned more about software development in 6 months when a new team actually took the time to review my code carefully and give me good suggestions rather than just "LGTM"-ing it.
Firstly, we get important benefits even when there's nothing to talk about: we get to see what the other person is working on, which stops us getting siloed or working alone. Secondly, we do leave useful feedback and often link to full articles explaining concepts, and this can be a good enough explanation for the PR author to just make the requested change. Thirdly, we escalate things to in-person discussion when appropriate, so we end up having the most valuable discussions anyway, which are around architecture, ongoing code style changes, and teaching/learning new things.
I don't understand how someone could think that async code review has almost zero value unless they worked somewhere with a culture of almost zero effort code reviews.
Sometimes a PR either merits limited input or the situation doesn't merit a thorough and thoughtful review, and in those cases a simple "lgtm" is acceptable. But I don't think that diminishes the value of thoughtful non-in-person code review.
Which is awesome and essential!
But the reason that the value of code reviews drops if they aren't done live, conducted by the person whose code is being reviewed, isn't related to the quality of the feedback. It's because a very large portion of the value of a code review is having the dev who wrote the code walk through it, explaining things, to other devs. At least half the time, that dev will encounter "aha" moments where they see something they have been blind to before, see a better way of doing things, spot discontinuities, etc. That dev has more insight into what went into the code than any other, and this is a way of leveraging that insight.
The modern form of code review, where they are done asynchronously by having reviewers just looking at the code changes themselves, is not worthless, of course. It's just not nearly as useful as the old-school method.
Makes it a lot easier to ignore, at the very least.
Then the golden age of ASCII-encoded source, where all was easy to change.
Now we've forgotten that lesson and changed to ASCII-encoded binary.
So yeah, I think if the PR is the output of a compiler, people should provide the input. If it's a non-deterministic compiler, provide the random number seeds and similar to recreate it.
This is not just disheartening - this should be flat out refused. I'm sensitive to issues of firing people but honestly this is just someone not pulling their weight for their job.
$$$ trillion dollar startup idea $$$
But otherwise, writing code with LLMs is more than just the prompt. You have to feed it the right context, maybe discuss things with it first so it gets it, and then you iterate with it.
So if someone has done the effort and verified the result like it's their own code, and if it actually works like they intended, what's wrong with sending a PR?
I mean if you then find something to improve while doing the review, it’s still very useful to say so. If someone is using LLMs to code seriously and not just to vibecode a blackbox, this feedback is still as valuable as before, because at least for me, if I knew about the better way of doing something I would have iterated further and implemented it or have it implemented.
So I don't see how the experience transfer is suddenly gone. Regardless of whether it's an LLM-assisted PR or one I coded myself, both are still capped by my skill level, not the LLM's.
I've noticed in empirical studies of informal code review that most humans have only a weak effect on error rates, and that the effect disappears after reading more than a certain amount of code per hour.
Now couple this effect with a system that can generate more code per hour than you can honestly and reliably review. It’s not a good combination.
  > But once that’s done, why not?
Be honest here. I don't think you do. Just like none of us have the same understanding of the code somebody else wrote. It's just a fact that you understand the code you wrote better than code you didn't.
I'm not saying you don't understand the code, that's different. But there's a deeper understanding to code you wrote, right? You might write something one way because you had an idea to try something in the future, based on an idea you had while finding some bug. Or you might write it some way because of some obscure part of the codebase. Or maybe because you have intuition about the customer.
But when AI writes the code, who has responsibility over it? Where can I go to ask why some choice was made? That's important context I need to write code with you as a team. That's important context a (good) engineering manager needs to ensure you're on the right direction. If you respond "well that's what the AI did", then how is that any different from the intern saying "that's how I did it at the last place"? It's a non-answer, and infuriating. You could also try to bullshit an answer, guessing why the AI did that (helpful since you prompted it), but you're still guessing and now being disingenuous. It's a bit more helpful, but still not very helpful. It's incredibly rude to your coworkers to just bullshit. Personally I'd rather someone say "I don't know", and truthfully I respect them more for that. (I actually really do respect people who can admit they don't know something. Especially in our field, where egos are quite high. It can be a mark of trust that's *very* valuable.)
Sure, the AI can read the whole codebase, but you have hundreds or thousands of hours in that codebase. Don't sell yourself short.
Honestly I don't mind the AI acting as a reviewer to be a check before you submit a PR, but it just doesn't have the context to write good code. AI tries to write code like a junior, fixing the obvious problem that's right in front of you. But it doesn't fix the subtle problems that come with foresight. No, I want you to stumble through that code because while you write code you're also debugging and designing. Your brain works in parallel, right? I bet it does even if you don't know it. I want you stumbling through because that struggling is helping you learn more about the code and the context that isn't explicitly written. I want you to develop ideas and gain insights.
But AI writing code? That's like measuring how good a developer is by the number of lines of code they write. I'll take quality over quantity any day of the week. Quality makes the business run better and waste fewer dollars debugging the spaghetti and duct tape called "tech debt".
If the AI writes the code, you can still understand the code, but you will never know why the code is written that way. The AI itself doesn’t know, beyond the fact that that’s how it is in the training data (and that’s true even if it could generate a plausible answer for why, if you asked it).
If people are letting the LLM decide how the code will be written then I think they're using them wrong and yes 100% they won't understand the code as well as if they had written it by hand.
LLMs are just good pattern matchers and can spit out text faster than humans, so that's what I use them for mostly.
Anything that requires actual brainpower and thinking is still my domain. I just type a lot less than I used to.
And that's a problem. By typing out the code, your brain has time to process its implications and reflect on important implementation details, something you lose out on almost entirely when letting an LLM generate it.
Obviously, your high-level intentions and architectural planning are not tied to typing. However, I find that an entire class of nasty implementation bugs (memory and lifetime management, initialization, off-by-one errors, overflows, null handling, etc.) are easiest to spot and avoid right as you type them out. As a human capable of nonlinear cognition, I can catch many of these mid-typing and fix them immediately, saving a significant amount of time compared to if I did not. It doesn't help that LLMs are highly prone to generating these exact bugs, and no amount of agentic duct tape will make debugging these issues worthwhile.
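A contrived example of the kind of thing I mean: the loop bound below is exactly what your fingers catch mid-keystroke but your eyes skim past in a generated diff.

  // sum is the "typed it and caught it" version: while writing the loop bound,
  // the tempting slip is `i <= len(items)`, which reads one element past the
  // end. Catching that as you type is the cheap, in-the-loop review you give
  // up when the code arrives pre-generated.
  func sum(items []int) int {
      total := 0
      for i := 0; i < len(items); i++ {
          total += items[i]
      }
      return total
  }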
The only two ways I see LLM code generation bring any value to you is if:
* Much of what you write is straight-up boilerplate. In this case, unless you are forced by your project or language to do this, you should stop. You are actively making the world a worse place.
* You simply want to complete your task and do not care about who else has to review, debug, or extend your code, and the massive costs in capital and human life quality your shitty code will incur downstream of you. In this case, you should also stop, as you are actively making the world a worse place.
> The only two ways I see LLM code generation bring any value to you is if
That is just an opinion.
I have projects I wrote with some help from the LLMs, and I understand ALL parts of it. In fact, it is written the way it is because I wanted it to be that way.
The best time to review is when writing code.
The best time to iterate on design is when writing code.
Writing code is a lot more than typing. It's the whole chimichanga
  > I know why the LLM wrote the code that way. Because I told it to and _I_ know why I want the code that way.
  > If people are letting the LLM decide how the code will be written then I think they're using them wrong
I don't actually ever feel like the LLMs help me generate code faster because when writing I am also designing. It doesn't take much brain power to make my fingers move. They are a lot slower than my brain. Hell, I can talk and type at the same time, and it isn't like this is an uncommon feat. But I also can't talk and type if I'm working on the hard part of the code because I'm not just writing.
People often tell me they use LLMs to do boilerplate. I can understand this, but at the same time it begs the question "why are you writing boilerplate?" or "why are you writing so much boilerplate?" If it is boilerplate, why not generate it through scripts or libraries? Those have a lot of additional benefits. Saves you time, saves your coworkers time, and can make the code a lot cleaner because you're now explicitly saying "this is a routine". I mean... that's what functions are for, right? I find this has more value and saves more time in the long run than getting the LLMs to keep churning out boilerplate. It also makes things easier to debug because you have far fewer things to look at.
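A minimal sketch of what I mean, with invented names: the "open, decode, close" boilerplate people keep asking an LLM to regenerate becomes one ordinary function you write once.

  package config

  import (
      "encoding/json"
      "fmt"
      "os"
  )

  // LoadJSON replaces the open/decode boilerplate that would otherwise be
  // pasted at every call site: one routine to read, test, and debug instead
  // of N generated copies.
  func LoadJSON(path string, out any) error {
      f, err := os.Open(path)
      if err != nil {
          return fmt.Errorf("open %s: %w", path, err)
      }
      defer f.Close()
      if err := json.NewDecoder(f).Decode(out); err != nil {
          return fmt.Errorf("decode %s: %w", path, err)
      }
      return nil
  }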
There needs to be some responsible entity that can discuss the decisions behind the code. Those decisions have tremendous business value[0]
[0] I stress because it's not just about "good coding". Maybe in a startup it only matters that "things work". But if you're running a stable business you care if your machine might break down at any moment. You don't want the MVP. The MVP is a program that doesn't want to be alive but you've forced into existence and it is barely hanging on
It undoubtedly saved me time vs learning all that first, and in fact was itself a good chance to “review” some decent TS myself and learn about the stdlib and some common libraries. I don’t think that effort missed many critical idioms and I would say I have decent enough taste as an engineer that I can tell when something is janky and there must be a better way.
  > but I’m not a TS expert
  > So the most recent thing that I did
I was working on a setup.py file and I knew I had done something small and dumb, but was being blind to it. So I pulled up claude code and had it run parallel to my hunt. Asked it to run the build command and search for the error. It got caught up in some cmake flags I was passing, erroneously calling them errors. I get a number of prompts in and they're all wrong. I fixed the code btw, it was a variable naming error (classic!).
I've also had success with claude, but it is super hit or miss. I've never gotten it to work well for anything remotely complicated if the code isn't already in a popular repo I could just copy-paste from. It's pretty hit or miss even for scripts, and I write a lot of bash. People keep telling me it is great for bash, and honestly guys, just read the man pages... (and use some god damn functions!)
That's why it isn't necessary to add the "to be fair" comment I see crop up every time someone complains about the low quality of AI.
Dealing with low effort people is bad enough without encouraging more people to be the same. We don't need tools to make life worse.
It's as if someone created a device that made cancer airborne and contagious and you come in to say "to be fair, cancer existed before this device, the device just made it way worse". Yes? And? Do you have a solution to solving the cancer? Then pointing it out really isn't doing anything. Focus on getting people to stop using the contagious aerosol first.
If a company builds an industrial poop delivery system that lets anyone with dog poop deliver it directly into my yard with the push of a button I have a much different and much bigger problem
This is like reviewing your own PRs, it completely defeats the purpose.
And no, using different models doesn’t fix the issue. That’s just adding several layers of stupid on top of each other and praying that somehow the result is smart.
As insulting as it is to submit an AI-generated PR without any effort at review while expecting a human to look it over, it is nearly as insulting to not just open the view the reviewer will have and take a look. I do this all the time and very often discover little things that I didn't see while tunneled into the code itself.
Yes. You just have to be in a different mindset. I look for cases that I haven't handled (and corner cases in general). I can try to summarize what the code does and see if it actually meets the goal, if there's any downsides. If the solution in the end turns out too complicated to describe, it may be time to step back and think again. If the code can run in many different configurations (or platforms), review time is when I start to see if I accidentally break anything.
In the sense that you double-check your work, sure. But you wouldn't be commenting and asking for changes, you wouldn't be using the reviewing feature of GitHub or whatever code forge you use; you'd simply make the fixes and push again without any review/discussion necessary. That's what I mean.
> open the view the reviewer will have and take a look. I do this all the time
So do I, we’re in perfect agreement there.
It is, but for all the reasons AI is supposed to fix. If I look at code I myself wrote I might come to a different conclusion about how things should be done because humans are fallible and often have different things on their mind. If it's in any way worth using an AI should be producing one single correct answer each time, rendering self PR review useless.
> This is like reviewing your own PRs, it completely defeats the purpose.
I've been the first reviewer for all PRs I've raised, before notifying any other reviewers, for so many years that I couldn't even tell you when I started doing it. Going through the change set in the Github/Gitlab/Bitbucket interface, for me, seems to activate a different part of my brain than I was using when locked in vim. I'm quick to spot typos, bugs, flawed assumptions, edge cases, missing tests, to add comments to pre-empt questions ... you name it. The "reading code" and "writing code" parts of my brain often feel disconnected!
Obviously I don't approve my own PRs. But I always, always review them. Hell, I've also long recommended the practice to those around me too for the same reasons.
You don’t, we’re on the same page. This is just a case of using different meanings of “review”. I expanded on another sibling comment:
https://news.ycombinator.com/item?id=45723593
> Obviously I don't approve my own PRs.
Exactly. That’s the type of review I meant.
ultimately I'm happy to fight fire with fire. there was a time I used to debate homophobes on social media - I ended up writing a very comprehensive list of rebuttals so I could just copy and paste in response to their cookie cutter gotchas.
The point of most jobs is not to get anything productive done. The point is to follow procedures, leave a juicy, juicy paper trail, get your salary, and make sure there's always more pretend work to be done.
That's certainly not my experience. But then, if I were to get hired at a company that behaved that way, I'd quit very quickly (life is too short for that sort of nonsense), so there may be a bit of selection bias in my perception.
It's a joke.
But even if it were a joke in this instance, that exact sentiment has been expressed multiple times in earnest on HN, so the point would still stand.
That is literally how civilization works.
As an example, knowing that a service is offered by a registered company with a presence in my area gives me the knowledge "that they know that I know" that if something goes wrong, I can sue them for negligence, possibly up to piercing the corporate veil and having the directors serve prison time. From that I can somewhat rationally derive that if the company has been in business offering similar services for years, it is likely that they have processes in place to maintain a level of professionalism that would lower the risk of such lawsuits. And on an organisational level, even if I still have good reason to think that most of the employees are incompetent, the fact that the company is making it work gives me significantly more confidence in the "result" than I would have in any individual "stupid" component.
And for a closer-to-home example, the internet is well known to be a highly reliable system built from unreliable components.
So, your minimum bar for a useful AI is that it must always be perfect and a far better programmer than any human that has ever lived?
Coding agents are basically interns. They make stupid mistakes, but even if they're doing things 95% correctly, then they're still adding a ton of value to the dev process.
Human reviewers can use AI tools to quickly sniff out common mistakes and recommend corrections. This is fine. Good even.
You are transparently engaging in bad faith by purposefully straw manning the argument. No one is arguing for “far better programmer than any human that has ever lived”. That is an exaggeration used to force the other person to reframe their argument within its already obvious context and make it look like they are admitting they were wrong. It’s a dirty argument, and against the HN guidelines (for good reason).
> Coding agents are basically interns.
No, they are not. Interns have the capacity to learn and grow and not make the same mistakes over and over.
> but even if they're doing things 95% correctly
They’re not. 95% is a gross exaggeration.
> This makes no sense, and it’s absurd anyone thinks it does. If the AI PR were any good, it wouldn’t need review. And if it does need review, why would the AI be trustworthy if it did a poor job the first time?
This is an entirely unfair expectation. Even the best human SWEs create PRs with significant issues - it's absurd for the parent to say that if a PR is "any good, it wouldn't need review"; it's just an unreasonable bar, and I think that @latexr was entirely justified in pushing back against that expectation.
As for the "95% correctly", this appears to be a strawman argument on your end, as they said "even if ...", rather than claiming that this is the situation at the moment. But having said that, I would actually like to ask both of you - what does it even mean for a PR to be 95% correct - does it mean that that 95% of the LoC are bug-free, or do you have something else in mind?
Waiting for the rest of the comment to load in order to figure out if it's sincere or parody.
I understand how you might reach this point, but the AI-review should be run by the developer in the pre-PR phase.
This reminds me of an awesome bit by Žižek where he describes an ultra-modern approach to dating. She brings the vibrator, he brings the synthetic sleeve, and after all the buzzing begins and the simulacra are getting on well, the humans sigh in relief. Now that this is out of the way they can just have a tea and a chat.
It's clearly ridiculous, yet at the point where papers or PRs are written by robots, reviewed by robots, for eventual usage/consumption/summary by yet more robots, it becomes very relevant. At some point one must ask, what is it all for, and should we maybe just skip some of these steps or revisit some assumptions about what we're trying to accomplish
I've been thinking this for a while, despairing, and amazed that not everyone is worried/surprised about this like me.
Who are we building all this stuff for, exactly?
Some technophiles are arguing this will free us to... do what exactly? Art, work, leisure, sex, analysis, argument, etc will be done for us. So we can do what exactly? Go extinct?
"With AI I can finally write the book I always wanted, but lacked the time and talent to write!". Ok, and who will read it? Everybody will be busy AI-writing other books in their favorite fantasy world, tailored specifically to them, and it's not like a human wrote it anyway so nobody's feelings should be hurt if nobody reads your stuff.
In the dating scenario what's really absurd and disgusting isn't actually the artificiality of toys.. it's the ritualistic aspect of the unnecessary preamble, because you could skip straight to tea and talk if that is the point. We write messages from bullet points, ask AI to pad them out uselessly with "professional" sounding fluff, and then on the other side someone is summarizing them back to bullet points? That's insane even if it was lossless, just normalize and promote simple communications. Similarly if an AI review was any value-add for AI PR's, it can be bolted on to the code-gen phase. If editors/reviewers have value in book publishing, they should read the books and opine and do the gate-keeping we supposedly need them for instead of telling authors to bring their own audience, etc etc. I think maybe the focus on rituals, optics, and posturing is a big part of what really makes individual people or whole professions obsolete
Do you review your comments too with AI?
I first read that as "coworkers (who are) fully AI generated" and I didn't bat an eye.
All the AI hype has made me immune to AI related surprises. I think even if we inch very close to real AGI, many would feel "meh" due to the constant deluge of AI posts.
It's 90% AI, but that 90% was almost entirely boilerplate and would have taken me a good chunk of time to do for little gain other than the fact I did it.
I don’t think they are (telling you that). The person who sends you an AI slop PR would be just as happy (probably even happier) if you turned off your brain and just merged it without any critical thinking.
I can see the code, I know what changed. Give me the logic behind this change. Tell me what issues you ran into during the implementation and how you solved them. Tell me what other approaches you considered and ruled out.
Just saying "This change un-links frobulation from reticulating splines by doing the following" isn't useful. It's like adding code comments that tell you what the next line does; if I want to know that I'll just read the next line.
The AI has far more energy than I do when it comes to writing PR summaries, I have done it so many times, it's not the main part of my job. I have already provided all the information for a PR, why should I repeat myself? What happened to DRY?
A good PR summary should be the why of the PR. Don't redundantly repeat what changed; give me a description of why it changed, what alternatives were tested, what you think the struggles were, what you think the consequences may be, what you expect the next steps to be, etc.
I've never seen an AI generated summary that comes close to answering any of those questions. An AI generated summary is a bit like that junior developer that adds plenty of comments but all the comments are:
    // add x and y
    var result = x + y;
I'm going to read the code anyway to review a PR, a summary of what the code already says it does is redundant information to me.
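For what it's worth, this is the skeleton of the summary I actually want to read (every specific below is invented for illustration):

  Why: retries were hammering the upstream API during outages.
  Approach: exponential backoff with jitter; considered a circuit breaker,
    rejected as too heavyweight for a single call site.
  Tested: unit test for the backoff schedule; manually simulated a 503 storm.
  Risks / next steps: backoff caps at 30s; revisit if more callers appear.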
I have worked mostly at smaller, early-stage firms (< 50 engineers), and folks are super busy. Having AI support in writing better, more thoughtful commentary and providing deeper context is a boon.
In the end, I'll have to say "it depends" -- you can't just throw slop at people but there's definitely a middle ground where everyone wins.
# Minimal Reprex (Correct)
(unintelligible nonsense here)
And here is the correct, minimal fix, guaranteed to work:
# Correct Fix (Correct)
(same unintelligible nonsense, wrapped in a try/catch block)
Make this change and your code should work perfectly!
> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake. Feel embarrassed. Learn from it. Why? Because that's what makes us human!
It would be more human to handwrite your blog post instead. I don’t see how this is a good argument. The use of tools to help with writing and communication should make it easier to convey your thoughts, and that itself is valuable.
(Edit: in Diet Coke. Not too sure about Coke Zero).
Especially when compared to a standard coke with around 150 kcal.
I don’t think having a ML-backed proofreading system is an intrinsically bad idea; the oft-maligned “Apple Intelligence” suite has a proofreading function which is actually pretty good (although it has a UI so abysmal it’s virtually useless in most circumstances). But unless you truly, deeply believe your own writing isn’t as good as a precocious eighth-grader trying to impress their teacher with a book report, don’t ask an LLM to rewrite your stuff.
What is a writing "voice"? It's more than just patterns and methods of phrasing. ChatGPT would say "rhythm and diction and tone" and word choice. But that's just the paint. A voice is the expression of your conscious experience trying to convey an idea in a way that reflects your experience. If it were just those semi-concrete elements, we would have unlimited Dickens; the concept could translate to music, we could have unlimited Mozart. Instead—and I hope you agree—we have crude approximations of all these things.
Writing, even technical writing, is an art. Art comes from experience. Silicon can not experience. And experiencers (ie, people with consciousness) can detect soullessness. To think otherwise is to be tricked; listen to anything on suno, for example. It's amazing at first, and then you see through the trick. You start to hear it the way most people now perceive generated images as too "shiny". Have you ever generated an image and felt a feeling other than "neat"?
Just ask it to write "in the style of" a few famous writers with a recognizable style. It just can't do it. It'll do an awfully cringe attempt at it.
And that's just how bad LLMs are at it. There's a more general problem. If you've ever read a posthumous continuation of a literary series by a different but skilled author, you know what I mean.
For example, "And another thing..." by Eoin Colfer is written to be the final sequel to the Hitchhiker's Guide, after Douglas Adams died. And to their absolute credit, the author Eoin Colfer, in my opinion, pretty much nails Douglas Adams's tone to the extent it is humanly possible to do so. But no matter how close he got, there's a paradox here. Colfer can only replicate Adams's style. But only Adams could add a new element, and it would still be his style. While if Colfer had done exactly the same, he'd have been considered "off".
Anyway, if a human writer can't pull it off, I doubt an LLM can do it.
This is why heavily assisted AI writing is still slop. That fundamental learning that is baked in is gone. It is the same reason why corporate speak is so hated. It is basically intentional slop.
That's no crime, so far. It's very normal to have writers and editors.
But it's highly abnormal for everyone to have the _same_ editor, famous for writing exactly the text that everybody hates.
It's like inviting Uwe Boll to edit your film.
If there's a good reason to send outgoing slop, OK. But if your audience is more verbally adept, and more familiar with its style, you do risk making yourself look bad.
Personally I'm not submitting enough stuff to an LLM to give it enough to go on.
Whether I hand write a blog post or type it into a computer, I'm the one producing the string of characters I intend for you to read. If I use AI to write it, I am not. This is a far, far, far more important distinction than whatever differences we might imagine arise from hand writing vs. typing.
> your thoughts
No, they aren't! Not if you had AI write the post for you. That's the problem!
That apparently is not the case for a lot of people.
I cannot tell you that it objectively matters whether or not an article was written by a human or an LLM, but it should be clear to anybody that it is at least a significant difference in kind vs. the analogy case of handwriting vs. typing. I think somebody who won't acknowledge that is either being intellectually dishonest, or has already had their higher cognitive functions rotted away by excessive reliance on LLMs to do their thinking for them. The difference in kind is that of using power tools instead of hand tools to build a chair, vs. going out to a store and buying one.
> I think somebody who won't acknowledge that is either being intellectually dishonest, or has already had their higher cognitive functions rotted away by excessive reliance on LLMs to do their thinking for them.
This feels too aggressive for a good faith discussion on this site. Even if you do think that, there's no point in insulting the humans who could engage with you in that conversation.
My interpretation of your comment was that it related to my use of the word "important", which has a more subjective connotation than "significant" and arguably allows my comment to be interpreted in two ways. The second way (that I feel people should care more about the distinction I highlighted) was not my intended meaning, since obviously people can care about whatever they want. It was a relevant observation of imprecise wording on my part.
> there's no point in insulting the humans who could engage with you in that conversation.
There would be no point in engaging them in that conversation, either.
Disagreeing with me that the difference in kind I highlighted is important is fine, and maybe even an interesting conversation for both sides. Disagreeing with me that there is a significant difference in kind is just nonsensical, like arguing that there's no meaningful difference, at any level, between painting a painting yourself and buying one from a store. How can you approach a conversation like that? Yet positions like that appear in internet arguments all the time, which are generally arguments between anonymous strangers who often have no qualms about embracing total intellectual dishonesty, because their goal is just to make their opponent mad enough that they forget the original point they were trying to make and go chasing the goalposts all over the room.
The only winning move is not to play, which requires being honest with yourself about who you're talking to and what they're trying to get out of the conversation. I am willing to share that honesty.
I am, to be clear, not saying you are one of these people.
“Blog” stands for “web log”. If it’s on the web, it’s digital, there was never a period when blogs were hand written.
> The use of tools to help with writing and communication should make it easier to convey your thoughts
If you’re using an LLM to spit out text for you, they’re not your thoughts, you’re not the one writing, and you’re not doing a good job at communicating. Might as well just give people your prompt.
This is just pedantic nonsense
I've seen exactly that. In one case, it was JPEG scans of handwriting, but most of the time, it's a cursive font (which arguably doesn't count as "handwritten").
I can’t remember which famous author it was, that always submitted their manuscripts as cursive writing on yellow legal pads.
Must have been thrilling to edit.
For example, there was never a period when movies were made by creating frames as oil paintings and photographing them. A couple of movies were made like that, but that was never the norm or a necessity or the intended process.
The thoughts I put into a text are mostly independent of the sentences or _language_ they're written in. Not completely independent, but to claim thoughts are completely dependent on text (thus also the language) is nonsense.
> Might as well just give people your prompt.
What would be the value of seeing a dozen diffs? By the same logic, should we also include every draft?
Not even true! Turning your thoughts into words is a very important and human part of writing. That's where you choose what ambiguities to leave, which to remove, what sort of implicit shared context is assumed, such important things as tone, and all sorts of other unconscious things that are important in writing.
If you can't even make those choices, why would I read you? If you think making those choices is unimportant, why would I think you have something important to say?
Uneducated or unsophisticated people seem to vastly underestimate what expertise even is, or just how much they don't know, which is why, for example, LLMs can write better than most fanfic writers. But that bar is on the damn floor, and most people don't want to consume fanfic-level writing about things they are not fanatical about.
There's this weird and fundamental misconception in pro-ai realms that context free "information" is somehow possible, as if you can extract "knowledge" from text, like you can "distill" a document and reduce meaning to some simple sentences. Like, there's this insane belief that you can meaningfully reduce text and maintain info.
If you reduce "Lord of the flies" to something like "children shouldn't run a community", you've lost immense amounts of info. That is not a good thing. You are missing so much nuance and context and meaning, as well as more superficial (but not less important!) things like the very experience of reading that text.
Like, consider that SOTA text compression algorithms can reduce text to 1/10th of its original size. If you are reducing a text by more than that to "summarize" it or "reduce it to its main points", do you really think you are not losing massive amounts of information, context, or meaning?
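If you want to sanity-check that number yourself, here's a rough sketch using gzip (not a SOTA compressor, so it will undershoot the 10x figure; point it at any plain-text file you have handy):

  package main

  import (
      "bytes"
      "compress/gzip"
      "fmt"
      "os"
  )

  func main() {
      // Pass any plain-text file (a novel, a long README) as the argument.
      raw, err := os.ReadFile(os.Args[1])
      if err != nil {
          panic(err)
      }
      var buf bytes.Buffer
      w, _ := gzip.NewWriterLevel(&buf, gzip.BestCompression)
      w.Write(raw)
      w.Close()
      fmt.Printf("%d bytes -> %d bytes (%.1fx)\n",
          len(raw), buf.Len(), float64(len(raw))/float64(buf.Len()))
  }

A one-paragraph "summary" of a novel, by contrast, is a 100x-1000x reduction, so a lot more than redundancy is being thrown away.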
You can have the thoughts in a different language and the same ideas are still there.
You can tell an LLM to tweak a paragraph to better communicate a nuance until you're happy with it.
---
Language isn't thought. It's extremely useful in that it lets us iterate on our thoughts. You can add in LLMs in that iteration loop.
I get you wanted to vent because the volume of slop is annoying and a lot of people are degrading their ability to think by using it poorly, but "If you’re using an LLM to spit out text for you, they’re not your thoughts" is just motivated reasoning.
To be honest, and I hate to say this because it's condescending, it's a matter of literacy.
Some people don't see the value in literature. They are the same kind of people who will say "what's the point of book X or movie Y? All that happens is <sequence of events>", or the dreaded "it's boring, nothing happens!". To these people, there's no journey, no pleasure with words, the "plot" is all that matters and the plot can be reduced to a sequence of A->B->C. I suspect they treat their fiction like junk food, a quick fix and then move on. At that point, it makes logical sense to have an LLM write it.
It's very hard to explain the joy of words to people with that mentality.
for instance, there's a tribe that describes directions only using the cardinal directions, and as such they have no words for, nor mental concept of, "left" and "right".
and, not coincidentally, they're all much more proficient at navigation and have a better general sense of direction (obviously) than the average human, because of the way they have to think about directions when just talking to each other.
===
it's also why the best translators don't just do a word-for-word replacement but have to think through cultural context and ideology on both sides of the conversation in order to make a more coherent translation.
what language you use absolutely dictates how and what you think, as well as what particular message is conveyed.
Did you use AI to write this...? Because it does not follow from the post you're replying to.
It's like listening to Bach's Prelude in C from WTCI where he just came up with a humdrum chord progression and uses the exact same melodic pattern for each chord, for the entire piece. Thanks, but I can write a trivial for loop in C if I ever want that. What a loser!
Edit: Lest HN thinks I'm cherry picking-- look at how many times Bach repeats the exact same harmony/melody, just shifting up or down by a step. A significant chunk of his output is copypasta. So if you like burritos filled with lettuce and LLM-generated blogs, by all means downvote me to oblivion while you jam out to "Robo-Bach"
And unless I'm misunderstanding, it's literally the exact point you made, with no exaggeration or added comparisons.
Like, I’m totally on board with rejecting slop, but not all content that AI was involved in is slop, and it’s kind of frustrating so many people see things so black and white.
It’s not a literal suggestion. “Might as well” is a well known idiom in the English language.
The point is that if you’re not going to give the reader the result of your research and opinions and instead will just post whatever the LLM spits out, you’re not providing any value. If you gave the reader the prompt, they could pass it through an LLM themselves and get the same result (or probably not, because LLMs have no issue with making up different crap for the same prompt, but that just underscores the pointlessness of posting what the LLM regurgitated in the first place).
IMO a lot of the dumb and bad behavior around LLMs could be solved by a “just share the prompts” strategy. If somebody wants to generate an email from bullet points and send it to me: just send the bullet points, and I can pass them into an LLM if I want.
Blog post based on interesting prompts? Share the prompt. It’s just text completion anyway, so if a reader knows more about the topic than the prompt-author, they can even tweak the prompt (throw in some lingo to get the LLM to a better spot in the latent space or whatever).
The only good reason not to do that is to save some energy in generation, but inference is pretty cheap compared to training, right? And the planet is probably doomed anyway at this point, so we might as well enjoy the ride.
What does bother me is when clearly AI-generated blog posts (perhaps unintentionally) attempt to mask their artificial nature through superfluous jokes or unnaturally lighthearted tone. It often obscures content and makes the reading experience inefficient, without the grace of a human writer that could make it worth it.
However, if I’m reading a non-technical blog, I am reading because I want something human. I want to enjoy a work a real person sank their time and labor into. The less touched by machines, the better.
> It would be more human to handwrite your blog post instead.
And I would totally read handwritten blog posts!
But it can make for tiresome reading. Like, a 2000-word post could have been compressed to 700 or so had a human editor pruned it.
Somehow this is currently the top comment. Why?
Most non-quantitative content has value due to a foundation of distinct lived experience. Averages of the lived experience of billions just don't hit the same, and are less likely to be meaningful to me (a distinct human). Thus, I want to hear your personal thoughts, sans direct algorithmic intermediary.
This is similar to the common objection for AI-coding that the hard part is done before the actual writing. Code generation was never a significant bottleneck in most cases.
But I often correct the result and change some wording.
Maybe at the beginning, when I was less experienced with LLMs, I used more LLM style, but now I find it a good compromise to convey what I think without the message being hindered by my awful writing :)
All I care about is content, too, but people using LLMs to blog and make READMEs is routinely getting garbage content past the filters and into my eyeballs. It's especially egregious when the author put good content into the LLM and pasted the garbage output at us.
Are there people out there using an LLM as a starting point but taking ownership of the words they post, taking care that what they're posting still says what they're trying to say, etc? Maybe? But we're increasingly drowning in slop.
I cannot blame people for using software as a crutch when human-based writing has become too hard and seldom rewarded anymore unless you are super-talented, which statistically the vast majority of people are not.
It's not an assumption. Look at this example: https://news.ycombinator.com/item?id=45591707
The author posted their input to the LLM in the comments after receiving criticism, and that input was much better than their actual post.
In this thread I'm less sure: https://news.ycombinator.com/item?id=45713835 - it DOES look like there was something interesting thrown into the LLM that then put garbage out. It's more of an informed guess than an assumption, you can tell the author did have an experience to share, but you can't really figure out what's what because of all the slop. In this case the author redid their post in response to criticism and it's still pretty bad to me, and then they kept using an LLM to post comments in the thread, I can't really tell how much non-garbage was going in.
This whole AI thing is rapidly becoming very tiresome. But the trend seems to be to push it everywhere, regardless of merit.
An LLM generated blog post is by definition derivative and bland.
> I use ChatGPT to learn about a variety of different things, so if someone came up with an interesting set of prompts and follow ups and shared a summary of the research ChatGPT did, it could be meaningful content to me.
Then say so, up front.
But that's not what people do. They're lazy or lack ideas but want "content" (usually for some kind of self-promotional reason). So you get to read that.
I am a reasonably good (but sloppy) writer and use Claude to help improve my text, my ideas, and the flow of sentences and paragraphs. A huge help once I have a good first draft. I treat Claude like a junior editor who is useful but requires a tight leash and sharp advice.
This thoughtless piece is like complaining about getting help from professional human editors: a profession nearly killed off over the last three decades.
Who can afford $50/hr human editorial services? Not me. Claude is a great “second best” and way faster and cheaper.
0% of your HN comments include URLs for sources that support the positions and arguments you've expressed at HN.[1] Do you generally not care about the sources of ideas? For example, when you study public policy issues, do you not differentiate between research papers published in the most prestigious journals and 500-word news articles written at the 8th-grade level by nonspecialist nobodies?
[1] https://hn.algolia.com/?type=comment&query=author:alyxya+htt...
Why do you trust the output? Chatbots are so inaccurate you surely must be going out of your way to misinform yourself.
And it will make them up just like it does everything else. You can’t trust those either.
In fact, one of the simplest ways to tell that a post is AI slop is by checking the sources posted at the end and seeing that they don't exist.
Asking for sources isn’t a magical incantation that suddenly makes things true.
> It isn’t guaranteed that content written by humans is necessarily correct either.
This is a poor argument. The overwhelming difference with humans is that you learn who you can trust about what. With LLMs, you can never reach that level.
In tech-related matters such as coding, I've come to expect that every link ChatGPT provides as reference/documentation is simply wrong or nonexistent. I can count on the fingers of one hand the times I clicked on a link to a doc from ChatGPT that didn't result in a 404.
I've had better luck with links to products from Amazon or eBay (or my local equivalent e-shop). But for tech documentation which is freely available online? ChatGPT just makes shit up.
If you have a habit of asking random lay persons for technical advice, I can see why an idiot chatbot would seem like an upgrade.
(I'm not saying not to read books, but seriously: there are shortcuts)
There are many hundreds of professions, and most of them take a significant fraction of a lifetime to master, and even then there usually is a daily stream of new insights. You can't just toss all of that information into a bucket and expect it to outperform the < 1% of people who have studied the subject extensively.
When Idiocracy came out I thought it was a hilarious movie. I'm no longer laughing, we're really putting the idiots in charge now and somehow we think that quantity of output trumps quality of output. I wonder how many scientific papers published this year will contain AI generated slop complete with mistakes. I'll bet that number is >> 0.
I've done small plumbing jobs after asking AI if it was safe, I've written legal formalia nonsense that the government wanted with the help of AI. It was faster, cheaper and I didn't bother anyone with the most basic of questions.
Try asking ChatGPT, or whatever your favorite AI supplier is, about a subject you are an expert in, something that is difficult, on par with the kind of evaluations you'd expect a qualified doctor or legal professional to make. Then check the answer given, and extrapolate to fields that you are clueless about.
AI will just make stuff up instead of saying it doesn't know, huh? Have you talked to real people recently? They do the same thing.
It's like being okay with reading the entirety of generated ASM after someone compiles C++.
The content itself does have value, yes.
But some people also read to connect with other humans and find that connection meaningful and important too.
I believe the best writing has both useful content and meaningful connection.
Maybe humans aren't so unique after all, but that's its own topic.
Generative AI tends to be very sure of itself. It doesn't say it doesn't know when it doesn't know. Sometimes, when it doesn't know, it won't engage with the premise of the question and will instead give an answer to an easier question.
https://archive.ph/20250317072117/https://www.bloomberg.com/...
But if I'm truly honest with myself, I think in the long run I wouldn't care. I grew up on Science Fiction, and the stories I've always found most interesting were ones that explored human nature instead of just being techno fetishism. But the reality is I don't feel a human connection to Asimov, or Cherryh, or any of the innumerable short form authors who wrote for the SF&F magazines I devoured every chance I got. I remember the stories, but very rarely the names. So they might as well have been written by an AI since the human was never really part of the equation (for me as a reader).
And even when I do remember the names, maybe the human isn't one I want a lot of "human connection" with anyway. Ender's Game, the short story and later the novel, were stories I greatly enjoyed. But I feel like my enjoyment is hampered by knowing that the author of a phenomenal book that has some interesting things to say on the pains caused by de-humanizing the other has themselves become someone who often dehumanizes others. The human connection might be ironic now, but that doesn't make the story better for me. Here too, the story might as well have been written by an AI, for all that the current person the author is represents who they were (either in reality or just in my head) when I read those stories for the first time.
Some authors I have been exposed to later in life, I have had a degree of human connection with. I felt sadness and pain when Steve Miller died and left his spouse and long time writing partner Sharon Lee to carry on the Liaden series. But that connection isn't what drew me to the stories in the first place and that connection is largely the same superficial parasocial one that the easy access into the private lives of famous people gives us. Sure I'm saddened, but honesty requires me to note I'm more sad that it reminds me eventually this decades spanning series will draw to a close, and likely with many loose ends. And so even here, if an AI were capable of producing such a phenomenal series of books, in a twisted way as a reader it would be better because they would never end. The world created by the author would live on forever, just like a "real" world should.
Emotionally I feel like I should care that a book was or wasn't written by an AI. But if I'm truly honest with myself, the author being a human hasn't so far added much to the experience, except in some ways to make it worse, or to cut short something that I wish could have continued forever.
All of that as a longwinded way of answering, "no, I don't think I would care".
In contrast, I think for me a tremendous part of the joy I get from reading science fiction is knowing there's another inventive human on the other side of the page. When I know what I'm reading is the result of a mechanical computation, it loses that.
But the real noodle-bender for me is would I still enjoy the book if I didn't know?
I write pretty long blog posts that some people enjoy, and I dump them into various LLMs for review. I am pretty opinionated on taste, so I usually only update grammar, but it can be dangerous for some.
To be more concrete, the AI often tells me to be more "professional" and less "irreverent", which I think is bullshit. The suggestions it gives are pure slop. But if English isn't your first language or you don't have confidence, you may just accept the slop.
It's about finding the sweet spot.
Vibe coding is crap, but I love the smarter autocomplete I get from AI.
Generating whole blog posts from thin air is crap, but I love smart grammar, spelling, and diction fixes I get from AI.
It's like getting an unsolicited text with a "Let Me Google That For You" link. Yes, we can all ask ChatGPT about the thing. We don't need you to do it for us.
If you are not an expert, you'll think the AI is amazing, without realizing the slop.
I'd rather do without the AI slop, thanks.
If yes, there's probably value in putting it out. I don't care if you used paper and ink, a text editor, a spell checker, or asked an LLM for help.
On the flip side, if anyone could've asked an LLM for the exact same text, and if you're outsourcing the critical thinking to the reader - then yeah, I think you deserve scorn. It's no different from content-farmed SEO spam.
Mind you, I'm what you'd call an old-school content creator. It would be an understatement to say I'm conflicted about gen AI. But I also feel that this is the most principled way to make demands of others: I have no problem getting angry at people for wasting my time or polluting the internet, but I don't think I can get angry at them for producing useful content the wrong way.
If it's not substantially their own writing or ideas, then sure, they shouldn't pass it off as such and claim individual authorship. That's a different issue entirely. However, if someone just wanted to share, "I'm 50 prompts deep exploring this niche topic with GPT-5 and learned something interesting; quoted below is a response with sources that I've fact-checked against" or "I posted on /r/AskHistorians and received this fascinating response from /u/jerryseinfeld", I could respect that.
In any case, if someone is posting low-quality content, blame the author, not the tools they happened to use. OOP may as well say they only want to read blog posts written with vim, and that emacs users should stay off the internet.
I just don't see the point in gatekeeping. If someone has something valuable to share, they should feel free to use whatever resources they have available to maximize the value provided. If using AI makes the difference between a rambling draft riddled with grammatical and factual errors, and a more readable and information-dense post at half the length with fewer inaccuracies, use AI.
Not sure if this is true for other people but it's basically always a sign of something I end up wishing I hadn't wasted my time reading.
It isn't inherently bad by any means but it turns out it's a useful quality metric in my personal experience.
What about plagiarism? If a person hacks together a blog post that is arguably useful but they plagiarized half of it from another person, is that acceptable to you? Is it only acceptable if it's mechanized?
One of the arguments against GenAI is that the output is basically plagiarized from other sources -- that is, of course, oversimplified in the case of GenAI, but hoovering up other people's content and then producing other content based on what was "learned" from that (at scale) is what it does.
The ecological impact of GenAI tools and the practices of GenAI companies (as well as the motives behind those companies) remain the same whether one uses them a lot or a little. If a person has an objection to the ethics of GenAI then they're going to wind up with a "binary take" on it. A deal with the devil is a deal with the devil: "I just dabbled with Satan a little bit" isn't really a consolation for those who are dead-set against GenAI in its current forms.
My take on GenAI is a bit more nuanced than "deal with the devil", but not a lot more. But I also respect that there are folks even more against it than I am, and I'd agree from their perspective that any use is too much.
I think we have a better shot at making that argument for music, visual art, etc. Most of it is utilitarian and most people don't care where it comes from, but we have a cultural heritage of recognizing handmade items as more valuable than the mass-produced stuff.
I don't think GenAI or LLMs are going away entirely - but I'm not convinced that they are inevitable and must be adopted, either. Then again, I'm mostly a hold-out when it comes to things like self checkout, too. I'd rather wait a bit longer in line to help ensure a human has a job than rush through self-checkout if it means some poor soul is going to be out of work.
Sadly, I agree. That's why I removed my works from the open web entirely: there is no effective way for people to protect their works from this abuse on the internet.
The way LLMs are now, outside of the tech bubble the average person has no use for them.
> on some ultimately muddied arguments along the lines of "it's OK for humans to train on books, but it's not OK for robots"
This is a bizarre argument. Humans don't "train" on books, they read them. This could be for many reasons, like to learn something new or to feel an emotion. The LLM trains on the book to be able to imitate it without attribution. These activities are not comparable.
Now you could argue that I don't know it was AI, that it could just be really mediocre writing. It could indeed, but I hit the back button there as well, so it's a wash either way.
I'd sooner have a ship painting from the little shop in the village with the little old fella who paints them in the shop than a perfect robotic simulacrum of a Rembrandt.
Intention matters. Sometimes it matters less, but I think it matters.
Writing is communication; it's one of the things we as humans do that makes us unique. Why would I want to reduce that to a machine generating it, or read it once a machine has?
I've been learning piano too, and I find more joy in performing a piece poorly, than listening to it played competently. My brother asked me why I play if I'm just playing music that's already been performed (a leading question, he's not ignorant). I asked him why he plays hockey if you can watch pros play it far better. It's the journey, not the destination.
I've been (re-)re-re-watching Star Trek TNG and Data touches on this issue numerous times, one of which is specifically about performing violin (but also reciting Shakespeare). And the message is what you're sharing: to recite a piece with perfect technical execution results in an imperfect performance. It's the _human_ aspects that lend a piece the deep emotion that other humans connect with, often without being able to concretely describe why. Let us feel your emotions through your work. Everything written on the page is just the medium for those emotions. Without emotion, your perfectly recited piece is a delivered blank message.
https://www.poetryfoundation.org/poems/43745/andrea-del-sart...
The Matrix was and is fantastic on many levels.
At this point, I don't know there's much more to be said on the topic. Lines of contention are drawn, and all that's left is to see what people decide to do.
I think that's the best use case, and it's not really AI-specific: spell-checkers and translation integrations have existed forever, now they are just better.
Especially for non-native speakers that work in a globalized market. Why wouldn't they use the tool in their toolbox?
Grammatical deviations constitute a large part of an author's voice. Removing those deviations is altering that voice.
Maybe someone will build an AI model that's succinct and to the point someday. Then I might appreciate the use a little more.
I will also take a janky script for a game hand-translated by an ESL indie dev over the ChatGPT House Style 99 times out of 100 if the result is even mostly comprehensible.
The only thing that changed in all of my experimentation with various saved instructions was that sometimes it prepended its bloated examples with "here's a short, concise example:".
This ship sailed a long time ago. We have been exposed to AI-generated text content for a very long time without even realizing it. If you read a little more specialized web news, assume that at least 60% of the content is AI-translated from the original language. Not to mention, it could have been AI-generated in the source language as well. If you read the web in several languages, this becomes shockingly obvious.
This will invalidate even ispell in vim. The entire point of proofreading is to catch things you didn’t notice. Nobody would say “you don’t need the red squiggles underlining strenght because you already know it is spelled strength.”
My wife is ESL. She's asked me to review documents such as her resume, emails, etc. It's immediately obvious to me that it's been run through ChatGPT, and I'm sure it's immediately obvious to whomever she's sending the email. While it's a great tool to suggest alternatives and fix grammar mistakes that Word etc don't catch, using it wholesale to generate text is so obvious, you may as well write "yo unc gimme a job rn fr no cap" and your odds of impressing a recruiter would be about the same. (the latter might actually be better since it helps you stand out.)
Humans are really good at pattern matching, even unconsciously. When ChatGPT first came out people here were freaking out about how human it sounded. Yet by now most people have a strong intuition for what sounds ChatGPT-generated, and if you paste a GPT-generated comment here you'll (rightfully) get downvoted and flagged to oblivion.
So why wouldn't you use it? Because it masks the authenticity in your writing, at a time when authenticity is at a premium.
These types of complaints about LLMs feel like the same ones people probably made about typing a letter on a typewriter vs. writing it by hand: that it loses intimacy and personality.
Also, reminds me of this cartoon from March 2023. [0]
[0] https://marketoonist.com/2023/03/ai-written-ai-read.html
Because I've never seen anyone actually use a summarizing AI willingly. And especially not for blogs and other discretionary activities.
That's like getting the remote from the hit blockbuster "Click" starring Adam Sandler (2006) and then using it to skip sex. Just doesn't make any sense.
Isn't that the same with AI-generated source code? If lazy programmers didn't bother writing it, why should I bother reading it? I'll ask the AI to understand it and to make the necessary changes. Now, let's repeat this process over and over. I wonder what would be the state of such code over time. We are clearly walking this path.
Programming languages were originally invented for humans to write and read. Computers don't need them. They are fine with machine code. If we eliminate humans from the coding process, the code could become something that is not targeted for humans. And machines will be fine with that too.
But you are saying that is wrong, you should judge the messenger, not the message.
Pre-AI, scientists would publish papers and then journalists would write summaries, which were usually misleading and often wrong.
An AI operating on its own would likely be no better than the journalist, but an AI supervised by the original scientist quite likely might do a better job.
AI is a tool to help you _finish_ stuff, like a wood sander. It's not something you should use as a hacksaw, or as a hammer. As long as you are writing with your own voice, it's just better autocorrect.
It can also double as a peer reviewer and point out potential counterarguments, so you can address them upfront.
And LLMs always do what you say, absolutely always, no issues there.
I recently wrote about the dead internet https://punkx.org/jackdoe/zero.txt out of frustration.
I used to fight against it, I thought we should do "proof of humanity", or create rings of trust for humans, but now I think the ship has sailed.
Today a colleague was sharing their screen on google docs and a big "USE GEMINI AI TO WRITE THE DOCUMENT" button was front and center. I am fairly certain that by end of year most words you read will be tokens.
I am working towards moving my pi-hole from blacklist to whitelist, and after that just using local indexes with some datahorading. (squid, wikipedia, SO, rfcs, libc, kernel.git etc)
Maybe in the future we just exchange local copies of our local "internet" via SD cards, like Cuba's sneakernet[1], El Paquete Semanal[2].
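For the whitelist-only Pi-hole idea above, a minimal sketch of one common approach, assuming Pi-hole v5's command-line syntax (a catch-all regex blacklist plus an explicit allow-list; the domains are just examples, check `pihole -h` on your version):

    # Block every domain by default with a catch-all regex blacklist entry...
    pihole --regex '.*'
    # ...then explicitly allow only the handful of sites you still want resolved.
    pihole -w en.wikipedia.org
    pihole -w stackoverflow.com
    pihole -w kernel.org

Anything not on the allow-list then simply fails DNS resolution, which pairs well with keeping local mirrors for everything else.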
I thought about this in another context and then I realized: what system is going to declare you're human or not? AI of course
Where are the explanations of what all of them mean? What is (nothing) vs `maxi` vs `mini` vs `nopic`? What is `100` vs `all` vs `top1m` vs `top` vs `wp1-0.8`?
Mini is the introduction and infobox of all articles, nopic is the full articles with no pictures, maxi is full articles with (small) images. Other tags are categories (football, geography, etc.)
100 is the top 100 articles, top1m is top 1 million, 0.8 is (inexplicably) the top 45k articles.
My recommendation: sort by size and download the largest one you can accommodate in the language you prefer. wikipedia_en_all_maxi_2025-08.zim is all wikipedia articles, with images, as of 2025-08 and it's a paltry 111G.
Kiwix publishes a library here, but it's equally unhelpful: https://library.kiwix.org/
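For what it's worth, once you've picked a file, serving it locally is a one-liner with kiwix-serve (the filename is the one from the recommendation above; the port is arbitrary):

    # Serve the archive and browse it offline at http://localhost:8080/
    kiwix-serve --port=8080 wikipedia_en_all_maxi_2025-08.zim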
I can't speak for the OP's experiences, but my early schooling years were marked by receiving a number of marked down or failing grades because my handwriting was awful, it still is, but at the time no matter what I did, I couldn't get my handwriting to stay neat. Writing neatly was too slow for my thoughts, and I'd get lost or go off topic. But writing at a pace to keep up with my thoughts turned my writing into barely understandable runes at best, and incomprehensible scribbles at worst. Even where handwriting wasn't supposed to count, I lost credit because of how bad it was.
At a certain point I was given permission to type all of my work. Even for tested material I was given proctored access to a typewriter (and later a computer). And my grades improved noticeably. My subjective experiences and enjoyment of my written school work also improved noticeably. Maybe I could have spent more years working on improving my handwriting and getting it to a place where I was just barely adequate enough to stop losing credit for it. Maybe I have lost something "essential" about being human because my handwriting is still so bad I often can't read my own scribblings. But I am infinitely grateful to have lived in a time and place where personal access to typing systems allowed me to be more fairly evaluated on what I had to say, rather than how I could physically write it.
not to get super personal, but that's... not the case for me. i just feel differently about it, that's all!
I can imagine it’s hard to see the nuance if you’re ESL but it’s there.
If I'm finding that voice boring, I'll stop reading - whether or not AI was used.
The generic AI voice, and by that I mean very little prompting to add any "flavor", is boring.
Of course I've used AI to summarize things and give me information, like when I'm looking for a specific answer.
In the case of blogs though, I'm not always trying to find an "answer", I'm just interested in what you have to say and I'm reading for pleasure.
It feels great to use. But it also feels incredibly shitty to have it used on you.
My recommendation: just give the prompt. If your readers want to expand it, they can do so. Don't pollute others' experience by passing the expanded form around. Nobody enjoys that.
I do understand the reasoning behind being original, but why make mistakes when we have tools to avoid them? That sounds like a strange recommendation.
I've found a better approach to using AI for writing. First, if I don't bother writing it, why should you bother reading it? LLMs can be great sounding boards. Treat them as teachers, not assistants. Your teacher is not gonna write your essay for you, but he will teach you how to write, and spot the parts that need clarification. I will share my process in the coming days; hopefully it will get some traction.
For essays, honestly, I do not feel so bad, because I can see that other than some spaces like HN the quality of the average online writer has dropped so much that I prefer to have some machine-assisted text that can deliver the content.
However, my problem is with AI-generated code.
In most cases, for creating trivial apps, I think AI-generated code will be OK to good; however, the issue I'm seeing as a code reviewer is that folks whose code design style you know are so heavily reliant on AI-generated code that you can be sure they did not write, and do not understand, the code.
One example: working with some data scientists and researchers, most of them used to write things in Pandas with some trivial for loops and some primitive imperative programming. Now, especially after Claude Code, most of the things are vectorized, with heavily compressed variable naming. Sometimes folks use Cython in data pipeline tasks, or even take functional programming to an extreme.
Good performance is great, and leveling up the quality of the codebase is a net positive; however, I wonder, in a scenario where things go south and/or Claude Code is not available, whether those folks will be able to fix it.
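To make the style shift concrete, here's a toy sketch (not from any codebase described above) contrasting the loop-heavy Pandas those researchers used to write with the vectorized form that tends to come back from the assistant:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"price": [10.0, 12.5, 9.0], "qty": [3, 1, 4]})

    # The "primitive imperative" style: row-by-row iteration.
    totals = []
    for _, row in df.iterrows():
        totals.append(row["price"] * row["qty"])
    df["total_loop"] = totals

    # The vectorized style: one column-wise expression, much faster on large frames.
    df["total_vec"] = df["price"] * df["qty"]

    assert np.allclose(df["total_loop"], df["total_vec"])

Both are readable at this size; the reviewer's worry is about the cases where the vectorized one-liner is something the author couldn't rewrite or debug by hand.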
It appears inconsiderate—perhaps even dismissive—to present me, a human being with unique thoughts, humor, contradictions, and experiences, with content that reads as though it were assembled by a lexical randomizer. When you rely on automation instead of your own creativity, you deny both of us the richness of genuine human expression.
Isn’t there pride in creating something that is authentically yours? In writing, even imperfectly, and knowing the result carries your voice? That pride is irreplaceable.
Please, do not use artificial systems merely to correct your grammar, translate your ideas, or “improve” what you believe you cannot. Make errors. Feel discomfort. Learn from those experiences. That is, in essence, the human condition. Human beings are inherently empathetic. We want to help one another. But when you interpose a sterile, mechanized intermediary between yourself and your readers, you block that natural empathy.
Here’s something to remember: most people genuinely want you to succeed. Fear often stops you from seeking help, convincing you that competence means solitude. It doesn’t. Intelligent people know when to ask, when to listen, and when to contribute. They build meaningful, reciprocal relationships. So, from one human to another—from one consciousness of love, fear, humor, and curiosity to another—I ask: if you must use AI, keep it to the quantitative, to the mundane. Let your thoughts meet the world unfiltered. Let them be challenged, shaped, and strengthened by experience.
After all, the truest ideas are not the ones perfectly written. They’re the ones that have been felt.
> It appears inconsiderate—perhaps even dismissive—to present me, a human being with unique thoughts, humor, contradictions, and experiences, with content that reads as though it were assembled by a lexical randomizer.
I like that beginning better than the original:
> It seems so rude and careless to make me, a person with thoughts, ideas, humor, contradictions and life experience to read something spit out by the equivalent of a lexical bingo machine because you were too lazy to write it yourself.
No one's making anyone read anything (I hope). And yes, it might be inconsiderate or perhaps even dismissive to present a human with something written by AI. The AI was able to phrase this much better than the human! Thank you for presenting me with that, I guess?
I suppose I am writing to you because I can no longer speak to anyone. As people turn to technology for their every word, the space between them widens, and I am no exception. Everyone speaks, yet no one listens. The noise fills the room, and still it feels empty.
Parents grow indifferent, and their children learn it before they can name it. A sickness spreads, quiet and unseen, softening every heart it touches. I once believed I was different. I told myself I still remembered love, that I still felt warmth somewhere inside. But perhaps I only remember the idea of it. Perhaps feeling itself has gone.
I used to judge the new writers for chasing meaning in words. I thought they wrote out of vanity. Now I see they are only trying to feel something, anything at all. I watch them, and sometimes I envy them, though I pretend not to. They are lost, yes, but they still search. I no longer do.
The world is cold, and I have grown used to it. I write to remember, but the words answer nothing. They fall silent, as if ashamed. Maybe you understand. Maybe it is the same with you.
Maybe writing coldly is simply compassion, a way of not letting others feel your pain.
Now, I take a cue from school, and write the outline first. With an outline, I can use a prompt for the LLM to play the role of a development editor to help me critique the throughline. This is helpful because I tend to meander, if I'm thinking at the level of words and sentences, rather than at the level of an outline.
Once I've edited the outline for a compelling throughline, I can then type out the full essay in my own voice. I've found it much easier to separate the process into these two stages.
Before outline critiquing: https://interjectedfuture.com/destroyed-at-the-boundary/
After outline critiquing: https://interjectedfuture.com/the-best-way-to-learn-might-be...
I'm still tweaking the development editor. I find that it can be too much of a stickler about the form of the throughline.
I suppose if it makes you feel like it's better (even if it isn't), and you enjoy it, go ahead. But know this: we can tell.
If you're talking about something more recent, there are only two essays I wrote with the outlining and throughline method I described above. And for all of the essays, I wrote every word you read on the page with my fingers tapping on the keyboard.
Hence, I'm not actually sure you can tell. I believe you think I'm just one-shotting these essays by rambling to an LLM. I can tell you for sure the results from doing that are pretty bad.
All of them have the same rhetorical structure...probably because it's what I write like without an LLM, and it's what I prompted the LLM, playing a role as a development editor to critique outlines to do! So if you're saying that I'm a bad writer (fair), that's one thing! But I'm definitely writing these myself. shrug
---
Honestly, it feels rude to hand me something churned out by a lexical bingo machine when you could've written it yourself. I'm a person with thoughts, humor, contradictions, and experience, not a content bin.
Don't you like the pride of making something that's yours? You should.
Don't use AI to patch grammar or dodge effort. Make the mistake. Feel awkward. Learn. That's being human.
People are kinder than you think. By letting a bot speak for you, you cut off the chance for connection.
Here's the secret: most people want to help you. You just don't ask. You think smart people never need help. Wrong. The smartest ones know when to ask and when to give.
So, human to human, save the AI for the boring stuff. Lead with your own thoughts. The best ideas are the ones you've actually felt.
There are some videos on Instagram that I didn’t notice were AI until my wife told me!
If I want AI content, I will go to an AI. The only good outcome is that I am spending way less time on social media because so much of it is now AI
I would have written "lexical fruit machine", for its left to right sequential ejaculation of tokens, and its amusingly antiquated homophobic criminological implication.
That said, when I do try to get LLMs to write something, I can't stand it, and feel like the OP here.
It's really funny how many business deals would go better if people put their requests into an AI to explain what exactly is being requested. Most people are not able to answer, and if they used an AI they could respond properly without wasting everyone's time. But at least not using an AI shows their competency (or rather, incompetence) level.
It's also sad that I need to tell people to put my message into an AI so they don't ask me useless questions. An AI can fill most of the gaps people don't get. You might say my requests are not written properly, but then how can an AI figure out what I want to say? I also put my requests into an AI when I can, and it can create ELI5 explanations of the requests "for dummies".
I recently interviewed a person for a role as senior platform architect. The person was already working for a semi reputable company. In the first interview, the conversation was okay but my gut just told me something was strange about this person.
We gave the candidate a case to solve, with a few diagrams, and asked them to prepare a couple of slides to discuss the architecture.
The person came back with 12 diagrams, all AI generated, littered with obvious AI “spelling”/generation mistakes.
And when we questioned the person about why they thought we would gain trust and confidence in them through this obviously AI-generated content, they even became aggressive.
Needless to say it didn’t end well.
The core problem is really how much time is now being wasted in recruiting with people who “cheat” or outright cheat.
We have had to design questions to counter AI cheating, and strategies to avoid wasting time.
But you bet that I'm going to use AI to correct my grammar and spelling for the important proposal I'm about to send. No sense in losing credibility over something that can be corrected algorithmically.
Anyone who has done any serious writing knows that a good editor will always find a dozen or more errors in any essay of reasonable length, and very few people are willing to pay for professional proofreading services on blog posts. On the other side of the coin, readers will wince and stumble over such errors; they will not wonder at the artisanal authenticity of your post, but merely be annoyed. Wabi-sabi is an aesthetic best reserved for decor, not prose.
This is an emotionally charged subject for many, so they're operating in Hurrah/Boo mode[1]. After all, how can we defend the value of careful human thought if we don't rush blindly to the defense of every low-effort blog post with a headline that signals agreement with our side?
In terms of proofreading, I just mean proofreading, not rewriting anything, and especially not using the output verbatim as suggested fixes. The author should ensure they retain their writing style and be assertive in their discretion about which corrections to make.
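As one illustration of keeping the tool on that leash, here's a minimal sketch using the OpenAI Python client; the model name, prompt, and helper function are assumptions for the example, not anyone's actual workflow:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PROOFREAD_ONLY = (
        "Act as a proofreader. List spelling and grammar errors only, "
        "one per line, as 'original -> correction'. Do not rewrite, "
        "rephrase, or comment on style."
    )

    def proofread(text: str) -> str:
        # Returns a list of suggested corrections; the author still decides
        # which ones to apply, preserving their own voice.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any chat model works the same way
            messages=[
                {"role": "system", "content": PROOFREAD_ONLY},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content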
Skimming was pretty common before AI too. People used to read and share notes instead of entire texts. AI has just made it easier.
Reading long texts is not a problem for me if it's engaging. But often I find they just go on and on without getting to the point. Especially news articles. They are the worst.
If a post contains valuable information that I learn from it, I don't really care if AI wrote it or not. AI is just a tool, like any other tool humans invented.
I'm pretty sure people had the same reaction 50 years ago, when the first PCs started appearing: "It's insulting to see your calculations made by personal electronic devices."
Okay, I can understand even drawing the line at grammar correction, in that not all "correct" grammar is desirable or personal enough to convey certain ideas.
But not for translation? AI translation, in my experience, has proven to be more reliable than other forms of machine translation, and personally learning a new language every time I need to read something non-native to me isn't reasonable.
I want to believe that. When I was a student, I built a simple HTML page with a feedback form that emailed me submissions. I received exactly one message. It arrived encoded; I eagerly decoded it and found a profanity-filled rant about how terrible my site was. That taught me that kindness online isn’t the default - it’s a choice. I still aim for it, but I don’t assume it.
1. They’re assholes.
2. They care enough to speak up, but only when the thing stops working as expected.
I think the vast majority of users/readers are good people who just don’t feel like engaging. The minority are vocal assholes.
Agreed fully. In fact it'd be quite rude to force you to even read something written by another human being!
I'm all for your right to decide what is and isn't worth reading, be it ai or human generated.
The absolute bare minimum respect you can have for someone who’s making time for you is to make time for them. Offloading that to AI is the equivalent of shitting on someone’s plate and telling them to eat it.
I struggle everyday with the thought that the richest most powerful people in the world will sell their souls to get a bit richer.
[0] AI-generated fake podcasts (mostly via NotebookLM) https://www.kaggle.com/datasets/listennotes/ai-generated-fak...
Perhaps the author is speaking to the people who are only temporarily led astray by the pervasive BS online and by the recent wildly popular "cheating on your homework" culture?
I think low-effort LLM use is hilariously bad. The content it produces, too. Tuning it, giving it style, safeguards, limits, direction, examples, etc. can improve it significantly.
Not everyone has this same experience of the world. People are harsh, and how much grace they give you has more to do with who you are than what you say.
That aside, the worst problem with LLM-generated text isn’t that it’s less human, it’s that (by default) it’s full of filler, including excessive repetition and contrived analogies.
You okay friend?
Fellas, is it antihuman to use tools to perfect your work?
I can't draw a perfect circle by hand, that's why I use a compass. Do I need to make it bad on purpose and feel embarrassed by the 1000th time just to feel more human? Do I want to make mistakes by doing mental calculations instead of using a calculator, like a normal person? Of course not.
Where this "I'm proud of my sloppy shit, this is what's make me human" thing comes from?
We rose above other species because we learned to use tools, and now we define being "human"... by not using tools? The fuck?
Also, ironically, this entire post smells like AI slop.
But this kind of content is great for engagement farming on HN.
Just write “something something clankers bad”
While I agree with the author it’s a very moot and uninspired point
I don't know how someone can be nerdy enough to be on Hacker News, but simultaneously not nerdy enough to pick up and intuit the rules of the English language through sheer osmosis.
> No, don't use it to fix your grammar
How is this substantially different from using spellcheck? I don't see any problem with asking an LLM to check for and fix grammatical errors.
If folks figure out a way to produce content that is human, contextual and useful... by all means.
I think some people turn AI conversations into blog posts that they pass off as their own because of SEO considerations. If Twitter didn't discourage people sharing links, perhaps we would see a lot more tweet threads that start with https://chatgpt.com/share/... and https://claude.ai/share/... instead of people trying to pass off AI generated content as their own.
The problem is that the current generation of tools "looks like something" even with minimal effort. This makes people lazy. Actually put in the effort and see what you get, with or without AI assist.
Fact: Professional writers have used grammar tools, style guides, and even assistants for decades. AI simply automates some of these functions faster. Would we say Hemingway was lazy for using a typewriter? No—we’d say he leveraged tools.
AI doesn’t create thoughts; it drafts ideas. The writer still curates, edits, and imbues meaning—just like a journalist editing a reporter’s notes or a designer refining Photoshop output. Tools don’t diminish creativity—they democratize access to it.
That said: if you’re outsourcing your thinking to AI (e.g., asking an LLM to write your thesis without engaging), then yes, you’ve lost something. But complaining about AI itself misunderstands the problem.
TL;DR: Typewriters spit out prose too—but no one blames writers for using them.
Ironically, this exact request would’ve fit the blog’s own arguments: "AI is lazy" / "AI undermines thought." But since I was using AI as a diagnostic tool (not a creative one), it doesn’t count.
Self-referential irony? Maybe. But at least I’m being transparent. :)
Therefore, if I or anyone else wanted to see it, I would simply do it myself.
I don't know why so many people can't grasp that.
Anyone can access ChatGPT, why do we need an intermediary?
Someone a while back shared, here on HN, almost an entire blog generated by (barely touched up) AI text. It even had Claude-isms like "excellent question!", em-dashes, the works. Why would anyone want to read that?
Or do you remember when Facebook groups or image communities were flooded with funny/meme AI-generated images, "The Godfather, only with Star Wars", etc? Thank you, but I can generate those zero-effort memes myself, I also have access to GenAI.
We truly don't need intermediaries.
> Everything else is just recycled slop.
No, not everything is slop. AI-slop is slop. The term was coined for a reason.
Everyone can ask the AI directly, unlike accessing journals. Journals are intermediaries because you don't have direct access to the source (or cannot conduct the experiment yourself).
Everyone has access to AI at the slop "let's generate blog posts and articles" level we're discussing here.
A better analogy than teachers is: I ask a teacher a random question, and then I tell it to you with almost no changes, in the same voice as the teacher (and you also have access to the same teacher). Why? What value do I add? You can ask the teacher directly. And doubly so because what I'm asking is not some flash of insight; it's random crap instead.
This is like saying a photographer shouldn't find the sunset they photographed pretty or be proud of the work, because they didn't personally labor to paint the image of it.
A lot more goes into a blog post than the actual act of typing the context out.
Lazy work is always lazy work, but it's possible to make work you are proud of with AI, in the same way you can create work you are proud of with a camera.
This is just a continuation. It does tend to mean there is less effort to produce the output and thus there is a value degradation, but this has been true all along this technology trend.
I don't think we should be a purist as to how writing is produced.
"Just because all those other inventions didn't wreck humanity doesn't mean this one won't"
But that’s not an argument, it’s an evasion.
Given past inventions didn’t destroy us despite similar concerns, then the burden is on you to show why this one is fundamentally different and uniquely catastrophic.
Most people don't care.
AI content in itself isn't insulting, but as TFA hits upon, pushing sloppy work you didn't bother to read or check at all yourself is incredibly insulting and just communicates to others that you don't think their time is valuable. This holds for non-AI generated work as well, but the bar is higher by default since you at least had to generate that content yourself and thus at least engage with it on a basic level. AI content is also needlessly verbose, employs trite and stupid analogies constantly, and in general has the nauseating, bland, soulless corporate professional communication style that anyone with even a mote of decent literary taste detests.
I agree with you that AI slop blog posts are a bad thing, but there are about zero people who use LLMs to spit out blog posts which will change their mind after reading your arguments. You’re not speaking their language, they don’t care about anything you do. They are selfish. The point is themselves, not the reader.
> Everyone wants to help each other.
No, they very much do not. There are a lot of scammers and shitty entitled people out there, and LLMs make it easier than ever to become one of them or increase the reach of those who already are.
True!
But when I encounter a web site/article/video that has obviously been touched by genAI, I add that source to a blacklist and will never see anything from it again. If more people did that, then the selfish people would start avoiding the use of genAI because using it will cause their audience to decline.
Please do tell more. Do you make it a rule in your adblocker, or something else?
> If more people did that, then the selfish people would start avoiding the use of genAI because using it will cause their audience to decline.
I’m not convinced. The effort on their part is so low that even the lost audience (which will be far from everyone) is still probably worth it.
Otherwise, I just remember that particular source as being untrustworthy.
I similarly dislike other trickery as well, like ghostwriters, PR articles in journalism, lip-syncing at concerts, and so on. Fuck off, be genuine.
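On the adblocker question a few comments up: one simple way to do it is a plain domain rule in uBlock Origin's "My filters" list (the domain here is hypothetical):

    ! Never load anything from a source flagged as AI slop (hypothetical domain)
    ||ai-slop-example.com^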
The thing why people are upset about AI is because AI can be used to easily generate a lot of text, but its usage is rarely disclosed. So then when someone discovers AI usage, there is no telling for the reader of how much of the article is signal, and how much is noise. Without AI, it would hinge on the expertise or experience of the author, but now with AI involved, the bets are off.
The other thing is that reading someone's text involves forming a little bit of a connection with them. But then discovering that AI (or someone else) has written the text feels like a betrayal of that connection.
I think the rates of ADHD are going to go through the roof soon, and I'm not sure if there is anything that can be done about it.
It is physiological.
I don't think any evidence exists that you can cause anyone to become neurodivergent except by traumatic brain injury
TikTok does not "make" people ADHD. They might struggle to let themselves be bored and may be addicted to quick fixes of dopamine, but that is not what ADHD is. ADHD is not an addiction to dopamine hits. ADHD is not an inability to be bored.
TikTok, for example, will not give you the kinds of tics and lack of proprioception that are common in neurodivergent people. Being addicted to TikTok will never give you that absurd experience where your brain "hitches" while doing a task and you rapidly oscillate between progressing towards one task vs another. Being habituated to check your phone at every down moment does not cause you to be unable to ignore sensory input because the actual sensory processing machinery in your brain is not functioning normally. Getting addicted to TikTok does not give you a child's handwriting despite decades of practice. If you do not already have significant stimming and jitter symptoms, TikTok will not make you develop them.
You cannot learn to be ADHD.
As a diagnosed medical condition, I don't know; as people having seemingly shorter and shorter attention spans, we are seeing it already. TikTok, YT Shorts, and the like don't help; we've weaponised inattention.
Specifically, is there any correlation between people who have always read a lot, as I do, and people who don't?
My observation (anecdata) is that the people I know who read heavily are much better at spotting AI slop, and much more against it, than people who don't read at all.
Even when I've played with the current latest LLM's and asked them questions, I simply don't like the way they answer, it feels off somehow.
I quite like using LLMs to learn new things. But I agree: I can't stand reading blog posts written by LLMs. Perhaps it is about expectations. A blog post I am expecting to gain a view into an individual's thinking; for an AI, I am looking into an abyss of whirring matrix-shaped gears.
There's nothing wrong with the abyss of matrices, but if I'm at a party and start talking with someone, and get the whirring sound of gears instead of the expected human banter, I'm a little disturbed. And it feels the same for blog content: these are personal communications; machines have their place and their use, but if I get a machine when I'm expecting something personal, it counters expectations.
AI is good at local coherence, but loses the plot over longer thoughts (paragraphs, pages). I don't think I could identify AI sentences but I'm totally confident I could identify an AI book.
This includes both opening a large text in a way of thinking that isn't reflected several paragraphs later, and also maintaining a repetitive "beat" in the rhythm of writing that is fine locally but becomes obnoxious and repetitive over longer periods. Maybe that's just regression to the mean of "voice?"
I'm starting to pivot and realize that quality is actually way more important than I thought, especially in a world where it is very easy to create things of low quality using AI.
Another place I've noticed it is in hiring. There are so many low-quality applications it's insane. One application with a full GitHub, a profile, and a cover letter and/or video which actually demonstrates that you understand where you are applying is worth more than 100 low-quality ones.
It's gone from a charming gimmick to quickly becoming an ick.
Frustrated, I just throw that mess straight at claude-code and tell it to fix whatever nonsense it finds and do its best. It probably implements 80–90% of what the doc says — and invents the rest. Not that I’d know, since I never actually read the original AI-generated PRD myself.
In the end, no one’s happy. The whole creative and development process has lost that feeling of achievement, and nobody seems to care about code quality anymore.
Jokes aside, good article.
The reason AI is so hyped up at the moment is that you give it little, it gives you back more.
But then whose blog-post am I reading? What really is the point?
Top articles with millions of readers are done with AI. It's not an AI problem, it's a content problem. If it's watery, with no tuned style, it's bad. Same as with a human author.
Ha! That's a very clever, spot-on insult. Most LLMs would probably be seriously offended by this, were they rational beings.
> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake.
OK, you are pushing it, buddy. My Mandarin is not that good; as a matter of fact, I can handle no Mandarin at all. Or French, for that matter. But I'm certain a decent LLM can do that without me having to reach out to another person, who might not be available or have enough time to deal with my shenanigans.
I agree that there is way too much AI slop being created and made public, yet there are plenty of cases where the use is fair and genuinely improves whatever the person is doing.
Yes, AI is being abused. No, I don't agree we should all go taliban against even fair use cases.
You know what I'm doing? I'm using AI to cut to the point and extract the relevant (for me) info.
If the goal is to get the job done, then use AI.
Do you really want to waste precious time for so little return?
But just say it! Bypass the middleman who's just going to make it blurrier or more long-winded.
You're never going to get that raw shit you say you want, because it has negative value for creators' brands, it looks way lazier than spot-checked AI output, and people see the lack of baseline polish and nope out right away unless it's a creator they're already sold on (then you can pump out literal garbage; as long as you keep it a low % of your total content, you can get away with shit new creators only dream of).