However, there is a lot of writing that is basically just an old-school form of context engineering. While I would love to think that a PRD is a place to think through ideas, I think many of us have encountered situations, pre-AI, where PRDs were basically context dumps without any real planning or thought.
For these cases, I think we should just drop the premise altogether that you're writing. If you need to write a proposal for something as a matter of ritual, give it to AI. If you're documenting a feature to remember context only (and not really explain the larger abstract principles driving it), it's better created as context for an LLM to consume.
Not long ago my engineering team was trying to enforce writing release notes so people could be aware of breaking changes, but people groaned at the idea of having to read them. The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context.
I think it's going to be a while before the full impact of AI really works its way through how we work. In the meantime we'll continue to have AI-written content fed back into AI and then sent back to someone else (when this could all be a more optimized, closed loop).
I read somewhere that Thinking, Writing and Speaking engage different parts of your brain. Whatever the mechanism, I often resolve issues midway while writing a report on them.
I started publishing my writing recently and I too often fall back into "debugging my mental model" mode, which while extremely valuable for me, doesn't make for very good reading.
I guess the optimal sequence would be to spend a few sessions writing privately on a subject, to build a solid mental model, then record a few talks to learn to communicate it well.
-- Similarly, journaling on paper and with voice memos seems to give me a different perspective on the same problem.
If you really feel the need, you can attach the LLM output as an appendix. I probably won't read it.
however, the education system has done a disservice to how critical thinking actually happens.
when you write and then try to edit your thoughts (the written material), the editing part helps you clarify things and face the truth, i.e. whether you're bullshitting yourself and want to continue or choose another path.
the other part - in a world of answers - critical thinking is a result of asking better questions.
writing helps one to ask better questions.
preferably if you write in a dialogue style.
You see the same thing in teaching, perhaps even more because of the interactive element. But the dynamic in any case is the same. Ideas exist as a kind of continuous structure in our minds. When you try to distill that into something discrete you're forced to confront lingering incoherence or gaps.
> The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context.
Why is more AI the "obvious" best solution here? If nobody wants to read your release notes, then why write them? And if they're going to slim them down with their AI anyway, then why not leave them terse?
It sounds like you're just handwaving at a problem and saying "that's where the AI would go" when really that problem is much better solved without AI if you put a little more thought into it.
If I had a technically capable human assistant, I would have them filter through release notes from a vendor and only give me the relevant information for APIs I use. Having them take care of the boring, menial task so I can focus on more important things seems like a no brainer. So it seems reasonable to me to have an AI do that for me as well.
If you have a problem with that, then you should also have a problem with computers in general.
But maybe you do have a problem with computers - after all, they regularly eliminate jobs, for example. In that case, AI is only special in its potentially greater effectiveness at doing what computers have always been used to do.
But most of us use computers in various ways even if we have qualms about such things. In practice, the same already applies to AI, and likely will for you too, in future.
If writing something is too tedious for you, at least respect my time as the reader enough to just give me the prompt you used rather than the output.
Prompt: here are 5 websites, 3 articles I wrote, 7 semi-relevant markdown notes, the invitation for the lecture I'm giving, a description of the intended audience, and my personal plan and outline.
Output: draft of a lecture
And then the review, the iteration, feedback loops.
The result is thoroughly a collaboration between me and AI. I am confident that this is getting me past writer blocks, and is helping me build better arcs in my writing and lectures.
The result is also thoroughly what I want to say. If I'm unhappy with parts, then I add more input material, iterate further.
I assure you that I spend hours preparing a 10-minute pitch. With AI.
(This comment was produced without AI.)
You have less interest in sifting through multiple articles and wiki pages sent to you by a stranger along with a prompt than in the one paragraph that same stranger selected as their curated point.
And pretending like you’d act otherwise is precisely the kind of “anti ai virtue signaling” that serves as a negative mind virus.
AI is full of hype, but the delusion and head-in-the-sand reactions are worse by a mile.
I have been going back to verbose, expansive inline comments. If you put the "history" inline it is context; if you stuff it off in some other system it's an artifact. I can't tell you how many times I have worked in an old codebase that references a "bug number" in a long-dead tracking system.
End users? Other Devs? These two groups are not the same.
As an end user of something, I don't care about the details of your internal refactor, only the performance, features, and solutions. As a dev looking at the notes there is a lot more I want to see.
The artifact exists to inform about what is in this version when updating. And it can come easily from the commit messages, and be split for each audience (user/dev).
It doesn't change the fact that once you're in the code, that history, inline, is much, much more useful. The commit message says "We fixed a performance issue around XXX". The inline comment is where you can put the reason FOR the choice made.
One comes across this pattern a lot in dealing with data (flow) or end-user inputs. It's that ugly chain of if/elseif/elseif... that you look at and wonder "why isn't this a simple switch", because that's what the last two options really are. Having clues as inline text is a boon to us, and to any agent out there, because it's simply context at that point. Neither of us has to make a (tool) call to look at a diff, or a ticket, or any number of other systems that we keep artifacts in.
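As a made-up illustration of the kind of inline clue I mean (all identifiers invented, not real project code), a small Python sketch:

    LEGACY_ALIASES = {"OLD-7": "NEW-7"}

    def classify(code):
        # Looks like it should be a plain switch/lookup, but the first branch
        # has to run before the others: the upstream feed still sends pre-2019
        # legacy codes, and collapsing this into one table broke imports the
        # last time someone "cleaned it up".
        if code in LEGACY_ALIASES:
            return classify(LEGACY_ALIASES[code])
        elif code.startswith("TMP-"):
            return "pending"
        elif code.startswith("NEW-"):
            return "accepted"
        else:
            return "rejected"

    print(classify("OLD-7"))  # "accepted", via the legacy alias

The comment is the artifact-turned-context: the reason FOR the shape of the code lives right next to it, so neither a human nor an agent has to chase a dead ticket number.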
Sure.
> If you need to write a proposal for something as a matter of ritual, give it to AI. If you're documenting a feature to remember context only (and not really explain the larger abstract principles driving it), it's better created as context for an LLM to consume.
No, no, no. You don't need to take that step. Whatever bullet-point list you're feeding in as the prompt is the relevant artifact you should be producing and adding to the bug, or sharing as an e-mail, or whatever.
I'm reminded of that scene in "Ghost in the Shell" where some guy asks the Major why he is on the team (full of cyborgs) and she responds with something along the lines of "Because you are basically un-enhanced (maybe without a ghost?) and are likely to respond differently than the rest of us; Overspecialization is death."
I think a diversity of opinion is important for society. I'm worried that LLMs are going to group-think us into thinking the same way, believing the same things, reacting the same way.
I wonder if future children will need to be taught how to purposely have their own opinions, being so used to always asking others before even considering things on their own. The LLM will likely reach a better conclusion than you would on your own, but there is value in diverging from the consensus and thinking your own thoughts.
https://stephencagle.dev/posts-output/2025-10-14-you-should-...
The scene you mentioned (amazing movie and holds up to this day) with the Major and Togusa:
https://youtube.com/watch?v=VQUBYaAgyKI
While I frequently use a similar argument, "We need someone 'untainted' to provide a different point of view", my honest opinion is somewhat more nuanced. These models tend to gravitate towards some level of writing competence based on how good we are at filtering pre-training data and creating supervised data for fine-tuning. However, that level is still far below where my current professional writing is, and I find it dreadful to read compared to good writing. Plenty of my students cannot "see" this, as they are still below the level of current LLMs, and I caution them against overly relying on LLMs for writing, as they can then never learn good writing and "reach above" LLM-level writing. Instead, they must read widely and reflect, and I always provide written feedback on their writing (rather than making edits myself) so that they must incorporate it manually into their own; in doing so they consider why I disagree with the current writing and hopefully learn to become better writers.
Importantly, it's not wrong. I say this as someone who seems to have the contrarian gene. I am worried too that the status quo is now instant and all-consuming for anyone anywhere. But there's still hope in that AI compresses ramp-up speed for anyone who would have the capacity to branch out anyway. So that's good.
Either we find some way to filter out AI slop or the internet just stops getting used to post and consume content.
Easier to produce, but now every reader has to do the filtering.
Obviously this is nonsensical long term. Why would I want to receive your LLM output when I could get the same output myself?
The bottleneck becomes judgment, and who’s willing to stand behind it.
It's fair enough that you can discard any bad ideas they generate. But by design, the recommendations will be average, bland, mainstream, and mostly devoid of nuance. I wouldn't encourage anyone to use LLMs to generate ideas if you're trying to create interesting or novel ideas.
Explaining a design, problem, etc and trying to find solutions is extremely useful.
I can bring novelty, what I often want from the LLM is a better understanding of the edge cases that I may run into, and possible solutions.
The moment I bring in a conversational element, I want a being that actually has problem comprehension and creativity which an LLM by definition does not.
Neither is the LLM
LLMs are a non-free way for you to make use of less of your brain. It seems to me that these are not the same thing.
But I definitely reach for a clear and concise way to describe that my brain and fingers are a firewall between the LLM and my code/workspace. I’m using it to help frame my thinking but I’m the one making the decisions. And I’m intentionally keeping context in my brain, not the LLM, by not exposing my workspace to it.
Unfortunately they can also validate some really bad ideas.
That being said I don't think LLMs are idea generators either. They're common sense spitters, which many people desperately need.
I sometimes use them when I'm stuck on something, trying to brainstorm. The ideas are always garbage, but sometimes there is a hint of something in one of them that gets me started in a good direction.
Sometimes, though, I feel MORE stuck after seeing a wall of bad ideas. I don't know how to weigh this. I wasn't making progress to begin with, so does "more stuck" even make sense?
I guess I must feel it's slightly useful overall as I still do it.
This couldn't be more wrong. The simplest refutation is just to point out that there are temperature and top-k settings which, by design, generate tokens (and by extension, ideas) that are less probable given the inputs.
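For anyone who hasn't poked at those settings, here is a rough sketch (plain Python, invented function name, not any particular vendor's API) of how temperature and top-k reshape the next-token distribution so that lower-probability tokens actually get sampled:

    import math, random

    def sample_next_token(logits, temperature=1.0, top_k=None):
        # Temperature > 1 flattens the distribution (more unlikely picks);
        # temperature < 1 sharpens it toward the most probable token.
        scaled = [l / temperature for l in logits]

        # Top-k keeps only the k highest-scoring tokens before sampling.
        if top_k is not None:
            cutoff = sorted(scaled, reverse=True)[top_k - 1]
            scaled = [l if l >= cutoff else float("-inf") for l in scaled]

        # Softmax to probabilities, then sample one token index.
        m = max(scaled)
        exps = [math.exp(l - m) for l in scaled]
        total = sum(exps)
        return random.choices(range(len(scaled)), weights=[e / total for e in exps])[0]

    # At temperature 1.5 the second- and third-ranked tokens get picked often,
    # which is the "less probable given the inputs" effect described above.
    print(sample_next_token([2.0, 1.5, 0.3, -1.0], temperature=1.5, top_k=3))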
I think the only reason it’s seen as good anywhere is there are a lot of tasteless and talentless people who can pretend they created whatever was curled out. This goes for code as well.
If I offend anyone I will not be apologising for it.
What it considers best is what occurs most often, which can be the most average answer. Unless the service is tuned for search (Perplexity, or Google itself, for example), others will not provide as complete an answer.
How well we ask can make all the difference. It's like asking a coworker. Providing too little information, or too much context can give different responses.
Try asking the model not to provide its most common or average answer.
Been using it this way for 2, almost 3 years.
This eloquently states the problem with sending LLM content to other people: As soon as they catch on that you're giving them LLM writing, it changes the dynamic of the relationship entirely. Now you're not asking them to review your ideas or code, you're asking them to review some output you got from an LLM.
The worst LLM offenders in the workplace are the people who take tickets, have Claude do the ticket, push the PR, and then go idle while they expect other people to review the work. I've had to have a few uncomfortable conversations where I explain to people that it's their job to review their own submissions before submitting them. It's something that should be obvious, but the magic of seeing an LLM produce code that passes tests or writing that looks like it agrees with the prompt you wrote does something to some people's brains.
This distinction is important, because (1) writing is not the only way to facilitate thinking, and (2) writing is not necessarily even the best way to facilitate thinking. It's definitely not the best way (a) for everyone, (b) in every situation.
Audio can be a great way to capture ideas and thought processes. Rod Serling wrote predominantly through dictation. Mark Twain wrote most of his autobiography by dictation. Mark Duplass on The Talking Draft Method (1m): https://www.youtube.com/watch?v=UsV-3wel7k4
This can work especially well for people who are distracted by form and "writing correctly" too early in the process, for people who are intimidated by blank pages, for non-neurotypical people, etc. Self-recording is a great way to set all of those artifacts of the medium aside and capture what you want to say.
From there, you can (and should) leverage AI for transcripts, light transcript cleanups, grammar checks, etc.
Yes, this is my process:
Record yourself rambling out loud, and import the audio in NotebookLM.
Then use this system prompt in NotebookLM chat:
> Write in my style, with my voice, in first person. Answer questions in my own words, using quotes from my recordings. You can combine multiple quotes. Edit the quotes for length and clarity. Fix speech disfluencies and remove filler words. Do not put quotation marks around the quotes. Do not use an ellipsis to indicate omitted words in quotes.
Then chat with "yourself." The replies will match your style and will be source-grounded. In fact, the replies automatically get footnotes pointing to specific quotes in your raw transcripts.
This workflow may not save me time, but it helps me get started, or get unstuck. It helps me stop procrastinating and manage my emotions. I consider it assistive technology for ADHD.
I've definitely lost something since migrating my Artist's Way morning pages to the netbook. (Worth it, though, to enable grep—and, now, RAG).
It's not the same thing as talking to someone (or a group) about something.
I talk to other people. They influence me, steer me. I am okay with that.
Then once I feel like I have addressed all the areas, I ask for a "critical" review, which usually pokes holes in something that I need to fix. Finally have the AI draft up a document (Though you have to generally tell it to be as concise and clear as possible).
Art is where I choose to draw the line, for both ideation and content generation. That work report I leveraged AI to help flesh out isn't art, but my personal blog is, as is anything I must internalize (that is, thoroughly understand and remember). This is why I have the following disclaimer on my blog (and yes, the typo on this page is purposeful!): https://jasoneckert.github.io/site/about-this-site/
If I let an LLM generate the text, that cognitive resolution never happens. I can't offload a thought I haven't actually formed, and hence I struggle to safely forget about it.
Using AI for that is like hiring someone to lift weights for you and expecting to get stronger (I remember Slavoj Žižek equating it to mechanical lovemaking in his recent talk somewhere).
The real trap isn't that we/writers will be replaced; it's that we'll read the eloquent output of a model and quietly trick ourselves into believing we possess the deep comprehension it just spit out.
It reminds me of the shift from painting to photography. We thought the point of painting was to perfectly replicate reality, right up until the camera automated it. That stripped away the illusion and revealed what the art was actually for.
If the goal is just to pump out boilerplate, sure, let AI do it. But if the goal is to figure out what I actually think, I still have to do the tedious, frustrating work of writing it out myself.
Words / language are the great technology we've made for representing ideas, and representing those ideas in the written word enables us to evaluate, edit, and compose those smaller ideas into bigger ideas. Kind of like how teachers would ask for an explanation in my own words, writing down my understanding of something I'd heard or read forced me to really evaluate the idea, focus on the parts I cared about, and record that understanding. Without the writing step, ideas would easily just float through my mind like a phantasm, shapeless and out of focus and useless when I had a tangible need for the idea.
I am glad I learned to write (both code and text) long before Claude came online. It would have been very hard to struggle through translating ideas from my head into words and words (back) into ideas in my head if I knew there was an "Easy button" I could hit to get something cogent-sounding. I hope a large enough proportion of kids today will still put in the work and won't just end up with a stunted ability to write/think.
Though I’m unsure, this notion comes to mind:
to take a casual reply to a post
and turn it, with an easy button’s press,
to flawless iambic pentameter
might be the finest way to learn the art
of speaking thus extempore, off the cuff.
It's not perfect, but I envy the wealth of tools this generation has. They'll find uses for AI that leave us in awe.
"So if that is true then this next statement is also true..." and the LLM will either agree or disagree.
There are lines between writing as a persuasive medium, writing as a didactic medium for teaching, writing as a creative/poetic medium, writing as the process of creation of marketable products, writing as a shared summary of specialist niche knowledge, and writing as an aid to personal comprehension.
Those are fundamentally different activities. They happen to use the same medium and there are some overlapping areas. But they're essentially different activities with different requirements and different processes.
There's also the point that LLMs can give you explicit control over features like reading age, social register, metaphor frames/themes/imagery, sentence structure, grammatical uniqueness, rhythmic variation, and other linguistic markers.
The generic templated slop styles - rule of three, it's not this it's that, bullet points, "that's rare", strained weird or cringey similes, and the other tics - that appear all over social media are the low-skill default for AI writing. It doesn't have to be that crude or obvious, and learning how to push it beyond that is a skill in itself.
As is creating knowledge engineering systems that use agents to manage knowledge in useful ways, with writing as one possible output.
You already have this. Control over your writing is the default position.
Yeah, I regularly spend a lot of time with Claude fleshing out ideas and scoping out features. I'm behind the times and just use the chat interface rather than Claude Code, so perhaps there are controls I'm not aware of, but there can't be any that make it correctly understand an under-specified idea, or even correctly understand an adequately specified idea.
For example, I've been playing around building a side project that involves building out a safety-weighted graph to support generating safer bike routes. I was recently working on integrating traffic control devices (represented as OpenStreetMap nodes) into the model where I calculate weights for my graph (I essentially join the penalty for the traffic control device onto the destination end of an edge), and Claude kept wanting to average that penalty by the length of the edge. That makes sense for some other factors in my model, like crashes, surface material, and max speed, but it doesn't make sense for traffic control signals at intersections, since the length of an edge shouldn't change the risk a cyclist experiences going through an intersection. If I didn't have a well-developed ability to closely parse words to ideas, I could have very easily just taken the working model Claude generated and built more on top of it, setting up a dangerous situation where the routing algo would promote routes running a user through more intersections (which are the most dangerous place for cyclists).
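To make that distinction concrete, here's a toy version of the weighting issue (the function name, factors, and numbers are made up, not my actual code): distance-scaled risk factors get multiplied by edge length, while the traffic-signal penalty at the destination node is a fixed cost that must not be diluted by a longer edge.

    def edge_weight(length_m, per_meter_risks, intersection_penalty=0.0):
        # Factors like crash density, surface, and max speed scale with the
        # distance travelled, so they're expressed per metre of the edge.
        distance_cost = length_m * sum(per_meter_risks)
        # The signal at the destination node costs the same no matter how long
        # the edge is; averaging it over length would make the same
        # intersection look "safer" at the end of a long edge.
        return distance_cost + intersection_penalty

    # A 50 m edge and a 500 m edge ending at the same signalised intersection
    # pay the same fixed intersection cost:
    print(edge_weight(50, [0.002, 0.001], intersection_penalty=5.0))   # 5.15
    print(edge_weight(500, [0.002, 0.001], intersection_penalty=5.0))  # 6.5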
I hope a comparable proportion of kids coming up today will spend the time and energy to understand the ideas behind the text and the code, but I really doubt 18-year old me would have had that wisdom. I would have been underspecifying what I wanted out of a lack of prerequisite knowledge, receiving slop, and either promptly getting lost in debugging hell or more likely the worse case of erroneously believing the slop satisfied the brief.
> There are lines between writing as a persuasive medium, writing as a didactic medium for teaching, writing as a creative/poetic medium, writing as the process of creation of marketable products, writing as a shared summary of specialist niche knowledge, and writing as an aid to personal comprehension.

> Those are fundamentally different activities. They happen to use the same medium and there are some overlapping areas. But they're essentially different activities with different requirements and different processes.
In all of those areas, if you take away the human who can develop value-creating ideas into an accurate and high-fidelity written representation, you will just get slop. Developing ideas and representing them in words is the skill. There is no substitute.
Sometimes an LLM can shortcut me through a bunch of those misunderstandings. It feels like an easy win.
But ultimately, lacking context for how I got to that point in the debugging litany always slows me down more than the time it saved me. I frequently have to go backwards to uncover some earlier insight the LLM glossed over, in order to "unlock" a later problem in the litany.
If the problem is simple enough the LLM can actually directly tell you the answer, it's great. But using it as a debugging "coach" or "assistant" is actively worse than nothing for me.
This applies at a business level (most software shops shouldn't have full-time book keepers on staff, for example), but applies even more in the AI age.
I use LLMs to help me code the boring stuff. I don't want to write CDK, I don't want to have to code the same boilerplate HTML and JS I've written dozens of times before - they can do that. But when I'm trying to implement something core to what I'm doing, I want to get more involved.
Same with writing. There's an old joke in the writing business that most people want to be published authors more than they want to go through the process of writing. People who say they want to write don't actually want to do the work of writing; they just want the cocktail parties and the stroked ego of seeing their name in a bookshop or library. LLMs are making that more possible, but at a rather odd cost.
When I write, I do so because I want to think. Even when I use an LLM to rubber duck ideas off, I'm using it as a way to improve my thinking - the raw text it outputs is not the thing I want to give to others, but it might make me frame things differently or help me with grammar checks or with light editing tasks. Never the core thinking.
Even when I dabble with fiction writing: I enjoy the process of plotting, character development, dialogue development, scene ordering, and so on. Why would I want to outsource that? Why would a reader be interested in that output rather than something I was trying to convey? Art lives in the gap between what an artist is trying to say and what an audience is trying to perceive - having an LLM involved breaks that.
So yeah, coding, technical writing, non-fiction, fiction, whatever: if you're using an LLM you're giving up and saying "I don't care about this", and that might be OK if you don't care about this, but do that consciously and own it and talk about it up-front.
When you outsource the generation and thinking, you're also outsourcing the self-review that comes along with evaluating your own output.
In the office, that review step gets outsourced to your coworkers.
Having a coworker who has ChatGPT generate slides, design docs, or PRs is terrible because you realize that their primary input is prompting Claude and then sending the output to other people to review. I could have done that myself. Reviewing their Claude or ChatGPT output so they can prompt Claude or ChatGPT to fix it is just a way to get me to do their work for them.
With an LLM doing all the writing for you, you learn close to nothing.
As for writing, we need to keep in mind that LLMs are tools that augment. So yes if you completely abdicate all responsibility to the LLM that is basically not constructive at all. But if you use it as a tool - what difference does it make? Spell and grammar checkers are also changing your text and of course I am exaggerating a little.
And I do think LLMs can help you think better, but not in a default mode. It is not about prompting skills but about making it work the way you want it to. And that takes time because, well, it is not deterministic and it requires understanding how you generally think and write. Most of the time it might not be possible. For others it works really well, maybe because they write like an LLM?
Btw, we often forget that English is not native for the majority of people on this planet.
IMHO using LLMs to express themselves clearly is many times better than remaining misunderstood.
When I asked it for alternatives/edits, they were not good however.
Docs written by agents almost always produce mediocre results.
Now? I am pushing so much of my writing into prompts for AI, where I know the AI will understand me even with lots of typos and run-on sentences... Is that a bad thing? A good thing? I am able to be so much more effective by sheer volume of words, and the precision and grammar is mostly irrelevant. But I am able to insert nuances and sidetracks that ARE passing vital context to AI but may be lost on people. Or at least pre-prompt-writing people.
No. Don't pretend your taking shortcuts is less questionable because everyone else is doing it too. We're not. Own it yourself, don't get me involved.
> I am able to be so much more effective by sheer volume of words
If you think value comes from volume of words you really need to understand writing better.
I've noticed this myself. Even in my Obsidian vault, which only I read and write in. I think it's a development into writing more imperatively, instinctually. Thinking more in instructions and commands than the speaking and writing habits I've developed organically over my life. Or just "talking to the computer" in plain English, after having to convert my thoughts to code anytime I want to make it do something.
I've been thinking about the role of "director" in media as an analogy to writing with LLMs. I'm working right now on an "essay" that I'm not sure I'll share with anyone, even family (who is my first audience). Right now, under the Authorship section, I wrote "Conceived, directed, and edited by Qaadika. Drafted by Claude", with a few sentences noting that I take responsibility for the content, and that the arguments, structure, audience, and editorial judgments are mine.
I had a unique idea and started with a single sentence prompt, and kept going from there until I realized it should be an essay. So the ideas in it are mine. The thesis is mine. I'm going back and forth with the LLM section by section. Some prompts are a sentence. Some are eight paragraphs. I can read the output and see exactly what was mine and what the LLM added. But my readers won't. They'll just see "Author: Qaadika" and presume every single word was mine. Or they'll sniff out the LLM-ness and stop reading.
I can make a film and call myself director without ever being seen in it. Is it the same if I direct the composition of words without ever writing any of the prose myself? Presuming I've written enough in prompts that it's identifiably unique from cheaper prompts and "LLM, fill in the blank".
We credit Steven Spielberg with E.T. But he didn't write the screenplay. He probably had comments on it, though. He didn't operate the camera. But he probably told the operators where to put it. He didn't act in it. But he probably told the actors where to stand and where to move and how to be. He didn't write the music. But he probably had a sense of when and where to place it in the audio. And he didn't spend every moment in the cutting room, placing every frame just so.
But his name is at the top. He must have done something, even if I can't point to anything specific. The "Vibe" of the film is Spielberg, but it's also the result of hundreds of minds, most of whom aren't named until the end of the film, and probably never read by most viewers.
His contribution to the film was instructions. Do this, don't do that. Let's move this scene to here. This shot would be better from this angle. The musical swell should be on this shot; cut it longer to fit.
So where, exactly, is "Spielberg" in E.T.? What can we objectively credit him with, aside from the finished product: E.T. the Extra-Terrestrial: Coming June 1982?
> LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too?
Given your endorsement of using LLMs for generating ideas, isn't this the inverse of your thesis? The quote's issue with LLMs is the ideas that came out of them; the prose is the tell. I don't think they'd be happy with LLM generated ideas even if they were handwritten.
I feel like this post is missing the forest for the trees. Writing is thinking alright, but fueling your writing by brainstorming with an LLM waters down the process.
I have to disagree that it's good for LLMs to do the research, depending on the context.
If by "useful for research" you mean useful for tracking down sources that you, as the writer, digest and consider, then great.
If by "useful for research" you mean that it will fill in your citations for you, that's terrible. That sends a false signal to readers about the credibility of your work. It's critical that the author read and digest the things they are citing to.
I will sometimes write a lesson and have an LLM generate a quiz and give me feedback on my content, searching for mistakes or unclear content.
I have also used it to help me structure a document. I give it requirements, and it makes a general outline that I then just fill in with my own words.
I’m still not sure how to approach my students’ uses of an LLM. I am loath to make a hard and fast rule of no LLMs because that’s ridiculous. I want to encourage appropriate use. I don’t know what is appropriate use in the context of a student.
An LLM can be a great learning tool but it also can be a crutch.
I tend to do extensive research (a process that would itself involve LLMs too, sure) for a tech plan, a product spec, etc., and usually end up with a really solid idea in my head and, say, five critical key points about this tech plan or product spec that I absolutely must convey in the document.
Then I basically "brain dump" my critical key points (including everything about it, background/reasoning, why this or that way, what's counterintuitive about it, why is this point important, etc.) in pretty messy writing (but hitting all the important talking points) to a LLM prompt, asking it to produce the document I need (be it tech plan, product spec, whatever) and ask it to write it based on my points.
The resulting document has all the important substance on it this way.
If you use an LLM to produce documents like this by way of a prompt like "Write a tech plan for the product feature XYZ I want to build", you're going to get a lot of fluff. No substance, plenty of mistakes, wrong assumptions, etc.
There is nothing wrong with speechwriters. Various authors spilled out their thoughts in rough format and had writers turn them into better-structured, better-phrased, and more understandable projections. Hand-writing each sentence that is presented as an end product to the reader doesn't solve that problem.
Forcibly coupling the two is an arbitrary choice that may be a valid tradeoff for some and not so for others, and not so for _all_ writing.
I'm not good at looping through a document with proper English prose. My writing is raw, particular, and I gloss over a lot of details. LLMs help me turn my shitty extensive notes, in bad grammar and syntax, into shareable and understandable artifacts. They help me turn more of my thoughts into communication others can digest. Without AI, I communicate less of my thoughts due to friction. My thoughts are formed and authored and written, but not in a format consumable by anyone else.
Ebikes help older riders keep riding.
Workers and managers in organizations are being overwhelmed by large numbers of documents because it's so easy to bang something out that's 'about right' and convincing enough.
But there's still some value in writing documents. I agree with the original article - it's all about thinking. My take on it is this: it's possible to use LLMs to write decent documents so long as you treat the process as a partnership (man and machine), and conduct the process iteratively. Work on it, and yes, think.
A large part of our work is about writing documents that no one will read, but you'll get 10 different reminders that they need to get done. These are documents that circulate and need approval from different stakeholders. Everybody stamps their name on them without ever reading them. I used to spend so much time crafting these documents. Now I use an LLM, the stakeholders are probably using an LLM to summarize it, someone is happy, and they are filed for the records.
I call these "ceremonies" because they are a requirement we have, it helps no one, we don't know why we have to do it, but no one wants to question it.
I've had projects that seemed tedious or obvious in my head only to realize hidden complexity when trying to put their trivial-ness into written words. It really is a sort of meditation on the problem.
In the most important AI assisted project I've shipped so far I wrote the spec myself first entirely. But feeding it through an LLM feedback loop felt just as transformational, it didn't only help me get an easier to parse document, but helped me understand both the problem and my own solution from multiple angles and allowed me to address gaps early on.
So I'll say: Do your own writing, first.
It's worse than this. If someone is working out for you, they still own the outcome of that effort (their physique).
With an LLM people _act_ like the outcome is their own production. The thinking, reasoning, structural capability, modeling, and presentation can all just as easily be framed _as your creation_.
That's why I think we're seeing an inverse relationship between ideation output and coherence (and perhaps originality), and a decline in creative thinking and creativity [0]
[0] https://time.com/7295195/ai-chatgpt-google-learning-school/
In essence, LLMs are a much better spell check.
To your point, it's entirely a balance. I personally will record a 10-15 minute yap session on a concept I want to share and feed it to an agent to distill it into a series of observations and more compelling concepts. Then you can use this to write your piece.
This. This is the big distinction. If you like something and/or want to improve it, you do it yourself. If not, you pay someone else to do it. And I think that's ok.
But I guess some people either chose the wrong job or had no other option. I'm happy not to be in that group.
I don't really understand why people will create blogs that are generated by Claude or ChatGPT. You don't have to have a blog; isn't the point of something like a blog that it's your writing? If I wanted an opinion from ChatGPT I could just ask ChatGPT for an opinion. The whole point of a blog, in my mind, is that there's a human who has something that they want to say. Even if you have original ideas, having ChatGPT write the core article makes it feel kind of inhuman.
I'm more forgiving of stuff like Grammarly, because typos are annoying, though I've stopped using it because I found I didn't agree with a lot of its edits.
I admit that I will use Claude to bullshit ideas back and forth, basically as a more intelligent "rubber duck", but the writing is always me.
I think it's the opposite. People have ideas and know what they want to do. If I need to write something, I provide some bullet points and instructions, and Claude does the rest. I then review, and iterate.
LLMs write poorly because most people write poorly. They didn’t cause it, they simply emulate it.
>Essay structured like LLM output
Hmmm...
The biggest problem is they don't understand the time/effort tradeoff between understanding and language, so they don't know how to pack the densities of information properly or how to swim through choppy relationships with the world around them while effectively communicating.
But who knows, maybe they're more effective and I'm just an idiot.
My own experience, however, is that the best models are quite good at helping you with those writing and thinking processes: finding gaps, exposing contradictions or weaknesses in your hypotheses or specifications, and suggesting related or supporting content that you might have included if you'd thought of it, but you didn't.
While I'm a developer and engineer now, I was a professional author, editor, and publisher in a former life. Would have _killed_ for the fast, often excellent feedback and acceleration that LLMs now provide. And while sure, I often have to "no, no, no!" or delete-delete, "redraft this and do it this way," the overall process is faster and the outcomes better with AI assistance.
The most important thing is to keep overall control of the tone, flow, and arguments. Every word need not be your own, at least in most forms of commercial and practical writing. True whether your collaborators are human, mecha, or some mix.
Then, ask an LLM to fix up the article, make it look professional and fill in the "fluff". Explicitly tell it to not include facts not already in the document.
Review the document, and if it's all good, it's done.
Good AI writing takes time, can be valuable, and can inspire readers to send in praise about how insightful or thorough a particular article was (speaking from experience). Why do it? The same reason we all use Claude all day to write code - it is faster / you can do more of it. But in the same way that a junior engineer vibing code is a lot more likely to produce slop than a grizzled senior who is doing the same thing, you have to know what you are doing to get good results out of it.
Pushing back against AI writing in 2026 is like the people pushing back against AI coding in 2024. It's not a question of if it will happen. It's a question of how to do it well. ;)
You can write for yourself, through thinking, and it can be sloppy, bc you're doing it for yourself.
A homecooked meal does NOT look like a Thanksgiving meal.
Most of these writers think that all writing looks like Thanksgiving meals- they aren't. Homecooked meals can be simple, delicious, and not meant to cater for 20+ guests, from family to friends. Each with their own weird peculiarities and food allergies.
writing for thinking should be more like home cooked meals- really disorganized, really sloppy, with none of the presentation, but with all the nutrition and comfort that comes with home cooked meals.
writing is thinking for me, but writing looks like this post; something shot from the hip, and unpolished, to be consumed for myself. it'll probably be downvoted, and that's absolutely ok
This is why I’m bearish on all of the apps that want to do my writing for me. Expanding a stub of an idea into a low information density paragraph, and then summarizing those paragraphs on the other end. What’s the point?
Unless the idea is trivial, LLMs are probably just getting in the way.
Adults now have to have it explained to them, like children, that you can't just stream info through the eyes and ears and expect to learn anything.
That’s one explanation for this apparent need; there are also more sinister ones.
The problem with writing is that the feedback tends to be inconsistent. With going to the gym you can track your progress quantitatively, such as how fast or far you can run or how much weight you lifted, but it's sometimes hard to know whether you're improving at writing.