Grammarly is using our identities without permission, https://www.theverge.com/ai-artificial-intelligence/890921/g..., https://archive.ph/1w1oO
Big difference between "AI, rewrite this passage to sound more like Hunter S Thompson" and "Grammarly-brand unauthorized digital agent Hunter S Thompson, provide a critique of my writing"
Let's see what company values informed this decision [0].
> At Grammarly, it all starts with our EAGER values: Ethical, Adaptable, Gritty, Empathetic, and Remarkable. These values are guiding lights that keep the Grammarly experience compassionate and our business competitive.
Sounds like something I'd expect to see on a banner in an elementary school classroom.
In other words, an LLM can spit out a plausible "output of X," but it cannot encode the process that led X to transform their inputs into their output.
https://www.sciencedirect.com/science/article/pii/S0749596X2...
i can ask it to tell me how to write like a person X right now.
The person in that room, looking up Chinese phrases and patterns in a dictionary, certainly follows a process, but it's easy to dismiss the notion that the person understands Chinese. But the question is: if you zoom out, is the room itself intelligent because it is following a process, even if it's just a bunch of pattern recognition?
can you give a specific example of what an llm can't do? be specific so we can test it.
Not sure why you need a concrete example to "test", but just think about the fact that the LLM has no idea how a writer brainstorms, iterates on their work, or even comes up with the ideas in the first place.
This isn't true in general, and not even true in many specific cases, because a great many writers have described the process of writing in detail and all of that is in their training data. Claude and ChatGPT very much know how novels are written, and you can go into Claude Code and tell it you want to write a novel and it'll walk you through quite a lot of it -- worldbuilding, characters, plotting, timelines, etc.
It's very true that LLMs are not good at "ideas" to begin with, though.
It's certainly possible to mimic many aspects of a notable writer's published style. ("Bad Hemingway" contests have been a jokey delight for decades.) But on the sliding scale of ingenious-to-obnoxious uses for AI, this Grammarly/Superhuman idea feels uniquely misguided.
Imagine interviewing a particularly diligent new grad. They've memorized every textbook and best practices book they can find. Will that alone make them a senior+ developer, or do they need a few years learning all the ways reality is more complicated than the curriculum?
LLMs aren't even to that level yet.
And that's often inaccurate - just as much as asking startup founders how they came to be.
Part of it is that they forgot, part is that they don't know how to describe it, and part is that they don't want to tell you.
To really recreate his writing style, you would need the notes he started with for himself, the drafts that never even made it to his editor, the drafts that did make to the editor, all the edits made, and the final product, all properly sequenced and encoded as data.
In theory, one could munge this data and train an LLM and it would probably get significantly better at writing terse prose where there are actually coherent, deep things going on in the underlying story (more generally, this is complicated by the fact that many authors intentionally destroy notes so their work can stand on its own--and this gives them another reason to do so). But until that's done, you're going to get LLMs replicating style without the deep cohesion that makes such writing rewarding to read.
But authors have not done this work alone. Grammarly is not going to sell "get advice from the editorial team at Vintage" or "Grammarly requires your wife to type the thing out first, though".
I'll also note that probably no human would want advice from the living versions of the authors themselves.
I can do it at the moment with Shakespeare and LLMs.
ex: i read a lot of shakespeare, understand his patterns, understand where he came from and his biography, and i will be able to write like him. why is it different for an LLM?
i again don't get what the point is?
As another example, I can write a story about hobbits and elves in a LotR world with a style that approximates Tolkien. But it won't be colored by my first-hand WW1 experiences, and won't be written with the intention of creating a world that gives my conlangs cultural context, or the intention of making a bedtime story for my kids. I will never be able to write what Tolkien would have written because I'm not Tolkien, and do not see the world as Tolkien saw it. I don't even like designing languages.
that's why we have really good fake van Goghs that a person can't tell apart from the real thing.
of course you can't do exactly what the original person did, but you get close enough much of the time, and as humans we do this frequently.
in the context of this post i think it is for sure possible to mimic a dead author and give steps to achieve writing that would sound like them using an LLM - just like a human.
Editing is one of these things. There can be lots of different processes, informed by lots of different things, and getting similar output is no guarantee of a similar process.
If we are talking about human artifacts, you never have reproducibility. The same person will behave differently from one moment to the next, one environment to another. But I assume you will call that natural variation. Can you say that models can't approximate the artifacts within that natural variation?
If I trained (or, more likely, fine-tuned) an LLM to generate code like what's found in an individual's GitHub repositories, could you comfortably say it writes code the same way as that individual? Sure, it will capture style and conventions, but what about our limitations? What do you think happens if you fine-tune a model to write code like a frontend developer and ask it to write a simple operating system kernel? It's realistically not in their (individual) data but the response still depends on the individual's thought process.
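For what it's worth, here is a minimal sketch of the fine-tuning I have in mind, assuming the Hugging Face transformers/datasets stack and a hypothetical folder of the individual's repositories; it would pick up surface style and conventions, which is exactly the limitation I'm pointing at:

    # Hedged sketch: fine-tune a small causal LM on one developer's code.
    # "their_repos/*.py" is a hypothetical path; gpt2 is just a stand-in checkpoint.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # One text example per source file scraped from the individual's repos.
    data = load_dataset("text", data_files={"train": "their_repos/*.py"})
    tokenized = data["train"].map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="style-clone", num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
    trainer.train()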
Of course, but reasonable behavior across all humans is not the same as what one specific human would do. An individual, depending on the scenario, might stick to a specific choice because of their personality etc. which is not always explained, and heavily summarized if it is.
Look, I don't think you understand how LLMs work. It's not about fine-tuning. It's about generalised reasoning. The key word is "generalised", which can only happen if it has been trained on literally everything.
> It's relevant for data it hasn't been trained on
LLMs absolutely can reason about and conceptualise things they have not been trained on, because of their generalised reasoning ability.
Yes, but how does that help it capture the nuances of an individual? It can try to infer but it will not have enough information to always be correct, where correctness is what the actual individual would do.
this is not true, any examples?
I get that you're into AI products and ok, fine. But no you have not "studied [Shakespeare] greatly" nor are you "able to write like [Shakespeare]." That's the one historical entity that you should not have chosen for this conversation.
This bot is likely just regurgitating bits from the non-fiction writing of authors like an animatronic robot in the Hall of Presidents. Literally nobody would know if the LLM was doing even a passable job of Truman Capote-ing its way through their half-written attempt at NaNoWriMo.
As I look back on my day, I find myself quite pleased with this line.
The LLM does not model text at this meta-level. It can only use those texts as examples; it cannot apply what is written there to its generation process.
can you provide a _single_ example where an LLM might fail? let's test this now.
You need to show me an LLM applying writing techniques that do not have examples in its corpus.
You would have to use some relatively unknown author, I can suggest Iida Turpeinen. There will be interviews of her describing her writing technique, but no examples that aren't from Elolliset (Beasts of the sea).
Find an interview where Turpeinen describes her method for writing Beasts of the Sea, e.g.: https://suffolkcommunitylibraries.co.uk/meet-the-author-iida...
Now ask it to produce a short story about a topic unrelated to Beasts of the Sea, let's say a book about the moon landing.
A human doing this exercise will produce a text with the same feel as Beasts of the Sea, but an LLM-produced text will have nothing in common with it.
why are you bringing this constraint?
If someone has already done the work of giving an example of how to produce text according to a process, we have no way of knowing if the LLM has followed the process or copied the existing example.
And my point of course is that copying examples is the only way that LLMs can produce text. If you use an author who has been so analyzed to death that there are hundreds of examples of how to write like them, say, Hemingway, then that would not prove anything, because the LLM will just copy some existing "exercise in writing like Hemingway".
you have asked for an LLM to read a single interview and produce text that sounds similar to the author based on the techniques in that single interview.
https://claude.ai/share/cec7b1e5-0213-4548-887f-c31653a6ad67 here is the attempt. i don't think i could have done much better.
You could not have done better? Love it. You didn't even bother rewriting my post before pasting it into the box. The post isn't addressed as a prompt, it's my giving you the requirements of what to prompt.
Also, because you did that, you've actually provided evidence for my argument: notice that my attitudes about LLMs are reflected in the LLM output. E.g.:
"Now — the honest problem the challenge identifies: I'm reconstructing a description of a style, not internalizing the rhythm and texture of actual prose. A human who's read the book would have absorbed cadences, sentence lengths, paragraph structures, the specific ratio of concrete detail to abstraction — all the things that live below the level of "technique described in interviews.""
That's precisely because it can't separate metatext from text. It's just copying the vibe of what I'm saying, instead of understanding the message behind the text and trying to apply it. It also hallucinates somewhat here, because its argument is about humans absorbing the text rather than the metatext. But that's also to be expected from a syntax-level tool like an LLM. The end result is... nothing. You failed the task and you ended up supporting my point. But I appreciate that you took the time to do this experiment.
> "Now — the honest problem the challenge identifies: I'm reconstructing a description of a style, not internalizing the rhythm and texture of actual prose. A human who's read the book would have absorbed cadences, sentence lengths, paragraph structures, the specific ratio of concrete detail to abstraction — all the things that live below the level of "technique described in interviews.
a human would have to read all the text, and so would an LLM, but your previous constraint doesn't allow that. should we then allow an LLM to reproduce something that is in its training set?
why do you expect an LLM to achieve something that even a human can't do?
Do you remember the point we're arguing? That a human can understand text about a way of writing, and apply that information to the _process_ of writing (not the output).
If you admit the LLM can't do this, then you are conceding the point.
I don't know why you're claiming that humans can't do this when we very clearly can.
An illustrative example: I could describe a new way of rhyming to a human without an example, and they could produce a rhyme without an example. However, describing this new rhyming scheme to an LLM without examples would not yield any results. (Rhyming is a bad example to test, however, because the LLM corpora have plenty of examples).
The point is that you don't become Jimi Hendrix or Eric Clapton even if you spend 20 years playing in a cover band. You can play the style, sound like them, but you won't create their next album.
Not being Jimi Hendrix or Eric Clapton is the context you are missing. LLMs are Cover Bands...
If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you and expect it to be right.
Instead, what you will receive is text that follows the statistically derived most-likely response to such a question (in accordance with the perplexity tuning).
Isn't this obvious? There is not enough latent knowledge of math there to enable current LLMs to approximate anything resembling an integral.
It's obvious to you.
It isn't obvious to the person I am responding to, and it isn't obvious to the majority of individuals I speak with on the matter (which is why AI, personally, is in the same bucket as religion and politics: topics to simply avoid in polite conversation).
https://news.ycombinator.com/item?id=47316787
does this show i'm a troll? throughout this thread there has been misinformation that i have been dispelling.
what you are doing is ad hominem.
here's another post where i ran the prompt the person asked for, which would apparently show that LLMs can't reason
https://news.ycombinator.com/item?id=47316855
have you considered that you might be misinformed so what i say might look like trolling?
LLMs can reason about integrals as well as in a literature context. You suggested that if it’s not trained on literature then it can’t reason about it. But why does that matter?
this shows that you have very little idea of how LLMs work.
an LLM that is trained only on John Steinbeck will not work at all. it simply does not have the generalised reasoning ability. it necessarily needs inputs from every source possible, including programming and maths.
You have completely ignored that LLMs have _generalised_ reasoning ability, which they derive from disparate sources.
This is not the same thing as reasoning.
LLMs are pattern matchers. If you trained an llm only to map some input to the output of John Steinbeck, then by golly that's what it'll be able to do. If you give it some input that isn't suitably like any of the input you gave it during training, then you'll get some unpredictable nonsense as output.
> If you trained an llm only to map some input to the output of John Steinbeck
this is literally not possible because the llm does not get generalised reasoning ability. this is not a useful hypothetical because such an llm will simply not work. why do you think you have never seen a domain specific model ever?
if you wanted to falsify this claim: "LLMs can't reason", how would one do that? can you come up with some examples that show that they can't reason? what if we come up with a new board game with some rules and see if it can beat a human at it. just feed the rules of the game to it and nothing else.
here is gpt-5.4 solving never before seen mathematics problems: https://epochai.substack.com/p/gpt-54-set-a-new-record-on-fr...
you could again say it's just pattern matching, but then i would argue that it's the same thing we are doing.
why do you think that's the case? lets start from here.
the real answer is that you get benefits from having data from many sources that add up exponentially for intelligence.
> LLMs are pattern matchers
but let's try to falsify this. can you come up with a prompt that clearly shows that LLMs can't reason?
start with a way to falsify that it can't reason.
In school we would have a test with various questions to show you understand the concept of addition, for example. But while my calculator can perfectly add any numbers up to its memory limit, it has no understanding of addition.
"my calculator can perfectly add any numbers up to its memory limit" This kind of anthropomorphic language is misleading in these conversations. Your calculator isn't an agent so it should not be expected to be capable of any cognition.
They absolutely do not. If you "ask it how it came up with the process in natural language" with some input, it will produce an output that follows, because of the statistics encoded in the model. That output may or may not be helpful, but it is likely to be stylistically plausible. An LLM does not think or understand; it is merely a statistical model (that's what the M stands for!)
i can prove that it does have understanding because it behaves exactly like a human with understanding does. if i ask it to solve an integral and ask it questions about it - it replies exactly as if it has understood.
give me a specific example so that we can stress test this argument.
for example: what if we come up with a new board game with a completely new set of rules and see if it can reason about it and beat humans (or come close)?
The complete failure of Claude to play Pokemon, something a small child can do with zero prior instruction. The "how many r's are in strawberry" question. The "should I drive or walk to the car wash" question. The fact that right now, today all models are very frequently turning out code that uses APIs that don't exist, syntax that doesn't exist, or basic logic failures.
The cold hard reality is that LLMs have been constantly showing us they don't understand a thing since... forever. Anyone who thinks they do have understanding hasn't been paying attention.
> i can prove that it does have understanding because it behaves exactly like a human with understanding does.
First, no it doesn't. See my previous examples that wouldn't have posed a challenge for any human with a pulse (or a pulse and basic programming knowledge, in the case of the programming examples). But even if it were true, it would prove nothing. There's a reason that in math class, teachers make kids show their work. It's actually fairly common to generate a correct result by incorrect means.
cherry picking, because gemini and gpt have beaten it. claude doesn't have a good vision setup
> The "how many r's are in strawberry" question
it could do this since 2024
> The "should I drive or walk to the car wash" question
the SOTA models get it right with reasoning
> fact that right now, today all models are very frequently turning out code that uses APIs that don't exist, syntax that doesn't exist, or basic logic failures.
not when you use a harness. even humans can't write code that works on the first attempt.
LLMs can't consistently win at chess https://www.nicowesterdale.com/blog/why-llms-cant-play-chess
Now, some of the best chess engines in the world are Neural Networks, but general purpose LLMs are consistently bad at chess.
As far as "LLM's don't have understanding", that is axiomatically true by the nature of how they're implemented. A bunch of matrix multiplies resulting in a high-dimensional array of tokens does not think; this has been written about extensively. They are really good for generating language that looks plausible; some of that plausable-looking language happens to be true.
https://maxim-saplin.github.io/llm_chess/
let's not cherry pick and actually look at benchmarks please. i would say even ~1000 elo means that it can reason better than the average human.
this clearly tells me that GPT is good at chess, at least better than a normal person who has played ~30-40 games in their life.
If you think “the tacit knowledge and conscious/subconscious reasoning mix that caused X to write like X” can be meaningfully captured by some 1-page “style guide” like llmtropes, I’m not sure what to tell you. Such a style description would be informed by a soup of reviewers that most certainly cannot write like X even with their stronger and more nuanced observations than what the LLM picked up.
Most importantly, negative but unused signals might not be available if the text does not mention them.
When the “how many ‘r’ in ‘strawberry’” question was all the rage, you could definitely get LLMs to explain the steps of counting, too. It was still wrong.
I do have a number of examples to give you, but I no longer share those online so they aren’t caught and gamed. Now I share them strictly in person.
Maybe not in a single inference, but you can have an LLM question itself by running another inference that takes its previous prompt and answer as input. You can easily see this in a deep-research agent loop: it might find some data, go looking for other data to back it up, discover that the original finding was actually incorrect, and then change its mind.
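A minimal sketch of what I mean, assuming a hypothetical generate() helper that wraps whatever chat-completion API you use (here it just echoes, so the sketch runs without any API key):

    # Self-questioning loop: draft -> critique -> revision, each step a fresh inference.
    def generate(prompt: str) -> str:
        # Stand-in for a real chat-completion call (OpenAI, Anthropic, etc.).
        return f"[model output for: {prompt[:40]}...]"

    def answer_with_self_review(question: str, rounds: int = 2) -> str:
        draft = generate(question)
        for _ in range(rounds):
            critique = generate(
                f"Question: {question}\n\nDraft answer: {draft}\n\n"
                "List any factual or logical problems with the draft.")
            draft = generate(
                f"Question: {question}\n\nDraft answer: {draft}\n\n"
                f"Critique: {critique}\n\nRewrite the answer, fixing those problems.")
        return draft

    print(answer_with_self_review("Should I drive or walk to the car wash?"))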
First, modern LLMs are not "a huge table of phrases". They are neural networks with billions of learned parameters that generate tokens by computing probability distributions over vocabulary given prior context. There is no lookup table of stored sentences.
Second, Eliza-style bots used explicit scripted pattern matching rules. LLMs instead learn statistical representations from large corpora and can generalize to produce novel sequences that were never present in the training data.
Kent Pitman's Lisp Eliza from MIT-AI's ITS History Project (sites.google.com):
https://news.ycombinator.com/item?id=39373567
https://sites.google.com/view/elizagen-org/
https://sites.google.com/view/elizagen-org/original-eliza
Third, while "pattern matching" is sometimes used informally, it’s misleading technically. Transformers perform high-dimensional vector computations and attention over context to model relationships between tokens. That’s very different from rule-based pattern matching.
You can certainly debate whether LLMs "think", but describing them as "Eliza with a big phrase table" is not an accurate description of how they work.
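For the skeptical, a toy sketch of the sampling loop being described, with a stand-in for the network so it runs anywhere; the point is that the "knowledge" lives in the parameters that produce the logits, not in any table of stored sentences:

    import numpy as np

    vocab = ["the", "cat", "sat", "on", "mat", "."]

    def forward(context_ids):
        # Stand-in for a transformer forward pass over the context;
        # a real model's billions of learned parameters produce these scores.
        rng = np.random.default_rng(len(context_ids))
        return rng.normal(size=len(vocab))

    def sample_next(context_ids, temperature=0.8):
        logits = forward(context_ids) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return int(np.random.choice(len(vocab), p=probs))

    context = [0]  # start with "the"
    for _ in range(5):
        context.append(sample_next(context))
    print(" ".join(vocab[i] for i in context))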
You have the resources available at your fingertips to learn what the truth is, how LLMs actually work. You could start with Wikipedia, or read Stephen Wolfram's article, or simply ask an LLM to explain how it works to you. It's quite good at that, while an Eliza bot certainly can't explain to you how it works, or even write code.
What Is ChatGPT Doing … and Why Does It Work?
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...
I suggest „randomly adjusting parameters while trying to make things better“ as that accurately reflects the „precision“ that goes into stuffing LLMs with more data.
This Grammarly thing seems to be a bastardized form of that not even sparing the dead.
I'd say that there was some incentive by the AI companies to muddy the waters here.
This isn't 2023 anymore
i give the LLM my codebase and it indeed learns about it and can answer questions.
Unless you are actually fine tuning models, in which case sure, learning is taking place.
if i showed a human a codebase and they answered my questions well - yes, i would say the human learned it. the analogy breaks at a point because of limited context, but learning is a good enough word.
This is revolting at so many levels.
Unless they're outright marketing this as "endorsed by" or similar, there is no case.
https://www.chiffandfipple.com/t/kenny-g-as-necrophile-long-...
You don't bring the dead virtually back to life to perform tricks for you.
I probably did not. Then I would have written that. They are fucking over the dead. They are clearly not communicating with the dead.
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
>> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
Which part was snarky, excessively hostile or unprofessional?
You see, the comment that I replied to made an assumption; that assumption is embedded in the word 'probably'. The person that wrote that presumes to know what I intend. I corrected that, clarified it, and moved on. If that seems hostile and snarky to you then I'm happy to be educated. For myself, I think the comment I replied to could have been phrased as a question rather than a statement.
Your comment strikes me as a bit snarky too by the way.
You:
> I probably did not. Then I would have written that. They are fucking over the dead. They are clearly not communicating with the dead.
A less snarky and hostile version:
> I actually didn't! I specifically meant that they are fucking over the dead.
Pretty much the same content, and it makes you sound like you actually want to interact with other people. Even in your paragraph long explanation you do quite a few things that are just unnecessarily hostile. It's clearly just the way you talk, but it also doesn't mean you shouldn't work on improving it.
> For myself, I think the comment I replied to could have been phrased as a question rather than a statement.
Then you should have (politely) said it.
As well as your own exemplary behavior:
https://news.ycombinator.com/item?id=46653114
https://news.ycombinator.com/item?id=45505708
https://news.ycombinator.com/item?id=45334298
And probably others besides.
Generative AI is a plague at this point. Everybody is adding it to their wares to see what happens. It's almost like ricing a car. All noise, no go.
We believed this was coming and that the best way to handle it was give the real person control over their persona to grow/edit/change and train it as they see fit.
I actually own the patent on building an expert persona based on the context of the prompt plus the real person's learned information manifold...
If it feels like Grammarly does not respect your right to digital sovereignty, it is because it does not.
Seems pretty likely usage of Grammarly's core product has cratered in the past few years. Not totally hard to imagine one of the big AI labs paying their legal fees in exchange for putting this out there and kick starting the legal process on some of these issues.
So IMO they are just flinging things at the wall trying to find a way back.
It reminds me of winzip.
Seems like there could be others that are better.
https://github.com/theJayTea/WritingTools/blob/main/Windows_...
It really feels so wrong to spare nobody, not even dead writers.
All it's gonna do is something similar to em-dashes, where people who use them are now getting called an LLM when it was their own writing that would've trained the LLM (the irony).
If this takes off, hypothetically, we will come to associate these writing styles with slop, similar to how Ghibli art is so good but felt so sloppy once just about anyone could make it, which made us appreciate the Ghibli art style less.
The sad part is that most/some of these dead writers/artists were never appreciated by the people of their time; they struggled with so many feelings, and writing/art was their way of expressing that. Van Gogh is an example which comes to my mind.[0] Many struggled with depression and other feelings too. To take that expression and turn it into yet another product feels quite depressing for a company to do.
[0]: https://en.wikipedia.org/wiki/Health_of_Vincent_van_Gogh
That train left at full steam when companies scraped the whole internet and claimed it was fair use. Now it's a slippery slope covered with slime.
I believe there'll be no slowing down from now on.
They are doing something amazing, will they ask for permission? /s.
Unrelated, but it's surprising to me that I've found the built-in grammar checking within JetBrains IDEs far more useful at catching grammar mistakes, while not forcing me to rewrite entire sentences.
[1] https://plugins.jetbrains.com/plugin/12175-natural-languages... [2] https://languagetool.org -- warning: is coated in somewhat-misleading AI keywords [3] https://github.com/languagetool-org/languagetool
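If you want to poke at LanguageTool outside the IDE, a minimal sketch assuming the language_tool_python package (it downloads and runs a local LanguageTool server on first use):

    import language_tool_python

    tool = language_tool_python.LanguageTool("en-US")
    text = "This sentence have a error that Grammarly would also flag."
    for match in tool.check(text):
        # Each match carries the rule that fired, a message, and suggested
        # replacements; nothing forces a whole-sentence rewrite.
        print(match.ruleId, "-", match.message, "->", match.replacements[:3])
    tool.close()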
Isn't that what Grammarly has always been, since long before the invention of the transformer? They give you a long list of suggestions, and unless you write corporate press releases, half of them are best ignored. The skill is in choosing which half to ignore.
Words paint the picture, but the meaning of the picture is what matters.
Or do they?
One lesson they might draw from the negative press is to offer a more open-ended interface, like ChatGPT, where for years people have already been asking "Pretend you are X and review my writing". This interface design pattern gives the press nowhere to point their angry fingers.
For me, Grammarly gives me the same impression as Datadog, but I have no explanation for why I feel that way.
Does it add any value for writers?
"The work is public, hence the name. It's well known, it's in the data. Who cares".
What will they do next? Create similar publications through domain squatting and write all-AI articles under the "public" names?
Is it still fair use, then?
It's very enlightening, if you ask me.