Grammarly is offering ‘expert’ AI reviews from famous dead and living writers
132 points
1 month ago
| 22 comments
| wired.com
| HN
See also:

Grammarly is using our identities without permission, https://www.theverge.com/ai-artificial-intelligence/890921/g..., https://archive.ph/1w1oO

randusername
25 days ago
[-]
> When asked if Superhuman considered notifying the people named in its AI feature, or requesting their permission, Gay said, “The experts in Expert Review appear because their published works are publicly available and widely cited.”

Big difference between "AI, rewrite this passage to sound more like Hunter S Thompson" and "Grammarly-brand unauthorized digital agent Hunter S Thompson, provide a critique of my writing"

Let's see what company values informed this decision [0].

> At Grammarly, it all starts with our EAGER values: Ethical, Adaptable, Gritty, Empathetic, and Remarkable. These values are guiding lights that keep the Grammarly experience compassionate and our business competitive.

[0]: https://www.grammarly.com/about

reply
kleene_op
25 days ago
[-]
EAGER is not enough. EAGER only guarantees one side of the conjoined triangles of success. They teach this at business school.
reply
dolebirchwood
25 days ago
[-]
> it all starts with our EAGER values

Sounds like something I'd expect to see on a banner in an elementary school classroom.

reply
Matl
25 days ago
[-]
Many CEOs treat their employees like kids in elementary school to make themselves feel smarter.
reply
drbig
26 days ago
[-]
The most interesting part is the realization that if the LLM's input is only the output of a (human) professional, then by definition the LLM cannot mimic the process the professional applied to get from whatever input they had to that output.

In other words, an LLM can spit out a plausible "output of X", but it cannot encode the process that led X to transform their inputs into their output.

reply
BrtByte
25 days ago
[-]
LLMs obviously aren't reproducing the internal cognitive process, but they might still capture some of the structural patterns that emerge from it
reply
noslenwerdna
25 days ago
[-]
Interestingly, there is some neuroscience research suggesting that the transformer architecture resembles "cue-based retrieval" in the human brain in some important ways.

https://www.sciencedirect.com/science/article/pii/S0749596X2...

reply
simianwords
26 days ago
[-]
I don't get the point of what you are saying. I can ask it to explain how to solve an integral, with steps, right now.

I can ask it to tell me how to write like a person X right now.

reply
Peritract
26 days ago
[-]
"Explain how to solve" and "write like X" are crucially different tasks. One of them is about going through the steps of a process, and the other is about mimicking the result of a process.
reply
z2
26 days ago
[-]
Neural networks most certainly go through a process to transform input into output (even to mimic the results of another process), but it's a very different one from human neural networks. I think this is the crucial point of the debate, essentially unchanged from Searle's "Chinese Room" argument from decades ago.

The person in that room, looking up Chinese phrases and patterns in a rulebook, certainly follows a process, but it's easy to argue that the person doesn't understand Chinese. The question is: if you zoom out, is the room itself intelligent because it is following a process, even if it's just a bunch of pattern recognition?

reply
simianwords
26 days ago
[-]
But an LLM can do both, so what's the point?

Can you give a specific example of something an LLM can't do? Be specific so we can test it.

reply
plewd
26 days ago
[-]
Like OP originally said, the LLM doesn't have access to the author's actual process, only the completed/refined output.

Not sure why you need a concrete example to "test", but just think about the fact that the LLM has no idea how a writer brainstorms, iterates on their work, or even comes up with the ideas in the first place.

reply
empath75
25 days ago
[-]
> has no idea how a writer brainstorms

This isn't true in general, and not even true in many specific cases, because a great many writers have described the process of writing in detail, and all of that is in their training data. Claude and ChatGPT very much know how novels are written, and you can go into Claude Code and tell it you want to write a novel and it'll walk you through quite a lot of it -- worldbuilding, characters, plotting, timelines, etc.

It's very true that LLMs are not good at "ideas" to begin with, though.

reply
GCA10
25 days ago
[-]
Professional writer here. On our longer work, we go through multiple iterations, with lots of teardowns and recalibrations based on feedback from early, private readers, professional editors, pop culture -- and who knows. You won't find very clear explanations of how this happens, even in writers' attempts to explain their craft. We don't systematize it, and unless we keep detailed in-process logs (doubtful), we can't even reconstruct it.

It's certainly possible to mimic many aspects of a notable writer's published style. ("Bad Hemingway" contests have been a jokey delight for decades.) But on the sliding scale of ingenious-to-obnoxious uses for AI, this Grammarly/Superhuman idea feels uniquely misguided.

reply
AlotOfReading
25 days ago
[-]
The distinction being made is the difference between intellectual knowledge and experience, not originality.

Imagine interviewing a particularly diligent new grad. They've memorized every textbook and best-practices book they can find. Will that alone make them a senior+ developer, or do they need a few years learning all the ways reality is more complicated than the curriculum?

LLMs aren't even to that level yet.

reply
re-thc
25 days ago
[-]
> because a great deal of writers have described the process of writing in detail

And that's often inaccurate - just as much as asking startup founders how they came to be.

Part of it is forgotten, part of it is that they don't know how to describe it, and part of it is that they don't want to tell you.

reply
JCharante
25 days ago
[-]
Why not? Datasets aren't only finished works; there are datasets that cover the process, they're just available in smaller quantities.
reply
elmomle
25 days ago
[-]
Let's take the work of Raymond Carver as just one example. He would type drafts which would go through repeated iteration with a massive amount of hand-written markup, revision and excision by his editor.

To really recreate his writing style, you would need the notes he started with for himself, the drafts that never even made it to his editor, the drafts that did make it to the editor, all the edits made, and the final product, all properly sequenced and encoded as data.

In theory, one could munge this data and train an LLM and it would probably get significantly better at writing terse prose where there are actually coherent, deep things going on in the underlying story (more generally, this is complicated by the fact that many authors intentionally destroy notes so their work can stand on its own--and this gives them another reason to do so). But until that's done, you're going to get LLMs replicating style without the deep cohesion that makes such writing rewarding to read.

reply
mold_aid
25 days ago
[-]
A good point. "Famous author" is a marketing term for Grammarly here; it's easy to conceive of an "author" as being an individual that we associate with a finite set of published works, all of which contain data.

But authors have not done this work alone. Grammarly is not going to sell "get advice from the editorial team at Vintage" or "Grammarly requires your wife to type the thing out first, though"

I'll also note that probably no one would even want advice from the living versions of these authors themselves.

reply
simianwords
25 days ago
[-]
Can a human replicate style without understanding process? Yes, we can. We do it all the time with Shakespeare. Why not LLMs?

I can do it right now with Shakespeare and an LLM.

reply
Peritract
25 days ago
[-]
Mimicking the style of Shakespeare does not produce anything close to work with the quality of Shakespeare.
reply
simianwords
26 days ago
[-]
I don't buy this logic. If I have studied an author greatly, I will be able to recognise patterns and be able to write like them.

Ex: I read a lot of Shakespeare, understand his patterns, understand where he came from, his biography, and I will be able to write like him. Why is it different for an LLM?

I again don't get what the point is.

reply
wongarsu
26 days ago
[-]
You will produce output that emulates the patterns of Shakespeare's works, but you won't arrive at them by the same process Shakespeare did. You are subject to similar limitations as the LLM in this case, just to a lesser degree (you share some 'human experience' with the author, and might be able to reason about their thought process from biographies and such).

As another example, I can write a story about hobbits and elves in a LotR world with a style that approximates Tolkien. But it won't be colored by my first-hand WW1 experiences, and won't be written with the intention of creating a world that gives my conlangs cultural context, or the intention of making a bedtime story for my kids. I will never be able to write what Tolkien would have written because I'm not Tolkien, and do not see the world as Tolkien saw it. I don't even like designing languages

reply
simianwords
26 days ago
[-]
That's fair, and you have highlighted a good limitation. But we do this all the time: we try to understand the author, learn from them, and mimic them, and we succeed to a good extent.

That's why we have really good fake Van Goghs that a person can't tell apart from the real thing.

Of course you can't do the same as the original person, but you get close enough many times, and as humans we do this frequently.

In the context of this post, I think it is for sure possible to mimic a dead author and give steps to achieve writing that would sound like them using an LLM - just like a human.

reply
Peritract
26 days ago
[-]
You're still confusing "has a result that looks the same" and "uses the same process"; these are different things.
reply
simianwords
26 days ago
[-]
Why do you say it has a different process? When I ask it to do integrals, it uses the same process as I do.
reply
Peritract
26 days ago
[-]
Not everything works like integrals. Some things don't have a standard process that everyone follows the same way.

Editing is one of these things. There can be lots of different processes, informed by lots of different things, and getting similar output is no guarantee of a similar process.

reply
esafak
25 days ago
[-]
The process is irrelevant if the output is the same, because we never observe the process. I assume you are arguing that the outputs are not guaranteed to be the same unless you reproduce the process.

If we are talking about human artifacts, you never have reproducibility. The same person will behave differently from one moment to the next, one environment to another. But I assume you will call that natural variation. Can you say that models can't approximate the artifacts within that natural variation?

reply
Rohansi
25 days ago
[-]
It's relevant for data it hasn't been trained on. LLMs are trained to be all-knowing which is great as a utility but that does not come close to capturing an individual.

If I trained (or, more likely, fine-tuned) an LLM to generate code like what's found in an individual's GitHub repositories, could you comfortably say it writes code the same way as that individual? Sure, it will capture style and conventions, but what about our limitations? What do you think happens if you fine-tune a model to write code like a frontend developer and ask it to write a simple operating system kernel? It's realistically not in their (individual) data but the response still depends on the individual's thought process.

reply
esafak
25 days ago
[-]
I don't know if LLMs are trained to imitate sources like that. I also don't know what would happen if you asked it to do something like someone who does not know how to do it. Would they refuse, make mistakes, or assume the person can learn? Humans can do all three, so barring more specific instructions any such response is reasonable.
reply
Rohansi
25 days ago
[-]
> Humans can do all three, so barring more specific instructions any such response is reasonable.

Of course, but reasonable behavior across all humans is not the same as what one specific human would do. An individual, depending on the scenario, might stick to a specific choice because of their personality etc. which is not always explained, and heavily summarized if it is.

reply
simianwords
25 days ago
[-]
>If I trained (or, more likely, fine-tuned) an LLM to generate code like what's found in an individual's GitHub repositories, could you comfortably say it writes code the same way as that individual? Sure, it will capture style and conventions, but what about our limitations? What do you think happens if you fine-tune a model to write code like a frontend developer and ask it to write a simple operating system kernel? It's realistically not in their (individual) data but the response still depends on the individual's thought process.

Look, I don't think you understand how LLMs work. It's not about fine-tuning; it's about generalised reasoning. The key word is "generalised", which can only happen if it has been trained on literally everything.

> It's relevant for data it hasn't been trained on

LLMs absolutely can reason about and conceptualise things they have not been trained on, because of that generalised reasoning ability.

reply
Rohansi
25 days ago
[-]
> LLM's absolutely can reason on and conceptualise on things it has not been trained on, because of the generalised reasoning ability.

Yes, but how does that help it capture the nuances of an individual? It can try to infer but it will not have enough information to always be correct, where correctness is what the actual individual would do.

reply
volkk
25 days ago
[-]
I think there's a lot to be said about the process as well: the motivations, the intuitions, life experiences, and seeing the world through a certain lens. This makes for more interesting writing even when you are inspired by a certain past author. If you simply want to be a stochastic parrot that replicates the style of Hemingway, it's not that difficult, but you'll also _likely_ have an empty story. You can extend the same concept to music.
reply
simianwords
25 days ago
[-]
I don’t see why editing is any different. If a human can learn it, why not an LLM?
reply
arkadiytehgraet
25 days ago
[-]
Even if the visualization of the integration process, via steps typed out in the chat interface, is the same as what you would have done on paper, the way the steps were obtained is likely very different for you and the LLM. You recognized the integral's type and applied the corresponding technique to solve it. The LLM found the most likely continuation of tokens after your input, given all the data it has been fed, and those tokens happen to be the typography for the integral steps. It is very unlikely that you are doing the same, i.e. calculating probabilities over all the words you know and then choosing the one with the highest probability of being correct.
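As a toy sketch of "most likely continuation" (the ten-word corpus and the bigram counting are made up for illustration; real LLMs use transformers over vast data, but the selection principle -- emit a probable next token -- is analogous):

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus; nothing here encodes a rule of integration.
corpus = "the integral of x is x squared over two plus a constant".split()

# Count which token follows which in the training data.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(token):
    # Greedy choice: the most frequent continuation observed in training.
    return following[token].most_common(1)[0][0]

print(most_likely_next("the"))  # -> integral
```

Note that no integration technique is consulted anywhere; the output is driven purely by observed continuations, which is the distinction being drawn here.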
reply
simianwords
25 days ago
[-]
> the way the steps were obtained is likely very different for you and LLM

This is not true. Any examples?

reply
arkadiytehgraet
25 days ago
[-]
I explained in detail why it is true, and what the opposite would imply for you as a human being.
reply
mold_aid
25 days ago
[-]
You are not able to write like Shakespeare. Shakespeare isn't really even a great example of an "author" per se. Like anybody else you could get away with: "well I read a lot of Bukowski and can do a passable imitation" or "I'm a Steinbeck scholar and here's a description of his style." But not Shakespeare.

I get that you're into AI products and ok, fine. But no you have not "studied [Shakespeare] greatly" nor are you "able to write like [Shakespeare]." That's the one historical entity that you should not have chosen for this conversation.

This bot is likely just regurgitating bits from the non-fiction writing of authors like an animatronic robot in the Hall of Presidents. Literally nobody would know if the LLM was doing even a passable job of Truman Capote-ing its way through their half-written attempt at NaNoWriMo

reply
mold_aid
25 days ago
[-]
> Literally nobody would know if the LLM was doing even a passable job of Truman Capote-ing its way through their half-written attempt at NaNoWriMo

As I look back on my day, I find myself quite pleased with this line.

reply
tovej
26 days ago
[-]
You can understand his biography and analyses of how Shakespeare might have written. You can apply this knowledge to modify your writing process.

The LLM does not model text at this meta-level. It can only use those texts as examples; it cannot apply what is written there to its generation process.

reply
simianwords
26 days ago
[-]
No, it does, and what you said is easily falsifiable.

Can you provide a _single_ example where an LLM might fail? Let's test this now.

reply
tovej
25 days ago
[-]
Yes, what I said should be falsifiable. The burden is on you to give me an example, but I can give you an idea.

You need to show me an LLM applying writing techniques that do not have examples in its corpus.

You would have to use some relatively unknown author; I can suggest Iida Turpeinen. There will be interviews of her describing her writing technique, but no examples that aren't from Elolliset (Beasts of the Sea).

Find an interview where Turpeinen describes her method for writing Beasts of the Sea, e.g.: https://suffolkcommunitylibraries.co.uk/meet-the-author-iida...

Now ask it to produce a short story about a topic unrelated to Beasts of the Sea, let's say a book about the moonlanding.

A human doing this exercise will produce a text with the same feel as Beasts of the Sea, but an LLM-produced text will have nothing in common with it.

reply
simianwords
25 days ago
[-]
>You need to show me an LLM applying writing techniques do not have examples in its corpus.

Why are you bringing in this constraint?

reply
tovej
25 days ago
[-]
Because the entire point is the LLM cannot understand text about text.

If someone has already done the work of giving an example of how to produce text according to a process, we have no way of knowing if the LLM has followed the process or copied the existing example.

And my point of course is that copying examples is the only way that LLMs can produce text. If you use an author who has been so analyzed to death that there are hundreds of examples of how to write like them, say, Hemingway, then that would not prove anything, because the LLM will just copy some existing "exercise in writing like Hemingway".

reply
simianwords
25 days ago
[-]
>Because the entire point is the LLM cannot understand text about text.

You have asked for an LLM to read a single interview and produce text that sounds similar to the author based on the techniques in that single interview.

https://claude.ai/share/cec7b1e5-0213-4548-887f-c31653a6ad67 Here is the attempt. I don't think I could have done much better.

reply
tovej
25 days ago
[-]
There is no actual short story behind the link? moon_landing_turpeinen.md cannot be opened.

You could not have done better? Love it. You didn't even bother rewriting my post before pasting it into the box. The post isn't addressed as a prompt; it's me giving you the requirements of what to prompt.

Also, because you did that, you've actually provided evidence for my argument: notice that my attitudes about LLMs are reflected in the LLM output. E.g.:

  "Now — the honest problem the challenge identifies: I'm reconstructing a description of a style, not internalizing the rhythm and texture of actual prose. A human who's read the book would have absorbed cadences, sentence lengths, paragraph structures, the specific ratio of concrete detail to abstraction — all the things that live below the level of "technique described in interviews.""

That's precisely because it can't separate metatext from text. It's just copying the vibe of what I'm saying, instead of understanding the message behind the text and trying to apply it. It also hallucinates somewhat here, because its argument is about humans absorbing the text rather than the metatext. But that's also to be expected from a syntax-level tool like an LLM.

The end result is... nothing. You failed the task and you ended up supporting my point. But I appreciate that you took the time to do this experiment.

reply
simianwords
25 days ago
[-]
My bad, apparently Claude doesn't share the .md. Here it is: https://pastebin.com/LPW6QsLE

> "Now — the honest problem the challenge identifies: I'm reconstructing a description of a style, not internalizing the rhythm and texture of actual prose. A human who's read the book would have absorbed cadences, sentence lengths, paragraph structures, the specific ratio of concrete detail to abstraction — all the things that live below the level of "technique described in interviews.

A human would have to read all the text, and so would an LLM, but you have not allowed this with your previous constraint. Should we then allow the LLM to reproduce something that is in its training set?

Why do you expect an LLM to achieve something that even a human can't do?

reply
tovej
24 days ago
[-]
Why are you taking the LLM-hallucinated version of the argument as truth? I even clearly stated how the LLM-version of my claim is a misunderstood version of the argument.

Do you remember the point we're arguing? That a human can understand text about a way of writing, and apply that information to the _process_ of writing (not the output).

If you admit the LLM can't do this, then you are conceding the point.

I don't know why you're claiming that humans can't do this when we very clearly can.

An illustrative example: I could describe a new way of rhyming to a human without an example, and they could produce a rhyme without an example. However, describing this new rhyming scheme to an LLM without examples would not yield any results. (Rhyming is a bad example to test, however, because LLM corpora have plenty of examples.)

reply
inaros
25 days ago
[-]
>> i again don't get what the point is?

The point is that you don't become Jimi Hendrix or Eric Clapton even if you spend 20 years playing in a cover band. You can play the style and sound like them, but you won't create their next album.

Not being Jimi Hendrix or Eric Clapton is the context you are missing. LLMs are cover bands...

reply
TimorousBestie
26 days ago
[-]
This is the plot of a short story by Borges, "Pierre Menard, Author of the Quixote."
reply
Peritract
25 days ago
[-]
There's a relatively common pattern of "new tech idea => Borges already explained why that approach is conceptually flawed".
reply
RobRivera
25 days ago
[-]
Actually, this is the crux, and the nuance that makes discussing LLM specifics in the general space a pain.

If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you and expect it to be right.

Instead, what you will receive is text that follows the statistically most likely response to such a question (according to the perplexity tuning).

reply
netdevphoenix
25 days ago
[-]
> If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you and expect it to be right.

Isn't this obvious? There is not enough latent knowledge of math there to enable current LLMs to approximate anything resembling an integral.

reply
RobRivera
25 days ago
[-]
It's obvious to me.

It's obvious to you.

It isn't obvious to the person I am responding to, and it isn't obvious to the majority of people I speak with on the matter (which is why, for me, AI goes in the same bucket as religion and politics: topics polite conversation simply avoids).

reply
kenjackson
25 days ago
[-]
Wait -- I'm fairly certain this is obvious to the person you were responding to. It may not be obvious to a lay person (who may not even know LLMs are trained at all). But I think this is obvious to almost all people with even a small understanding of LLMs.
reply
RobRivera
25 days ago
[-]
I'm actually pretty convinced they're a troll, or at the very least a highly confrontational participant who is quick to move goalposts, ignore entire chains of logic, engage in ad hominem attacks on other posters, and bring zero novel insight anywhere in this thread.
reply
simianwords
25 days ago
[-]
One of the posters said it can't even reason through chess. I ran the actual benchmark, spent money, and actually proved that it can beat a 1000-Elo chess engine.

https://news.ycombinator.com/item?id=47316787

Does this show I'm a troll? Throughout this thread there has been misinformation that I have been dispelling.

What you are doing is ad hominem.

Here's another post where I ran the prompt the person asked for, which would supposedly show that LLMs can't reason:

https://news.ycombinator.com/item?id=47316855

Have you considered that you might be misinformed, so what I say might look like trolling?

reply
RobRivera
25 days ago
[-]
What do you do for work?
reply
simianwords
25 days ago
[-]
It’s obvious to me. What point are you trying to make? It’s not religion; it’s easily falsifiable.

LLMs can reason about integrals as well as in a literature context. You suggested that if it’s not trained on literature then it can’t reason about it. But why does that matter?

reply
Talanes
25 days ago
[-]
Now what if we ask the LLM to write about social media? Do you think the output would be similar to what you'd get if we had a time machine to bring the actual man back and have him form his own thoughts firsthand?
reply
bigfishrunning
25 days ago
[-]
It may be stylistically similar, but it's impossible to predict what the content would be.
reply
simianwords
25 days ago
[-]
> If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you and expect it to be right.

This shows that you have very little idea of how LLMs work.

An LLM trained only on John Steinbeck will not work at all. It simply will not have the generalised reasoning ability; it necessarily needs inputs from every source possible, including programming and maths.

You have completely ignored that LLMs have _generalised_ reasoning ability that they derive from disparate sources.

reply
bigfishrunning
25 days ago
[-]
LLMs have the ability to convince you that they've "reasoned". Sometimes an application will loop the output of an LLM back to its input to provide a "chain of reasoning".

This is not the same thing as reasoning.

LLMs are pattern matchers. If you trained an LLM only to map some input to the output of John Steinbeck, then by golly that's what it'll be able to do. If you give it some input that isn't suitably like the input you gave it during training, you'll get unpredictable nonsense as output.

reply
simianwords
25 days ago
[-]
This is outdated stuff from three years ago.

> If you trained an llm only to map some input to the output of John Steinbeck

This is literally not possible, because such an LLM does not get generalised reasoning ability. It's not a useful hypothetical, because such an LLM simply will not work. Why do you think you have never seen a domain-specific model?

If you wanted to falsify the claim "LLMs can't reason", how would one do that? Can you come up with some examples that show it can't reason? What if we come up with a new board game with some rules and see if it can beat a human at it, feeding it just the rules of the game and nothing else?

Here is GPT-5.4 solving never-before-seen mathematics problems: https://epochai.substack.com/p/gpt-54-set-a-new-record-on-fr...

You could again say it's just pattern matching, but then I would argue that it's the same thing we are doing.

reply
bigfishrunning
25 days ago
[-]
Domain-specific LLMs absolutely exist; don't assume I've never seen one. You seem very misinformed about what is "literally not possible".

https://www.ibm.com/think/topics/domain-specific-llm

reply
simianwords
25 days ago
[-]
There are close to zero domain-specific models that beat frontier SOTA models even in their own domain (other than a few edge cases like token extraction).

Why do you think that's the case? Let's start from here.

The real answer is that you get benefits from having data from many sources that add up exponentially for intelligence.

> LLMs are pattern matchers

But let's try to falsify this. Can you come up with a prompt that clearly shows that LLMs can't reason?

reply
RobRivera
25 days ago
[-]
What do you do for work?
reply
simianwords
25 days ago
[-]
Let's keep it focused on the discussion; in a previous reply you called me a troll.

Start with a way to falsify the claim that it can't reason.

reply
mysterydip
26 days ago
[-]
Is the reason it can show steps for solving an integral because the training set contained webpages or books showing how to do it?
reply
simianwords
26 days ago
[-]
If we have steps for understanding any author's English and creative process (generally, not specific to one author), would you agree then that it is possible for an LLM to do it?
reply
Talanes
25 days ago
[-]
The real sticking point for me is I don't even believe that authors themselves FULLY understand their process. The idea that anybody could achieve such full introspection as to understand and articulate every little thing that influences their output seems astoundingly improbable.
reply
mysterydip
26 days ago
[-]
Repeating a process, yes for sure, even (pseudorandom?) variations on a process. Understanding a process is a different question, and I’m not sure how you would measure that.

In school we would have a test with various questions to show you understand the concept of addition, for example. But while my calculator can perfectly add any numbers up to its memory limit, it has no understanding of addition.

reply
netdevphoenix
25 days ago
[-]
> while my calculator can perfectly add any numbers up to its memory limit, it has no understanding of addition.

"my calculator can perfectly add any numbers up to its memory limit" This kind of anthropomorphic language is misleading in these conversations. Your calculator isn't an agent so it should not be expected to be capable of any cognition.

reply
simianwords
26 days ago
[-]
It’s the degree of generalisability. And LLMs do have understanding: you can ask them in natural language how they came up with the process, and they can explain - something a calculator can’t do.
reply
bigfishrunning
25 days ago
[-]
> And LLMs do have understanding.

They absolutely do not. If you "ask it how it came up with the process in natural language" with some input, it will produce an output that follows, because of the statistics encoded in the model. That output may or may not be helpful, but it is likely to be stylistically plausible. An LLM does not think or understand; it is merely a statistical model (that's what the M stands for!)

reply
simianwords
25 days ago
[-]
How would you empirically demonstrate that it doesn't have understanding?

I can prove that it does have understanding because it behaves exactly like a human with understanding does. If I ask it to solve an integral and then ask it questions about it, it replies exactly as if it has understood.

Give me a specific example so that we can stress-test this argument.

For example: what if we come up with a new board game with a completely new set of rules and see if it can reason about it and beat humans (or come close)?

reply
bigstrat2003
25 days ago
[-]
> how would you empirically disprove that it doesn't have understanding?

The complete failure of Claude to play Pokemon, something a small child can do with zero prior instruction. The "how many r's are in strawberry" question. The "should I drive or walk to the car wash" question. The fact that right now, today all models are very frequently turning out code that uses APIs that don't exist, syntax that doesn't exist, or basic logic failures.

The cold hard reality is that LLMs have been constantly showing us they don't understand a thing since... forever. Anyone who thinks they do have understanding hasn't been paying attention.

> i can prove that it does have understanding because it behaves exactly like a human with understanding does.

First, no it doesn't. See my previous examples that wouldn't have posed a challenge for any human with a pulse (or a pulse and basic programming knowledge, in the case of the programming examples). But even if it were true, it would prove nothing. There's a reason that in math class, teachers make kids show their work. It's actually fairly common to generate a correct result by incorrect means.

reply
simianwords
25 days ago
[-]
> The complete failure of Claude to play Pokemon, something a small child can do with zero prior instruction

cherry picking, because gemini and gpt have beaten it. claude doesn't have a good vision setup

> The "how many r's are in strawberry" question

it could do this since 2024

> The "should I drive or walk to the car wash" question

the SOTA models get it right with reasoning

> fact that right now, today all models are very frequently turning out code that uses APIs that don't exist, syntax that doesn't exist, or basic logic failures.

not when you use a harness. even humans can't write code that works on the first attempt.

reply
bigfishrunning
25 days ago
[-]
We don't need to come up with a new board game. How about a board game that has been written about extensively for hundreds of years?

LLMs can't consistently win at chess https://www.nicowesterdale.com/blog/why-llms-cant-play-chess

Now, some of the best chess engines in the world are Neural Networks, but general purpose LLMs are consistently bad at chess.

As far as "LLM's don't have understanding", that is axiomatically true by the nature of how they're implemented. A bunch of matrix multiplies resulting in a high-dimensional array of tokens does not think; this has been written about extensively. They are really good for generating language that looks plausible; some of that plausable-looking language happens to be true.

reply
simianwords
25 days ago
[-]
false, chess ELO is pretty good

https://maxim-saplin.github.io/llm_chess/

let's not cherry-pick and actually look at benchmarks, please. i would say even ~1000 elo means that it can reason better than the average human.

reply
bigfishrunning
25 days ago
[-]
If you look at the "workflow" section of that page, they had to add a bunch of scaffolding around telling the model what moves are legal -- an llm can't keep enough context to know how to play chess; only to choose an advantageous move from a given list. But feel free to "cherry pick".
reply
simianwords
25 days ago
[-]
why do you think this falsifies that it can't reason?
reply
simianwords
25 days ago
[-]
i ran the benchmark without the valid-moves tool and without the three-mistakes grace period, and gpt-5.4 holds up well. it can achieve 1000 ELO, which is much higher than my own.

this clearly tells me that GPT is good at chess, at least better than a normal person who has played ~30-40 games in their life.

reply
blargey
25 days ago
[-]
“i can ask it to give a text description of a linear logical math process that has been described in text countless times”

If you think “the tacit knowledge and conscious/subconscious reasoning mix that caused X to write like X” can be meaningfully captured by some 1-page “style guide” like llmtropes, I’m not sure what to tell you. Such a style description would be informed by a soup of reviewers that most certainly cannot write like X even with their stronger and more nuanced observations than what the LLM picked up.

reply
Eddy_Viscosity2
26 days ago
[-]
Is it not possible for the process from input to output to be inferred by the LLM, and therefore applied to new inputs to create appropriate outputs?
reply
whizzter
26 days ago
[-]
Only if the LLM knows the inputs connected to particular outputs; pre-digital-era or classified material might not be available, nor informal discussions with other experts.

Most importantly, negative but unused signals might not be available if the text does not mention them.

reply
simianwords
26 days ago
[-]
challenge: provide a single example where the LLM can only provide the output and not the steps? (in text scenario)
reply
latexr
26 days ago
[-]
An LLM can always output steps, but that doesn't mean they are true; LLMs are great at making up bullshit.

When the “how many ‘r’ in ‘strawberry’” question was all the rage, you could definitely get LLMs to explain the steps of counting, too. It was still wrong.

reply
simianwords
26 days ago
[-]
can you provide a single example now, with gpt 5.4 thinking, that makes up things in steps? let's try to reproduce it.
reply
latexr
26 days ago
[-]
I’m pretty sure you can think of one yourself, I’m not going to play this game. Now it’s 5.4 Thinking, before that it was 5.3, before that 5.2, 5.1, 5, before that it was 4… At every stage there’s someone saying “oh, the previous model doesn’t matter, the current one is where it’s at”. And when it’s shown the current model can’t do something, there’s always some other excuse. It’s a profoundly bad faith argument, the very definition of moving the goal posts.

I do have a number of examples to give you, but I no longer share those online so they aren’t caught and gamed. Now I share them strictly in person.

reply
NuclearPM
25 days ago
[-]
Caught and gamed? What do you mean?
reply
bigstrat2003
25 days ago
[-]
He means that if the problem becomes known, the AI companies will hack in a workaround rather than solving the problem by making the model more intelligent. Given that they have been caught cheating in that way in the past, I can't blame the GP for not sharing his tests.
reply
simianwords
26 days ago
[-]
Ok so no example.
reply
treetalker
26 days ago
[-]
You've pinpointed the connection that people fail to make when they seek legal advice (or even information) from LLMs.
reply
JCharante
25 days ago
[-]
what prevents the input from being keystrokes and screen recordings of thousands of lawyers solving cases?
reply
treetalker
25 days ago
[-]
This makes the same error, or a related one. That input is not the lawyer's internal expert process, only the intermediate or (near-) final outcome of it.
reply
weird-eye-issue
26 days ago
[-]
Replace "LLM" with "student" and read that again. You don't just blindly give students output, you teach them, like what you are supposed to do with an LLM.
reply
dwb
26 days ago
[-]
If you change the words in a sentence then it changes its meaning.
reply
weird-eye-issue
26 days ago
[-]
Yeah, but obviously my point in this context is that it doesn't. It's not like I said to replace the word with "potato". Thanks for your genius comment.
reply
dwb
25 days ago
[-]
It changes the meaning significantly. An LLM has very little in common with a human student.
reply
weird-eye-issue
25 days ago
[-]
How so? I see lots of similarities both in the training and inference/prompting stages.
reply
dwb
24 days ago
[-]
There are some similarities, but they are absolutely overwhelmed by the differences. Having a handful of superficial similarities is not enough to draw a meaningful comparison. The act of teaching a human is very different from "training" an LLM because humans have the power of the whole brain and body, not just some information-integration part that the brain and LLMs may (or may not) share. Humans can be creative in ways that LLMs manifestly can't be. Humans can act like mere token predictors, but we can (and routinely do) also transcend that, question it, play with it. LLMs can't.
reply
weird-eye-issue
24 days ago
[-]
> but we can (and routinely do) also transcend that, question it, play with it. LLMs can’t.

Maybe not in a single inference, but you can have an LLM question itself by running another inference using its previous prompt as input. You can easily see this in a deep-research agent loop, where it might find some data, go looking for other data to back it up, discover it was actually incorrect, and then change its mind.

reply
dwb
23 days ago
[-]
I think this is anthropomorphising far too much. I’ve seen similar patterns, but the end result is still nothing that comes close to what you get from a human.
reply
weird-eye-issue
23 days ago
[-]
Depends on the human. The average person is really quite dumb and AI is already surpassing them. Where will it be in 5 years?
reply
dwb
22 days ago
[-]
Whose side are you on?
reply
ErroneousBosh
26 days ago
[-]
You can't "teach" an LLM. It can't think. It's a simple pattern-matching algorithm, basically just an Eliza bot with a huge table of phrases.
reply
DonHopkins
26 days ago
[-]
You're not thinking, just regurgitating catch phrases that are factually incorrect hallucinations. So how are you any better than an LLM?
reply
ErroneousBosh
25 days ago
[-]
Which part is "factually incorrect"?
reply
DonHopkins
25 days ago
[-]
Several parts of your claim are incorrect.

First, modern LLMs are not "a huge table of phrases". They are neural networks with billions of learned parameters that generate tokens by computing probability distributions over vocabulary given prior context. There is no lookup table of stored sentences.

Second, Eliza-style bots used explicit scripted pattern matching rules. LLMs instead learn statistical representations from large corpora and can generalize to produce novel sequences that were never present in the training data.

Kent Pitman's Lisp Eliza from MIT-AI's ITS History Project (sites.google.com):

https://news.ycombinator.com/item?id=39373567

https://sites.google.com/view/elizagen-org/

https://sites.google.com/view/elizagen-org/original-eliza

Third, while "pattern matching" is sometimes used informally, it’s misleading technically. Transformers perform high-dimensional vector computations and attention over context to model relationships between tokens. That’s very different from rule-based pattern matching.

You can certainly debate whether LLMs "think", but describing them as "Eliza with a big phrase table" is not an accurate description of how they work.
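
The "probability distribution over vocabulary" point can be sketched in a few lines of toy Python. This is purely illustrative: a made-up three-token vocabulary with hand-picked logits, not a real model, which would compute logits over tens of thousands of tokens using those billions of learned parameters.

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution that sums to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and logits a model might produce for some context.
vocab = ["sat", "ran", "quantum"]
logits = [3.0, 2.0, -1.0]
probs = softmax(logits)

# The next token is *sampled* from this distribution, not retrieved from a
# stored table of phrases.
next_token = random.choices(vocab, weights=probs, k=1)[0]
```

Nothing in this loop ever looks up a stored sentence, which is the core difference from an Eliza-style script.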

You have the resources available at your fingertips to learn what the truth is, how LLMs actually work. You could start with Wikipedia, or read Stephen Wolfram's article, or simply ask an LLM to explain how it works to you. It's quite good at that, while an Eliza bot certainly can't explain to you how it works, or even write code.

What Is ChatGPT Doing … and Why Does It Work?

https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...

reply
shafyy
26 days ago
[-]
Enough with this analogy. It's flawed on so many levels. First and foremost, stop devaluing humanity and hyping up AI companies by parroting their party line. Second, LLMs don't learn. They can hold a very limited amount of context, as you know. And every time, you need to start over. So fuck no, "teaching" an LLM is nothing like teaching an actual human.
reply
KeplerBoy
26 days ago
[-]
It all went south when we started to call it "learning" instead of "fitting parameters".
reply
fxtentacle
26 days ago
[-]
"Fitting" is still too nice of a word choice, because it implies that it's easy to identify the best solution.

I suggest "randomly adjusting parameters while trying to make things better", as that accurately reflects the "precision" that goes into stuffing LLMs with more data.

reply
bonoboTP
26 days ago
[-]
It was called learning already back when the field was called cybernetics and foundational figures like Shannon worked on this kind of stuff. People tried to decipher learning in the nervous system and implement the extracted principles in machines, such as Hebbian learning, the Perceptron algorithm, etc. This stuff goes back to the 40s/50s/60s, so things must have gone south pretty early then.
reply
Imustaskforhelp
26 days ago
[-]
I agree with ya so much. I have seen so many people, even on Hacker News, somehow give human qualities to LLMs.

This Grammarly thing seems to be a bastardized form of that, not even sparing the dead.

I'd say that there was some incentive by the AI companies to muddy the waters here.

reply
weird-eye-issue
26 days ago
[-]
> very limited amount of context

This isn't 2023 anymore

reply
simianwords
26 days ago
[-]
absolutely they can learn. you are being emotional and the original point is correct.

i give the LLM my codebase and it indeed learns about it and can answer questions.

reply
RichardLake
26 days ago
[-]
That isn't learning; it can read things in its context and generate material to assist in answering further prompts, but that doesn't change the model weights. It is just updating the context.

Unless you are actually fine-tuning models, in which case, sure, learning is taking place.

reply
simianwords
26 days ago
[-]
i don't know why you think it matters how it works internally. whether it changes its weights or not is not important. does it behave like a person who learns a thing? yes.

if i showed a human a codebase and asked them questions with good answers - yes i would say the human learned it. the analogy breaks at a point because of limited context but learning is a good enough word.

reply
RichardLake
26 days ago
[-]
Maybe because I work on a legacy programming language with far less material in the training set? For me it makes a difference, because it partly needs to "learn" the language itself and have that in the context, along with codebase-specific stuff. For something where the model already knows the language and only needs codebase-specific stuff, it might feel different.
reply
simianwords
26 days ago
[-]
But my codebase isn't in the training set, yet it learns and I can ask questions.
reply
jacquesm
25 days ago
[-]
Digital necrophilia. The living ones are the ones that are going to have to make the objections here.

This is revolting at so many levels.

reply
euroderf
25 days ago
[-]
In a similar vein, I'll bet you that rather soon Faceborg will announce a service to keep the deceased "alive" on the platform, posting and commenting away. For plenty of accounts there's plenty of training material.
reply
Rebelgecko
25 days ago
[-]
Funnily enough FB already has the patent for this: using LLMs for "simulating the user when the user is absent from the social networking system, for example... if the user is deceased."
reply
jacquesm
25 days ago
[-]
Oh, so they're not even hiding it. What is wrong with these people not to realize that this is way across the line?
reply
AlexeyBelov
25 days ago
[-]
The only relevant line for them is when they are "lining" their pockets.
reply
sensanaty
25 days ago
[-]
They already have a feature where a profile can be marked as a memorial page or some such thing, I don't think this is so far fetched considering how ghoulish that robot Zucc is.
reply
jsheard
25 days ago
[-]
That's almost exactly the premise of a Black Mirror episode (S2E1 - Be Right Back) so you know it's going to get Torment Nexus'ed into existence.
reply
jacquesm
25 days ago
[-]
Suddenly 18 years of HN comments get a different vibe...
reply
jacquesm
25 days ago
[-]
Oh shit, that never even crossed my mind. They don't even need the content creators any more, they can just keep everybody on the platform even though they have left. Faking likes and faking posts for eternity.
reply
Ferret7446
24 days ago
[-]
What objection? IANAL but offering something "inspired by" is fair use. We have not yet reached a point where you can get a government sanctioned monopoly for your writing style or personality.

Unless they're outright marketing this as "endorsed by" or similar, there is no case.

reply
pilooch
25 days ago
[-]
Revolting and so inevitable, though, I believe: we're sort of already running these in our minds; we'll be outrun here too!
reply
thesuitonym
25 days ago
[-]
There is an unimaginably large gulf between an individual thinking about what an author would say about their writing, and a corporate entity selling the "opinion" of an LLM asked to act like someone.
reply
beepbopboopp
25 days ago
[-]
Please share how this is revolting to you.
reply
jacquesm
25 days ago
[-]
Something in the same spirit as this:

https://www.chiffandfipple.com/t/kenny-g-as-necrophile-long-...

You don't bring the dead virtually back to life to perform tricks for you.

reply
PessimalDecimal
25 days ago
[-]
You probably mean necromancy.
reply
jacquesm
25 days ago
[-]
> You probably mean necromancy.

I probably did not. Then I would have written that. They are fucking over the dead. They are clearly not communicating with the dead.

reply
bigstrat2003
25 days ago
[-]
"fucking over" as an expression has nothing to do with sex, so it wouldn't qualify as necrophilia.
reply
jacquesm
25 days ago
[-]
Neither does digital. Is it pedant day today or did I miss the memo?
reply
mynameisvlad
25 days ago
[-]
What's with the excessively hostile attitude? That's more of a Reddit thing than a semi-professional discussion site that has this as a guideline:

> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

reply
jacquesm
25 days ago
[-]
> What's with the excessively hostile attitude? That's more of a Reddit thing than a semi-professional discussion site that has this as a guideline:

>> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

Which part was snarky, excessively hostile or unprofessional?

reply
mynameisvlad
25 days ago
[-]
The beginning, middle and end.
reply
jacquesm
25 days ago
[-]
So, in the interest of 'curious discourse' how would you suggest I should have written my comment so that it would be more professional, less snarky and less hostile?

You see, the comment that I replied to made an assumption, that assumption is embedded in the word 'probably'. The person that wrote that presumes to know what I intend. I corrected that. Clarified it and moved on. If that seems hostile and snarky to you then I'm happy to be educated. For myself, I think the comment I replied to could have been phrased as a question rather than a statement.

Your comment strikes me as a bit snarky too by the way.

reply
mynameisvlad
25 days ago
[-]
> You probably mean necromancy.

You:

> I probably did not. Then I would have written that. They are fucking over the dead. They are clearly not communicating with the dead.

A less snarky and hostile version:

> I actually didn't! I specifically meant that they are fucking over the dead.

Pretty much the same content, and it makes you sound like you actually want to interact with other people. Even in your paragraph long explanation you do quite a few things that are just unnecessarily hostile. It's clearly just the way you talk, but it also doesn't mean you shouldn't work on improving it.

> For myself, I think the comment I replied to could have been phrased as a question rather than a statement.

Then you should have (politely) said it.

reply
jacquesm
25 days ago
[-]
Thank you. I'm 60+ and I think I'm past saving, but I appreciate your effort to tone police HN.

As well as your own exemplary behavior:

https://news.ycombinator.com/item?id=46653114

https://news.ycombinator.com/item?id=45505708

https://news.ycombinator.com/item?id=45334298

And probably others besides.

reply
AlexeyBelov
25 days ago
[-]
As an outside observer: mynameisvlad is right, and even provided a better variation of your comment _which you requested_. To me this is an OK criticism; I'd like to see more of this on HN. Then you've essentially thrown everything in the trash and started digging through their comment history. Their comments weren't even under discussion. I'd like to see less of this on HN.
reply
dolebirchwood
25 days ago
[-]
He's not hostile; he's Dutch.
reply
mynameisvlad
25 days ago
[-]
That's no excuse. Dutch people still have to communicate with others.
reply
Cameri
12 days ago
[-]
The writer of the article confesses to using LLMs to improve their grammar. The writer is then upset that Grammarly attributes the recommendations they're getting to someone they know. A nothing burger, essentially.
reply
himata4113
26 days ago
[-]
Grammarly seemed pretty dead on arrival the moment they added AI features. They would have stayed a lot more relevant and kept costs down if they had been strictly no-AI, imo.
reply
bayindirh
26 days ago
[-]
The funny thing is, their core "grammar" engine has to run on a language model + some hard heuristics anyway. So they were on a path to utilize this thing for real good, with concrete benefits.

Generative AI is a plague at this point. Everybody is adding it to their wares to see what happens. It's almost like ricing a car. All noise, no go.

reply
tsunamifury
25 days ago
[-]
I spent a great deal of time trying to do this at allofus.ai with a team of ex-Googlers, our goal being to help creators eventually 'own' their personas and compete to use them to help end users.

We believed this was coming and that the best way to handle it was to give the real person control over their persona, to grow/edit/change and train it as they see fit.

I actually own the patent on building an expert persona based on the context of the prompt plus the real person's learned information manifold...

reply
bigbuppo
25 days ago
[-]
So what you're saying is that you can put a stop to crap like this by slinging a C&D their way?
reply
chilipepperhott
25 days ago
[-]
I know all press is good press... but there are limits.

If it feels like Grammarly does not respect your right to digital sovereignty, it is because it does not.

reply
mikeocool
25 days ago
[-]
It almost seems like this whole feature is designed to invite lawsuits.

Seems pretty likely usage of Grammarly's core product has cratered in the past few years. Not totally hard to imagine one of the big AI labs paying their legal fees in exchange for putting this out there and kick-starting the legal process on some of these issues.

reply
SoftTalker
25 days ago
[-]
LLMs basically made Grammarly irrelevant as a product. Why have a tool to correct your grammar when you can just have an LLM write the whole piece for you? And one thing LLMs do well is construct grammatically correct text.

So IMO they are just flinging things at the wall trying to find a way back.

reply
abirch
25 days ago
[-]
As Annie Duke said in her book Quit, "quitting on time usually feels like quitting too early." Grammarly was great in the 2010s, but now it's too easily replaced.

It reminds me of WinZip.

reply
nxobject
25 days ago
[-]
Depressingly enough, if Grammarly does throw in the towel, we'll lose an application of clear utility that could be run entirely locally.
reply
abirch
25 days ago
[-]
It seems like there are many apps that can be run locally that use LLMs. Although I haven't used this, I found it on reddit and it's made by a student. https://github.com/theJayTea/WritingTools

Seems like there could be others that are better.

reply
z3c0
25 days ago
[-]
I believe the person you're replying to meant local inferencing. The tool you shared, like most (all?) LLM utilities, is wrapping API calls.

https://github.com/theJayTea/WritingTools/blob/main/Windows_...

reply
nxobject
25 days ago
[-]
IIRC, I think the core of Grammarly is a CL-based pattern-matching system, so it may be even simpler than that.
reply
BrtByte
25 days ago
[-]
The real issue seems more about transparency and consent around how the models are trained and how author personas are being used
reply
josefritzishere
26 days ago
[-]
This feels illegal. Even if it's not, it further drives the perception that AI is only good for crime, like crypto.
reply
BrtByte
25 days ago
[-]
The weird part about tools like this isn't just the copyright question, it's the simulation of authority
reply
ludicrousdispla
25 days ago
[-]
Yeah, this is like applying a stained-glass image filter to your portrait in order to achieve sainthood.
reply
senaevren
26 days ago
[-]
A few things worth flagging:

On GDPR: Using a named individual's identity to generate commercial AI output isn't obviously covered by "legitimate interest." Affected EU-based individuals likely have real grounds to object or request erasure.

On IP/publicity rights: You can't copyright an editing style, but you absolutely can have a right-of-publicity claim when a company profits from your name and simulated judgment without consent. The Lanham Act's false-endorsement provisions could also be in play here.

The kicker: The "sources" cited by the feature were broken, spammy, or pointed to completely unrelated content. So the defense that suggestions are inspired by someone's actual work may not even hold up technically.
reply
Imustaskforhelp
26 days ago
[-]
Man I really don't like this at all.

It really feels so wrong to spare nobody, not even dead writer/people.

All it's gonna do is something similar to em-dashes, where people who use them are now getting called an LLM when it was their own writing that would've trained the LLM (the irony).

If this takes off, hypothetically, we will associate slop with these writing styles, similar to how Ghibli art is so good but felt so sloppy afterwards, making us appreciate the Ghibli art style less once just about anyone could make it.

The sad part is that many of these dead writers/artists were never appreciated by the people of their time, and they struggled with so many feelings; writing/art was their way of expressing that. Van Gogh is an example that comes to my mind.[0] Many struggled with depression and other feelings too. To take that expression and turn it into yet another product feels quite depressing for a company to do.

[0]: https://en.wikipedia.org/wiki/Health_of_Vincent_van_Gogh

reply
bayindirh
26 days ago
[-]
> It really feels so wrong to spare nobody, not even dead writer/people.

That train left at full steam when companies scraped the whole internet and claimed it was fair use. Now it's a slippery slope covered with slime.

I believe there'll be no slowing down from now on.

They are doing something amazing, will they ask for permission? /s.

reply
dryadin
26 days ago
[-]
Frankly, I am surprised this was not shut down by their legal counsel (assuming they have one and they actually asked). The legal exposure here is significant. This could be defamation, there are publicity rights issues, copyright, and maybe even criminal liability.
reply
beernet
26 days ago
[-]
This feels like a desperate attempt to stay relevant in a post-LLM world. They’re basically wrapping an LLM in a "professional" skin and calling it an expert review. The problem is that once you start letting an AI "expert" dictate tone and logic, you effectively lobotomize the writer’s original intent. We’re reaching a point where AI is just reviewing other AI-generated text, creating a feedback loop of pure mediocrity. Copium for middle management, if you ask me.
reply
misir
26 days ago
[-]
Grammarly, even from the start, was very distracting to me, even as someone using English as a second language to communicate. I have developed my own taste and way of articulating thoughts, but Grammarly (and LLMs today) forced me to remove that layer of personality from my texts, which I didn't want to let go. Sure, I sounded less professional, but that was the image I wanted to project anyway.

Unrelated but surprising to me that I've found built-in grammar checking within JetBrains IDEs far more useful at catching grammar mistakes while not forcing me to rewrite entire sentences.

reply
astra1701
25 days ago
[-]
JetBrains’s default grammar checking plugin[1] is actually built on languagetool[2], a pretty decent grammar checker that also happens to be partly open source and self-hostable[3]. Sadly, they have lately shoved in a few (thankfully optional) crappy LLM-based features (that don’t even work well in the first place) and coated their landing page in endless AI keywords, but their core engine is still more traditional and open-source, and hasn’t really seemed to change in years. You can just run it on your own device and point their browser and editor extensions to it.

[1] https://plugins.jetbrains.com/plugin/12175-natural-languages... [2] https://languagetool.org -- warning: is coated in somewhat-misleading AI keywords [3] https://github.com/languagetool-org/languagetool
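
As a rough sketch of what "point their extensions to it" involves: a self-hosted LanguageTool server exposes an HTTP `/v2/check` endpoint that takes form-encoded `text` and `language` fields. The port (8081) and `base_url` below are assumptions based on the standalone server's common defaults; only request construction is shown, with no network call made.

```python
import urllib.parse

def build_check_request(text, language="en-US", base_url="http://localhost:8081"):
    # Build the URL and form body for LanguageTool's /v2/check endpoint.
    # base_url assumes a locally running server; the public API at
    # api.languagetool.org exposes the same endpoint shape.
    url = f"{base_url}/v2/check"
    body = urllib.parse.urlencode({"text": text, "language": language})
    return url, body

url, body = build_check_request("She go to school every day.")
# POST `body` to `url` (e.g. with urllib.request or curl) to get back JSON
# matches, each with a message, an error span, and suggested replacements.
```

The browser and editor extensions are doing essentially this on every edit, which is why swapping in a local server address works.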

reply
wongarsu
26 days ago
[-]
> The problem is that once you start letting an AI "expert" dictate tone and logic, you effectively lobotomize the writer’s original intent

Isn't that what grammarly has always been, since long before the invention of the transformer? They give you a long list of suggestions, and unless you write a corporate press release half of them are best ignored. The skill is in choosing which half to ignore

reply
Aerroon
26 days ago
[-]
I disagree. You write when you have something to say. A service like Grammarly tries to help you convey what you want to say, but better. What you want to say is still up to you.

Words paint the picture, but the meaning of the picture is what matters.

reply
ibejoeb
26 days ago
[-]
That's a tiny fraction. Most people write because they're told to write.
reply
NewsaHackO
26 days ago
[-]
Are you talking about children or students? I think most people write because they want to communicate.
reply
ibejoeb
26 days ago
[-]
Children and young students, certainly. Adult students: almost 100%. If writing is your job, then by definition you write because you're told to, and your problem is more often finding something to say, not writing it.
reply
latexr
26 days ago
[-]
You’re not counting all the office workers who have to write reports or emails, or all the scammers who write those websites to manipulate SEO or show you ads.
reply
thesuitonym
25 days ago
[-]
If you fall into the camp of office workers needing to write reports or emails, maybe think twice about putting your name on AI garbage.
reply
latexr
25 days ago
[-]
Everyone should think twice about putting their name on AI garbage, or garbage of any kind. But wishing doesn’t stop it from happening, especially when companies are explicitly selling you on doing just that. Remember the Apple Intelligence office ads?
reply
bonoboTP
26 days ago
[-]
It's great. Now that fancy writing is cheap and infinite, fields whose entire scholarship value was in obscurantist jargon bending have to actually start to turn on their brains and care about making more sense than an LLM can.
reply
contagiousflow
25 days ago
[-]
What fields rely only on jargon manipulation to produce papers?
reply
bonoboTP
25 days ago
[-]
I would think that most people can think of some. Maybe those who can't are part of one.
reply
contagiousflow
23 days ago
[-]
That's a beautiful Kafkatrap you've constructed. Not much of an argument though. Maybe there's another explanation for this though. Perhaps you think you know much more about different fields than you actually do?
reply
jagged-chisel
26 days ago
[-]
> … have to actually start to …

Or do they?

reply
bonoboTP
26 days ago
[-]
Maybe not. But academia is going to change. Status will still have to be allocated by some mechanism but the classic journals and reviews based system will crumble under the weight of LLMs. Of course this will upset a great many of people who enjoy the current state of things.
reply
fritzo
25 days ago
[-]
This is great!

One lesson they might draw from the negative press is to offer a more open-ended interface, like ChatGPT, where for years people have already been asking "Pretend you are X and review my writing". This interface design pattern gives the press nowhere to point their angry fingers.

reply
k__
25 days ago
[-]
I offer my expertise in tech writing to review your AI articles and docs.
reply
Applejinx
26 days ago
[-]
I would be surprised if the living writers can't sue over this.
reply
zac23or
25 days ago
[-]
I had a very bad experience with Datadog. They seem to be big scammers.

For me, Grammarly gives me the same impression as Datadog, but I have no explanation for why I feel that way.

reply
zamadatix
25 days ago
[-]
For the main link to the wired article as well: https://archive.is/2Qbdu
reply
Alan_Writer
25 days ago
[-]
Why did Grammarly have to come to this point, even knowing that this is not only somehow "controversial", but rare?

Does it add any value for writers?

reply
cynicalsecurity
25 days ago
[-]
Is anyone still using Grammarly? Their app has been replaced with a single AI query.
reply
etchalon
25 days ago
[-]
"We can do it because no one can stop us."
reply
kome
26 days ago
[-]
that's so scummy. why do they even need "names"? it's a rhetorical question...
reply
bayindirh
26 days ago
[-]
Moreover, they don't even apologize:

"The work is public, hence the name. It's well known, it's in the data. Who cares".

What will they do next? Create similar publications via domain squatting and write all-AI articles under the "public" names?

Is it still fair use, then?

reply
kome
26 days ago
[-]
yes, i hate that. they still have the chutzpah to keep doing it. and i am sure it's illegal in multiple jurisdictions, because they are not writing articles where you can cite people; they are selling a product.
reply
bayindirh
26 days ago
[-]
I think we can thank the current times and developments as a whole for unearthing the greediest of the greedy among us.

It's very enlightening, if you ask me.

reply
SoKamil
26 days ago
[-]
Authority washing.
reply