The only moat left is knowing things (growtika.com)
48 points | 5 hours ago | 10 comments | HN
bschne
3 hours ago
> Was this physically difficult to write? If it flowed out effortlessly in one go, it's usually fluff.

Probably my best and most insightful stuff has been produced more or less effortlessly, since I spent enough time/effort _beforehand_ getting to know the domain and issue I was interested in from different angles.

When I try writing fluff or being impressive without putting in the work first, I usually bump up against all the stuff I don't have a clear picture of yet, and it becomes a neverending slog. YMMV.

graemep
2 hours ago
My most successful blog post was written about something I felt strongly about, backed by knowledge and a lot of prior thought. It was written with passion.

People asked for permission to repost it, it got shared on social media, it ended up showing higher in Google than a Time magazine (I think) interview of Bill Gates with the same title.

jaapz
2 hours ago
Right? I think some of my best work flowed out effortlessly, it's amazing when you get into the flow state and just churn out line after line.
mcny
4 hours ago
> If I subconsciously detect that you spent 12 seconds creating this, why should I invest five minutes reading it?

The problem is it isn't easy to detect it and I'm sure the people who work on generated stuff will work hard to make detection even harder.

I have difficulty detecting even fake videos. How can I possibly detect generated text, in plain text, accurately? I mean, I will make plenty of false-positive mistakes, accusing people of using generated text when they wrote it themselves. This will cause unnecessary friction, which I don't know how to prevent.

fhd2
3 hours ago
First thought: In my experience, this is a muscle we build over time. Humans are pretty great at pattern detection, but we need some time to get there with new input. Remember 3D graphics in movies ~15 years ago? They looked mind-blowingly realistic. Watching old movies now, I find they look painfully fake. YMMV of course.

Second thought: Does it _really_ matter? You find it interesting, you continue reading. You don't like it, you stop reading. That's how I do it. If I read something from a human, I expect it to be their thoughts. I don't know if I should expect it to be their hand typing. Ghost writers were a thing long before LLMs. That said, it wouldn't even _occur_ to me to generate anything I want to say. I don't even spell check. But that's me. I can understand that others do it differently.

graerg
43 minutes ago
Exactly! It must be exhausting to have this huge preoccupation with determining whether something has come from an LLM or not. Just judge the content on its own merits! Just because an LLM was involved doesn't mean the underlying ideas are devoid of value. Conversely, the fact that an LLM wasn't involved doesn't mean the content is worth your time of day. It's annoying to read AI slop, but if you're spending more effort suspiciously squinting at it for LLM signs than assessing the content itself, then you're doing yourself a disservice IMO.
_tk_
4 hours ago
Big LinkedIn post on a concept with little proof.
Growtika
4 hours ago
Fair point. This is more mindset than case study. The proof is still being built across client work. Though I'd say the same was true for SEO in the early days: people speculating on what made Google rank certain sites higher, what made pages index faster, etc. The frameworks came before the proven playbooks.
bsenftner
1 hour ago
I disagree. The moat now is being able to understand, and then communicate that understanding to others, even when they resist understanding. Crack that, and you'll save this civilization from all the immature shortsighted thinkers.
p_v_doom
36 minutes ago
> even when they resist understanding

Agreed. You may know so many things, but ultimately it's useless if the other party does not care about wanting to understand them. And I have no clue what the right way is, besides letting people and their models fail and then being there with an answer ...

JamesTRexx
2 hours ago
I see the same point when it comes to fiction writing. I tested this (via duck.ai) a while ago by creating fiction stories in under 500 characters, and it came up with generic, repetitive output that even went over the limit. I tried again just now with 5o mini, and although it waxed poetic, there were cracks and gaps; it still felt rather generic, and it certainly failed at twists and humour.

It can write about a spark, but the content has no spark.

jmkd
3 hours ago
The central idea is sound and well put: we all have the same tools, which now represent an infrastructure baseline, so we need to look harder to establish our moats (not just in knowing things, although that's one). Thanks.
nottorp
1 hour ago
My opinion is "content" was slop even before "AI".

If you're worried about producing "content", the completion bots have caught up with you.

See the other posts calling the article "a LinkedIn post". Those were slop even before LLMs.

Now if you have some information you want to share, that's another topic...

Lerc
49 minutes ago
Content is not meant to imply fungibility by being nonspecific. It is supposed to represent an acknowledgement of diversity across a wide range of activities.

The term content creator represents inclusivity, not genericity.

You have used the term information as a candidate for an alternative. What if someone is sharing an experience, an artwork, or simply something they found to be beautiful? There may be an information component to some of those things but the primary reason that they were offered isn't to be informative.

You don't seek content any more than you seek words. You may read books made of words but it is what the book is about that you look for. The same goes for content, only with a broader spectrum. You seek things that you like, things that you value. Content, being nonspecific, means your horizons can be broad.

nottorp
38 minutes ago
Your experience sharing is also information, not content. It doesn't have to be a technical manual or a self improvement text.

What I call content is ... well, content ... produced not because you have something to say but because you're aiming for quantity.

Lerc
11 minutes ago
Experience sharing has an information component, but the thing that distinguishes it from a written report containing the same facts is not.

What you call content is just low-effort content. Slop is a more evocative term that probably captures the concept of low effort; unfortunately, it has already been poisoned by people declaring anything AI-assisted to be slop, regardless of how much effort went into the work.

There are a lot of people who work tirelessly on things that have a massive time and effort commitment for each thing they produce. Yet they identify as content creators. Dismissing their work seems disrespectful to me.

Sturgeon's Law is a warning to not overlook the good because of the preponderance of the bad.

jongjong
4 hours ago
I think the most valuable intellectual skill remaining is contrarian thinking which happens to be correct.

LLMs are naive and have a very mainstream view on things; this often leads them down suboptimal paths. If you can see through some of the mainstream BS on a number of topics, you can help LLMs avoid mistakes. It helps if you can think from first principles.

I love using LLMs but I wouldn't trust one to write code unsupervised for some of my prized projects. They work incredibly well with supervision though.

pjc50
2 hours ago
> contrarian thinking which happens to be correct

Important qualifier there. There's a massive oversupply of contrarian thinking; it's cheap, popular (populist), and incorrect. All you have to do is take some piece of conventional wisdom and write the opposite. You don't have to supply evidence, or if you do then a single cherry-picked piece will suffice.

I'd say something more like "Chesterton's Fence Inspection Company": there are reasons why things are the way they are, but if you dig into them, maybe you will find that the assumptions are no longer true? Or they turn out to be still true and important.

Imustaskforhelp
1 hour ago
> The data backs this up. 54% of LinkedIn posts are now likely AI-written (Originality.ai). 15% of Reddit posts too, up 146% since 2021. Every competitor has the same capability to generate keyword-optimized, structurally correct, grammatically polished content. In about twelve seconds.

I don't know anything about marketing, given that the first paragraph of the blog post makes it clear it's from a marketing context.

But as a user, or literally just a bystander: using AI isn't really good, at least for LinkedIn posts. Isn't the whole point to stand out by not using AI on LinkedIn?

Like, I can see a post ending with:

Written with love & passion by a fellow human. Peace.

And it would be better / different than this.

Listen man, I am from a third world country too, and I had real issues with my grammar. Unironically, this was the first advice I got from people on HN; I was suddenly conscious about it and tried to improve.

Now I get called AI slop for writing the way I write. So to me, it's painful to see that my improvement in this context just gets thrown out of the window when someone calls a comment I write here, or anywhere else, AI slop.

I guess I have used AI, and I have pasted my messages into it to find that it can write like me, but I really don't use that (on HN; I only used it once, for testing purposes, on one Discord user iirc). But my point is, I will write things myself, and if people call it AI slop, I can back up the claim that it's written by a human: just ask me anything about it.

I don't really think that people who use AI themselves are able to say something back if someone critiques something as AI slop.

I was talking to a friend once, and we started debating philosophy. He gave me his Medium article; I was impressed, but I noticed the --. I asked him if it was written by AI, and he said the ideas were his, but he wrote/condensed it with AI (once again, third world country, and honestly the same response as the original person).

And he was my friend, but I still left thinking, hmm: if you are unable to take time with your project to write something, then that really lessens my capacity to read it. I even said to him that I would be more interested in reading his prompts, and just started discussing the philosophy itself with him.

And honestly, the same point goes for AI-generated code projects, even though I have vibe coded many. I am often unable to read them, or to find the will to read them, if they're too verbose or not to my liking. Usually in that context it's just prototypes for a personal use case, but I still end up open sourcing them if someone might be interested, I guess, given it costs me nothing to open source.

jdthedisciple
4 hours ago
Ironically this reads like AI slop.
zvqcMMV6Zcr
4 hours ago
No, it reads like a LinkedIn post. That said, do we now have to check whether the text we wrote looks like something AI generated?
burakemir
2 hours ago
You're absolutely right.
Jensson
2 hours ago
If it's a problem for you, then yeah. If you never get accused of using AI, then no.
Imustaskforhelp
1 hour ago
Um, I've been accused on HN three times now of being AI, for writing comments by hand.

I got so annoyed the second time that I even created a post about it. I guess I just get really annoyed when someone accuses me, someone who writes things by hand, of AI slop, because it makes me feel like, at this point, why not just write it with AI? But I guess I just love to type.

I have unironically suggested in one of my HN comments that I should start making the grammatical mistakes I used to make when I had just started using HN like , this mistake that you see here. But I remember people actually flipping out in the comments over this grammatical mistake so much that it got fixed.

I am this close to intentionally writing sloppily to prove my comments aren't AI slop, but at the same time I don't want to do that, because I really don't want to change how I write just because of what other people say, imo.

PurpleRamen
2 hours ago
Ironically, everything smells like AI now, even when it's human.
Growtika
3 hours ago
Genuinely curious: what felt off? The ideas are mine; AI just helped clean up the English (I added a disclaimer).
duskdozer
3 hours ago
The writing style just has several AI-isms; at this point, I don't want to point them out, because people are trying to conceal their usage. It's maybe not as blatant as some examples, but it's off-putting within the first couple of paragraphs. These days, I lose all interest in reading when I notice it.

I would much, much, much rather read an article with imperfect English and mistakes than an LLM-edited article. At least I can get an idea of your thinking style and true meaning. Just as an example: if you were to use a false friend [1], an LLM may not deal with it well and would conceal it, whereas if I notice the mistake, I can follow the thought process back to look up what was originally intended.

[1] https://en.wikipedia.org/wiki/False_friend

kranner
1 hour ago
> Using them isn't an advantage, but not using them is a disadvantage. They handle the production part so we can focus on the part that actually matters: acquiring the novel input that makes content worth creating.

I would argue that using AI for copywriting is a disadvantage at this point. AI writing is so recognisable that it makes me less inclined to believe that the content would have any novel input or ideas behind it at all, since the same style of writing is most often being used to dress up complete garbage.

Foreign-sounding English is not off-putting, at least to me. It even adds a little intrigue compared to bland corporatese.

edent
2 hours ago
> AI just helped clean up the English

Why?

I get using a spell checker. I can see the utility in running a quick grammar check. Showing it to a friend and asking for feedback is usually a good idea.

But why would you trust a hallucinogenic plagiarism machine to "clean" your ideas?

Jensson
2 hours ago
You admit it yourself here:

> I run a marketing agency. We use Claude, ChatGPT, Ahrefs, Semrush. Same tools as everyone else. Same access to the same APIs.

Since you use it for your job, of course you use it for this blog, and that will make people look harder for AI signs.

djeastm
3 hours ago
For me it's a general feel of the style, but something about this stands out:

>We're not against AI tools. We use them constantly. What we're against is the idea that using them well is a strategy. It's a baseline.

The short, staccato sentences seem to be overused by AI. Real people tend to ramble a bit more often.

ares623
2 hours ago
It reads like an Apple product page.
InterviewFrog
1 hour ago
It did not feel off at all. I read every single word and that is all that counts.

I think what you are getting wrong is thinking that the reader cares about your effort. The reader doesn't care about your effort. It doesn't matter if it took you 12 seconds or 5 days to write a piece of content.

The key thing is people reading the entirety of it. If it is AI slop, I just automatically skim to the end and nothing registers in my head. The combination of em dashes and the sentence structure just makes my mind tune it out.

So, your thesis is correct. If you put in the custom visualization and put in the effort, folks will read it. But not because they think you put in the effort; they don't care. It's because right now AI produces generic fluff that's too perfectly correct. That's why I skip most LinkedIn posts as well. I personally don't care if it's AI or not, but mentally I just automatically discount and skip it. So your effort basically interrupts that automatic pattern recognition.

xnorswap
3 hours ago
The fact that most of the subheadings start with "The" and "What Actually" is a bit of a giveaway, in my view.

Not exclusive to AI, but I'd be willing to bet any money that the subheadings were generated.

reply