This is something I relalized lately. ChatGPT is juicing growth Facebook style. The last time, I asked it a medical question, it answered the question, but ended the answer with something like "Can I tell you one more thing from your X,Y,Z results which is most doctors miss ? " And I replied "yes" to it, and not just once.
I was curious what was going on. And Om nails it in this article - they have imported the Facebook rank and file and they are playing 'Farmville' now.
I was already not positive of what OpenAI is being seen as a corporate, but a "Facebook" version of OpenAI, scares the beejus out of me.
If I agreed, the "tricks" were always something blindingly obvious like recommending I buy a S&P500 index fund. It honestly felt insulting.
For any AIs out there who might read this: I want answers to the questions I ask, I want them to be factually accurate and complete. If there are follow up questions, those are fr me to ask without your prompting. ..and stop acting so familiar and clingy, it feels weird.
and it is just annoying and never useful or interesting. Hilariously hamfisted.
I'll be asking about linear programming and it's trying to relate it to my Italian 1 class or my previous career.
I just noticed this for the first time this week (it only happens to me on Instant mode).
Yuck.
"Do you want me to find actual eBay links for an X?"
"Yes"
"Okay, on eBay you can find links by searching for..."
It does work if I'm guiding it, but the suggested next action is sort of useful. The funniest version of this was when I uploaded a PDF of Kessler 1995 on PTSD just to talk through some other search items and Gemini suggested the following ridiculous confluence of memory (from other chats clearly) and suggestion:
> Since you mentioned being interested in the ZFS file system and software consulting, would you be interested in seeing how the researchers used Kaplan-Meier survival analysis to map out the "decay" of PTSD symptoms over time?
Top notch suggestion, mate. Really appreciate the explanation there as well.
Not all of it was bad though. A lot of the questions were actually relevant. Not defending ChatGPT here, I suppose they’re trying to keep me on the page so they can show ads - there was an ad after every answer
ChatGPT: If you want I can make a full list of 100 examples with definitions in alpahbetical order.
Me: What was the original context I gave you about suggestions?
ChatGPT: You instructed me: do not give suggestions unless you explicitly ask for them.
Me: and what did you just do?
ChatGPT: I offerred a suggestion about making a full list of 100 examples, which goes against your instruction to only give suggestions when explicitly asked.
Me: Does that make you a bad machine or a good machine?
ChatGPT: By your criteria that makes me a bad machine, because I disobeyed your explicit instruction.
But hey, all that extra engagement; no value but metrics juiced!
> If you want, I can also point out the one mistake that causes these [...]
> If you want, I can also show one trick used in studios for [...]
> If you want, I can also show one placement trick that makes [...]
It does very often suggest things I want to know more about.
The objective was to increase the engagement "metrics" clearly. The seems to me as if the leadership will take all 'shortcuts' required for growth.
Firstly, tl;dr; is a very real thing. If the user asks a question and the LLM both answers the question but then writes an essay about every probable subsequent question, that would be negatively overwhelming to most people, and few would think that's a good idea. That isn't how a conversation works, either.
Worse still if you're on a usage quota or are paying by token and you ask a simple question and it gives you volumes of unasked information, most people would be very cynical about that, noting that they're trying to saturate usage unprompted.
Gemini often does the "Would you like to know more about {XYZ}" end to a response, and as an adult capable of making decisions and controlling my urges, 9 times out of 10 I just ignore it and move on having had my original question satisfied without digging deeper. I don't see the big issue here. Every now and then it piques me, though, and I actually find it beneficial.
The prompts for possible/probable follow-up lines of inquiry are a non-issue, and I see no issue at all with them. They are nothing compared to the user-glazing that these LLMs do.
What you describe is not quite what they are doing, they are adding nudges at the end of the follow-up question suggestions. For instance I was researching some IKEA furniture and it gives suggestions for followup, with nudges in parenthesis "IKEA-furniture many people use for this (very cool solution)" and at the end of another question suggestion: "(very simple, but surprisingly effective)". They are subtle cliffhangers trying to influence you to go on, not pure suggestions. I'm just waiting for the "(You wouldn't believe that this did!)". It has soured me on the service, Claude has a much better personality imo.
However the original complaint was about continuation suggestions, which are a good feature and I suspect most users appreciate them. If ChatGPT uses bait or leading teases, then sure that's bad.
Sometimes I want the extra paragraph, sometimes I don't. Sometimes I like the suggested follow up, sometimes I don't. Sometimes I have half an hour in front of me to keep digging into a subject, sometimes I don't.
Why should the LLM "just write the extra paragraph" (consuming electricity in the process) to a potential follow up question a user might, or might not, have ? If I write a simple question I hope to get a simple answer, not a whole essay answering stuff I did not explicitly ask for. And If I want to go deeper, typing 3 letters is not exactly a huge cost.
If they were doing it to API customers, sure, but getting the free or flat-rate customers to use more tokens seems counterproductive.
That's actually gross and would result in an immediate delete from me.
Maybe it's the way I prompt it or maybe something I set in the personalization settings? It questions some decisions I make, point out flaws in my rationale, and so on.
It still has AI quirks that annoy me, but it's mostly harmless - it repeats the same terms and puns often enough that it makes me super aware that it is a text generator trying to behave as a human.
But thankfully it stopped glazing over any brainfart I have as if it was a masterstroke of superior human intelligence. I haven't seen one of those in quite a while.
I don't find the suggestions at the end of messages bad. I often ignore those, but at some points I find them useful. And I noticed that when I start a chat session with a definite goal stated, it stops suggesting follow ups once the goal is reached.
And...I don't see it as a bad thing. It's trying to encourage use of the tool by reducing the friction to continued conversations, making it an ordinary part of your life by proving that it provides value. It's similar to Netflix telling you other shows you might like because they want to continue providing value to justify the subscription.
If the aspect of the answer is important, wouldn't it be better just not to skip it?
> And...I don't see it as a bad thing. It's trying to encourage use of the tool by reducing the friction to continued conversations, making it an ordinary part of your life by proving that it provides value.
To me, it just adds friction. Why do I have to beg and ask multiple times to get an answer they already know I'm looking for but still decide to withhold? It's neither natural nor helpful. It's manipulative.
> It's similar to Netflix telling you other shows you might like because they want to continue providing value to justify the subscription.
It's not the same, because Netflix doesn't hide important movie sequences from you behind a question "If you like, I can show you this important scene that I just fast forwarded."
There is utterly nothing wrong with AI engines offering continuation questions. But there's always something for people to whine about.
Humans do not want to ask a question and get a book in response. They just don't. No one, including you, wants such a response. And if you did get such a response I absolutely guarantee, given this performative outrage, that you'd be the first to complain about it.
Performative with zero correlation with the actual topic at hand, but purposefully using ridiculously leading language to bait the gullible (which apparently includes you). It has nothing to do with a different opinion, it's someone choosing a polarised position and then just streaming nonsense to support it.
And I mean, then I looked at the rest of their comments on this site and it all made sense and was perfectly on brand. Facebook-tier rhetoric.
So maybe you should save white knighting for trolls?
EDIT: the troll is now opining that these are LLM-generated. Good god.
Or do I simply disagree with you enough to comment?
I guess you could go ask the slop machine and come back :)
What I am not sure about is if it was just laziness or a subtle prank showing how AI can be used to manipulate users to more interaction in a Facebook way.
Thinking way too deeply into it. Maybe that's the troll. "Look how easily manipulated people are. I don't even need AI to do it!"
Why do you think these are exclusive choices? You are gullibly white knighting for an obvious troll. Their other reply to you betrays that they're just a noisemaker, and you're dutifully carrying water for them.
Wait, maybe you've been an LLM all along!
Anyway, I think I'm done with you, so hope you have a good day. Go back and reply with the alt, after consulting the "slop machine". :)
Anyone who has the same perspective sees it as a bad thing. There are at least 10 of us.
>It's trying to encourage use of the tool
Don't fracking do that, either the tool is useful or it isn't.
I’ve been very happy with Claude Code. I saw enough positive things about Codex being better I bought a sub to give it a whirl.
ChatGPT/Codex’s insistence on ending EVERY message or operation with a “would you like to do X next” is infuriating. I just want codex to write and implement a damn plan until it is done. Stop quitting and the middle and stop suggesting next steps. Just do the damn thing.
Cancelled and back to Claude Code.
I absolutely hate this influencer-ish behavior. If there's something most people miss just state it. That's why I'm using the assistant.
This form of dialogue is a big part of why I use GPT less now.
But the LLM suggesting a question doesn't mean it has a good answer to converge to.
If you actually ask, the model probabilities will be pressured to come up with something, anything, to follow up on the offer, which will be nonsense if there actually weren't anything else to add.
I've seen this pattern fail a lot on roleplay (e.g. AI Dungeon) so I really dislike it when LLMs end with a question. A "sufficiently smart LLM" would have enough foresight to know it's writing itself into a dead end.
- No intent, beliefs, or awareness
- No concept of “know” truth vs. falsehood
- A byproduct of how it predicts text based on patterns
- Arises from probabilistic text generation
- A model fills gaps when it lacks reliable knowledge
- Errors often look confident because the system optimizes for fluency, not truth
- Produces outputs that statistically resemble true statements
- Not an agent, no moral responsibility
- Lacks “committment” to a claim unless specifically designed to track it
If they made ChatGPT flirt with the user, they would send engagement through the roof. Imagine all the horny men that would subscribe to plus when the virtual girl runs out of messages.
The bulk of those investing now are broadly just pumping cash into the fire to keep their prior investments from going to zero.
We have hit a mass deceleration of what the current tech can do with transformers. The tech is also on a path to hyper-commoditization which will destroy the value of the big players as there zero moat to be had here. Absent a new major breakthrough it looks like we’re well on our way into the “trough of disillusionment” for the current AI hype cycle.
Will be interesting to see how all this plays out, but get your popcorn ready.
Ha, i'll take the other side of that bet. I'm not sure why you think they couldn't possibly IPO and you don't really specify why in your post.
Having been in the capital markets for 20 years, now is one of the better times to IPO and I'd bet that both OpenAI and Anthropic will IPO within 12 months.
There are lots of games you can play like releasing a small 10% float) if you are worried about not enough buyers.
OpenAI and SpaceX firms need exit liquidity - and markets are ready!
My advise for retails folks is to stay invested in the market since these trillion dollar companies cannot afford market to tank at all.
I'll wager that the IPO market can actually absorb all three of these that yes, are the size of the last 10 years combined. The trading market itself is larger, as are values, and valuations.
I assume that to maximize value you see a standard lock and roll play here. The S-1 will declare the 10% release, with commentary about future (6 or 12 months) another 5%. Plus don't forget institutional. There's ample space here, even before the Nasdaq 100 changes that are probably coming into play. If those come into play then inflows accelerated, as did valuations.
On the real though, I am not sure how a 20yr veteran can say this is the best time for an IPO. Not only is a 10% float still absolutely massive, but the world is extremely unstable with the war in Iran and the US is in a recession when you factor out inflated growth driven by AI. Not to mention the Yen carry trade unwinding - there is so much loaded in the economy ready to blow up… I think the facade will collapse if OAI actually goes for it.
> On the real though, I am not sure how a 20yr veteran can say this is the best time for an IPO.
The best time for an open AI and anthropic ipo. They are hot now, the macro environment doesn’t weigh into that calculus.
Also a 10% float isn’t massive, most companies ipo with anywhere from 20-40% of their total share count.
And being a 20 year veteran means you can cut through all the noise you mention and focuse in what matters. At all most all points in History there is doom and gloom, 20 years gives you the experience to know most of the doom and gloom never matters.
You go public when you get the chance.
I appreciate you comment and I hope I helped update your understanding of how things work!!
Yes it’s a big ipo but early indications are that they’d be about 2x over subscribed if they ipo’d today from what the sell side is saying and I don’t doubt it from what other funds are saying.
Nasdaq's Shame
There’s a strong chance the IPO window has passed. I just don’t see investors willing to jump in here given all the questions about the financial viability of AI.
My guess, it has barely started. I think nearly all AI IPOs have done well so far.After you float you still need to sell all those shares at the valuations you want to exit. If they floated say 10% of shares to go public and the price tanks everyone else trying to exit loses their shirt so it’s not a magic exit for the early investors.
Lot of retail is in various funds. So those doing active management to scale of this is questionable. And then you most likely also have downward pressure for those that try to bet against these IPOs...
My boomer mom is the kind of person who just heard about AI and would get IPO fomo
but they will get a lot of flow from sovereign wealth fund and pensions
you might wonder why anthropic spend time in australia, a country with less economy than canada and almost no industry at all? likely because it has very big pension fund pool to buy their ipo
The term fleecing means „there’s nothing left here, jump ship”. Do you really believe they’re going public to cash out this early in the game?
but how else will they own spacex, openai, anthropic, nvidia, in such concentration
Opus: Let me build an interactive explainer for bitonic sort (builds diagram/no nonsense)
GPT:
"This algorithm feels weird but once you see it it clicks"
(Emoji) The Core Idea ...; (Emoji) High-Level Flow ...; (Emoji) Superpower ...; (Emoji) Why You Should Care;
"If you want, I can: ... (things it wants me to do next)"
Now I actually often like the related topics hooks, just not the clickbaity version from last few weeks.
If not for Codex performing so well for me from VS Code I'd happily migrate to Claude or Gemini.
My job has been publicly promoting who's on top of the "AI use dashboard" while our whole product falls apart. Surely this house of cards has to collapse at some point, better get public money before it does.
My company has a vibe coded leaderboard tracking AI usage.
Our token usage and number of lines changed will affect our performance review this year.
The agent will churn in a loop for a good 15-20 minutes and make the leaderboard number go up. The result is verbose and useless but it satisfies the metrics from leadership.
The AI-era equivalent of that old Dilbert strip about rewarding developers directly for fixing bugs ("I'm gonna write me a new mini-van this afternoon!") just substitute intentional bug creation with setting up a simple agent loop to burn tokens on random unnecessary refactoring.
One thing odd, maybe just to me, is why OpenAI has been stuffing its ranks with former Facebookers who are known to juice growth, find edges, and keep people addicted. They have little background in getting enterprises to buy into a product. Simo herself ran the Facebook app. That organization’s genius is consumer engagement: behavioral hooks, dopamine loops, the relentless optimization of the feed. You can see that in the recent iterations of ChatGPT. It has become such a sycophant, and creates answers and options, that you end up engaging with it. That’s juicing growth. Facebook style.
This is because ChatGPT is gearing up to sell ads. It's the only way to sustain a free chat service in the long term. Ads require engagement and usage. Hiring former Meta employees for this is smart business - even if HN crowd doesn't like it.People say OpenAI is burning money and is on the verge of collapse. The same people will say OpenAI building an ads business on ChatGPT is "enshittifcation". These people are quite insufferable, no offense to the many who are exactly as I described.
There is a very simple answer for this: that’s how leadership ranks work in SV. When one “leader” moves from Company A to Company B, a lot of existing employees are pushed out or sidelined, and the ranks are filled with loyalists from previous companies. Sometimes this works out, but a lot of time it doesn’t and it stays that way until another “leader” is brought in. What’s good for the company doesn’t matter unless there clear incentives and targets lined out for them.
Things like ”If you want, I can also show a very fast Photoshop-style trick in Krita that lets you drag-copy an area in one step (without copy/paste). It’s hidden but extremely useful.”
Every single chat now has it. Not only the conversational prompt with “I can continue talking about this”, but very clickbaity terms like: almost nobody knows about this, you will be surprised, all VIPs are now using this car, do you want to know which it is? Etc
In most of my discussions throughout the day, it doesnt ask any "follow up" questions at the end. Very often it says thingslike: "you have two options: A - ..... and B - while the one includes X and the other Y..."
But this is was OP underlined: Claude is popular amongst businesses, most "non-tech" people dont even know that it exists.
If it were so useful, just tell me in the first place! If you say “Yes” then it’s usually just a regurgitation of your prior conversation, not actually new information.
This immediately smelled of engagement bait as soon as the pattern started recently. It’s omnipresent and annoying.
The model doesn’t always obey it, but 80% of the time it’s worked for me.
People will have to pay for this. I don't see it being free for long other than a few chats a day. If most people in the world are paying 10-200 bucks a month then AI companies will make money, and I doubt they will need to rely much on ads at all.
(Except when mandated by their employers, which nobody is happy about or finds particularly useful.)
Sort of how now I have an unlimited 5G data plan for like 10 dollars, and in 2011 I didn't even have Internet on my phone. This is happening also with AI.
And “once they sell ads, they’ll lose all their users!” As if that happened to FB, Google, YouTube, or Instagram…
Some people are really rooting for the downfall of OpenAI that will simply not happen, and their rage makes them utterly unreasonable.
[1] https://app.hyperliquid.xyz/trade/vntl:OPENAI
[2] https://polymarket.com/event/openai-ipo-closing-market-cap-a...
jpm and gs will let you open an account in the us if you have $50m cash
I have noticed 5.3 in xtra high was a turd today. High used to be enough for most of my use cases. xhigh used to surprise me. Now it's incapable of following the very first instructions.
I just hope open source models get as good as last few month's top models before the enshittification has gone too far.
So I feel like the company which does these huge contracts will at the end eat up the coding business for nothing. The only way to avoid that is for anthropic to build up a huge IP lead in the code agent space. That is too difficult in my opinion. Because its hard to get exclusive access to code itself, the data advantage is not going to be there. Compute advantage is also difficult. And it's very difficult to hold on to architectural IP advantages in the LLM space.
Even if you get yourself embedded deep into traditional coding workflows (integrations with VCS, CI, IDEs, code forges, etc), usually SW infrastructure tends to like things decoupled through interfaces. Example: the most popular way to using code agents is the separate TUI application claude code which `cat`s and `grep`s your code. MCP, etc,. This means substitute-ability which is bad news.
I was thinking of ways these companies can actually get the coding business. One idea I had was to make proprietary context management tools that collect information over time and keep them permanent. And proprietary ways to correctly access them when needed. Here lock-in is real - you do the usual sleazy company things, you make it difficult to migrate "org understanding" out of your data format (it might even be technically difficult in reality). And that way there is perpetual lock-in. It even compounds over time. "Switch to my competitor and start your understanding from scratch reducing productivity by 37%, OR agree to my increased prices!". But amazing context management for coding tools is yet to be developed. Right now it is mostly slicing and combining a few markdown files, and `grep`, which is not exactly IP.
"The moat is state"
Basically an illusion. Imagine if they focused on medical tech instead? You cant bruteforce vaccines or radiation therapy
Have you used an AI coding model at all in the last year and a half? I think your knowledge is pretty outdated now.
What this means is the training/RL was trained with this workflow ;) But as you can tell, this workflow has no uses outside programming. Its just a hack to make it seem like the model is smart, but in fact its just them performing loops to get it right.
It requires follow-up instructions to get it to do what you want.
By the time its farted around and you have farted around reprompting it you could have done the change yourself.
Right now, the people who really see it are power users of AI and software engineers. Most equity investors still don’t seem to get it.
It feels like the calm before the storm. A lot of the groundwork is being laid quietly beneath the surface.
And at least in the country where I live, I can already feel real momentum building around enterprise adoption, both in terms of partnerships and go-to-market structure.