Is ChatGPT's web front end being intentionally degraded?
2 points | 2 hours ago | 2 comments | chatgpt.com | HN
linzhangrun
2 hours ago
Recently, I've frequently felt that the frontend performance of ChatGPT's official website has significantly declined. As the conversation context grows, the page becomes increasingly laggy, quickly reaching a point of extreme sluggishness. This is clearly a frontend performance issue, and I haven't encountered it on other AI chat websites. I've verified this issue across multiple devices, multiple operating systems, and multiple browsers, including Fedora, Windows 11, and macOS. My device specs shouldn't be an issue either—i7-14650HX, 40GB RAM, and an RTX 4060 8GB shouldn't create any bottleneck for webpage rendering.
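For what it's worth, lag that grows with conversation length is what you'd expect from a transcript view that re-renders (or re-diffs) every message on each update instead of virtualizing the list. Here's a toy cost model to illustrate the shape of the problem; this is purely my own sketch, not anything from OpenAI's actual code, and the `windowSize` of 30 visible messages is an assumption:

```javascript
// Toy model: units of rendering work per streaming session.
// A "full re-render" UI touches every message node on each update,
// so per-update cost grows with conversation length.
function fullRerenderCost(messageCount, updates) {
  let work = 0;
  for (let i = 0; i < updates; i++) work += messageCount;
  return work;
}

// A virtualized list only touches the messages in the viewport,
// so per-update cost stays roughly constant as the chat grows.
function virtualizedCost(messageCount, updates, windowSize = 30) {
  let work = 0;
  for (let i = 0; i < updates; i++) work += Math.min(messageCount, windowSize);
  return work;
}

console.log(fullRerenderCost(2000, 100)); // 200000 units of work
console.log(virtualizedCost(2000, 100)); // 3000 units of work
```

In this toy model, per-update work is linear in conversation length without virtualization and constant with it, which would match "laggy in proportion to context" regardless of how fast the machine is.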

If the goal is to make users conserve tokens, making the page too laggy to continue using—forcing them to start a fresh conversation—sounds rather plausible.

I've already experienced another form of frontend degradation on Gemini: new conversations automatically switching to the fast model, upgrade-to-Ultra prompts appearing everywhere, in-progress generated responses disappearing after a refresh, image generation defaulting to the lower-tier model and requiring a manual regeneration request to use the Pro model, and so on. It wouldn't surprise me at all if OpenAI were pulling similar tricks.

Rzor
2 hours ago
There was an engineer from OpenAI here a while ago saying that they go above and beyond to be able to offer the free access through that page. I guess they probably have to do a bunch of checks and that's starting to degrade the experience. I use chat.com a lot for quick stuff, but on my end I haven't noticed any difference whatsoever.

Edit: Found the post: https://news.ycombinator.com/item?id=47567575 (alleged engineer, I guess)

>Hey! I'm Nick, and I work on Integrity at OpenAI. These checks are part of how we protect our first-party products from abuse like bots, scraping, fraud, and other attempts to misuse the platform.

>A big reason we invest in this is because we want to keep free and logged-out access available for more users. My team’s goal is to help make sure the limited GPU resources are going to real users.

>We also keep a very close eye on the user impact. We monitor things like page load time, time to first token and payload size, with a focus on reducing the overhead of these protections. For the majority of people, the impact is negligible, and only a very small percentage may see a slight delay from extra checks. We also continuously evaluate precision so we can minimize false positives while still making abuse meaningfully harder.

And your exact complaint being made 20 days ago: https://news.ycombinator.com/item?id=47567689

linzhangrun
1 hour ago
In Safari on a Mac Mini M4, the page is so laggy it's almost unusable; the degradation sets in extremely fast, and after a few rounds of zsh command generation the experience is terrible. On Fedora with Firefox and on Windows 11 with Chrome there are also stutters and loading delays, but nothing as bad as taking two seconds to type a single character, so it's still usable.

Thanks! I read those comments. It looks like chatgpt.com hasn't done much optimization for this scenario. Given the caliber of the people who get hired at OpenAI, I suspect this is intentional (they're choosing not to optimize) :-/
topham
1 hour ago
Their app's performance is trash too. When it generates code or markdown, the widgets for working with it perform like absolute garbage.
linzhangrun
1 hour ago
They might need to use Codex to thoroughly re-optimize the frontend architecture for performance.
chrisjj
43 minutes ago
> >Hey! I'm Nick, and I work on Integrity at OpenAI. These checks are part of how we protect our first-party products from abuse

I suspect a highly selective definition of Integrity. But hey, it's not like this company has any need for the regular definition.

realityfactchex
1 hour ago
> As the conversation context grows, the page becomes increasingly laggy, quickly reaching a point of extreme sluggishness.

It was that way for me a year ago, so not that new of a phenomenon.

When a context/chat/session gets too long, I start a New chat and continue where I left off. Mostly solves this IME, though it's annoying to have to do.

chrisjj
38 minutes ago
"Have you tried turning it off and on again?" is no longer a joke; it's part of the business model, earning OpenAI millions in token costs.

Enshittification incarnate.

chrisjj
45 minutes ago
> What are you working on?

It thinks it is social media.

Correct answer: none of your business.
