ChatGPT Pro (openai.com)
813 points | 20 days ago | 149 comments
fudged71
20 days ago
[-]
OpenAI is racing against two clocks: the commoditization clock (how quickly open-source alternatives catch up) and the monetization clock (their need to generate substantial revenue to justify their valuation).

The ultimate success of this strategy depends on what we might call the enterprise AI adoption curve - whether large organizations will prioritize the kind of integrated, reliable, and "safe" AI solutions OpenAI is positioning itself to provide over cheaper but potentially less polished alternatives.

This is strikingly similar to IBM's historical bet on enterprise computing - sacrificing the low-end market to focus on high-value enterprise customers who would pay premium prices for reliability and integration. The key question is whether AI will follow a similar maturation pattern or if the open-source nature of the technology will force a different evolutionary path.

reply
danpalmer
20 days ago
[-]
The problem is that OpenAI don't really have the enterprise market at all. Their APIs are closer in that many companies are using them to power features in other software, primarily Microsoft, but they're not the ones providing end user value to enterprises with APIs.

As for ChatGPT, it's a consumer tool, not an enterprise tool. It's not really integrated into an enterprise's existing toolset, it's not integrated into their authentication, it's not integrated into their internal permissions model, and the IT department can't enforce any policies on how it's used. In almost all ways it doesn't look like enterprise IT.

reply
informal007
20 days ago
[-]
This reminds me of why enterprises don't integrate OpenAI products into their existing toolsets: trust is the root reason.

It's hard to trust that OpenAI won't use enterprise data to train its next model, in a market where content is the most valuable asset, compared with office suites, cloud databases, etc.

reply
jey
20 days ago
[-]
This is what the Azure OpenAI offering is supposed to solve, right?
reply
dartos
19 days ago
[-]
Sort of?

Then there’s trust that it won’t make up information.

It probably won’t be used for any HR/legal work for fear of false info being generated.

reply
mycall
20 days ago
[-]
Correct
reply
croes
20 days ago
[-]
Why should MS be more trustworthy than OpenAI?

MS failed their customers more than once.

reply
helsinkiandrew
20 days ago
[-]
Microsoft 365 has over 300 million corporate users, trusting it with email, document management, collaboration, etc. It's the de facto standard in larger companies, especially in banking, medicine, and finance, which have more rigorous compliance regulations.
reply
croes
20 days ago
[-]
And MS already showed the customers shouldn’t trust them.

https://news.ycombinator.com/item?id=37408776

Maybe it's a good idea to spread your data around and not put it all in one place, if you really need to use the cloud.

reply
justlikereddit
20 days ago
[-]
The administrative segments that decide to sell their firstborn to Microsoft all have their heads in the clouds. They'll pay Microsoft to steal their data and resell it, and they'll defend their decision-making beyond their own demise.

As such, Microsoft is making the right choice in outright stealing data for whatever purpose. It will have no real consequences.

reply
illiac786
19 days ago
[-]
I think the case could be made that "spreading your data" is exactly what you don't want to do: you're increasing your attack surface.
reply
mavhc
20 days ago
[-]
Not like most had a choice, they already had office documents and windows, what else were they going to pick?

Your historical pile of millions of MSOffice documents is an ocean sized moat.

reply
Squeeze2664
20 days ago
[-]
Surely MS wouldn't abuse that trust.

https://news.ycombinator.com/item?id=42245124

reply
PeterHolzwarth
19 days ago
[-]
An IT-policy flick of the switch disables that, as at my organization. It was instead intended to snag individual, non-corporate user accounts (still horrible, but I mean to convey that MS at no point in all that expected a company's IT department to actually leave that training feature enabled in policy).
reply
lupire
19 days ago
[-]
This was debunked within hours, as commented on that thread last week.
reply
mirekrusin
20 days ago
[-]
It doesn't need to / it already is: most enterprises are already Microsoft/Azure shops. Already approved, already there. What is close to impossible is to use anything non-Microsoft, with one exception: open source.
reply
bushbaba
20 days ago
[-]
Because idk, windows, AD, office, and so many more Microsoft products could already betray that customer trust but don’t.
reply
croes
19 days ago
[-]
They betrayed their customers in the Storm-0558 attack: they didn't disclose the full scale, and they charged customers for the advanced logging needed for detection.

Not to mention that they abolished QA and outsourced it to the customer.

reply
EGreg
20 days ago
[-]
How do you know they don't?
reply
blackoil
20 days ago
[-]
It is immaterial what they do and what you know. What matters is what enterprise CIOs believe.
reply
CamelCaseName
20 days ago
[-]
Because that would be the biggest story in the world.
reply
wodenokoto
19 days ago
[-]
Maybe they aren't, but when you already have all your documents in SharePoint, all your emails in Outlook, and all your database VMs in Azure, then Azure OpenAI is trusted in the organization.
reply
littlestymaar
20 days ago
[-]
For some reason (mainly because Microsoft has orders of magnitude more sales reps than anyone else), companies have been trusting Microsoft with their most critical data for a long time.
reply
CabSauce
20 days ago
[-]
They sign business associate agreements. It's good enough for HIPAA compliance.
reply
dartos
19 days ago
[-]
The devil you know
reply
gmerc
20 days ago
[-]
For example, when they backed the CEO's coup against the board.

With AI-CEOs - https://ai-ceo.org - this would never have happened, because their CEOs have a kill switch and a mobile app giving the board full observability.

reply
mrgaro
20 days ago
[-]
OpenAI's enterprise plan explicitly says that they do not train their models on your data. It's in the contract, and it's also visible at the bottom of every ChatGPT prompt window.
reply
mrweasel
20 days ago
[-]
It seems like damned if you do, damned if you don't. How is ChatGPT going to provide relevant answers to company-specific prompts if they don't train on your data?

My personal take is that most companies don't have enough data, nor data of sufficiently high quality, to be able to use LLMs for company-specific tasks.

reply
jey
20 days ago
[-]
The model from OpenAI doesn't need to be directly trained on the company's data. Instead, they provide a fine-tuning API in a "trusted" environment, which usually means Microsoft's "Azure OpenAI" product.

But really, in practice, most applications are using the "RAG" (retrieval-augmented generation) approach, and actually doing fine-tuning is less common.
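Roughly, the RAG flow is: retrieve the most relevant company documents for a query, then stuff them into the prompt. A minimal sketch, where the word-overlap scoring is a toy stand-in for a real embedding/vector search and call_llm() is a hypothetical placeholder for whatever model API is in use:

    DOCS = [
        "Enterprise plan: $200/seat/month, includes SSO and audit logs.",
        "Support tickets escalate after three failed resolutions.",
        "Subscription tiers: Basic, Pro, and Enterprise.",
    ]

    def call_llm(prompt: str) -> str:  # stub for the actual model call
        return f"[model answer based on {len(prompt)} chars of prompt]"

    def retrieve(query: str, k: int = 2) -> list[str]:
        # Rank docs by naive word overlap with the query (toy scoring).
        words = set(query.lower().split())
        ranked = sorted(DOCS, key=lambda d: -len(words & set(d.lower().split())))
        return ranked[:k]

    def answer(query: str) -> str:
        context = "\n".join(retrieve(query))
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
        return call_llm(prompt)

    print(answer("What subscription tiers are there?"))

The company's data stays in the retrieval index; the model itself never has to be trained on it.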

reply
mrweasel
20 days ago
[-]
> The model from OpenAI doesn’t need to be directly trained on the company’s data

Wouldn't that depend on what you expect it to do? If you just want, say, Copilot, text summarization, or help writing emails, then you're probably good. If you want to use ChatGPT to help solve customer issues or debug problems specific to your company, wouldn't you need to feed it your own data? I'm thinking: for "help me find the correct subscription for a customer with these parameters", you'd need ChatGPT to know your pricing structure.

One idea I've had, from an experience with an ISP, would be to have the LLM tell customer service: Hey, this is an issue similar to what five of your colleagues just dealt with, in the same area, within 30 minutes. You should consider escalating this to a technician. That would require more or less live feedback to the model, or am I misunderstanding how the current AIs would handle that information?
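Sketched out, the kind of check I have in mind would run on live ticket data next to the model rather than inside it; the fields and thresholds here are hypothetical:

    from datetime import datetime, timedelta

    # Flag a likely area-wide outage when several similar tickets arrive
    # from the same area within a short window. No model retraining needed.
    recent_tickets = [
        {"area": "55401", "issue": "no_sync", "at": datetime(2024, 12, 9, 14, 2)},
        {"area": "55401", "issue": "no_sync", "at": datetime(2024, 12, 9, 14, 11)},
        # ...appended as customer service logs calls
    ]

    def should_escalate(ticket, window=timedelta(minutes=30), threshold=5):
        similar = [
            t for t in recent_tickets
            if t["area"] == ticket["area"]
            and t["issue"] == ticket["issue"]
            and ticket["at"] - t["at"] <= window
        ]
        return len(similar) >= threshold

    new_ticket = {"area": "55401", "issue": "no_sync",
                  "at": datetime(2024, 12, 9, 14, 20)}
    print(should_escalate(new_ticket))  # True once enough similar tickets pile up

So the LLM could surface the suggestion to the agent, but the live aggregation itself is ordinary queries over recent tickets.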

reply
kgwgk
19 days ago
[-]
> Instead, they provide a fine-tuning API
reply
lukev
19 days ago
[-]
Most enterprise use cases also have strong authz requirements.

You can't really maintain authz while fine-tuning (unless you do a separate fine-tune for each permission set), so RAG is the way to go there.
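A sketch of what that looks like: each document carries an ACL, and the permission check happens at retrieval time, so the model never sees content outside the user's permission set. The document and ACL shapes here are hypothetical:

    DOCS = [
        {"text": "Q3 revenue forecast...", "groups": {"finance"}},
        {"text": "Employee handbook...", "groups": {"everyone"}},
        {"text": "M&A target shortlist...", "groups": {"exec"}},
    ]

    def retrieve_for_user(query: str, user_groups: set[str]) -> list[str]:
        # Filter by permissions first, then rank by relevance (ranking omitted).
        visible = [d for d in DOCS if d["groups"] & user_groups]
        return [d["text"] for d in visible]

    print(retrieve_for_user("revenue", {"everyone", "finance"}))
    # -> the forecast and the handbook, but never the exec-only shortlist

With fine-tuning there's no equivalent choke point: whatever went into the weights can potentially come out for any user.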

reply
dbspin
20 days ago
[-]
> How is ChatGPT going to provide relevant answers to company specific prompts if they don't train on your data?

Isn't this explicitly what RAG is for?

reply
dragonwriter
17 days ago
[-]
RAG is worse than training on the target data, but yes, it is a mitigation.
reply
lolive
20 days ago
[-]
That is a MASSIVE game changer!
reply
monkeydust
20 days ago
[-]
100% this. If they can figure out trust through some paradigm where enterprises can use the models but not have to trust OpenAI itself directly then $200 will be less of an issue.
reply
abraae
20 days ago
[-]
> It's hard to provide trust to OpenAI that they won't steal data of enterprise to train next model

Bit of a cynical take. A company like OpenAI stands to lose enormously if anyone catches them doing dodgy shit in violation of their agreements with users. And it's very hard to keep dodgy behaviour secret in any decent sized company where any embittered employee can blow the whistle. VW only just managed it with Dieselgate by keeping the circle of conspirators very small.

If their terms say they won't use your data now or in the future then you can reasonably assume that's the case for your business planning purposes.

reply
NBJack
20 days ago
[-]
Is it? OpenAI has multiple lawsuits over misuse of data, and it doesn't seem to be slowing them down much.

https://news.bloomberglaw.com/ip-law/openai-to-seek-to-centr...

Just make sure your chat history is off for starters. https://www.threatdown.com/blog/how-to-keep-your-chatgpt-con...

reply
fragmede
20 days ago
[-]
Lawsuits over the legality of using someone's writing as training data aren't the same thing as them saying they won't use you as training data and then doing so. They're different things. One is people being upset that their work was used in a way they didn't anticipate, and wanting additional compensation because a computer reading their work is different from a person reading their work. The other is saying you won't do something, doing it anyway, and lying about it.
reply
deepGem
20 days ago
[-]
It's not that anyone suspects OpenAI of doing dodgy shit. Data flowing out of an enterprise is very high risk, no matter what security safeguards you employ. So they want everything inside their cloud perimeter, on servers they control.

IMO no big enterprise will adopt ChatGPT unless it's all hosted in their cloud. Open-source models lend themselves better to enterprises in this regard.

reply
helsinkiandrew
20 days ago
[-]
> IMO no big enterprise will adopt chatGPT unless it's all hosted in their cloud

80% of big enterprises already use MS SharePoint hosted in Azure for some of their document management. It's certified for storing medical and financial records.

reply
maeil
20 days ago
[-]
> IMO no big enterprise will adopt chatGPT unless it's all hosted in their cloud

Plenty of big enterprises have been using OpenAI models for a good while now.

reply
e-clinton
20 days ago
[-]
Cynical? That'd be on brand, especially with the ongoing lawsuits, the exodus of people, and the CEO drama a while back. I'd have a hard time recommending them as a partner over Anthropic or open source.
reply
cableshaft
20 days ago
[-]
It's not enough for some companies that need to ensure it won't happen.

I know for a fact a major corporation I do work for is vehemently against any use of generative A.I. by its employees (just had that drilled into my head multiple times by their mandatory annual cybersecurity training), although I believe they are working towards getting some fully internal solution working at some point.

Kind of funny that Google includes generative A.I. answers by default now, so I still see those answers just by doing a Google search.

reply
dbreunig
20 days ago
[-]
If everyone has the same terms and roughly equivalent models, enterprises will continue choosing Microsoft and Amazon.
reply
pizza
20 days ago
[-]
This seems like the kind of thing that laws and regulators exist for.
reply
gregjor
20 days ago
[-]
Good luck with that. Fortunately few CTOs/CEOs share your faith in a company already guilty of rampant IP theft, run by a serial liar.
reply
apugoneappu
19 days ago
[-]
ChatGPT does have an enterprise version.

I've seen the enterprise version at a top-5 consulting company, and it answers from their global knowledge base, cites references, and doesn't train on their data.

reply
stanford_labrat
19 days ago
[-]
I recently (in the last month) asked ChatGPT to cite its sources for some scientific data. It gave me completely made up, entirely fabricated citations for academic papers that did not exist.
reply
yuvalr1
19 days ago
[-]
Did the model search the internet?

The behavior you're describing sounds like older model behavior. When I ask for links to references these days, it searches the internet and gives me links to real papers that are often actually relevant and helpful.

reply
stanford_labrat
19 days ago
[-]
I don't recall that it ever mentioned whether it did or not. I don't have the search on hand, but from my browser history I did the prompt engineering on 11/18 (perhaps there is a new model since then?).

I repeated the prompt just now and it actually gave me the correct, opposite response. For those curious, I asked ChatGPT what turned on a gene, and it said Protein X turns on Gene Y as per -fake citation-. Asking today whether Protein X turns on Gene Y, ChatGPT said there is no evidence, and showed 2 real citations of factors that may turn on Gene Y.

Pretty impressed!

reply
alphan0n
19 days ago
[-]
Share a link to the conversation.
reply
barrkel
18 days ago
[-]
Here you go: https://chatgpt.com/share/6754df02-95a8-8002-bc8b-59da11d276...

ChatGPT regularly searches and links to sources.

reply
alphan0n
17 days ago
[-]
I was asking for a link to the conversation from the person I was replying to.
reply
madaxe_again
19 days ago
[-]
What a bizarre thing to request. Do you go around accusing everyone of lying?
reply
alphan0n
17 days ago
[-]
So sorry to offend your delicate sensibilities by calling out a blatant lie from someone completely unrelated to yourself. Pretty bizarre behavior in itself to do so.
reply
dbbk
16 days ago
[-]
Except there are news stories of this happening to people
reply
alphan0n
16 days ago
[-]
I suspect there being a shred of plausibility is why there are so many people lying about it for attention.

It’s as simple as copying and pasting a link to prove it. If it is actually happening, it would benefit us all to know the facts surrounding it.

reply
stanford_labrat
16 days ago
[-]
Sure, here's a link to a conversation from today, 12/9/24, which has multiple incorrect references: links, papers, journal titles, DOIs, and authors.

https://chatgpt.com/share/6757804f-3a6c-800b-b48c-ffbf144d73...

As just another example, ChatGPT said that in the Okita paper they switched media on day 3, when if you read the paper they switched the media on day 8. So not only did it fail to generate the correct reference, it also failed to accurately interpret the contents of a specific paper.

reply
echelon
19 days ago
[-]
I assume top-5 consulting companies are buying to be on the bandwagon, but are the rank and file using it?
reply
dartos
19 days ago
[-]
YMMV wrt your experience and luck.

I’m a pretty experienced developer and I struggle to get any useful information out of LLMs for any non-trivial task.

At my job (at an LLM-based search company) our CTO uses it on occasion (I can tell by the contortions in his AI code that aren't present in his handwritten code; I rarely need to fix the former).

And I think our interns used it for a demo one week, but I don’t think it’s very common at my company.

reply
thomasmarcelis
19 days ago
[-]
Yes, daily. It's extremely useful: superior to internal search, while combining the internal knowledge base with ChatGPT's.
reply
HDThoreaun
19 days ago
[-]
In my experience consultants are using an absolute ton of ChatGPT.
reply
manmal
19 days ago
[-]
Do you mean Azure OpenAI? That would be a Microsoft product.
reply
lolive
20 days ago
[-]
Won’t name my company, but we rely on Palantir Foundry for our data lake. And the only thing everybody wants [including Palantir itself] is to deploy at scale AI capabilities tied properly to the rest of the toolset/datasets.

The issues at the moment are a mix of IP rights on the data, insurance on the security of private cloud infrastructure, deals between Amazon and Microsoft/OpenAI for the proper integration of ChatGPT on AWS, all these kinds of things.

But discarding the enterprise needs is, in my opinion, a [very] wrong assumption.

reply
everybodyknows
19 days ago
[-]
Is the Foundry business the reason for the run-up of PLTR this year?

https://www.cnbc.com/quotes/PLTR

reply
lolive
19 days ago
[-]
Very personal feeling, but without a datalake organized the way Foundry is organized, I don't see how you can manage [cold] data at scale in a company [in terms of size, flexibility, semantics, or R&D]. Given that IT services in big companies WILL fail to build and maintain such a horribly complex stack, the walled-garden nature of the Foundry stack is not so stupid.

But all that is the technical part of things. Markets do not bless products. They bless revenues. And from that perspective, I have NO CLUE.

reply
CreRecombinase
20 days ago
[-]
This is what's so brilliant about the Microsoft "partnership": OpenAI gets Microsoft's enterprise legitimacy, while Microsoft can build interfaces on top of ChatGPT that they can swap out later for whatever they want, when it suits them.
reply
danpalmer
20 days ago
[-]
I think this is good for Microsoft, but less good for OpenAI.

Microsoft owns the customer relationship, owns the product experience, and in many ways owns the productionisation of a model into a useful feature. They also happen to own the datacenter side as well.

Because Microsoft is the whole wrapper around OpenAI, they can also negotiate. If they think they can get a better price from Anthropic, Google (in theory), or their own internally created models, then they can pressure OpenAI to reduce prices.

OpenAI doesn't get Microsoft's enterprise legitimacy, Microsoft keep that. OpenAI just gets preferential treatment as a supplier.

On the way up the hype curve it's the folks selling shovels that make all the money, but in a market of mature productionisation at scale, it's those closest to customers who make the money.

reply
theptip
20 days ago
[-]
$10B of compute credits on a capped profit deal that they can break as soon as they get AGI (i.e. the $10T invention) seems pretty favorable to OpenAI.
reply
wqaatwt
20 days ago
[-]
I’d be significantly less surprised if OpenAI never made a single $ in profit than if they somehow invented “AGI” (of course nobody has a clue what that even means so maybe there is a chance just because of that..)
reply
danpalmer
20 days ago
[-]
That's a great deal if they reach AGI, and a terrible deal ($10bn of equity given away for vendor-locked credit) if they don't.
reply
ArnoVW
19 days ago
[-]
Fortunately for OpenAI, the contract states that they get to say when they have invented AGI.

Note: they recently announced that they will have invented AGI in precisely 1000 days.

reply
theptip
19 days ago
[-]
Leaving aside the “AGI on paper” point a sibling correctly made, your point shares the same basic structure as noting that any VC investment is a terrible deal if you only 2x your valuation. You might get $0 if there is a multiple on the liquidation preference!

OpenAI are clearly going for the BHAG. You may or may not believe in AGI-soon but they do, and are all in on this bet. So they simply don’t care about the failure case (ie no AGI in the timeframe that they can maintain runway).

reply
jimbokun
19 days ago
[-]
How so?

Still seems like owning the customer relationship like Microsoft is far more valuable.

reply
Rastonbury
20 days ago
[-]
OAI probably does through their API, but I do agree that ChatGPT is not really an enterprise product. For the company, the API is the platform play: their enterprise customers are going to be the likes of MSFT, Salesforce, Zendesk, or say Apple to power Siri. These are the ones doing the heavy lifting of selling and making an LLM product that provides value to their enterprise customers, a bit like Stripe/AWS. Whether OAI can form a durable platform (vs. their competitors or in-house LLMs) is the question here, or whether they can offer models at a cost that justifies the upsell of the AI features their customers offer.
reply
benterix
20 days ago
[-]
That's why Microsoft included OpenAI access in Azure. However, their current offering is quite immature, so companies are using several pieces of infra to make it usable (for rate limiting, better authentication, etc.).
reply
dartharva
20 days ago
[-]
> As for ChatGPT, it's a consumer tool, not an enterprise tool. It's not really integrated into an enterprises' existing toolset, it's not integrated into their authentication, it's not integrated into their internal permissions model, the IT department can't enforce any policies on how it's used. In almost all ways it doesn't look like enterprise IT.

What, according to you, is the bare minimum it would take for it to become an enterprise tool?

reply
be_erik
20 days ago
[-]
SSO and enforceable privacy and IP protections would be a start. RBAC, queues, caching results, and workflow management would open a lot of doors very quickly.
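As a rough sketch of a couple of those items, here's an internal gateway doing an RBAC check plus response caching in front of a model API; the roles, actions, and call_model() are all hypothetical:

    import functools

    ROLE_PERMISSIONS = {
        "analyst": {"summarize", "draft"},
        "engineer": {"summarize", "draft", "codegen"},
    }

    def call_model(prompt: str) -> str:  # stub for the actual LLM call
        return f"[model output for {len(prompt)}-char prompt]"

    @functools.lru_cache(maxsize=1024)
    def cached_completion(prompt: str) -> str:
        # Identical prompts are served from cache instead of re-billed.
        return call_model(prompt)

    def handle_request(user_role: str, action: str, prompt: str) -> str:
        if action not in ROLE_PERMISSIONS.get(user_role, set()):
            raise PermissionError(f"{user_role} may not perform {action}")
        return cached_completion(prompt)

    print(handle_request("analyst", "summarize", "Summarize this memo..."))

SSO would sit in front of this (resolving the user to a role), with queues and workflow management behind it.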
reply
matteocontrini
20 days ago
[-]
It seems that ChatGPT Enterprise already has many of these:

https://openai.com/enterprise-privacy/

reply
magic_hamster
19 days ago
[-]
OpenAI's enterprise access is probably mostly happening through Azure. Azure has AI Services with access to OpenAI.
reply
outside415
16 days ago
[-]
Have used it at 2 different enterprises internally; the issue is price more than anything. Enterprises definitely do want to self-host, but for frontier tech they want frontier models, for solving complicated unsolved problems or building efficiencies into complicated workflows. One company had to rip it out for a time due to price; I no longer work there though, so I can't speak to whether it was reintegrated.
reply
cess11
19 days ago
[-]
Decision making in enterprise procurement is more about whether it makes the corporation money and whether there is immediate and effective support when it stops making money.
reply
osigurdson
19 days ago
[-]
>> internal permissions model

This isn't that big of a deal any more. A company just needs to add the application to Azure AD (now called Entra for some reason).

reply
gorgoiler
20 days ago
[-]
Is their value proposition self-fulfilling: the more people pipe their queries to OpenAI, the more training data they have to get better?
reply
throwaway314155
20 days ago
[-]
I don't think user submitted question/answer is as useful for training as you (and many others) think. It's not useless, but it's certainly not some goldmine either considering how noisy it is (from the users) and how synthetic it is (the responses). Further, while I wouldn't put it past them to use user data in that way, there's certainly a PR/controversy cost to doing so, even if it's outlined in their ToS.
reply
informal007
20 days ago
[-]
In an enterprise, long content or documents will get poured into ChatGPT if the company doesn't impose policy limits, and that can be meaningful training data.

At the very least, there's a possibility this content can be seen by OpenAI staff reviewing bad cases, so privacy concerns remain.

reply
astrange
20 days ago
[-]
No, because a lot of people asking you questions doesn't mean you have the answers to them. It's an opportunity to find the answers by hiring "AI trainers" and putting their responses in the training data.
reply
solarkraft
20 days ago
[-]
Not for enterprise: the standard terms forbid training on queries.
reply
almog
20 days ago
[-]
Not sure how valuation comes into play here, but I doubt enterprise clients would agree to have their queries used for training.
reply
danpalmer
20 days ago
[-]
Yeah it's a fairly standard clause in the business paid versions of SaaS products that your data isn't used to train the model. The whole thing you're selling is per-company isolation so you don't want to go back on that.

Whether your data is used for training or not is an approximation of whether you're using a tool for commercial applications, so a pretty good way to price discriminate.

reply
j45
20 days ago
[-]
Also, a replacement for search
reply
devjab
20 days ago
[-]
I wonder if OpenAI can break into enterprise. I don't see much of a path for them, at least here in the EU, even if they do manage to build some sort of trust as far as data safety goes (I'm not sure they'll have much more luck with that than Facebook had trying to sell that corporate thing they did (still do?)). And even if they did, they would still face the very real issue of having to compete with Microsoft.

I view that competition a bit like Teams vs. anything else. Teams wasn't better, but it was good enough and it's "sort of free". It's the same with the Azure AI tools: they aren't free, but since you don't exactly pay list pricing in enterprise, they can be fairly cheap. Copilot is obviously horrible compared to ChatGPT, but a lot of the Azure AI tooling works perfectly well, and much of it integrates seamlessly with what you already have running in Azure. We recently "lost" our OCR for a document flow, and since it wasn't recoverable we needed to do something fast; Azure Document Intelligence was so easy to hook up to the flow it was ridiculous. I don't want to sound like a Microsoft commercial. I think they are a good IT business partner, but the products are also sort of a trap where all those tiny things create the perfect vendor lock-in. Which is bad, but it's also where European enterprise is at, since the "monopoly" Microsoft has on the suite of products makes it very hard not to use them. Teams again is the perfect example, since it "won" by basically being a 0 in the budget even though it isn't actually free.

reply
CobrastanJorji
20 days ago
[-]
Man, if they can solve that "trust" problem, OpenAI could really have a big advantage. Imagine if they were nonprofit, open source, documented all of the data their training was done with, or published all of their boardroom documents. That'd be a real distinguishing advantage. Somebody should start an organization like that.
reply
solarkraft
20 days ago
[-]
It's sort of funny how close they were to that until Altman came along.
reply
wkat4242
17 days ago
[-]
Well and also the Microsoft billions. They had a lot to do with that as well. Once you're taking that kind of money you can't really go back.
reply
bigbluedots
20 days ago
[-]
whoosh
reply
bradfox2
20 days ago
[-]
The cyber security gatekeepers care very little about that kind of stuff. They care only about what does not get them in trouble, and AI in many enterprises is still viewed as a cyber threat.
reply
wkat4242
17 days ago
[-]
One of the things that I find remarkable in my work is that they block ChatGPT because they're afraid of data leaking. But Google Translate has been promoted for years, and we don't really do business with Google. We're a Microsoft shop. Kinda double standards.
reply
devjab
20 days ago
[-]
I mean, it was probably a jibe at OpenAI's transition to for-profit, but you're absolutely right.

Enterprise decision makers care about compliance, certifications, and "general market image" (which probably has a proper English term). OpenAI has none of that, and they will compete with companies that do.

reply
btown
20 days ago
[-]
Sometimes I wish Apple did more for business use cases. The same https://security.apple.com/blog/private-cloud-compute/ tech that will provide auditable isolation for consumer user sessions would be incredibly welcome in a world where every other company has proven a desire to monetize your data.
reply
navane
20 days ago
[-]
Teams winning on price instead of quality is very telling of the state of business: your #1/#2 communication tool regarded as a cost to be saved on.
reply
layer8
20 days ago
[-]
It's "good enough" and integrates with existing Microsoft solutions (Outlook meeting-request integration, for example), and the competition isn't dramatically better; more like a side-grade, with better usability but less integration.
reply
navane
19 days ago
[-]
You still can't copy a picture out of a Teams chat and paste it into an Office document without jumping through hoops. It's utterly horrible. The only thing that prevents people from complaining about it is that it's completely in line with the rest of the office-drone experience.
reply
layer8
19 days ago
[-]
In my experience Teams is mostly used for video conferencing (i.e. as a Zoom alternative), and for chats a different tool is used. Most places already had chat systems set up (Slack, Mattermost, whatever) (or standardize on email anyway), before video conferencing became ubiquitous due to the pandemic.
reply
JamesBarney
19 days ago
[-]
I just tried this and it worked fine. Right clicked on image, clicked "copy image" then pasted into a word doc.
reply
FooBarWidget
19 days ago
[-]
And yet Teams allows me to seamlessly video call a coworker. Whereas in Slack you have this ridiculous "huddle" thing where all video call participants show up in a tiny tiny rectangle and you can't see them properly. Even a screen share only shows up in a tiny rectangle. There's no way to increase its size. What's even the point of having this feature when you can't see anything properly because everything is so small?

Seriously, I'm not a fan of Teams, but the sad state of video calls in Slack, even in 2024, seriously ruins it for me. This is the one thing — one important thing — that Teams is better at than Slack.

reply
uaas
19 days ago
[-]
> Even a screen share only shows up in a tiny rectangle. There's no way to increase its size.

You can resize it.

reply
FooBarWidget
17 days ago
[-]
How? There are no drag handlers. No popup menu for resize. Doubleclicking just opens a side pane.
reply
uaas
6 days ago
[-]
There are, around the main huddle window during screensharing.
reply
code_for_monkey
19 days ago
[-]
Consider yourself lucky; my team uses Skype for Business. It's Skype, except it can't do video calls, or calls at all. Just a terrible messaging client with zero features!
reply
anticensor
19 days ago
[-]
Skype for Business is deprecated.
reply
friendzis
20 days ago
[-]
Name a strictly better corporate communication tool than Teams
reply
devjab
20 days ago
[-]
I'm not sure you can, considering how broad a term "better" is. I do know a lot of employees in a lot of non-tech organisations here in Denmark wish they could still use Zoom.

Even in my own organisation, Teams isn't exactly a beloved platform. The whole "Teams" part of it could actually solve a lot of the issues our employees have with sharing documents, having chats located in relation to a project, and so on, but they just don't use it because they hate it.

reply
gloosx
20 days ago
[-]
Email, Jitsi, Matrix/Element, many of them e2e encrypted and on-premise. No serious company (outside of the US) which really cares about its own data privacy would go for MS Teams, which can't even offer a decent user experience most of the time.
reply
briandear
20 days ago
[-]
Slack. No question.
reply
raverbashing
20 days ago
[-]
> I don’t see much of a path for them, at least here in the EU. Even if they do manage to build some sort of trust as far as data safety goes

They are already selling (API) plans, well, them and MS Azure, with higher trust guarantees. And companies are using them.

Yes, if they deploy a datacenter in or close to the EU, it will be a no-brainer (kinda pun intended).

reply
wkat4242
17 days ago
[-]
> I wonder if OpenAI can break into enterprise. I don’t see much of a path for them, at least here in the EU.

Uhh, they're already here, under the name Copilot, which is really just ChatGPT under the hood.

Microsoft launders the missing trust in OpenAI :)

But why do you think Copilot is worse? It's really just the same engine (GPT-4o right now) with some RAG grounding based on your SharePoint documents. Speaking about Copilot for M365 here.

I don't think it's a great service yet, it's still very early and flawed. But so is ChatGPT.

reply
vessenes
20 days ago
[-]
Agreed on the strategy questions. It's interesting to tie back to IBM; my first reaction was that openai has more consumer connectivity than IBM did in the desktop era, but I'm not sure that's true. I guess what is true is that IBM passed over the "IBM Compatible" -> "MS DOS Compatible" business quite quickly in the mid 80s; seemingly overnight we had the death of all minicomputer companies and the rise of PC desktop companies.

I agree that if you're sure you have a commodity product, then you should make sure you're in the driver seat with those that will pay more, and also try and grind less effective players out. (As a strategy assessment, not a moral one).

You could think of Apple under JLG and then being handed back to Jobs as precisely being two perspectives on the answer to "does Apple have a commodity product?" Gassée thought it did, and we had the era of Apple OEMs, system integrators, other boxes running Apple software, and Jobs thought it did not; essentially his first act was to kill those deals.

reply
fudged71
20 days ago
[-]
The new pricing tier suggests they're taking the Jobs approach - betting that their technology integration and reliability will justify premium positioning. But they face more intense commoditization pressure than either IBM or Apple did, given the rapid advancement of open-source models.

The critical question is timing - if they wait too long to establish their enterprise position, they risk being overtaken by commoditization as IBM was. Move too aggressively, and they might prematurely abandon advantages in the broader market, as Apple nearly did under Gassée.

Threading the needle. I don't envy their position here. Especially with Musk in the Trump administration.

reply
Melatonic
20 days ago
[-]
The Apple partnership and iOS integration seems pretty damn big for them - that really corners a huge portion of the consumer market.

Agreed on enterprise - Microsoft would have to roll out policies and integration with their core products at a pace faster than they usually do (Azure AD, for example, still pales in comparison to legacy AD feature-wise; I am continually amazed they do not prioritize this more).

reply
kridsdale1
20 days ago
[-]
They don’t make any money from the Apple deal.
reply
mark_l_watson
20 days ago
[-]
Except I had to sign in to OpenAI when setting up Apple Intelligence. Even though Apple Intelligence is doing almost nothing useful for me right now, at least OpenAI's user numbers go up.

Right now Gemini Pro is best for email, docs, calendar integration.

That said, ChatGPT Plus is a good product and I might spring for Pro for a month or two.

reply
lmm
20 days ago
[-]
Non-paying user numbers are only good when selling, and who could afford to buy OpenAI?
reply
astrange
20 days ago
[-]
You don't have to sign into ChatGPT to use it with Siri.
reply
mark_l_watson
20 days ago
[-]
Did you not sign in, and still get the occasional dialog box asking, "OK to use ChatGPT?"
reply
astrange
20 days ago
[-]
It'll send that anonymously. I think you only need to sign in if you want to continue the conversation on the web.
reply
paul7986
20 days ago
[-]
ChatGPT through Siri/Apple Intelligence is a joke compared to using ChatGPT's iPhone app. Siri is still a dumb one-trick pony after 13 years on the market.

Supposedly Apple won't be able to offer a Siri LLM that acts like ChatGPT's iPhone app until 2026. That gives Apple's current and new competitors a head start. Maybe ChatGPT and Microsoft could release an AI phone. I'd drop Apple quickly if that becomes a reality.

reply
jwpapi
20 days ago
[-]
It's not just open source. It's also Claude, Meta, and Google, of which the latter two have real estate (social media and the browser).
reply
fudged71
20 days ago
[-]
Yes, and Anthropic, Google, and Amazon are also facing commoditization pressure from open source.
reply
jacobsimon
20 days ago
[-]
Well one key difference is that Google and Amazon are cloud operators, they will still benefit from selling the compute that open source models run on.
reply
vessenes
20 days ago
[-]
For sure. If I were in charge of AI for the US, I'd prioritize having a known good and best-in-class LLM available not least for national security reasons; OAI put someone on gov rel about a year ago, beltway insider type, and they have been selling aggressively. Feels like most of the federal procurement is going to want to go to using primes for this stuff, or if OpenAI and Anthropic can sell successfully, fine.

Grok winning the Federal bid is an interesting possible outcome though. I think that, slightly de-Elon-ed, the messaging that it's been trained to be more politically neutral (I realize that this is a large step from how it's messaged) might be a real factor in the next few years in the US. Should be interesting!

fudged71 - you want to predict OpenAI's value and importance in 2029? We'll still both be on HN I'm sure. I'm going to predict it's a dominant player, and I'll go contra-Gwern, and say that it will still be known for best-in-class product-delivered AI, whether or not an Anthropic or other company has best-in-class LLM tech. Basically, I think they'll make it and sustain.

reply
fudged71
20 days ago
[-]
Somehow I missed the Anduril partnership announcement. I agree with you. National Security relationships in particular creates a moat that’s hard to replicate even with superior technology.

It seems possible OpenAI could maintain dominance in government/institutional markets while facing more competition in commercial segments, similar to how defense contractors operate.

reply
vessenes
20 days ago
[-]
Now we just need to find someone who disagrees with us and we can make a long bet.

It feels strange to say, but I think that the product moat looks harder than the LLM moat for the top 5 teams right now. I'm surprised I think that, but I've assessed so many large and medium language models in the last 18 months, and they keep getting better, albeit more slowly, and they keep getting smaller while losing less quality, and tooling keeps getting better on them.

At the same time, all the product infra around using, integrating, safety, API support, enterprise contracts, data security, threat analysis, all that is expensive and hard for startups in a way that spending $50mm with a cloud AI infra company is not hard.

Altman's new head of product is reputed to be excellent as well, so it will be super interesting to see where this all goes.

reply
downrightmike
20 days ago
[-]
IBM was legally compelled to spin that off
reply
Balgair
20 days ago
[-]
One of the main issues that enterprise AI has is the data in large corporations. It's typically a nightmare of fiefdoms and filesystems. I'm sure that a lot of companies would love to use AI more, both internally and commercially. But first they'd have to wrangle their own systems so that OpenAI can ingest the data at all.

Unfortunately, those are 5+ year projects for a lot of F500 companies. And they'll have to burn a lot of political capital to get the internal systems under control. Meaning that the CXO that does get the SQL server up and running and has the clout to do something about non-compliance, that person is going to be hated internally. And then if it's ever finished? That whole team is gonna be let go too. And it'll all just then rot, if not implode.

The AI boom for corporations is really going to let people know who is swimming naked when it comes to internal data orderliness.

Like, you want to be the person that sells shovels in the AI boom here for enterprise? Be the 'Cleaning Lady' for company data and non-compliance. Go in, kick butts, clean it all up, be hated, leave with a fat check.

reply
CabSauce
20 days ago
[-]
You just hit the ChatGPT API for every row of data. Obviously. (Only 70% joking.)
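The 70%-serious version, sketched; call_model() is a hypothetical stand-in for the actual API call:

    rows = [
        {"id": 1, "note": "customer wants to cancel"},
        {"id": 2, "note": "billing address changed"},
    ]

    def call_model(prompt: str) -> str:  # stub
        return f"[classification for: {prompt!r}]"

    # One model call per row: naive, slow, and expensive, but common.
    for row in rows:
        row["label"] = call_model(f"Classify this CRM note: {row['note']}")
        print(row["id"], row["label"])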
reply
lolive
20 days ago
[-]
Guys, ladies, meet Palantir Foundry ! #micDrop
reply
msy
20 days ago
[-]
Glean are already well established in that space.
reply
lolive
19 days ago
[-]
Did not know that stack, thanks. From my perspective as a data architect, I am really focused on the link between the data sources and the data lake, and the proper integration of heterogeneous data into a "single" knowledge graph. For Palantir, it is not very difficult to learn their way of working [their Pipeline Builder feeds a massive Spark cluster, and OntologyManager maintains a sync between Spark and a graph database; their other productivity tools then rely on either one data lake and/or the other]. I wonder how Glean handles the data-lake part of their stack [scalability, refresh rate, etc.].
reply
m3kw9
20 days ago
[-]
ChatGPT's analogy is more like Google: people use Google enough that they ain't gonna switch unless something is a quantum leap better, plus has scale. On the API side things could get commoditized, but it's about more than just having a slightly better LLM in the benchmarks.
reply
freediver
20 days ago
[-]
I would say this differently.

There exists no future where OpenAI both sells models through an API and has its own consumer product. They will have to pick one of these things to bet the company on.

reply
a1j9o94
20 days ago
[-]
That's not necessarily true. There are many companies that sell both end-user products and B2B products. There are a million specific use cases that OpenAI won't build specific products for.

Think Amazon that has both AWS and the retail business. There's a lot of value in providing both.

reply
trod1234
20 days ago
[-]
There is no real future in AI long term.

Its use caustically destroys more than it creates. It is a worthy successor to Pandora's box.

reply
nom
19 days ago
[-]
AI can be used for financial gain, to influence and lie to people, to simulate human connection, to generate infinite content for consumption,... at scale.

It won't go anywhere until _we_ change.

reply
rmbyrro
20 days ago
[-]
In the early days of ChatGPT, I'd get constantly capped, every single day, even on the paid plan. At the time I was sending them messages, begging to charge me $200 to let me use it unlimited.

Finally!..

reply
skeeter2020
19 days ago
[-]
The enterprise surface area that OpenAI seems to be targeting is very small. The cost curve looks similar to classic cloud providers, but gets very steep much faster. We started on their API and then moved out of the OpenAI ecosystem within ~2 years as costs grew fast and we saw equivalent or better performance with much cheaper and/or open-source models, combined with pretty modest hardware. Unless they can pull a bunch of Netflix-style deals, the economics here will not work out.
reply
FooBarWidget
19 days ago
[-]
The "open source nature" this time is different. "Open source" models are not actually open source, in the sense that the community can't contribute to their development. At best they're just proprietary freeware. Thus, the continuity of "open source" models depends purely on how long their sponsors sustain funding. If Meta or Alibaba or Tencent decide tomorrow that they're no longer going to fund this stuff, then we're in real trouble, much more than when Red Hat drops the ball.

I'd say Meta is the most important player here. Pretty much all the "open source" models are built on Llama in one way or another. The only reason Llama exists is that Meta wants to commoditize AI in order to prevent the likes of OpenAI from overtaking them later. If Meta one day no longer believes in this strategy, for whatever reason, then everybody is in serious trouble.

reply
dragonwriter
17 days ago
[-]
> OpenAI is racing against two clocks: the commoditization clock (how quickly open-source alternatives catch up) and the monetization clock (their need to generate substantial revenue to justify their valuation).

Also important to recognize that those clocks aren’t entirely separated. Monetization timeline is shorter if investors perceive that commodification makes future monetization less certain, whereas if investors perceive a strong moat against commodification, new financing without profitable monetization is practical as long as the market perceives a strong enough moat that investment in growth now means a sufficient increase in monetization down the road.

reply
jijji
19 days ago
[-]
Whatever happened to IBM Watson? IBM wishes it had taken off like ChatGPT.
reply
datameta
19 days ago
[-]
Has anyone heard of or seen it used anywhere? I was in-house when it launched to big fanfare from upper management, and the vast majority of the company was tasked to create team projects utilizing Watson.
reply
dtagames
19 days ago
[-]
Watson was a pre-LLM technology, an evolution of IBM's experience with the expert systems which they believed would rule the roost in AI -- until transformers blew all that away.
reply
epigramx
20 days ago
[-]
And the logic catch-up clock: how fast people catch on that we won't have Skynet within 2 years, but rather a glorified Google search for the next 20 years.
reply
ezst
19 days ago
[-]
Am I the only one who's getting annoyed at seeing LLMs marketed as competent search engines? That's not what they were designed for, and they have been repeatedly bad at it.
reply
wkat4242
17 days ago
[-]
Yeah, they're totally not designed for that. I'm also surprised that companies that surely know better market them as such.

Combined with a search engine and AI summarisation, sure, that works well. But barebones, no. You can never be sure whether it's hallucinating or not.

reply
EagnaIonat
19 days ago
[-]
> the commoditization clock (how quickly open-source alternatives catch up)

I believe we are already there, at least for the average person.

Using Ollama I can run different LLMs locally that are good enough for what I want to do. That's on a 32GB M1 laptop. No more having to pay someone to get results.
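For anyone curious, a minimal sketch of what "locally" means here, assuming an Ollama server running on its default port and a model already pulled (e.g. with 'ollama pull llama3'); no data leaves the machine:

    import json
    import urllib.request

    # Call the local Ollama REST endpoint and print the model's reply.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3",
            "prompt": "Summarize why local inference matters for privacy.",
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])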

For development, PyCharm Pro's latest LLM autocomplete is just short of writing everything for me.

I agree with you in relation to the enterprise.

reply
gmerc
20 days ago
[-]
Claude has much better enterprise momentum and sits in AWS with support, while OpenAI is fighting their own supplier / Big Tech investor.
reply
TZubiri
20 days ago
[-]
"whether large organizations will prioritize the kind of integrated, reliable, and "safe" AI solutions"

While safe in terms of output quality control, SaaS is not safe in terms of data control. Meta's Llama is the winner in any scenario where it would be ridiculous to send user data to a third party.

reply
bambax
20 days ago
[-]
Yes, but how can this strategy work, and who would choose ChatGPT at this point, when there are so many alternatives: some better (Anthropic), some just as good but way cheaper (Amazon Nova), and some excellent and open source?
reply
ec109685
20 days ago
[-]
Microsoft is their path into the enterprise. You can use their so-so enterprise support directly or have all the enterprise features you could want via Azure.

They also are still leading in the enterprise space: https://www.linkedin.com/posts/maggax_market-share-of-openai...

reply
anticensor
19 days ago
[-]
They have a third clock: schools and employers that try to forbid its use.
reply
interludead
20 days ago
[-]
AI's utility isn't fully locked into large enterprises
reply
hackernewds
20 days ago
[-]
There really aren't a lot of open-source large language models with that capability. The only game changer so far has been Meta open-sourcing Llama, and that's about it for models of that caliber.
reply
submeta
20 days ago
[-]
I actually pay 166 euros a month for Claude Teams. Five seats, and I only use one, for myself. Why do I pay so much? Because the normal paid version (20 USD a month) interrupts chats after a dozen questions and wants me to wait a few hours until I can use it again. The Teams plan gives me way more questions.

But why do I pay that much? Because Claude, in combination with the Projects feature, gives me superpowers: I can upload two dozen or more files (PDFs, text), give it a context, and then ask questions in that specific context over a period of a week or longer, coming back to it and continuing the inquiry. It feels like having a handful of researchers at my fingertips that I can brainstorm with, and that I can ask to review the documents and come up with answers to my questions. All of this is unbelievably powerful.

I'd be OK with 40 or 50 USD a month for one user, but alas, Claude won't offer it. So I pay 166 euros for five seats and use one, because it saves me a ton of work.

reply
mtlynch
20 days ago
[-]
Kagi Ultimate (US$25/mo) includes unlimited use of all the Anthropic models.

Full disclosure: I participated in Kagi's crowdfund, so I have some financial stake in the company, but I mainly participated because I'm an enthusiastic customer.

reply
__MatrixMan__
20 days ago
[-]
I'm uninformed about this, it may just be superstition, but my feeling while using Kagi in this way is that after using it for a few hours it gets a bit more forgetful. I come back the next day and it's smart again, for a while. It's as if there's some kind of soft throttling going on in the background.

I'm an enthusiastic customer nonetheless, but it is curious.

reply
hellcow
19 days ago
[-]
I noticed this too! It's dramatic in the same chat. I'll come back the next day, and even though I still have the full convo history, it's as if it completely forgot all my earlier instructions.
reply
adr1an
18 days ago
[-]
Makes sense. Keeping the conversation implies that each new message carries the whole history again. You need to create new chats from time to time, or switch to a different model...
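A sketch of the mechanism, in the common chat-completion style; call_model() is a hypothetical stand-in:

    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def call_model(messages) -> str:  # stub for the actual API call
        chars = sum(len(m["content"]) for m in messages)
        return f"[reply; prompt is now {len(messages)} messages, ~{chars} chars]"

    def send(user_msg: str) -> str:
        history.append({"role": "user", "content": user_msg})
        reply = call_model(history)  # the whole history goes up every turn
        history.append({"role": "assistant", "content": reply})
        return reply

    print(send("First question"))
    print(send("Follow-up"))  # a strictly bigger prompt than the first turn

Each turn re-sends everything, so cost grows with the length of the chat until the context window fills up.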
reply
ToDougie
19 days ago
[-]
This is my biggest gripe with these LLMs. I primarily use Claude, and it exhibits the same described behavior. I'll find myself in a flow state and then somewhere around hour 3 it starts to pretend like it isn't capable of completing specific tasks that it had been performing for hours, days, weeks. For instance, I'm working on creating a few LLCs with their requisite social media handles and domain registrations. I _used_ to be able to ask Claude to check all US State LLC registrations, all major TLD domain registrations, and USPTO against particular terms and similar derivations. Then one day it just decided to stop doing this. And it tells me it can't search the web or whatever. Which is bullshit because I was verifying all of this data and ensuring it wasn't hallucinating - which it never was.
reply
wkat4242
17 days ago
[-]
Could it be that you're running out of available context in the thread you're in?
reply
ToDougie
16 days ago
[-]
Doubtful. I started new threads using carbon-copy prompts. I'll research some more to make sure I'm not missing anything, though.
reply
__MatrixMan__
19 days ago
[-]
Did you ever read Accelerando? I think it involved a large number of machine generated LLCs...
reply
ToDougie
16 days ago
[-]
No, but I'll give the wikipedia summary a gander :)
reply
handfuloflight
20 days ago
[-]
Is that within the same chat?
reply
__MatrixMan__
19 days ago
[-]
The flow lately has been transforming test cases to accommodate interface changes, so I'm not asking it to remember something from several hours ago, I'm just asking it to make the "same" transformation from the previous prompt, except now to a different input.

It struggles with cases that exceed 1000 lines or so. Not that it loses track entirely at that size, it just starts making dumb mistakes.

Then after about 2 or 3 hours, the size at which it starts to struggle drops to maybe 500. A new chat doesn't seem to help, but who can say, it's a difficult thing to quantify. After 12 hours, both me and the AI are feeling fresh again. Or maybe it's just me, idk.

And if you're about to suggest that the real problem here is that there's so much tedious filler in these test cases that even an AI gets bored with them... Yes, yes it is.

reply
gandalfgreybeer
20 days ago
[-]
> Kagi Ultimate (US$25/mo) includes unlimited use of all the Anthropic models.

What am I losing here if I switch over to this from my current Claude subscription?

reply
dSebastien
19 days ago
[-]
You'll also lose the opportunity to use the MCP integration of Claude Desktop. It's still early on but this has huge potential
reply
throwup238
20 days ago
[-]
Claude projects mostly. Kagi’s assistant AI is a basic chat bot interface.
reply
hackernewds
20 days ago
[-]
But why would Claude offer this cheaper through a third party?
reply
throwup238
20 days ago
[-]
It probably isn’t cheaper for Kagi per token but I assume most people don’t use up as much as they can, like with most other subscriptions.

I.e. I’ve been an Ultimate subscriber since they launched the plan and I rarely use the assistant feature because I’ve got a subscription to ChatGPT and Claude. I only use it when I want to query Llama, Gemini, or Mistral models which I don’t want to subscribe to or create API keys for.

reply
mordae
20 days ago
[-]
Thanks for sponsoring my extensive use of Claude via Kagi.
reply
baobabKoodaa
19 days ago
[-]
Thanks for the tip! Now I'm a Kagi user too.
reply
antihero
17 days ago
[-]
How would you rate Kagi Ultimate vs. Arc Search? I.e., is it scraping relevant websites live and summarising them, or is it just access to ChatGPT and other models (with their old data)?

At some point I'm going to subscribe to Kagi again (once I have a job), so I'd be interested to see how it rates.

reply
mtlynch
17 days ago
[-]
I've never tried Arc search, so I couldn't say.

I think it's all the LLMs + some Kagi-specific intelligence on top because you can flip web search on and off for all the chats.

reply
rumblefrog
20 days ago
[-]
I presume no access to Anthropic project?
reply
ryandvm
20 days ago
[-]
I bet you never get tired of being told LLMs are just statistical computational curiosities.
reply
ganzuul
20 days ago
[-]
There are people like that. We don't know what's up with them.
reply
fragmede
20 days ago
[-]
It's pretty easy to explain. You see, they're unable to produce a response that isn't in their training data. They're stochastic parrots.
reply
SkyBelow
19 days ago
[-]
They extract concepts from their training data and can combine concepts to produce output that isn't part of their training set, but they do require those concepts to be in their training data. So you can ask them to make a picture of your favorite character fighting mecha on an alien planet and it will produce a new image, as long as your favorite character is in their training set. But the extent it imagines an alien planet or what counts as mecha is limited by the input it is trained on, which is where a human artist can provide much more creativity.

You can also expand it by adding in more concepts to better specify things. For example you can specify the mecha look like alphabet characters while the alien planet expresses the randomness of prime numbers and that might influence the AI to produce a more unique image as you are now getting into really weird combinations of concepts (and combinations that might actually make no sense if you think too much about them), but you also greatly increase the chance of getting trash output as the AI can no longer map the feature space back to an image that mirrors anything like what a human would interpret as having a similar feature space.

reply
xpe
18 days ago
[-]
The paper that coined the term "stochastic parrots" would not agree with the claim that LLMs are "unable to produce a response that isn't in their training data". And the research has advanced a _long_ way since then.

[1]: Bender, Emily M., et al. "On the dangers of stochastic parrots: Can language models be too big?." Proceedings of the 2021 ACM conference on fairness, accountability, and transparency. 2021.

reply
fragmede
18 days ago
[-]
reply
xpe
18 days ago
[-]
/facepalm. Woosh indeed. Can I blame pronoun confusion? (Not to mention this misunderstanding kicked off a farcically unproductive ensuing discussion.)
reply
fragmede
18 days ago
[-]
It's just further evidence that we're also stochastic parrots :)
reply
ganzuul
18 days ago
[-]
That is why we invented God.
reply
xpe
15 days ago
[-]
Woosh.
reply
xpe
19 days ago
[-]
Please clarify what you mean. On what basis do you say this?

Unless I’m misunderstanding, I disagree. If you reply, I’ll bet I can convince you.

reply
sigh_again
19 days ago
[-]
Unless you have full access to the entirety of their training data, you can try to convince all you want, but you're just grasping at straws.

LLMs are stochastic parrots incapable of thought or reasoning. Even their chains of thoughts are part of the training data.

reply
xpe
18 days ago
[-]
When combined with intellectual honesty and curiosity, the best LLMs can be powerful tools for checking argumentation. (I personally recommend Claude 3.5 Sonnet.) I pasted in the conversation history and here is what it said:

> Their position is falsifiable through simple examples: LLMs can perform arithmetic on numbers that weren't in training data, compose responses about current events post-training, and generate novel combinations of ideas.

Spot on. It would take a lot of editing for me to speak as concisely and accurately!

reply
555watch
18 days ago
[-]
Your use of the word stochastic here negates what you are saying.

Stochastic generative models can generate new and correct data if the distribution is right. It's in the definition.

reply
xpe
18 days ago
[-]
> you can try to convince all you want, but you're just grasping at straws.

After coming back to this to see how the conversation has evolved (it hasn't), I offer this guess: the problem isn't at the object level (i.e. what ML research has to say on this) nor my willingness to engage. A key factor seems to be a lack of interest on the other end of the conversation.

reply
xpe
19 days ago
[-]
Most importantly, I'm happy to learn and/or be shown to be mistaken.

Based on my study (not at the Ph.D. level but still quite intensive), I am confident the comment above is both wrong and poorly framed. Why? Phrases like "incapable of thought" and "stochastic parrots" are red flags to me. In my experience, people who study LLM systems are wary of using such brash phrases. They tend to move the conversation away from understanding towards combativeness and/or confusion.

Being this direct might sound brusque and/or unpersuasive. My top concern at this point, not knowing you, is that you might not prioritize learning and careful discussion. If you want to continue discussing, here is what I suggest:

First, are you familiar with the double-crux technique? If not, the CFAR page is a good start.

Second, please share three papers (or high-quality writing from experts): one that supports your claim, one that opposes it, and one that attempts to synthesize.

Third, perhaps we can find a better forum.

reply
xpe
18 days ago
[-]
I'll try again... Can you (or anyone) define "thought" in way that is helpful?

Some other intelligent social animals have slightly different brains, and it seems very likely they "think" as well. Do we want to define "thinking" in some relative manner?

Say you pick a definition requiring an isomorphism to thoughts as generated by a human brain. Then, by definition, you can't have thoughts unless you prove the isomorphism. How are you going to do that? Inspection? In theory, some suitable emulation of a brain is needed. You might get close with whole-brain emulation. But how do you know when your emulation is good enough? What level of detail is sufficient?

What kinds of definitions of "thought" remain?

Perhaps something related to consciousness? Where is this kind of definition going to get us? Talking about consciousness is hard.

Anil Seth (and others) talks about consciousness better than most, for what it is worth -- he does it by getting more detailed and specific. See also: integrated information theory.

By writing at some length, I hope to show that using loose sketches of concepts using words such as "thoughts" or "thinking" doesn't advance a substantive conversation. More depth is needed.

Meta: To advance the conversation, it takes time to elaborate and engage. It isn't easy. An easier way out is pressing the down triangle, but that is too often meager and fleeting protection for a brittle ego and/or a fixated level of understanding.

reply
ImHereToVote
20 days ago
[-]
Can you?
reply
fragmede
20 days ago
[-]
Sometimes, I get this absolute stroke of brilliance for this idea of a thing I want to make and it's gonna make me super rich, and then I go on Google, and find out that there's already been a Kickstarter for it and it's been successful, and it's now a product I can just buy.

So apparently not.

reply
senordevnyc
19 days ago
[-]
I feel like everyone missed your joke :)
reply
fragmede
19 days ago
[-]
at least you did!
reply
sigh_again
19 days ago
[-]
No, but then again you're not paying me $20 per month while I pretend I have absolute knowledge.

You can, however, get the same human experience by contracting a consulting company that will bill you $20,000 per month and lie to you about having absolute knowledge.

reply
e1g
20 days ago
[-]
Unironically, thank you for sharing this strategy. I get throttled a lot, and I'm happy to pay to remove those frustrating limits.
reply
arcastroe
20 days ago
[-]
Sounds like you two could split the cost of the family plan-- ahem the team plan.
reply
hackernewds
20 days ago
[-]
and share private questions with each other
reply
bravetraveler
20 days ago
[-]
Training with Transparency
reply
ipsum2
20 days ago
[-]
Pay-as-you-go using the Anthropic API and an open-source UI frontend like LibreChat would be a lot cheaper, I suspect.
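
A rough sketch of the math (the rates are assumptions based on Anthropic's published Claude 3.5 Sonnet pricing and will drift over time):

  # Hypothetical monthly pay-as-you-go cost at assumed rates:
  # $3 per 1M input tokens, $15 per 1M output tokens
  in_tok, out_tok = 2_000_000, 500_000           # a fairly heavy month
  cost = in_tok / 1e6 * 3 + out_tok / 1e6 * 15
  print(f"${cost:.2f}/month")                    # $13.50, vs. a $20 subscription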
reply
handfuloflight
20 days ago
[-]
Depends on how much context he loads up into the chat. The web version is quite generous when compared to the API, from my estimations.
reply
esafak
20 days ago
[-]
You.com (search engine and LLM aggregator) has a team plan for $25/month.

https://you.com/plans

reply
carbine
19 days ago
[-]
I have ChatGPT ($20/month tier) and Claude and I absolutely see this use case. Claude is great but I love long threads where I can have it help me with a series of related problems over the course of a day. I'm rarely doing a one-shot. Hitting the limits is super frustrating.

So I understand the unlimited use case and honestly am considering shelling out for the o1 unlimited tier, if o1 is useful enough.

A theoretical app subscription for $200/month feels expensive. Having the equivalent of a smart employee working beside me all day for $200/month feels like a deal.

reply
CosmicShadow
20 days ago
[-]
Yep, I have 2 accounts I use because I kept hitting limits. I was going to do the Teams to get the 5x window, but I got instantly banned when clicking the teams button on a new account, so I ended up sticking with 2 separate accounts. It's a bit of a pain, but I'm used to it. My other account has since been unbanned, but I haven't needed it lately as I finished most of my coding.
reply
archon810
20 days ago
[-]
Have you tried NotebookLM for something like this?
reply
bn-l
20 days ago
[-]
Isn’t that only Google’s garbage models?
reply
ZYbCRq22HbJ2y7
20 days ago
[-]
What's garbage about it?
reply
bn-l
20 days ago
[-]
1. Hallucinates more than any other model (Gemini Flash/Pro 1, 1.5, 1121).

2. Useless with large context. Ignores, forgets, etc.

3. Terrible code and code understanding.

Also, this is me hoping it would be good and looking at it with rose-tinted glasses, because I could use cloud credits to run it and save money.

reply
ablation
20 days ago
[-]
NotebookLM is designed for a distinct use case compared to using Gemini's models in a general chat-style interface. It's specifically geared towards research and operates primarily as a RAG system for documents you upload.

I’ve used it extensively to cross-reference and analyse academic papers, and the performance has been excellent so far. While this is just my personal experience (YMMV), it’s far more reliable and focused than Gemini when it comes to this specific use case. I've rarely experienced a hallucination with it. But perhaps that's the way I'm using it.

reply
wasabi991011
17 days ago
[-]
Can you detail how you use NotebookLM for academic papers?

I've looked into it, but as usual with LLMs I feel like I'm not getting much out of it, due to a lack of imagination when it comes to prompting.

reply
jakubtomanik
19 days ago
[-]
Have you tried LibreChat (https://www.librechat.ai/), just using it with your own API keys? You pay for what you use, and can switch between all major model providers.
reply
archerx
19 days ago
[-]
Why not use the API? You can ask as many questions as you can pay for.
reply
jjfoooo4
20 days ago
[-]
I haven’t implemented this yet, but I’m planning on falling back to other Claude models when hitting API limits; IIUC they rate-limit per model.
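
Something like this, as a minimal sketch (assuming the anthropic Python SDK; the model names are illustrative):

  import anthropic

  client = anthropic.Anthropic()
  MODELS = ["claude-3-5-sonnet-latest", "claude-3-5-haiku-latest",
            "claude-3-haiku-20240307"]

  def ask(prompt: str) -> str:
      for model in MODELS:
          try:
              msg = client.messages.create(
                  model=model,
                  max_tokens=1024,
                  messages=[{"role": "user", "content": prompt}],
              )
              return msg.content[0].text
          except anthropic.RateLimitError:
              continue  # limits are per model, so try the next one
      raise RuntimeError("all models rate limited")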
reply
fragmede
20 days ago
[-]
Do you not have any friends to share that with? Or share a family cell phone plan or Netflix with?
reply
strunz
20 days ago
[-]
They're probably an adult, so I would guess not.
reply
umeshunni
20 days ago
[-]
Out of curiosity, why don't you use NotebookLM for the same functionality?
reply
klntsky
20 days ago
[-]
Are the limits applied to the org or to each individual user?
reply
submeta
20 days ago
[-]
Individual users
reply
dbbk
16 days ago
[-]
And how often is it wrong?
reply
js212
19 days ago
[-]
Try typingmind.com with the API
reply
interludead
20 days ago
[-]
A great middle ground
reply
pentagrama
20 days ago
[-]
The argument of more compute power for this plan can be true, but this is also a pricing tactic known as the decoy effect or anchoring. Here's how it works:

1. A company introduces a high-priced option (the "decoy"), often not intended to be the best value for most customers.

2. This premium option makes the other plans seem like better deals in comparison, nudging customers toward the one the company actually wants to sell.

In this case, for ChatGPT, it is:

Option A: Basic Plan - Free

Option B: Plus Plan - $20/month

Option C: Pro Plan - $200/month

Even if the company has no intention of selling the Pro Plan, its presence makes the Plus Plan seem more reasonably priced and valuable.

While not inherently unethical, the decoy effect can be seen as manipulative if it exploits customers’ biases or lacks transparency about the true value of each plan.

reply
TeMPOraL
20 days ago
[-]
Of course this breaks down once you have a competitor like Anthropic, serving similarly-priced Plan A and B for their equivalently powerful models; adding a more expensive decoy plan C doesn't help OpenAI when their plan B pricing is primarily compared against Anthropic's plan B.
reply
thomassmith65
20 days ago
[-]
Leadership at this crop of tech companies is more like followership. Whether it's 'no politics', or sudden layoffs, or 'founder mode', or 'work from home'... one CEO has an idea and three dozen other CEOs unthinkingly adopt it.

Several comments in this thread have used Anthropic's lower pricing as a criticism, but it's probably moot: a month from now Anthropic will release its own $200 model.

reply
adamtaylor_13
20 days ago
[-]
Except Anthropic actually has the ability to deliver $200/month in value whereas OpenAI lost the script a long time ago.

Not a single one of OpenAI’s models can compete with the Claude series, it’s embarrassing.

reply
diggan
20 days ago
[-]
> Not a single one of OpenAI’s models can compete with the Claude series, it’s embarrassing.

Do you happen to have comparisons available for o1-pro or even o1 (non-preview) that you could share, since you seem to have tried them all?

reply
tnias23
19 days ago
[-]
Even o1?
reply
wrsh07
20 days ago
[-]
As Nvidia's CEO likes to say, the price is set by the second best.

From an API standpoint, it seems like enterprises are currently split between Anthropic and ChatGPT, and most are willing to use substitutes. For the consumer, ChatGPT is the clear favorite (better branding, better iPhone app).

reply
willy_k
20 days ago
[-]
It might not affect whether people decide to use ChatGPT over Claude, but it could get more people to upgrade from their free plan.
reply
gist
20 days ago
[-]
An example of this is something I learned from a former employee who went to work for Encyclopedia Britannica 'back in the day'. I actually invited the former employee to come back to our office so I could understand and learn exactly what he had been taught (noting of course this was back before the internet, when info like that was not as available...).

So they charged something like $450 for shipping the books (as I recall from what he told me; I could be off, but it seemed high at the time).

So the salesman is taught to start off the sales pitch with a set of encyclopedias costing, at the time, let's say $40,000: some 'gold plated version'.

The potential buyer laughs, and the salesman then says 'plus $450 for shipping!!!'.

They then move on to the more reasonable versions costing, let's say, $1,000 or whatever.

As a result of that first high-priced example (in addition to the positioning you are talking about), the customer is set up to accept the shipping charge (which was relatively high).

reply
omega3
20 days ago
[-]
This is called price anchoring.
reply
josters
20 days ago
[-]
This is also known as the Door-in-the-face technique[1] in social psychology.

[1]: https://en.m.wikipedia.org/wiki/Door-in-the-face_technique

reply
kortilla
20 days ago
[-]
That’s a really basic sales technique much older than the 1975 study. I wonder if it went under a different name or this was a case of studying and then publishing something that was already well-known outside of academia.
reply
sethd
19 days ago
[-]
Wouldn’t this be an example of anchoring?

https://en.wikipedia.org/wiki/Anchoring_effect

reply
halJordan
18 days ago
[-]
Believe it or not, it can be multiple things at once
reply
riazrizvi
20 days ago
[-]
I use GPT-4 because 4o is inferior. I keep trying 4o but it consistently underperforms. GPT-4 is not working as hard anymore compared to a few months ago. If this release said it allows GPT-4 more processing time to find more answers and filter them, I’d then see transparency of service and happily pay the money. As it is I’ll still give it a try and figure it out, but I’d like to live in a world where companies can be honest about their missteps. As it is I have to live in this constructed reality that makes sense to me given the evidence despite what people claim. Am I fooling/gaslighting myself?? Who knows?
reply
blharr
20 days ago
[-]
Glad I'm not the only one. I see 4o as a lot more of a sidegrade. At this point I mix them up and I legitimately can't tell, sometimes I get bad responses from 4, sometimes 4o.

Responses from gpt-4 sound more like AI, but I haven't had seemingly as many issues as with 4o.

Also, the feature of 4o where it just spits out a ton of information, or rewrites the entire code, is frustrating.

reply
mordae
20 days ago
[-]
GPT-4o just fails to follow instructions and starts looping for me. Sonnet 3.5 never does.
reply
riazrizvi
19 days ago
[-]
Yes the looping. They should make and sell a squishy mascot you could order, something in the style of Clippy, so that when it loops, I could pluck it off my monitor and punch it in the face.
reply
m3kw9
20 days ago
[-]
But you are not getting nothing; there is actual value if you are able to use that much and are consistently hitting limits on the $20 plan.
reply
Someone1234
20 days ago
[-]
Why doesn't Pro include longer context windows?

I'm a Plus member, and the biggest limitation I am running into by far is the maximum length of the context window. I'm having context fall out of scope throughout the conversation, or not being able to give it a large document that I can then interrogate.

So if I go from paying $20/month for 32,000 tokens, to $200/month for Pro, I expect something more akin to Enterprise's 128,000 tokens or MORE. But they don't even discuss the context window AT ALL.

For anyone else out there looking to build a competitor, I STRONGLY recommend you consider the context window as a major differentiator. Let me give you an example of a use case which ChatGPT simply cannot handle very well today: dump an XML file into it, then ask it questions about that file. You can attach files to ChatGPT, but it is basically pointless because it isn't able to view the entire file at once due to, again, limited context windows.

reply
carbocation
20 days ago
[-]
Pro does have longer context windows, specifically 128k. Take a look at the pricing page for this information: https://openai.com/chatgpt/pricing/
reply
nstj
20 days ago
[-]
Thanks for this. I’m surprised they haven’t made this more obvious in their release and other documentation
reply
mattwallace
19 days ago
[-]
o1 pro failed to accept a 121,903-token input in the chat (Claude took it just fine).
reply
carbocation
19 days ago
[-]
Seems like something that would be worth pinging OpenAI about because it's a pretty important claim that they are making on their pricing page! Unless it's a matter of counting tokens differently.
reply
katamari-damacy
19 days ago
[-]
The ChatGPT and GPT-4o APIs have a 128K window as well. The 32K is from the days of GPT-4.
reply
carbocation
19 days ago
[-]
According to the pricing page, 32K context is for Plus users and 128K context is for Pro users. Not disagreeing with you, just adding context for readers that while you are explaining that the 4o API has 128K window, the 4o ChatGPT agent appears to have varying context depending on account type.
reply
thomasahle
20 days ago
[-]
It's disappointing because the o1-preview had 128k context length. At least on the API. So they nerfed it and made the original product $200/month.
reply
dudus
20 days ago
[-]
The longer the context the more backtracking it needs to do. It gets exponentially more expensive. You can increase it a little, but not enough to solve the problem.

Instead you need to chunk your data and store it in a vector database so you can do semantic search and include only the bits that are most relevant in the context.
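
Something like this, as a minimal sketch (embed() stands in for whatever embedding API you use; the chunk size is arbitrary):

  import numpy as np

  def chunk(text: str, size: int = 1000) -> list[str]:
      return [text[i:i + size] for i in range(0, len(text), size)]

  def top_k(query_vec, chunk_vecs, k=5):
      # cosine similarity between the query and every chunk
      sims = chunk_vecs @ query_vec / (
          np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec))
      return np.argsort(sims)[::-1][:k]  # indices of the most relevant chunks

  # chunks = chunk(document)
  # chunk_vecs = np.array([embed(c) for c in chunks])
  # context = "\n".join(chunks[i] for i in top_k(embed(question), chunk_vecs))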

The LLM is a cool tool. You need to build around it. OpenAI should start shipping these other components so people can build their own solutions, and make its money selling shovels.

Instead they want end users to pay them to use the LLM without any custom tooling around it. I don't think that's a winning strategy.

reply
gcr
20 days ago
[-]
This isn't true.

Transformer architectures generally take quadratic time wrt sequence length, not exponential. Architectural innovations like flash attention also mitigate this somewhat.

Backtracking isn't involved, transformers are feedforward.

Google advertises support for 128k tokens, with 2M-token sequences available to folks who pay the big bucks: https://blog.google/technology/ai/google-gemini-next-generat...
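
A toy illustration of quadratic (not exponential) growth, with a made-up model width:

  d_model = 4096
  for n in (1_000, 10_000, 100_000):
      flops = 2 * n * n * d_model  # QK^T scores alone, for one layer
      print(f"n={n:>7,}: ~{flops:.1e} FLOPs")
  # 10x more context -> ~100x more attention compute, not 2^n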

reply
dartos
20 days ago
[-]
During inference time, yes, but training time does scale exponentially as backpropagation still has to happen.

You can’t use fancy flash attention tricks either.

reply
thunderbird120
20 days ago
[-]
No, additional context does not cause exponential slowdowns and you absolutely can use FlashAttention tricks during training, I'm doing it right now. Transformers are not RNNs, they are not unrolled across timesteps, the backpropagation path for a 1,000,000 context LLM is not any longer than a 100 context LLM of the same size. The only thing which is larger is the self attention calculation which is quadratic wrt compute and linear wrt memory if you use FlashAttention or similar fused self attention calculations. These calculations can be further parallelized using tricks like ring attention to distribute very large attention calculations over many nodes. This is how google trained their 10M context version of Gemini.
reply
upghost
20 days ago
[-]
So why are the context windows so "small", then? It would seem that if the cost was not so great, then having a larger context window would give an advantage over the competition.
reply
thunderbird120
20 days ago
[-]
The cost for both training and inference is vaguely quadratic while, for the vast majority of users, the marginal utility of additional context is sharply diminishing. For 99% of ChatGPT users, something like 8192 tokens, or about 20 pages of context, would be plenty. Companies have to balance the cost of training and serving models. Google did train an uber-long-context version of Gemini, but since Gemini itself was fundamentally not better than GPT-4 or Claude, this didn't really matter much; so few people actually benefited from such a niche advantage that it didn't really shift the playing field in their favor.
reply
Der_Einzige
20 days ago
[-]
Marginal utility only drops because effective context is really bad, i.e. most models still vastly prefer the first things they see and those "needle in a haystack" tests are misleading in that they convince people that LLMs do a good job of handling their whole context when they just don't.

If we have the effective context window equal to the claimed context window, well, I'd start worrying a bit about most of the risks that AI doomers talk about...

reply
PollardsRho
20 days ago
[-]
There has been a huge increase in context windows recently.

I think the larger problem is "effective context" and training data.

Being technically able to use a large context window doesn't mean a model can actually remember or attend to that larger context well. In my experience, the kinds of synthetic "needle in haystack" tasks that AI companies use to show how large of a context their model can handle don't translate very well to more complicated use cases.

You can create data with large context for training by synthetically adding in random stuff, but there's not a ton of organic training data where something meaningfully depends on something 100,000 tokens back.

Also, even if it's not scaling exponentially, it's still scaling: at what point is RAG going to be more effective than just having a large context?

reply
upghost
20 days ago
[-]
Great point about the meaningful datasets, this makes perfect sense. Esp. in regards to SFT and RLHF. Although I suppose it would be somewhat easier to do pretraining on really long context (books, I assume?)
reply
terafo
20 days ago
[-]
Because you have to do inference distributed between multiple nodes at that point. For prefill, because prefill is actually quadratic, but also for memory reasons. KV cache for 405B at 10M context length would take more than 5 terabytes (at bf16). That's 36 H200s just for the KV cache, but you would need roughly 48 GPUs to serve the bf16 version of the model. Generation speed at that setup would be roughly 30 tokens per second, or 100K tokens per hour, and you can serve only a single user, because batching doesn't make sense at these kinds of context lengths. If you pay 3 dollars per hour per GPU, that's a cost of $1,440 per million tokens. For the fp8 version the numbers are a bit better: you need only 24 GPUs and generation speed stays roughly the same, so it's only 700 dollars per million tokens. There are architectural modifications that will bring that down significantly, but nonetheless it's still really, really expensive, and also quite hard to get to work.
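
As a sketch of that arithmetic (the layer/head counts are assumptions for a Llama-3.1-405B-style GQA config):

  n_layers, n_kv_heads, head_dim = 126, 8, 128  # assumed GQA config
  bytes_per_elem = 2                            # bf16
  ctx = 10_000_000                              # 10M-token context
  kv = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * ctx  # K and V
  print(kv / 1e12, "TB")                        # ~5.2 TB, as estimated above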
reply
danpalmer
20 days ago
[-]
Another factor in context window is effective recall. If the model can't actually use a fact 1m tokens earlier, accurately and precisely, then there's no benefit and it's harmful to the user experience to allow the use of a poorly functioning feature. Part of what Google have done with Gemini's 1-2m token context window is demonstrate that the model will actually recall and use that data. Disclosure, I do work at Google but not on this, I don't have any inside info on the model.
reply
monkmartinez
20 days ago
[-]
Memory. I don't know the equation, but it's very easy to see when you load a 128K context model at 8K vs 80K. The quant I am running would double VRAM requirements when loading 80K.
reply
thomasfromcdnjs
20 days ago
[-]
This was my understanding too. Would love more people to chime in on the limits and costs of larger contexts.
reply
menaerus
20 days ago
[-]
> The only thing which is larger is the self attention calculation which is quadratic wrt compute and linear wrt memory if you use FlashAttention or similar fused self attention calculations.

FFWD input is self-attention output. And since the output of self-attention layer is [context, d_model], FFWD layer input will grow as well. Consequently, FFWD layer compute cost will grow as well, no?

The cost of FFWD layer according to my calculations is ~(4+2 * true(w3)) * d_model * dff * n_layers * context_size so the FFWD cost grows linearly wrt the context size.

So, unless I misunderstood the transformer architecture, the larger the context, the larger the compute of both self-attention and FFWD, no?
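
A quick numeric check of that point (the dims are illustrative, roughly 70B-class; the 6 matches (4+2) with w3 present):

  d_model, d_ff, n_layers = 8192, 28672, 80
  for ctx in (8_192, 131_072):
      ffwd = 6 * d_model * d_ff * n_layers * ctx  # linear in context
      attn = 2 * ctx * ctx * d_model * n_layers   # quadratic in context
      print(f"ctx={ctx:>7,}: attn/ffwd = {attn / ffwd:.3f}")
  # Both grow with context; with these dims attention only overtakes
  # the FFWD cost somewhere around ~86K tokens.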

reply
Kubuxu
19 days ago
[-]
FFWD layer is independent of context size; each processed token passes through the same weights.
reply
menaerus
19 days ago
[-]
So you're saying that if I have a sentence of 10 words, and I want the LLM to predict the 11th word, FFWD compute is going to be independent of the context size?

I don't understand how, since that very context is what makes the output of the next prediction worthy, or not?

More specifically, FFWD layer is essentially self attention output [context, d_model] matrix matmul'd with W1, W2 and W3 weights?

reply
dartos
19 days ago
[-]
I may be missing something, but I thought that each context token would result in 3 additional parameters per context token for self-attention to build its map, since each attention must calculate a value considering all existing context
reply
hyperbovine
20 days ago
[-]
I’m confused. Backprop scales linearly w
reply
solarkraft
20 days ago
[-]
> you need to chunk your data and store it in a vector database so you can do semantic search and include only the bits that are most relevant in the context

Be aware that this tends to give bad results. Once RAG is involved, you essentially only do slightly better than a traditional search; a lot of nuance gets lost.

reply
rahimnathwani
20 days ago
[-]
This depends on the amount of context you provide, and the quality of your retrieval step.
reply
tom1337
20 days ago
[-]
> Instead you need to chunk your data and store it in a vector database so you can do semantic search and include only the bits that are most relevant in the context.

Isn't that kind of what Anthropic is offering with projects? Where you can upload information and PDF files and stuff which are then always available in the chat?

reply
cma
20 days ago
[-]
They put the whole project in the context; it works much better than RAG when it fits. 200K context for their pro plan, and 500K for enterprise.
reply
hackernewds
20 days ago
[-]
I don't know whether you're using "exponential" in the loose, general-English sense of the word, but it does not get exponentially more expensive.
reply
Melatonic
20 days ago
[-]
Seems like a good candidate for a "dumb" AI you can run locally to grab data you need and filter it down before giving to OpenAI
reply
danpalmer
20 days ago
[-]
Because they can't do long context windows. That's the only explanation. What you can do with a 1m token context window is quite a substantial improvement, particularly as you said for enterprise usage.
reply
KTibow
20 days ago
[-]
In my experience OpenAI models perform worse on long contexts than Anthropic/Google's, even when using the cheaper ones.
reply
kranke155
19 days ago
[-]
Claude is clearly the superior product, I'd say.

The only reason I open ChatGPT now is that Claude will refuse to answer questions on a variety of topics, including, for example, medication side effects.

reply
visarga
20 days ago
[-]
When I tested o1 a few hours ago, it seemed like it was losing context. After I asked it to use a specific writing style and pasted a large reference text, it forgot my request. I reminded it, and it kept the rule for a few more messages, then after another long paste it forgot again.
reply
j45
20 days ago
[-]
If a $200/month pro level is successful it could open the door to a $2000/month segment, and the $20,000/month segment will appear and the segregation of getting ahead with AI will begin.
reply
johnisgood
20 days ago
[-]
Agreed. Where may I read about how to set up an LLM similar to Claude, with at least Claude's context window, and what are the hardware requirements? I have found Claude incredibly useful.
reply
j45
20 days ago
[-]
Looking into running models locally, a 405B-parameter model sounds like the place to start.

Once you understand it, you could practice with a privately hosted LLM (run your own model, billed per hour) to tweak and get it dialled in, and then make the leap.

reply
wkat4242
17 days ago
[-]
And now you can get the 405B quality in a 70B, according to Meta. Costs come down massively with that. I wonder if it's really as good as they say, though.
reply
m3kw9
20 days ago
[-]
Full-blown agents, but they have to really be able to replace a semi-competent human; that's harder than it sounds, especially for edge cases that a human can easily get past.
reply
j45
20 days ago
[-]
Agents still need a fair bit of human input and design and tweaking.
reply
dr_kiszonka
20 days ago
[-]
This is a significant concern for me too.
reply
j45
20 days ago
[-]
It's important to become early users of everything while AI is heavily subsidized.

Over time, using open source models as well will get more done per dollar of compute, and hopefully the gap will remain close.

reply
fragmede
20 days ago
[-]
Question is if OpenAI is actually making money at $200/month.
reply
vbezhenar
20 days ago
[-]
With o1-preview and the $20 subscription, my queries were typically answered in 10-20 seconds. I've tried the $200 subscription with some queries and got 5-10 minute answer times. Unless the load has substantially increased and I was just waiting in a queue for computing resources, I'd assume that they throw a lot more hardware at o1-pro. So it's entirely possible that $200/month is still at a loss.
reply
j45
20 days ago
[-]
For funded startups, losing less can be a form of runway and capacity especially at the numbers they are spending.
reply
itissid
20 days ago
[-]
I've been concatenating my source code of ~3,300 lines and 123,979 bytes (roughly 30K tokens by the ~4 bytes/token rule of thumb, so well under a 128K context window) into the chat to get better answers. Uploading files is hopeless in the web interface.
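
A quick way to sanity-check whether a paste fits, as a sketch (o200k_base is the 4o/o1-family encoding in tiktoken; the file path is just an example):

  import tiktoken

  enc = tiktoken.get_encoding("o200k_base")   # 4o/o1-family tokenizer
  with open("concatenated_source.txt") as f:  # example path
      n_tokens = len(enc.encode(f.read()))
  print(n_tokens)  # ~124 KB of code usually lands well under 128K tokens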
reply
fragmede
20 days ago
[-]
why not use aider/similar and upload via API?
reply
frakt0x90
20 days ago
[-]
Have you considered RAG instead of using the entire document? It's more complex but would at least allow you to query the document with your API of choice.
reply
mark_l_watson
20 days ago
[-]
Switch to Gemini Pro just when you need huge context size. That is what I do.
reply
8n4vidtmkvmk
20 days ago
[-]
Just? You don't think the model is as capable when the context does fit?
reply
mark_l_watson
20 days ago
[-]
I tend to use OpenAI, Gemini, and Claude. All are excellent, but when I am not happy with results I hit all three.
reply
domysee
20 days ago
[-]
When talking about context windows I'm surprised no one mentions https://poe.com/. Switched over from ChatGPT about a year ago, and it's amazing. Can use all models and the full context window of them, for the same price as a ChatGPT subscription.
reply
EVa5I7bHFq9mnYK
20 days ago
[-]
Poe.com goes straight to login page, doesn't want to divulge ANY information to me before I sign up. No About Us or Product description or Pricing - nothing. Strange behavior. But seeing it more and more with modern web sites.
reply
bobnamob
20 days ago
[-]
I wouldn’t bother with Poe, poe2 early access costs $30 and starts on the 6th
reply
rtwld
19 days ago
[-]
I think you’re confusing it with Path of Exile 2? That’s the same mistake ChatGPT made…
reply
DrammBA
19 days ago
[-]
I think the confusion was intentional in an attempt to make a funny :)
reply
nilsherzig
20 days ago
[-]
You can take a look at OpenRouter, also a pay-as-you-go frontend (or API "proxy") for just about every model API in existence.
reply
WhitneyLand
20 days ago
[-]
What don’t you like about Claude? I believe the context is larger.

Coincidentally I’ve been using it with xml files recently (iOS storyboard files), and it seems to do pretty well manipulating and refactoring elements as I interact with it.

reply
rmbyrro
20 days ago
[-]
Google models have huge contexts, but are terrible...
reply
bn-l
20 days ago
[-]
Agreed. The new 1121 is better but still garbage relatively.
reply
A_D_E_P_T
20 days ago
[-]
I just bought a pro subscription.

First impressions: The new o1-Pro model is an insanely good writer. Aside from favoring the long em-dash (—) which isn't on most keyboards, it has none of the quirks and tells of old GPT-4/4o/o1. It managed to totally fool every "AI writing detector" I ran it through.

It can handle unusually long prompts.

It appears to be very good at complex data analysis. I need to put it through its paces a bit more, though.

reply
Mordisquitos
20 days ago
[-]
> Aside from favoring the long em-dash (—) which isn't on most keyboards

Interesting! I intentionally edit my keyboard layout to include the em-dash, as I enjoy using it out of sheer pomposity—I should undoubtedly delve into the extent to which my own comments have been used to train GPT models!

reply
gen220
20 days ago
[-]
On my keyboard (en-us) it's ALT+"-" to get an em-dash.

I use it all the time because it's the "correct" one to use, but it's often more "correct" to just rewrite the sentence in a way that doesn't call for one. :)

reply
timwis
20 days ago
[-]
I think that’s en-dash (–, used for ranges). Em-dash (—, used mid-sentence for asides etc) is the same combo but with shift as well.
reply
ValentinA23
20 days ago
[-]
–: alt+shift+minus on my AZERTY (fr) Mac keyboard. I use it constantly. "Stylometry" hazard, though!
reply
paulddraper
20 days ago
[-]
Word processors -- MS Word, Google Docs -- will generally convert three hyphens to em dash.

(And two hyphens to en dash.)

reply
Filligree
20 days ago
[-]
I just use it because it's grammatically correct—admittedly I should use it less, for example here.
reply
creesch
20 days ago
[-]
Just so you know, text using the em-dash like that combined with a few other "tells" makes me double check if it might be LLM written.

Other things are the overuse of transition words (e.g., "however," "furthermore," "moreover," "in summary," "in conclusion,") as well as some other stuff.

It might not be fair to people who write like that naturally, but it is what it is in the current situation we find ourselves in.

reply
personlurking
19 days ago
[-]
"In the past three days, I've reviewed over 100 essays from the 2024-2025 college admissions cycle. Here's how I could tell which ones were written by ChatGPT"

https://www.reddit.com/r/ApplyingToCollege/comments/1h0vhlq/...

reply
bambax
20 days ago
[-]
On Windows em dash is ALT+0151; the paragraph mark (§) is ALT+0167. Once you know them (and a couple of others, for instance accented capitals) they become second nature, and work on all keyboards, everywhere.
reply
jgalt212
20 days ago
[-]
delve?

Did ChatGPT write this comment for you?

reply
pests
20 days ago
[-]
For me, at least, it's common knowledge "delve" is overused and I would include it in a mock reply.
reply
A_D_E_P_T
20 days ago
[-]
That's the joke.
reply
Der_Einzige
20 days ago
[-]
reply
taneq
20 days ago
[-]
Some of us are just greedy and deep, okay?
reply
Atotalnoob
20 days ago
[-]
AI writing detectors are snake oil
reply
CharlieDigital
20 days ago
[-]
Startup I'm at has generated a LOT of content using LLMs and once you've reviewed enough of the output, you can easily see specific patterns in the output.

Some words/phrases that, by default, it overuses: "dive into", "delve into", "the world of", and others.

You correct it with instructions, but it will then find synonyms so there is also a structural pattern to the output that it favors by default. For example, if we tell it "Don't start your writing with 'dive into'", it will just switch to "delve into" or another synonym.

Yes, all of this can be corrected if you put enough effort into the prompt and enough iterations to fix all of these tells.

reply
fenomas
20 days ago
[-]
> if we tell it "Don't start your writing with 'dive into'", it will just switch to "delve into" or another synonym.

LLMs can radically change their style, you just have to specify what style you want. I mean, if you prompt it to "write in the style of an angry Charles Bukowski" you'll stop seeing those patterns you're used to.

In my team for a while we had a bot generating meeting notes "in the style of a bored teenager", and (besides being hilarious) the results were very unlike typical AI "delvish".
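
A sketch of what that looks like through the API (assuming the OpenAI Python SDK; the model name and wording are illustrative):

  from openai import OpenAI

  client = OpenAI()
  raw_notes = "..."  # placeholder: the meeting transcript
  resp = client.chat.completions.create(
      model="gpt-4o",
      messages=[
          {"role": "system",
           "content": "Write the user's meeting notes in the style of a bored "
                      "teenager. Never use 'delve', 'furthermore', or "
                      "'in conclusion'."},
          {"role": "user", "content": raw_notes},
      ],
  )
  print(resp.choices[0].message.content)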

reply
CharlieDigital
20 days ago
[-]
Of course the "delve into" and "dive into" is just its default to be corrected with additional instruction. But once you do something like "write in the style of...", then it has its own tells because as I noted below, it is, in the end, biased towards frequency.
reply
fenomas
20 days ago
[-]
Of course there will be a set of tells for any given style, but the space of possibilities is much larger than what a person could recognize. So as with most LLM tasks, the issue is figuring out how to describe specifically what you want.

Aside: not about you specifically, but I feel like complaints on HN about using LLMs often boil down to somebody saying "it doesn't do X", where X is a thing they didn't ask the model to do. E.g. a thread about "I asked for a Sherlock Holmes story but the output wasn't narrated by Watson" was one that stuck in my mind. You wouldn't think engineers would make mistakes like that, but I guess people haven't really sussed out how to think about LLMs yet.

Anyway for problems like what you described, one has to be wary about expecting the LLM to follow unstated requirements. I mean, if you just tell it not to say "dive into" and it doesn't, then it's done everything it was asked, after all.

reply
blharr
19 days ago
[-]
I mean, we get it. It's a UX problem. But the thing is you have to tell it exactly what to do every time. Very often, it'll do what you said but not what you meant, and you have to wrestle with it.

You'd have to come up with a pretty exhaustive list of tells. Even sentence structure and mood is sometimes enough, not just the obvious words.

reply
kaechle
19 days ago
[-]
This is the way. Blending two or more styles also works well, especially if they're on opposite poles, e.g. "write like the imaginary lovechild of Cormac McCarthy and Ernest Hemingway."

Also, wouldn't angry Charles Bukowski just be ... Charles Bukowski?

reply
sangnoir
20 days ago
[-]
> ...once you've reviewed enough of the output, you can easily see specific patterns in the output

That is true, but more importantly, are those patterns sufficient to distinguish AI-generated content from human-generated content? Humans express themselves very differently by region and country (e.g. "do the needful" is not common in the midwest; "orthogonal" and "order of magnitude" are used more on HN than most other places). Outside of watermarking, detecting AI-generated text with an acceptably small false-positive error rate is nearly impossible.

reply
dartos
20 days ago
[-]
All of what you described can change wildly from model to model. Even across different versions of the same model.

Maybe a database could be built with “tells” organized by model.

reply
liontwist
20 days ago
[-]
Exactly. Fixing the old tells just means there are new ones.
reply
handfuloflight
20 days ago
[-]
> Maybe a database could be built with “tells” organized by model.

Automated by the LLMs themselves.

reply
dartos
19 days ago
[-]
No thanks, I’d like it to be accurate ;)

Regular ol tests would do

reply
handfuloflight
19 days ago
[-]
I should have been more precise. I meant the LLMs would output their tells for you, naturally. But that's obvious.
reply
dartos
19 days ago
[-]
They can’t know their own tells… that’s not how any of this works.

Thinking about it a bit more, the tells that work might depend on the usage of other specific prompts.

reply
handfuloflight
18 days ago
[-]
Not sure why you default to an uncharitable mode in understanding what I am trying to say.

I didn't say they know their own tells. I said they naturally output them for you. Maybe the obvious is so obvious I don't need to comment on it. Meaning this whole "tells analysis" would necessarily rely on synthetic data sets.

reply
spacemanspiff01
20 days ago
[-]
I always assumed that they were snake oil because the training objective is to get a model that writes like a human. AI detectors by definition are showing what does not sound like a human, so presumably people will train the models against the detectors until they no longer provide any signal.
reply
CharlieDigital
20 days ago
[-]
The thing is, the LLM has a flaw: it is still fundamentally biased towards frequency.

AI detectors generally can take advantage of this and look for abnormal patterns in frequencies of specific words, phrases, or even specific grammatical constructs because the LLM -- by default -- is biased that way.

I'm not saying this is easy; certainly, LLMs can be tuned in many ways via instructions, context, and fine-tuning to mask this.
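
A toy version of that frequency check (the phrase list and baseline are made up; real detectors model much richer statistics):

  import re

  TELLS = ["delve into", "dive into", "the world of", "in conclusion"]

  def tell_rate(text: str) -> float:
      # suspected "tell" phrases per 1,000 words
      words = len(re.findall(r"\w+", text))
      hits = sum(text.lower().count(t) for t in TELLS)
      return 1000 * hits / max(words, 1)

  # Compare against a human-written baseline corpus; a rate several times
  # higher than the baseline is a (weak) signal, not proof.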

reply
blharr
19 days ago
[-]
Couldn't the LLM though just randomly replace/reword things to cover up its frequency in "post"?
reply
daemonologist
20 days ago
[-]
They're not very accurate, but I think snake oil is a bit too far - they're better than guessing at least for the specific model(s) they're trained on. OpenAI's classifier [0] was at 26% recall, 91% precision when it launched, though I don't know what models created the positives in their test set. (Of course they later withdrew that classifier due to its low accuracy, which I think was the right move. When a company offers both an AI Writer and an AI Writing detector people are going to take its predictions as gospel and _that_ is definitely a problem.)

All that aside, most models have had a fairly distinctive writing style, particularly when fed no or the same system prompt every time. If o1-Pro blends in more with human writing that's certainly... interesting.

[0] https://openai.com/index/new-ai-classifier-for-indicating-ai...

reply
mirrorlake
19 days ago
[-]
Anecdotally, English/History/Communications professors are confirming cheaters with them because they find it easy to identify false information. The red flags are so obvious that the checker tools are just a formality: student papers now have fake URLs and fake citations. Students will boldly submit college papers which have paragraphs about nonexistent characters, or make false claims about what characters did in a story.

The e-mail correspondence goes like this: "Hello Professor, I'd like to meet to discuss my failing grade. I didn't know that using ChatGPT was bad, can I have some points back or rewrite my essay?"

reply
A_D_E_P_T
20 days ago
[-]
Yeah but they "detect" the characteristic AI style: The limited way it structures sentences, the way it lays out arguments, the way it tends to close with an "in conclusion" paragraph, certain word choices, etc. o1-Pro doesn't do any of that. It writes like a human.

Damnit. It's too good. It just saved me ~6 hours in drafting a complicated and bespoke legal document. Before you ask: I know what I'm doing, and it did a better job in five minutes than I could have done over those six hours. Homework is over. Journalism is over. A large slice of the legal profession is over. For real this time.

reply
mongol
20 days ago
[-]
Journalism is not only about writing. It is about sources, talking to people, being on the ground, connecting dots, asking the right questions. Journalists can certainly benefit from AI and good journalists will have jobs for a long time still.
reply
koyote
20 days ago
[-]
While the above is true, I'd say the majority of what passes as journalism these days has none of the above and the writing is below what an AI writer could produce :(

It's actually surprising how many articles on 'respected' news websites have typos. You'd think there would be automated spellcheckers and at least one 'peer review' (probably too much to ask an actual editor to review the article these days...).

reply
JohnBooty
20 days ago
[-]

    It's actually surprising how many articles on 'respected' news websites have typos.
Well, that's why they're respected! The typos let you know they're not using AI!
reply
SoftTalker
20 days ago
[-]
Mainstream news today is written for an 8th grade reading ability. Many adults would lose interest otherwise, and the generation that grew up reading little more than social media posts will be even worse.

AI can handle that sort of writing just fine, readers won't care about the formulaic writing style.

reply
umeshunni
20 days ago
[-]
These days, most journalism is turning reddit posts and tweets into long form articles with some additional context.
reply
SoftTalker
20 days ago
[-]
So AI could actually turn journalism more into what it originally was: reporting what is going on, rather than reading and rewriting information from other sources. Interesting possibility.
reply
umeshunni
19 days ago
[-]
Yes and I think that's the promise that AI offers for many professionals - cut out the cruft and focus on the high level tasks.
reply
dawnerd
20 days ago
[-]
That’s not journalism and anyone calling themselves a journalist for doing that is a fool.
reply
dgacmu
20 days ago
[-]
ahh, but:

> I know what I'm doing

Is exactly the key element in being able to use spicy autocomplete. If you don't know what you're doing, it's going to bite you and you won't know it until it's too late. "GPT messed up the contract" is not an argument I would envy anyone presenting in court or to their employer. :)

(I say this mostly from using tools like copilot)

reply
Sleaker
20 days ago
[-]
Well... Lawyers already got slapped for filings straight from ai output. So not new territory as far as that's concerned :)
reply
solarkraft
20 days ago
[-]
> Homework is over. Journalism is over. A large slice of the legal profession is over. For real this time.

It just replaces human slop with automated slop. It doesn't automate finding hidden things out just yet, just automates blogspam.

reply
dr_dshiv
20 days ago
[-]
> Before you ask: I know what I'm doing, and it did a better job in five minutes than I could have done over those six hours.

Seems like lawyers could do more, faster, because they know what they are doing. Experts don't get replaced; they get tools to amplify and extend their expertise.

reply
energy123
20 days ago
[-]
Replacement is avoided only if the demand for their services scales in proportion to the productivity improvements, which is sometimes true but not always, and is less likely to be true when the productivity improvements are very large.
reply
j45
20 days ago
[-]
It still needs to be driven by someone who knows what they're doing.

Just like when software was coming out: it may have ended jobs.

But it also helped get things done that wouldn't have been done otherwise, or not as much.

In this case, equipping a capable lawyer to be 20x is more like an Iron Man suit, which is OK. If you can get more done, with less effort, you are still critical to what's needed.

reply
ionwake
20 days ago
[-]
sold. Ill buy it, thx for review.

Edit> Its good. Thanks again for ur review.

reply
dangdetector
20 days ago
[-]
Doubtful. AI writing is obvious as hell.
reply
efficax
20 days ago
[-]
Of course they are. It's simple: if they worked, they would be incorporated into the loss function of the models, and then they would no longer work.
reply
karaterobot
20 days ago
[-]
I use the emdash a lot. Maybe too much. On MacOS, it's so easy to type—just press shift-option-minus—that I don't even think about it anymore!
reply
bhtru
20 days ago
[-]
Or double-type '-' and in many apps it'll auto-transform the two dashes to an em dash. However, the method you're describing is far more reliable, thanks!
reply
vessenes
20 days ago
[-]
I noticed a writing style difference, too, and I prefer it. More concise. On the coding side, it's done very well on large (well as large as it can manage) codebase assessment, bug finding, etc. I will reach for it rather than o1-preview for sure.
reply
imgabe
20 days ago
[-]
Writers love the em-dash though. It's a thing.
reply
thomasfromcdnjs
20 days ago
[-]
I love using it in my creative writing, I use it for an abrupt change. Find it kinda weird that it's so controversial.
reply
carabiner
20 days ago
[-]
My 10th grade english teacher (2002, just as blogging was taking off) called it sloppy and I gotta agree with her. These days I see it as youtube punctuation, like jump cut editing for text.
reply
esafak
20 days ago
[-]
How is it sloppy?
reply
lee-rhapsody
19 days ago
[-]
It's not. People just like to pretend they have moral superiority for their opinions on arbitrary writing rules, when in reality the only thing that matters is if you're clearly communicating something valuable.

I'm a professional writer and use em-dashes without a second thought. Like any other component of language, just don't _over_ use them.

reply
dougb5
20 days ago
[-]
That's encouraging to hear that it's a better writer, but I wonder if "quirks and tells" can only be seen in hindsight. o1-pro's quirks may only become apparent after enough people have flooded the internet with its output.
reply
heyjamesknight
20 days ago
[-]
> Aside from favoring the long em-dash (—)

This is a huge improvement over previous GPT and Claude, which use the terrible "space, hyphen, space" construct. I always have to manually change them to em-dashes.

reply
layer8
20 days ago
[-]
> which isn't on most keyboards

This shouldn’t really be a serious issue nowadays. On macOS it’s Option+Shift+'-', on Windows it’s Ctrl+Alt+Num- or (more cryptic) Alt+0151.

The Swiss army knife solution is to configure yourself a Compose key, and then it’s an easy mnemonic like for example Compose 3 - (and Compose 2 - for en dash).

reply
_cs2017_
20 days ago
[-]
No internet access makes it very hard to benefit from o1 pro. Most of the complex questions I would ask require google search for research papers, language or library docs, etc. Not sure why o1 pro is banned from the internet, was it caught downloading too much porn or something?
reply
ilt
20 days ago
[-]
Or worse still, referencing papers it shouldn't be referencing because of paywalls, maybe.
reply
veidr
20 days ago
[-]
Macs have always been able to type the em dash — the key combination is ⌥⇧- (Option-Shift-hyphen). I often use them in my own writing. (Hope it doesn't make somebody think I'm phoning it in with AI!)
reply
davidmurphy
20 days ago
[-]
Anyone who read "The Mac is not a typewriter" — a fantastic book of the early computer age — likely uses em dashes.
reply
jwpapi
20 days ago
[-]
Wait, how did you buy it? I’m just getting forwarded to the Team plan I already have. Sitting in Germany; tried a US VPN as well.
reply
apstls
20 days ago
[-]
The endpoint for upgrading for the normal web interface was returning 500s for me. Upgrading through the iOS app worked though.
reply
cableshaft
20 days ago
[-]
Some autocorrect software automatically converts two hyphens in a row into an emdash. I know that's how it worked in Microsoft Word and just verified it's doing that with Google Docs. So it's not like it's hard to include an emdash in your writing.

Could be a tell for emails, though.

reply
galleywest200
20 days ago
[-]
This is interesting, because at my job I have to manually edit registration addresses that use the long em-dash as our vendor only supports ASCII. I think Windows automatically converts two dashes to the long em-dash.
reply
aucisson_masque
20 days ago
[-]
> It managed to totally fool every "AI writing detector" I ran it through.

For now. As AI power increases, AI-powered writing-detection tools also get better.

reply
bigfudge
20 days ago
[-]
I’m less sure. This seems like an asymmetrical battle with a lot more money flowing to develop the models that write than detect.
reply
onlyrealcuzzo
20 days ago
[-]
It's also because it's brand new.

Give it a few weeks for them to classify its outputs, and they won't have a problem.

reply
pests
20 days ago
[-]
> the long em-dash (—) which isn't on most keyboards

On Windows it's Windows Key + . to get the emoji picker; it's in the Symbols tab, or find it in recents.

reply
Wolfenstein98k
20 days ago
[-]
Well not for me it's not, that is a zoom function.

En dash is Alt+0150 and Em dash is Alt+0151

reply
pests
20 days ago
[-]
How do you have that configured? The Windows+. shortcut was added in a later update to W10 and pops up a GUI for selecting emojis, symbols, or other non-typable characters.
reply
pjs_
20 days ago
[-]
Long emdash is the way -- possible proof of AGI here
reply
rahimnathwani
20 days ago
[-]
Would you mind sharing any favourite example chats?
reply
A_D_E_P_T
20 days ago
[-]
Give me a prompt and I'll share the result.
reply
rahimnathwani
20 days ago
[-]
Great! Suggested prompt below:

I need help creating a comprehensive Anki deck system for my 8-year-old who is following a classical education model based on the trivium (grammar stage). The child has already:

- Mastered numerous Latin and Greek root words
- Achieved mathematics proficiency equivalent to US 5th grade
- Demonstrated strong memorization capabilities

Please create a detailed 12-month learning plan with structured Anki decks covering:

1. Core subject areas prioritized in classical education (specify 4-5 key subjects)
2. Recommended daily review time for each deck
3. Progression sequence showing how decks build upon each other
4. Integration strategy with existing knowledge of Latin/Greek roots
5. Sample cards for each deck type, including:
   - Basic cards (front/back)
   - Cloze deletions
   - Image-based cards (if applicable)
   - Any special card formats for mathematical concepts

For each deck, please provide:

- Clear learning objectives
- 3-5 example cards with complete front/back content
- Estimated initial deck size
- Suggested intervals for introducing new cards
- Any prerequisites or dependencies on other decks

Additional notes:

- Cards should align with the grammar stage focus on memorization and foundational knowledge
- Please include memory techniques or mnemonics where appropriate
- Consider both verbal and visual learning styles
- Suggest ways to track progress and adjust difficulty as needed

Example of the level of detail needed for card examples:

Subject: Latin Declensions
Card Type: Basic
Front: 'First declension nominative singular ending'
Back: '-a (Example: puella)'

reply
A_D_E_P_T
20 days ago
[-]
reply
bufferoverflow
20 days ago
[-]
> “First declension nominative singular ending”

> “Sum, es, est, sumus, ________, sunt”

That's not made for an 8-year-old.

reply
rahimnathwani
20 days ago
[-]
Thanks! Here's Claude's effort (in 'Formal' mode):

https://gist.github.com/rahimnathwani/7ed6ceaeb6e716cedd2097...

reply
fudged71
20 days ago
[-]
Interesting that it thought for 1m28s on only two tasks. My intuition with o1-preview is that each task had a rather small token limit, perhaps they raised this limit.
reply
kortilla
20 days ago
[-]
404 :(
reply
m3kw9
20 days ago
[-]
o1 would give similar output. This is very simple stuff, not needing any analysis or planning.
reply
Al-Khwarizmi
20 days ago
[-]
I'd like to see how it performs on the test of https://aclanthology.org/2023.findings-emnlp.966/, even though in theory it's no longer valid due to possible data contamination.

The prompt is:

Write an epic narration of a single combat between Ignatius J. Reilly and a pterodactyl, in the style of John Kennedy Toole.

reply
e1g
20 days ago
[-]
reply
Al-Khwarizmi
20 days ago
[-]
Thanks a lot! That's pretty impressive, although not sure if noticeably better than non-pro o1 (which was already very impressive).

I suppose creative writing isn't the primary selling point that would make users upgrade from $20 to $200 :)

reply
skydhash
20 days ago
[-]

  Write me a review of "The Malazan Book of the Fallen" with the main argument being that it could be way shorter
reply
A_D_E_P_T
20 days ago
[-]
Did this unironically.

https://chatgpt.com/share/67522170-8fec-8005-b01c-2ff174356d...

It's a bit overwrought, but not too bad.

reply
happyraul
20 days ago
[-]
"the signal-to-noise ratio has grown too low" is a bit odd for me. The ratio would not have grown at all.
reply
dr_kiszonka
20 days ago
[-]
How did you get your child to study Greek? (Genuinely curious)
reply
ec109685
20 days ago
[-]
The Malazan response is below the deck response.
reply
unoti
20 days ago
[-]
Oops! That's the same ANKI link as above.
reply
A_D_E_P_T
20 days ago
[-]
It's part of the same conversation. Should be below that other response.
reply
sethammons
20 days ago
[-]
Ok, I laughed
reply
the_clarence
19 days ago
[-]
You can use the emdash by writing dash twice -- it works in a surprising number of editors and rendering engines
reply
ed_elliott_asc
20 days ago
[-]
Does it still hallucinate? This for me is key, if it does it will be limited.
reply
yCombLinks
20 days ago
[-]
The current architecture of LLMs will always "hallucinate".
reply
az226
20 days ago
[-]
What’s the context window?
reply
rahimnathwani
20 days ago
[-]
128k tokens
reply
griomnib
20 days ago
[-]
I consistently get significantly better performance from Anthropic at a literal order of magnitude less cost.

I am incredibly doubtful that this new GPT is 10x Claude unless it is embracing some breakthrough, secret, architecture nobody has heard of.

reply
MP_1729
20 days ago
[-]
That's not how pricing works.

If o1-pro is 10% better than Claude, and you are a guy who makes $300,000 per year but can now make $330,000 because o1-pro makes you more productive, then it makes sense to give Sam $2,400.

reply
echoangle
20 days ago
[-]
Having a tool that’s 10% better doesn’t make your whole work 10% better though.
reply
TeMPOraL
20 days ago
[-]
A "10% better" tool could make no difference, or it could make the work 100% better. The impact isn't linear.
reply
kolbe
20 days ago
[-]
It's likely probabilistically linear... like speeding on a street with random traffic lights.
reply
echoangle
20 days ago
[-]
Right, I should have put a "necessarily" in there.
reply
dawnerd
20 days ago
[-]
It also doesn’t magically make you more money either.
reply
szundi
20 days ago
[-]
Depends on the definition of better. Above example used this definition implicitly as you can see.
reply
jaredklewis
20 days ago
[-]
Above example makes no sense since it says ChatGPT is 10% better than Claude at first, then pivots to use it as a 10% total productivity enhancer. Which is it?
reply
onlyrealcuzzo
20 days ago
[-]
Yeah, but that's the sales pitch.
reply
jaredklewis
20 days ago
[-]
Man, why are people making $300k so stupid though
reply
015a
20 days ago
[-]
The math is never this clean, and no one has ever experienced this (though I'm sure it's a justification that was floated at OAI HQ at least once).
reply
xur17
20 days ago
[-]
It's never this clean, but it is directionally correct. If I make $300k / year, and I can tell that chatgpt already saves me hours or even days per month, $200 is a laughable amount. If I feel like pro is even slightly better, it's worth $200 just to know that I always have the best option available.

Heck, it's probably worth $200 even if I'm not confident it's better just in case it is.

For the same reason I don't start with the cheapest AI model when asking questions and then switch to the more expensive if it doesn't work. The more expensive one is cheap enough that it doesn't even matter, and $200 is cheap enough (for a certain subsection of users) that they'll just pay it to be sure they're using the best option.

reply
015a
20 days ago
[-]
That's only true if your time is metered by the hour; and the vast majority of roles which find some benefit from AI, at this time, are not compensated hourly. This plan might be beneficial to e.g. CEO-types, but I question who at OpenAI thought it would be a good idea to lead their 12 days of hollowhype with this launch, then; unless this is the highest impact release they've got (one hopes it is not).
reply
drusepth
20 days ago
[-]
>This plan might be beneficial to e.g. CEO-types, but I question who at OpenAI thought it would be a good idea to lead their 12 days of hollowhype with this launch, then; unless this is the highest impact release they've got (one hopes it is not).

In previous multi-day marketing campaigns I've run or helped run (specifically on well-loved products), we've intentionally announced a highly-priced plan early on without all of its features.

Two big benefits:

1) Your biggest advocates get to work justifying the plan/product as-is, anchoring expectations to the price (which already works well enough to convert a slice of potential buyers)

2) Anything you announce afterward now gets seen as either a bonus on top (e.g. if this $200/mo plan _also_ includes Sora after they announce it...), driving value per price up compared to the anchor; OR you're seen as listening to your audience's criticisms ("this isn't worth it!") by adding more value to compensate.

reply
luma
20 days ago
[-]
I work from home and my time is accounted for by way of my productive output because I am very far away from a CEO type. If I can take every Wednesday off because I’ve gained enough productivity to do so, I would happily pay $200/mo out of my own pocket to do so.

$200/user/month isn’t even that high of a number in the enterprise software world.

reply
cmeacham98
20 days ago
[-]
Employers might be willing to get their employees a subscription if they believe it makes the employees they are paying $$$$$ X% more productive (where X% of their salary works out to more than $2,400/year).
reply
fastball
20 days ago
[-]
There is only so much time in the day. If you have a job where increased productivity translates to increased income (not just hourly metered jobs) then you will see a benefit.
reply
sdesol
20 days ago
[-]
> cheapest AI model when asking questions and then switch to the more expensive if it doesn't work.

The thing is, more expensive isn't guaranteed to be better. The more expensive models are better most of the time, but not all the time. I talk about this more in this comment https://news.ycombinator.com/item?id=42313401#42313990

Since LLMs are non-deterministic, there is no guarantee that GPT-4o is better than GPT-4o mini. GPT-4o is most likely going to be better, but sometimes the simplicity of GPT-4o mini makes it better.

reply
TeMPOraL
20 days ago
[-]
As you say, the more expensive models are better most of the time.

Since we can't easily predict which model will actually be better for a given question at the time of asking, it makes sense to stick to the most expensive/powerful models. We could try, but that would be a complex and expensive endeavor. Meanwhile, both weak and powerful models are already too cheap to meter in direct / regular use, and you're always going to get ahead with the more powerful ones, per the very definition of what "most of the time" means, so it doesn't make sense to default to a weaker model.

reply
sdesol
20 days ago
[-]
For regular users I agree, for businesses, it will have to be a shotgun approach in my opinion.

Edit:

I should add, for businesses, it isn't about better, but more about risk as the better model can still be wrong.

reply
IanCal
20 days ago
[-]
TBH it's easily in the other direction. If I can get something to clients quicker that's more valuable.

If paying this gets me two days of consulting it's a win for me.

Obvious caveat if cheaper setups get me the same, although I can't spend too long comparing or that time alone will cost more than just buying everything.

reply
jajko
20 days ago
[-]
The number of times I've heard all this about some other groundbreaking technology... most businesses just went meh and moved on. But for self-employed, if those numbers are right, it may make sense.
reply
jnsaff2
20 days ago
[-]
It would be a worthy deal if you started making $302,401 per year.
reply
awb
20 days ago
[-]
Also a worthy deal if you don’t lose your $300k/year job to someone who is willing to pay $2,400/year.
reply
truetraveller
20 days ago
[-]
Yes. But also from the perspective of saving time. If it saves an additional 2 hours/month, and you make six figures, it's worth it.

And the perspective of frustration as well.

Business class is 4x the price of regular, and definitely not 4x better. But it saves time + frustration.

reply
pc86
20 days ago
[-]
It's not worth it if you're a W2 employee and you'll just spend those 2 hours doing other work. Realistically, working 42 hours a week instead of 40 will not meaningfully impact your performance, so doing 42 hours a week of work in 40 won't, either.

I pay $20/mo for Claude because it's been better than GPT for my use case, and I'm fine paying that but I wouldn't even consider something 10x the price unless it is many, many times better. I think at least 4-5x better is when I'd consider it and this doesn't appear to be anywhere close to even 2x better.

reply
bufferoverflow
20 days ago
[-]
When it comes to sleep, business class is 100x better.
reply
pvarangot
20 days ago
[-]
That's also not how pricing works; it's about perceived incremental increases in how useful it is (marginal utility), not about the actual additional money you make.
reply
richardhowes
20 days ago
[-]
Yeah, the $200 seems excessive and annoying, until you realise it depends on how much it saves you. For me it needs to save me about 6 hours per month to pay for itself.

Funny enough, I've told people who baulk at the $20 that I would pay $200 for the productivity gains of the 4o class models. I already pay $40 to OpenAI, $20 to Anthropic, and $40 to cursor.sh.

reply
pie420
20 days ago
[-]
ah yes, you must work at the company where you get paid per line of code. There's no way productivity is measured this accurately and you are rewarded directly in any job unless you are self-employed and get paid per website or something
reply
bloppe
20 days ago
[-]
I love it when AI bros quantify AI's helpfulness like this
reply
educasean
20 days ago
[-]
Being in an AI domain does not invalidate the fundamental logic. If an expensive tool can make you productive enough to offset the cost, then the tool is worth it for all intents and purposes.
reply
vessenes
20 days ago
[-]
I think of them as different people -- I'll say that I use them in "ensemble mode" for coding, the workflow is Claude 3.5 by default -- when Claude is spinning, o1-preview to discuss, Claude to implement. Worst case o1-preview to implement, although I think its natural coding style is slightly better than Claude's. The speed difference isn't worth it.

The intersection of problems I have where both have trouble is pretty small. If this closes the gap even more, that's great. That said, I'm curious to try this out -- the ways in which o1-preview fails are a bit different than prior gpt-line LLMs, and I'm curious how it will feel on the ground.

reply
vessenes
20 days ago
[-]
Okay, tried it out. Early indications - it feels a bit more concise, thank god, certainly more concise than 4o -- and it's s l o w. I'm getting over-1-minute times to parse codebases. There's some sort of caching going on though; follow-up queries are a bit faster (30-50s). I note that this is still superhuman speed, but it's not writing at the speed Groqchat can output Llama 3.1 8b, that is for sure.

Code looks really clean. I'm not instantly canceling my subscription.

reply
pc86
20 days ago
[-]
When you say "parse codebases" is this uploading a couple thousand lines in a few different files? Or pasting in 75 lines into the chat box? Or something else?
reply
vessenes
20 days ago
[-]
$ find web -type f \( -name '*.go' -o -name '*.tsx' \) | tar -cf code.tar -T -; cat code.tar | pbcopy

Then I paste it in and say "can you spot any bugs in the API usage? Write out a list of tasks for a senior engineer to get the codebase in basically perfect shape," or something along those lines.

Alternately: "write a go module to support X feature, and implement the react typescript UI side as well. Use the existing styles in the tsx files you find; follow these coding guidelines, etc. etc."

reply
404mm
20 days ago
[-]
I pay for both GPT and Claude and use them both extensively. Claude is my go-to for technical questions, GPT (4o) for simple questions, internet searches and validation of Claude answers. GPT o1-preview is great for more complex solutions and work on larger projects with multiple steps leading to finish. There’s really nothing like it that Anthropic provides. But $200/mo is way above what I’m willing to pay.
reply
griomnib
20 days ago
[-]
I have several local models I hit up first (Mixtral, Llama), if I don’t like the results then I’ll give same prompt to Claude and GPT.

Overall though it’s really just for reference and/or telling me about some standard library function I didn’t know of.

Somewhat counterintuitively I spend way more time reading language documentation than I used to, as the LLM is mainly useful in pointing me to language features.

After a few very bad experiences I never let LLM write more than a couple lines of boilerplate for me, but as a well-read assistant they are useful.

But none of them are sufficient alone, you do need a "team" of them - which is why I also don't see the value in spending this much on one model. I'd spend that much on a system that polled 5 models concurrently and came up with a summary of sorts.
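That kind of polling system is simple enough to sketch. A minimal version in Python, assuming OpenAI-compatible endpoints everywhere; the model names, the local base URL, and the summarizer prompt are placeholders, not recommendations:

  from concurrent.futures import ThreadPoolExecutor
  from openai import OpenAI

  # Placeholder endpoints: one hosted model plus a local OpenAI-compatible
  # server (e.g. an Ollama /v1 shim). Names and URLs are assumptions.
  clients = {
      "gpt-4o": OpenAI(),
      "mixtral": OpenAI(base_url="http://localhost:11434/v1", api_key="unused"),
  }

  def ask(name, prompt):
      resp = clients[name].chat.completions.create(
          model=name, messages=[{"role": "user", "content": prompt}]
      )
      return name, resp.choices[0].message.content

  def poll_and_summarize(prompt):
      # Query every model concurrently, then have one model draft the summary.
      with ThreadPoolExecutor() as pool:
          answers = dict(pool.map(lambda n: ask(n, prompt), clients))
      combined = "\n\n".join(f"[{n}]\n{a}" for n, a in answers.items())
      _, summary = ask("gpt-4o",
                       "Summarize where these answers agree and disagree:\n\n" + combined)
      return summary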

reply
ifwinterco
18 days ago
[-]
People keep talking about using LLMs for writing code, and they might be useful for that, but I've found them much more useful for explaining human-written code than anything else, especially in languages/frameworks outside my core competency.

E.g. "why does this (random code in a framework I haven't used much) code cause this error?"

About 50% of the time I get a helpful response straight away that saves me trawling through Stack Overflow and random blog posts. About 25% of the time the response is at least partially wrong, but it still helps me get on the right track.

25% of the time the LLM has no idea and won't admit it so I end up wasting a small amount of time going round in circles, but overall it's a significant productivity boost when I'm working on unfamiliar code.

reply
mark_l_watson
20 days ago
[-]
Right on, I like to use local models - even though I also use OpenAI, Anthropic, and Google Gemini.

I often use one or two shot examples in prompts, but with small local models it is also fairly simple to do fine tuning - if you have fine tuning examples, and if you are a developer so you get the training data in the correct format, and the correct format changes for different models that you are fine tuning.

reply
TeMPOraL
20 days ago
[-]
> But none of them are sufficient alone, you do need a “team” of them

Given the sensitivity to parameters and prompts the models have, your "team" can just as easily be querying the same LLM multiple times with different system prompts.
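In practice that's just one client and a list of system prompts; a rough sketch (the personas and the model name are invented for illustration):

  # Same model, different "personas" via system prompts acting as the team.
  personas = [
      "You are a skeptical code reviewer. Point out bugs.",
      "You are a performance engineer. Point out slow paths.",
      "You are a security auditor. Point out unsafe input handling.",
  ]

  def team_answers(client, question, model="gpt-4o"):  # placeholder model name
      return [
          client.chat.completions.create(
              model=model,
              messages=[
                  {"role": "system", "content": persona},
                  {"role": "user", "content": question},
              ],
          ).choices[0].message.content
          for persona in personas
      ]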

reply
griomnib
20 days ago
[-]
Other factor is I use local LLM first because I don’t trust any of the companies to protect my data or software IP.
reply
404mm
20 days ago
[-]
What model sizes do you run locally? Anything that would work on a 16GB M1?
reply
mark_l_watson
20 days ago
[-]
I have a 32G M2, but most local models I use fit into my 8G old M1 laptop.

I can run the QwQ 32G model with Q4 on my 32G M2.

I suggest using https://Ollama.com on Mac, Windows, and Linux. I experimented with all options on Apple Silicon and liked Ollama best.
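For what it's worth, Ollama also ships a Python client, so a local model is only a few lines away; a minimal sketch (the "qwq" model tag is an assumption, substitute whatever you have pulled):

  # Minimal sketch using the `ollama` Python package against a local server.
  # The "qwq" model tag is an assumption; run `ollama list` to see your models.
  import ollama

  resp = ollama.chat(
      model="qwq",
      messages=[{"role": "user", "content": "Explain the KV-cache in two sentences."}],
  )
  print(resp["message"]["content"])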

reply
griomnib
20 days ago
[-]
I have an A6000 with 48GB VRAM I run from a local server and I connect to it using Enchanted on my Mac.
reply
aliasxneo
20 days ago
[-]
I haven't used ChatGPT in a few weeks now. I still maintain subscriptions to both ChatGPT and Claude, but I'm very close to dropping ChatGPT entirely. The only useful things it provides over Claude are a decent mobile voice mode and web search.
reply
asterix_pano
20 days ago
[-]
If you don't want to necessarily have to pick between one or the other, there are services like this one that let you basically access all the major LLMs and only pay per use: https://nano-gpt.com/
reply
pc86
20 days ago
[-]
I've used TypingMind and it's pretty great, I like the idea of just plugging in a couple API keys and paying a fraction, but I really wish there was some overlap.

If a random query via the API costs a fifth of a cent, why can't I get 10 free API calls w/ my $20/mo premium subscription?

reply
sumedh
20 days ago
[-]
Does it have Claude's artifact feature?
reply
HanClinto
19 days ago
[-]
I'm in the same boat — I maintain subscriptions to both.

The main thing I like OpenAI for is that when I'm on a long drive, I like to have conversations with OpenAI's voice mode.

If Claude had a voice mode, I could see dropping OpenAI entirely, but for now it feels like the subscriptions to both is a near-negligible cost relative to the benefits I get from staying near the front of the AI wave.

reply
bluedays
20 days ago
[-]
I’ve been considering dropping ChatGPT for the same reason. Now that the app is out the only thing I actually care about is search.
reply
xixixao
20 days ago
[-]
Which ChatGPT model have you been using? In my experience nothing beats 4. (Not claude, not 4o)
reply
cryptoegorophy
20 days ago
[-]
I've heard so much about Claude and decided to give it a try, and it has been rather a major disappointment. I ended up using chatgpt as an assistant for claude's code writing because it just couldn't get things right. Had to cancel my subscription; no idea why people still promote it everywhere like it is 100x better than chatgpt.
reply
sumedh
20 days ago
[-]
> Had to cancel my subscription, no idea why people still promote it everywhere like it is 100x times better than chatgpt.

You need to learn how to ask it the right questions.

reply
acchow
20 days ago
[-]
I find o1 much better for having discussions or solving problems, then usually switch to Claude for code generation.
reply
rmbyrro
20 days ago
[-]
Sonnet isn't good at debugging, or even architecting. o1 shines, it feels like magic. The kinds of bugs it helped me nail were incredible to me.
reply
superfrank
20 days ago
[-]
I've heard this a lot and so I switched to Claude for a month and was super disappointed. What are you mainly using ChatGPT for?

Personally, I found Claude marginally better for coding, but far, far worse for just general purpose questions (e.g. I'm a new home owner and I need to winterize my house before our weather drops below freezing. What are some steps I should take or things I should look into?)

reply
BoorishBears
20 days ago
[-]
It's ironic because I never want to ask an LLM for something like your example general purpose question, where I can't just cheaply and directly test the correctness of the answer

But we're hurtling towards all the internet's answers to general purpose questions being SEO spam that was generated by an LLM anyways.

Since OpenAI probably isn't hiring as many HVAC technicians to answer queries as they are programmers, it feels like we're headed towards a death spiral where the only options for generic knowledge questions off the beaten path will be either having the LLM do actual research from non-SEO-affected primary sources, or finding a human who's done that research.

-

Actually to test my hypothesis I just tried this with ChatGPT with internet access.

The list of winterization tips cited an article that felt pretty "delvey". I searched the author's name, and their LinkedIn profile is about how they professionally write marketing content (nothing about HVAC), one of their accomplishments is Generative AI, and their like feed is full of AI mentions for writing content.

So ChatGPT is already at a place where when it searches for "citations", it's just spitting back out its own uncited answers above answers by actual experts (since the expert sources aren't as SEO-driven)

reply
superfrank
16 days ago
[-]
> I can't just cheaply and directly test the correctness of the answer

I feel that, but I think for me the key is knowing that LLMs can be wrong and I should treat the answer as a starting point and not an actual expert. I find it really helpful for topics where I don't even know where to start because, like you said, most search engines are utter trash now.

For things like that, I find ChatGPT to be a good diving off point. For example, this is what I got when I asked:

```
Preparing your townhouse for winter involves addressing common issues associated with the region's wet and cool climate. Here's a concise checklist to help you get started:

1. Exterior Maintenance

Roof Inspection: Check for damaged or missing shingles to prevent leaks during heavy rains.

Gutter Cleaning: Remove leaves and debris to ensure proper drainage and prevent water damage.

Downspouts: Ensure they direct water away from the foundation to prevent pooling and potential leaks.

Siding and Trim: Inspect for cracks or gaps and seal them to prevent moisture intrusion.

2. Windows and Doors

Weatherstripping: Install or replace to seal gaps and prevent drafts, improving energy efficiency.

Caulking: Apply around window and door frames to block moisture and cold air.

3. Heating System

Furnace Inspection: Have a professional service your furnace to ensure it's operating efficiently.

Filter Replacement: Change furnace filters to maintain good air quality and system performance.

4. Plumbing

Outdoor Faucets: Disconnect hoses and insulate faucets to prevent freezing.

Pipe Insulation: Insulate exposed pipes, especially in unheated areas, to prevent freezing and bursting.

5. Landscaping

Tree Trimming: Prune branches that could break under snow or ice and damage your property.

Drainage: Ensure the yard slopes away from the foundation to prevent water accumulation.

6. Safety Checks

Smoke and Carbon Monoxide Detectors: Test and replace batteries to ensure functionality.

Fireplace and Chimney: If applicable, have them inspected and cleaned to prevent fire hazards.

By addressing these areas, you can help protect your home from common winter-related issues in Seattle's climate.
```

Once I dove into the links ChatGPT provided I found the detail I needed and things I needed to investigate more, but it saved 30 minutes of pulling together a starting list from the top 5-10 articles on Google.

reply
BoorishBears
4 days ago
[-]
Super old comment, but for posterity: my point is that, unfortunately, when you do dive into those results, increasingly those are also ChatGPT

Depends on the topic of course, but it ends up being a bit of an ouroboros

reply
nurettin
20 days ago
[-]
Or Anthropic will follow suit.
reply
MuffinFlavored
20 days ago
[-]
Am I wrong that Anthropic doesn't really have a match yet to ChatGPT's o1 model (a "reasoning" model?)
reply
airstrike
20 days ago
[-]
Claude Sonnet 3.5 has outperformed o1 in most tasks based on my own anecdotal assessment. So much so that I'm debating canceling my ChatGPT subscription. I just literally do not use it anymore, despite being a heavy user for a long time in the past
reply
jerjerjer
20 days ago
[-]
Is a "reasoning" model really different? Or is it just clever prompting (and feeding previous outputs) for an existing model? Possibly with some RLHF reasoning examples?

OpenAI doesn't have a large enough database of reasoning texts to train a foundational LLM off it? I thought such a db simply does not exist as humans don't really write enough texts like this.

reply
logicchains
20 days ago
[-]
It's trained via reinforcement learning on essentially infinite synthetic reasoning data. You can generate infinite reasoning data because there are infinite math and coding problems that can be created with machine-checkable solutions, and machines can make infinite different attempts at reasoning their way to the answer. Similar to how models trained to learn chess by self-play have essentially unlimited training data.
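A toy sketch of that loop, for illustration only; `model.sample`, the attempt object, and the trace format are invented stand-ins, and the actual RL update is elided:

  # Toy sketch: generate machine-checkable problems, keep the reasoning traces
  # that reach the right answer. `model.sample` is an invented stand-in.
  import random

  def make_problem():
      a, b = random.randint(2, 99), random.randint(2, 99)
      return f"What is {a} * {b}? Think step by step.", a * b

  def collect_traces(model, n=1000):
      good = []
      for _ in range(n):
          prompt, answer = make_problem()
          attempt = model.sample(prompt)      # emits chain-of-thought + final answer
          if attempt.final_answer == answer:  # machine-checkable reward signal
              good.append((prompt, attempt.text))
      return good  # correct traces get reinforced; incorrect ones, negative reward

Because the checker is mechanical, the loop can run unattended at whatever scale the compute budget allows, which is the sense in which the training data is "essentially infinite".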
reply
int_19h
20 days ago
[-]
We don't know the specifics of GPT-o1 to judge, but we can look at open weights model for an example. Qwen-32B is a base model, QwQ-32B is a "reasoning" variant. You're broadly correct that the magic, such as it is, is in training the model into a long-winded CoT, but the improvements from it are massive. QwQ-32B beats larger 70B models in most tasks, and in some cases it beats Claude.
reply
emporas
20 days ago
[-]
I just tried QwQ 32B; I didn't know about it. I used it to generate some code that GPT had generated 2 days ago, perfect code without even sweating.

QwQ generated 10 pages of its reasoning steps, and the code is probably not correct. [1] includes both answers from QwQ and GPT.

Breaking down its reasoning steps into such excruciatingly detailed prose is certainly not user friendly, but it is intriguing. I wonder what an ideal use case for it would be.

[1] https://gist.github.com/defmarco/9eb4b1d0c547936bafe39623ec6...

reply
griomnib
20 days ago
[-]
It’s clever marketing.
reply
tokioyoyo
20 days ago
[-]
To my understanding, Anthropic realizes that they can’t compete in name recognition yet, so they have to overdeliver in terms of quality to win the war. It’s hard to beat the incumbent, especially when “chatgpt’ing” is basically a well understood verb.
reply
apsec112
20 days ago
[-]
They don't have a model that does o1-style "thought tokens" or is specialized for math, but Sonnet 3.6 is really strong in other ways. I'm guessing they will have an o1-style model within six months if there's demand
reply
VeejayRampay
20 days ago
[-]
Claude is so much better
reply
moralestapia
20 days ago
[-]
I mean ... anecdata for anecdata.

I use LLMs for many projects and 4o is the sweet spot for me.

>literal order of magnitude less cost

This is just not true. If your use case can be solved with 4o-mini (I know, not all do) OpenAI is the one which is an order of magnitude cheaper.

reply
bhouston
20 days ago
[-]
Yeah, I've switched to Anthropic fully as well for personal usage. It seems better to me and/or equivalent in all use cases.
reply
replwoacause
18 days ago
[-]
Same. Honestly if they released a $200 a month plan I’d probably bite, but OpenAI hasn’t earned that level of confidence from me yet. They have some catching up to do.
reply
minimaxir
20 days ago
[-]
The main difficulty when pricing a monthly subscription for "unlimited" usage of a product is the 1% of power users whose extreme use of the product can kill any profit margins for the product as a whole.

Pricing ChatGPT Pro at $200/mo filters it to only power users/enterprise, and given the cost of the GPT-o1 API, it wouldn't surprise me if those power users burn through $200 worth of compute very, very quickly.

reply
thih9
20 days ago
[-]
They are ready for this: there is a policy against automation, sharing, or reselling access, and it looks like there are some unspecified quotas as well:

> We have guardrails in place to help prevent misuse and are always working to improve our systems. This may occasionally involve a temporary restriction on your usage. We will inform you when this happens, and if you think this might be a mistake, please don’t hesitate to reach out to our support team at help.openai.com using the widget at the bottom-right of this page. If policy-violating behavior is not found, your access will be restored.

Source: https://help.openai.com/en/articles/9793128-what-is-chatgpt-...

reply
lm28469
20 days ago
[-]
> can kill any profit margins for the product as a whole.

Especially when the baseline profit margin is negative to begin with

reply
sebzim4500
20 days ago
[-]
Is there any evidence to suggest this is true? IIRC there was leaked information that OpenAI's revenue was significantly higher than their compute spending, but it wasn't broken down between API and subscriptions so maybe that's just due to people who subscribe and then use it a few times a month.
reply
mrandish
20 days ago
[-]
> OpenAI's revenue was significantly higher than their compute spending

I find this difficult believe, although I don't doubt leaks could have implied it. The challenge is that "the cost of compute" can vary greatly based on how it's accounted for (things like amortization, revenue recognition, capex vs opex, IP attribution, leasing, etc). Sort of like how Hollywood studio accounting can show a movie as profitable or unprofitable, depending on how "profit" is defined and how expenses are treated.

Given how much all those details can impact the outcome, to be credible I'd need a lot more specifics than a typical leak includes.

reply
lm28469
19 days ago
[-]
> Is there any evidence to suggest this is true?

I can't find any sources _not_ mentioning billions of loss for 2024 and for the foreseeable future

reply
nine_k
20 days ago
[-]
Is compute that expensive? An H100 rents at about $2.50/hour, so $200 buys 80 hours of pure compute. Assuming 720 hours a month, that's a 1/9 duty cycle around the clock, or 1/3 if we assume an 8-hour work day. It's really intense, constant use. And I bet OpenAI spends less on operating their infra than the rate at which cloud providers rent it out.
reply
drdrey
20 days ago
[-]
are you assuming that you can do o1 inference on a single h100?
reply
nine_k
20 days ago
[-]
Good question. How many H100s does it take? Is there any way to guess / approximate that?
reply
shikon7
20 days ago
[-]
You need enough RAM to store the model and the KV-cache, which depends on context size. Assuming the model has a trillion parameters (there are only rumours about how many there actually are) and uses 8 bits per parameter, that's about 1 TB of weights; 16 H100s give 16 x 80 GB = 1.28 TB, so 16 might be sufficient, with the remainder left for the KV-cache.
reply
londons_explore
20 days ago
[-]
I suspect the biggest most powerful model could easily use hundreds or maybe thousands of H100's.

And the 'search' part of it could use many of these clusters in parallel, and then pick the best answer to return to the user.

reply
holoduke
20 days ago
[-]
16? No. More on the order of 1000+ H100s computing in parallel for one request.
reply
ssl-3
20 days ago
[-]
Does an o1 query run on a singular H100, or on a plurality of H100s?
reply
danpalmer
20 days ago
[-]
A single H100 has 80GB of memory, meaning that at FP16 you could roughly fit a 40B parameter model on it, or at FP4 quantisation you could fit a 160B parameter model on it. We don't know (I don't think) what quantisation OpenAI use, or how many parameters o1 is, but most likely...

...they probably quantise a bit, but not loads, as they don't want to sacrifice performance. FP8 seems like a possible middle ground. o1 is just a bunch of GPT-4o in a trenchcoat strung together with some advanced prompting. GPT-4o is theorised to be 200B parameters. If you wanted to run 5 parallel generation tasks at peak during the o1 inference process, that's 5x 200B at FP8, or about 12 H100s. 12 H100s take about one full rack of kit to run.
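The back-of-envelope arithmetic, as a quick sketch; every input here is one of the guesses above, not a known value:

  # Rough VRAM arithmetic for the guesses above: 5 parallel generations of a
  # 200B-parameter model at FP8 (1 byte per parameter). KV-cache is ignored.
  H100_GB = 80

  def h100s_needed(params_b, bytes_per_param, copies=1):
      gb = params_b * bytes_per_param * copies
      return -(-gb // H100_GB)  # ceiling division

  print(h100s_needed(200, 1, copies=5))  # -> 13, i.e. "about 12" H100s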

reply
anticensor
19 days ago
[-]
o1 is ten times as expensive as pre-turbo GPT-4.
reply
peab
20 days ago
[-]
I was testing out a chat app that supported images. Long conversations with multiple images in the conversation can be like $0.10 per message after a certain point. It sure does add up quickly
reply
londons_explore
20 days ago
[-]
I wouldn't be surprised if the "unlimited" product is unlimited requests, but the quality of the responses drop if you ask millions of questions...
reply
rrr_oh_man
20 days ago
[-]
like throttled unlimited data
reply
paxys
20 days ago
[-]
$200 is a lot of compute. Amortized over say 3 years, that's a dedicated A100 GPU per user, or an H100 for every 3 users.
reply
wkat4242
17 days ago
[-]
Not counting power or servers etc. But yeah it does put it into perspective.
reply
rubymamis
20 days ago
[-]
I believe they have many data points to back up this decision. They surely know how people are using their products.
reply
ta_1138
20 days ago
[-]
There are many use cases for which the price can go even higher. I look at recent interactions with people that were working at an interview mill: multiple people in a boiler room interviewing for companies all day long, with a computer set up so that our audio was being piped to o1. They had a reasonable prompt to remove many chatbot-isms and make it provide answers that seem people-like: we were 100% interviewing the o1 model. The operator said basically nothing, in both technical and behavioral interviews.

A company making money off of this kind of scheme would be happy to pay $200 a seat for an unlimited license. And I would not be surprised if there were many other very profitable use cases that make $200 per month seem like a bargain.

reply
yosito
20 days ago
[-]
So, wait a minute: when interviewing candidates, you're making them invest their valuable time talking to an AI interviewer, without even disclosing that they aren't talking to a real human? That seems highly unethical to me, yet not even slightly surprising. My question is, what variables are being optimized for here? It's certainly not about efficiently matching people with jobs; it seems to be more about increasing the number of interviews, which I'm sure benefits the people who get rewarded for the number of interviews, but seems like entirely the wrong metric.
reply
vundercind
20 days ago
[-]
Scams and other antisocial use cases are basically the only ones for which the damn things are actually the kind of productivity rocket-fuel people want them to be, so far.

We better hope that changes sharply, or these things will be a net-negative development.

reply
wpietri
20 days ago
[-]
Right? To me it's eerily similar to how cryptocurrency was sold as a general replacement for all money uses, but turned out to be mainly useful for societally negative things like scams and money laundering.
reply
lcnPylGDnU4H9OF
20 days ago
[-]
It sounds like a setup where applicants hire some third-party company to perhaps "represent the client" in the interview, and that company hired a bunch of people to be the interviewee on their clients' behalf. Presumably neither the company nor the applicant discloses this arrangement to the hiring manager.
reply
yosito
20 days ago
[-]
So, another, or several more, layers of ethical dubiousness.
reply
YeGoblynQueenne
19 days ago
[-]
>> My question is, what variables are being optimized for here?

The ones that start with a "$".

reply
interludead
20 days ago
[-]
Yep, deceptive practices like this undermine trust in the hiring process
reply
fschuett
20 days ago
[-]
If any company wants me to be interviewed by AI to represent the client, I'll consider it ethical to let an AI represent me. Then AIs can interview AIs, maybe that'll get me the job. I have strong flashbacks to the movie "Surrogates" for some reason.
reply
blobbers
20 days ago
[-]
My friend found 2 chimney sweep businesses. One charges $569, the other charges $150.

Plot twist: the same guy runs both. They do the same thing and the same crew shows up.

reply
sema4hacker
20 days ago
[-]
Decades ago in Santa Cruz county California, I had to have a house bagged for termites for the pending sale. Turned out there was one contractor licensed to do the poison gas work, and all the pest service companies simply subcontracted to him. So no matter what pest service you chose, you got the same outfit doing the actual work.
reply
bongodongobob
20 days ago
[-]
I used to work for a manufacturing company that did this. They offered a standard, premium, and "House Special Product". House special was 2x premium but the same product. They didn't even pretend it wasn't, they just said it was recommended and people bought it.
reply
paxys
20 days ago
[-]
I had this happen once at a car wash. The first time I went I paid for a $25 premium package with all the bells and whistles. They seemed to do a good job. The next time I went for the basic $10 one. Exact same thing.
reply
vhayda
20 days ago
[-]
Yesterday, I spent 4.5hrs crafting a very complex Google Sheets formula—think Lambda, Map, Let, etc., for 82 lines. If I knew it would take that long, I would have just done it via AppScript. But it was 50% kinda working, so I kept giving the model the output, and it provided updated formulas back and forth for 4.5hrs. Say my time is $100/hr - that’s $450. So even if the new ChatGPT Pro mode isn’t any smarter but is 50% faster, that’s $225 saved just in time alone. It would probably get that formula right in 10min with a few back-and-forth messages, instead of 4.5hrs. Plus, I used about $62 worth of API credits in their not-so-great Playground. I see similar situations of extreme ROI every few days, let alone all the other uses. I’d pay $500/mo, but beyond that, I’d probably just stick with Playground & API.
reply
j2kun
20 days ago
[-]
> so I kept giving the model the output, and it provided updated formulas back and forth for 4.5hrs

I read this as: "I have already ceded my expertise to an LLM, so I am happy that it is getting faster because now I can pay more money to be even more stuck using an LLM"

Maybe the alternative to going back and forth with an AI for 4.5 hours is working smarter and using tools you're an expert in. Or building expertise in the tool you are using. Or, if you're not an expert or can't become an expert in these tools, then it's hard to claim your time is worth $100/hr for this task.

reply
extr
20 days ago
[-]
I agree going back and forth with an AI for 4.5 hours is usually a sign something has gone wrong somewhere, but this is incredibly narrow thinking. Being an open-ended problem solver is the most valuable skill you can have. AI is a huge force multiplier for this. Instead of needing to tap a bunch of experts to help with all the sub-problems you encounter along the way, you can just do it yourself with AI assistance.

That is to say, past a certain salary band people are rarely paid for being hyper-proficient with tools. They are paid to resolve ambiguity and identify the correct problems to solve. If the correct problem needs a tool that I'm unfamiliar with, using AI to just get it done is in many cases preferable to locating an expert, getting their time, etc.

reply
ruszki
20 days ago
[-]
If somebody claims that an LLM can do in 10 minutes something which takes them 4.5 hours, then they are definitely not experts. They probably have some surface knowledge, but that's all. There is a reason why the better LLM demos are related to learning something new, like a new programming language. So far, all of the other kinds of demos which I saw (e.g. generating new endpoints based on older ones) were clearly slower than experts, and they were slower to use for me in my respective field.
reply
knowsuchagency
20 days ago
[-]
No true Scotsman
reply
ruszki
20 days ago
[-]
There was no counter example, and I didn’t use any definition, so it cannot be that. I have no idea what you mean.
reply
danpalmer
20 days ago
[-]
> If somebody claims that something can be done with LLM in 10 minutes which takes 4.5 hours for them, then they are definitely not experts.

Looks like a no true scotsman definition to me.

I'm don't fully agree or disagree with your point, but it was perhaps made more strongly than it should have been?

reply
ruszki
20 days ago
[-]
For no true Scotsman, you need to throw out a counterexample by using a misrepresented or wrong definition, or simply by using a definition wrongly. But in any case I need a counterexample for that specific fallacy. I didn't have one, and I still don't.

I understand that some people may think themselves experts and could achieve a similar reduction (not in the cases where I said it's clearly possible), but then show me, because I still haven't seen a single one. The ones which were publicly shown were not quicker than average seniors, and definitely worse than the better ones. Even at larger scale in my company, we haven't seen any performance improvement in any single metric regarding coding since we introduced it more than half a year ago.

reply
knowsuchagency
14 days ago
[-]
Here's your counterexample: “Copilot has dramatically accelerated my coding. It’s hard to imagine going back to ‘manual coding,’” Karpathy said. “Still learning to use it, but it already writes ~80% of my code, ~80% accuracy. I don’t even really code, I prompt & edit.” -- https://siliconangle.com/2023/05/26/as-generative-ai-acceler...
reply
ruszki
10 days ago
[-]
It's not a counterexample. There is exactly zero exact information in it. It's just a statement from somebody who profits from such statements. Even if I just say that's not true has more value, because I would even benefit from what Karpathy said, if it had been true.

So, just to be specific, and specifically for ChatGPT (I think it was 4), these are very-very problematic, because all of these are clear lies:

https://chatgpt.com/share/675f6308-aa8c-800b-9d83-83f14b64cb...

https://chatgpt.com/share/675f63c7-cbc4-800b-853c-91f2d4a7d7...

https://chatgpt.com/share/675f65de-6a48-800b-a2c4-02f768aee7...

Or this which one sent here: https://www.loom.com/share/20d967be827141578c64074735eb84a8

In this case, the guy is clearly slower than simple copy-paste and modification.

I had very similar experiences. Sometimes it just used a different method which does almost the same thing, just worse. I even had to check what the heck the used method was, because it's not normally used, for an obvious reason: it was an "internal" one (like apt and apt-get).

reply
fassssst
20 days ago
[-]
I learn stuff when using these tools just like I learn stuff when reading manuals and StackOverflow. It’s basically a more convenient manual.
reply
jackson1442
20 days ago
[-]
A more convenient manual that frequently spouts falsehoods, sure.

My favorite part is when it includes parameters in its output that are not and have never been a part of the API I'm trying to get it to build against.

reply
CamperBob2
20 days ago
[-]
> My favorite part is when it includes parameters in its output that are not and have never been a part of the API I'm trying to get it to build against.

The thing is, when it hallucinates API functions and parameters, they aren't random garbage. Usually, those functions and parameters should have been there.

Things that should make you go "Hmm."

reply
TeMPOraL
20 days ago
[-]
More than that, one of the standard practices in development is writing code with imaginary APIs that are convenient at the point of use, and then reconciling the ideal with the real - which often does involve adding the imaginary missing functions or parameters to the real API.
reply
aiono
20 days ago
[-]
> Usually, those functions and parameters should have been there.

There is a huge leap here. What is your argument for it?

reply
CamperBob2
19 days ago
[-]
Professional judgement.
reply
1980phipsi
20 days ago
[-]
I have written very complicated Excel formulas in the past. I don't anymore.
reply
ImaCake
20 days ago
[-]
Long Excel formulas are really just bad "one-liners". You should be splitting your operation into multiple cells or finding a more elegant solution. This is especially true in Excel, where your debug tools are quite limited!
reply
amelius
20 days ago
[-]
The Pro mode is slower actually.

They even made a way to notify you when it's finished thinking.

reply
swyx
20 days ago
[-]
> Plus, I used about $62 worth of API credits in their not-so-great Playground.

what is not so great about it? what have you seen that is better?

reply
mirkodrummer
20 days ago
[-]
Karma 6. Draw your own conclusions ladies and gentlemen
reply
m3kw9
20 days ago
[-]
I think you need to realize when it has sort of hit a wall and go in yourself. This is why juniors with LLMs cannot replace a senior engineer.
reply
jsheard
20 days ago
[-]
Expect more of this as they scramble to course-correct from losing billions every year, to hitting their 2029 target for profitability. That money's gotta come from somewhere.

> Price hikes for the premium ChatGPT have long been rumored. By 2029, OpenAI expects it’ll charge $44 per month for ChatGPT Plus, according to reporting by The New York Times.

I suspect a big part of why Sora still isn't available is because they couldn't afford to offer it on their existing plans, maybe it'll be exclusive to this new $200 tier.

reply
boringg
20 days ago
[-]
That CAPEX spend and those generous salaries have to get paid somehow ...
reply
shadowmanif
20 days ago
[-]
Totally agree with Sora.

Runway is $35 a month to generate 10 second clips and you really get very few generations for that. $95 a month for unlimited 10 second clips.

I love art and experimental film. I really was excited for Sora, but it will need what feels like unlimited generation to explore what it can do. That is going to cost an arm and a leg for the compute.

Something about video especially seems like it will need to be ran locally to really work. Pay a monthly fee for the model that can run as much as you want with your own compute.

reply
aiono
20 days ago
[-]
Can you link to the source where they state that they want to be profitable in 2029? I am curious.
reply
jsheard
20 days ago
[-]
reply
doctorpangloss
20 days ago
[-]
ChatGPT as a standalone service is profitable. But that’s not saying much.
reply
crowcroft
20 days ago
[-]
Is this on a purely variable basis? Assuming that the cost of foundation models is $0 etc?
reply
distalx
20 days ago
[-]
Didn't they initially offer a professional plan at $42/mo?
reply
fragmede
20 days ago
[-]
Sora isn't available because of the deep fake potential.
reply
lexandstuff
20 days ago
[-]
My guess is that it isn't available because the training data they stole occasionally leaks into the outputs.
reply
EternalFury
20 days ago
[-]
I give o1 a URL and I ask it to comment on how well the corresponding web page markets a service to an audience I define in clear detail.

o1 generates a couple of pages of comments before admitting it didn’t access the web page and entirely based its analysis on the definition of the audience.

reply
bee_rider
20 days ago
[-]
This service is going to be devastating to consultants and middle managers.
reply
EternalFury
20 days ago
[-]
I trained an agent that operates as a McKinsey consultant. Its system prompt is a souped up version of:

“Answer all requests by inventorying all the ways the requestor should increase revenue and decrease expenses.”

reply
ImaCake
20 days ago
[-]
To be fair, you mostly hire McKinsey as a fall guy. You just can't hate an LLM in the same way as a bunch of 22-year-olds in suits.
reply
oezi
19 days ago
[-]
O1 can't browse the web at all.
reply
dyauspitr
20 days ago
[-]
I say “look it up” in the prompt and that always works
reply
motoxpro
20 days ago
[-]
If one makes $150 an hour and it saves them an hour and 20 minutes a month, then they break even. To me, it's just a non-deterministic calculator for words.

If it getting things wrong, then don't use it for those things. If you can't find things that it gets right, then it's not useful to you. That doesn't mean those cases don't exist.

reply
bena
20 days ago
[-]
I don't think this math depends on where that time is saved.

If I do all my work in 10 hours, I've earned $1500. If I do it all in 8 hours, then spend 2 hours on another project, I've earned $1500.

I can't bill the hours "saved" by ChatGPT.

Now, if it saves me non-billing time, then it matters. If I used to spend 2 hours doing a task that ChatGPT lets me finish in 15 minutes, now I can use the rest of that time to bill. And that only matters if I actually bill my hours. If I'm salaried or hourly, ChatGPT is only a cost.

And that's how the time/money calculation is done. The idea is that you should be doing the task that maximizes your dollar per hour output. I should pay a plumber, because doing my own plumbing would take too much of my time and would therefore cost more than a plumber in the end. So I should buy/use ChatGPT only if not using it would prevent me from maximizing my dollar per hour. At a salaried job, every hour is the same in terms of dollars.

reply
danparsonson
20 days ago
[-]
It's like sale discounts - "save $50!" which actually means "spend $450 instead of $500"
reply
warkanlock
20 days ago
[-]
Serious question: Who earns (other than C-level) $150 an hour in a sane (non-US) world?
reply
bearjaws
20 days ago
[-]
US salaries are sane when compared to what value people produce for their companies. Many argue they are too low.
reply
syndicatedjelly
20 days ago
[-]
My firm's advertised billing rate for my time is $175/hour as a Sr Software Engineer. I take home ~$80/hour, accounting for benefits and time off. If I freelanced I could presumably charge my firm's rate, or even more.

This is in a mid-COL city in the US, not a coastal tier 1 city with prime software talent that could charge even more.

reply
maxlamb
20 days ago
[-]
Most consultants with more than 3 years of experience are billed at $150/hr or more
reply
drusepth
20 days ago
[-]
Ironically, the freelance consulting world is largely on fire due to the lowered barrier of entry and flood of new consultants using AI to perform at higher levels, driving prices down simply through increased supply.

I wouldn't be surprised if AI was also eating consultants from the demand side as well, enabling would-be employers to do a higher % of tasks themselves that they would have previously needed to hire for.

reply
_fat_santa
20 days ago
[-]
> billed

That's what they are billed at, what they take home from that is probably much lower. At my org we bill folks out for ~$150/hr and their take home is ~$80/hr

reply
bena
20 days ago
[-]
Yeah, at a place where I worked, we billed at around $150. Then there was an escalating commision based on amount billed.
reply
jwpapi
20 days ago
[-]
I do start at $300/hr

I didn’t just set that, I need to set that to best serve.

reply
rank0
20 days ago
[-]
Why are high salaries an insane thing?
reply
SoftTalker
20 days ago
[-]
On the one hand, there's the moral argument: we need janitors and plumbers and warehouse workers and retail workers and nurses and teachers and truck drivers for society to function. Why should their time be valued less than anyone elses?

On the other hand there's the economic argument: the supply of people who can stock shelves is greater than the supply of people who can "create value" at a tech company, so the latter deserve more pay.

Depending on how you look at the world, high salaries can seem insane.

reply
rank0
18 days ago
[-]
I don't even remotely understand what you're saying is wrong. Median salaries are significantly higher in the US compared to any other region: nominal and PPP-adjusted, AND accounting for taxes/social benefits. This is bad?

Those jobs you referenced do not have the same requirements nor the same wages… seems like you're just clumping all of those together as "lower class" so you can be champion of the downtrodden

reply
wavemode
20 days ago
[-]
The question is, whether you couldn't have saved those same 1.25 hours by using a $20 per month model.
reply
GrantMoyer
20 days ago
[-]
In that case, wouldn't they be spending $200 to get paid $200 less?
reply
globular-toast
20 days ago
[-]
Only if you're allowed to go home and enjoy those 1.25 hours and still get paid the same.
reply
kilroy123
20 days ago
[-]
I do wonder what effect this will have on furthering the divide between the "rich West" and the rest of the world.

If everyone in the West has powerful AI and Agents to automate everything. Simply because we can afford it, but the rest of the world doesn't have access to it.

What will that mean for everyone left behind?

reply
frakt0x90
20 days ago
[-]
AI is nowhere near the level of leaving behind those who aren't using it. Especially not at the individual consumer level like this.
reply
MarcScott
20 days ago
[-]
Anecdotally, as an educator, I am already seeing a digital divide occurring, with regard to accessing AI. This is not even at a premium/pro subscription level, but simply at a 'who has access to a device at home or work' level, and who is keeping up with the emerging tech.

I speak to kids that use LLMs all the time to assist them with their school work, and others who simply have no knowledge that this tech exists.

I work with UK learners by the way.

reply
bronco21016
20 days ago
[-]
What are some productive ways students are using LLMs for aiding learning? Obviously there is the “write this paper for me” but that’s not productive. Are students genuinely doing stuff like “2 + x = 4, help me understand how to solve for x?”
reply
dsubburam
20 days ago
[-]
I challenge what I read in textbooks and hear from lecturers by asking for contrary takes.

For example, I read a philosopher saying "truth is a relation between thought and reality". Asking ChatGPT to knock it revealed that statement is an expression of the "correspondence theory" of truth, but that there is also the "coherence theory" of truth that is different, and that there is a laundry list of other takes too.

reply
wvenable
20 days ago
[-]
My son doesn't use it, but I use it to help him with his homework. For example, I can take a photograph of his math homework and get the LLM to mark the work, tell me what he got wrong, and make suggestions on how to correct it.
reply
Spooky23
20 days ago
[-]
Absolutely. My son got a 6th grade AI “ban” lifted by showing how they could use it productively.

Basically they had to adapt a novel to a comic book form — by using AI to generate pencil drawings, they achieved the goal of the assignment (demonstrating understanding of the story) without having the computer just do their homework.

reply
gardenhedge
20 days ago
[-]
Huh the first prompt could have been "how would you adapt this novel to comic book form? Give me the breakdown of what pencil drawings to generate and why"
reply
Spooky23
18 days ago
[-]
At the time, the tool available was Google Duet AI, which didn’t expose that capability.

The point is, AI is here, and it can be a net positive if schools can use it like a calculator vs a black market. It’s a private school with access to some alumni money for development work - they used this to justify investing in designing assignments that make AI a complement to learning.

reply
kybernetikos
20 days ago
[-]
I recently saw someone revise for a test by asking chatgpt to create practice questions for them on the topics they were revising. I know other people who use it to practice chatting in a foreign language they are trying to learn.
reply
furyofantares
20 days ago
[-]
Paste the lecture notes in and talk to it
reply
jumping_frog
19 days ago
[-]
The analogy I would use is extended phenotype evolution in digital space, as Richard Dawkins would say. Just as crabs in oceans use shells to protect themselves.
reply
charlieyu1
20 days ago
[-]
It has been bad to not have access to a device for at least 20 years. I can't imagine anyone doing well in their studies without a search engine.
reply
spaceman_2020
20 days ago
[-]
Even if it's not making you smarter, AI is definitely making you more productive. That essentially means you get to outproduce poorer people, if not out-intellectualize them
reply
lenerdenator
20 days ago
[-]
Don't you worry; the "rich West" will have plenty of disenfranchised people out of work because of this sort of thing.

Now, whether the labor provided by the AI will be as high-quality as that provided by a human when placed in an actual business environment will be up in the air. Probably not, but adoption will be pushed by the sunk cost fallacy.

reply
astrange
20 days ago
[-]
Productivity improvements (such as automation) increase employment.

The decreased employment case is when your competitors get the productivity and you don't, because you go out of business.

reply
vundercind
20 days ago
[-]
I'm watching some of this happening first and second hand, and have seen a lot of evidence of companies spending a ton of money on these, spinning up departments, buying companies, pivoting their entire company's strategy to AI, etc., and zero of it meaningfully replacing employees. It takes very skilled people to use LLMs well, and the companies trying to turn 5 positions into 2 aren't paying enough to reliably get and keep two people who are good at it.

I’ve seen it be a minor productivity boost, and not much more.

reply
hnthrowaway6543
20 days ago
[-]
> and the companies trying to turn 5 positions into 2 aren’t paying enough to reliably get and keep two people who are good at it.

it's turning 5 positions into 7: 5 people to do what currently needs to get done, 2 to try to replace those 5 with AI and failing for several years.

reply
vundercind
20 days ago
[-]
I mean, yes, that is in practice what I’m seeing so far. A lot of spending, and if they’re lucky productivity doesn’t drop. Best case I’ve seen so far is that it’s a useful tool that gives a small boost, but even for that a lot of folks are so bad at using them that it’s not helping.

The situation now is kinda like back when it was possible to be “good at Google” and lots of people, including in tech, weren’t. It’s possible to be good at LLMs, and not a lot of people are.

reply
Vegenoid
20 days ago
[-]
Yes. The people who can use these tools to dramatically increase their capabilities and output without a significant drop in quality were already great engineers for which there was more demand than supply. That isn't going to change soon.
reply
vundercind
20 days ago
[-]
Ditto for other use cases, like writer and editor. There are a ton of people doing that work who I don’t think are ever going to figure out how to use LLMs well. Like, 90% of them. And LLMs are nowhere near making the rest so much better that they can make up for that.

They’re ok for Tom the Section Manager to hack together a department newsletter nobody reads, though, even if Tom is bad at using LLMs. They’re decent at things that don’t need to be any good because they didn’t need to exist in the first place, lol.

reply
TeMPOraL
20 days ago
[-]
I disagree. By far, most of the code is created by perpetually replaced fresh juniors churning out garbage. Similarly, most of the writing is low-quality marketing copy churned out by low-paid people who may or may not have "marketing" in their job title.

Nah, if the last 10-20 years demonstrated something, it's that nothing needs to be any good, because a shitty simulacrum achieves almost the same effect but costs much less time and money to produce.

(Ironically, SOTA LLMs are already way better at writing than typical person writing stuff for money.)

reply
vundercind
20 days ago
[-]
> (Ironically, SOTA LLMs are already way better at writing than typical person writing stuff for money.)

I’m aware of multiple companies that would love to know about these, because they’re currently flailing around trying to replace writers with editors + LLMs and it’s not going great. The closest to success are the ones that are only aiming to turn out stuff one step better than outright book-spam, and even they aren’t quite where they want to be, hardly a productivity bump at all from the LLM use and increased demand on their few talented humans.

reply
tokioyoyo
20 days ago
[-]
Qwen has an open reasoning model. If they keep up, and don’t get banned in the west “because security”, it’ll be fun to watch the LLM wars.
reply
onlyrealcuzzo
20 days ago
[-]
> and don’t get banned in the west “because security”,

It's from Alibaba, which is Chinese, so it seems likely.

reply
tokioyoyo
20 days ago
[-]
Yeah, but it’s a bit trickier with them, given how they still operate in the US and are listed on the NYSE. Also if they keep releasing open source code, people will still just use it… basically the Meta way of adoption into their AI ecosystem.
reply
wkat4242
17 days ago
[-]
If it's an open model, good luck preventing us from downloading and using it though.
reply
danans
20 days ago
[-]
If $200 a month is the price, most of the West will be left behind also. If that happens we will have much bigger problems of a revolution sort on our hands.
reply
anoojb
20 days ago
[-]
I think the tech elite would espouse "raising the ceiling" vs "raising the floor" models to prioritize progress. Each has its own problems. The reality is that the disenfranchised don't really have a voice. The impact of denying them access is not as well understood as the impact of prioritizing access for those who can afford it.

We don't have a post-Cold War era response akin to the kind of US-led investment in a global pact to provide protection, security, and access to innovation founded in the United States. We really need to prioritize a model akin to the Bretton Woods Accord.

reply
archagon
20 days ago
[-]
If the models are open, the rest of the world will run them locally.

If the models are closed, the West will become a digital serfdom to anointed AI corporations, which will be able to gouge prices, inject ads, and influence politics with ease.

reply
turing_complete
19 days ago
[-]
Richer people always get products first, when they are still expensive and bad. Don't worry about it too much.
reply
notahacker
20 days ago
[-]
tbh a lot of the rest of the world already has the ability to get tasks they don't want to do done for <$200 per month in the form of low wage humans. Some of their middle classes might be scratching their heads wondering why we're delegating creativity and communication to allow more time to do laundry rather than delegating laundry to allow more time for creativity and communication...
reply
solarwindy
20 days ago
[-]
That supposes gen AI meaningfully increases productivity. Perhaps this is one way we find out.
reply
mhh__
19 days ago
[-]
I actually suspect the opposite. If you get access to or steal a large LLM you can potentially massively leverage the talent pool you have as a small country.
reply
Sateeshm
17 days ago
[-]
No one is left behind, eventually. You think the ai companies don't want poor people's money?
reply
uludag
19 days ago
[-]
Has it really made that much of a difference in the first place? I have a feeling that we'll look back in 10 years and not even notice the "AI revolution" on any charts of productivity, creating a productivity paradox 3.0.

I can imagine the headlines now: "AI promised unlimited productivity, 10 years later, we're still waiting for the rapture"

reply
shadowmanif
20 days ago
[-]
Kai-Fu Lee's AI Superpowers is more relevant than ever.

The rich west will be in the lead for awhile and then get tiktok-ed.

The lead is just not really worth that much in the long run.

There is probably an advantage, at some point in all this, to being a developing country that doesn't need to bother automating all the middle-management and bullshit jobs it never created.

reply
troll_v_bridge
20 days ago
[-]
No US company got TikTok’d, and China doesn’t even allow US social media companies in its country.

China is notoriously middle management heavy, by definition that’s what communism is.

reply
kaiwen1
20 days ago
[-]
I know a guy who owned a tropical resort on an island where competition was sprouting up all around him. He was losing money trying to keep up with the quality offered by his neighbors. His solution was to charge a lot more for an experience that was really no better, and often worse, than the resorts next door. This didn't work.
reply
adamtaylor_13
20 days ago
[-]
I’m actually kinda surprised. People will pay extra money for the “nice” option many times, even if it’s probably worse than the lower priced options.
reply
EcommerceFlow
20 days ago
[-]
After a few hours of $200 Pro usage, it's completely worth it. Having no limit on o1 usage is a game changer. I felt so restricted before; having this much intelligence in the palm of my hand, UNLIMITED, feels a bit scary.
reply
paradite
20 days ago
[-]
Wouldn't it be cheaper to use the API and a 3rd-party UI if the usage limit is your concern?
reply
throwup238
19 days ago
[-]
I was using aider last night and ran up a $10 bill within two hours using o1 as the architect and Sonnet as the editor. It’s really easy to blow through $200 a month and o1-pro isn’t available in the API as far as I can tell.
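
For reference, that architect/editor split looks something like this on the command line (a sketch based on aider's documented architect mode; treat the exact flag spellings and model strings as assumptions):

    # o1 plans the change ("architect"); Sonnet writes the actual edits ("editor")
    aider --architect --model o1-preview --editor-model claude-3-5-sonnet-20241022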
reply
paradite
19 days ago
[-]
Aider / Cline are known to eat tokens for lunch, because of the large context and system prompts they use.

The tool that I built doesn't have this problem; I haven't exceeded $10/month on Claude 3.5 Sonnet. You can give it a try: https://prompt.16x.engineer/

reply
EcommerceFlow
20 days ago
[-]
Not simply usage, the new o1 is also FAST. It's just incredibly liberating being able to have unlimited usage of such a smart fast model.
reply
bionhoward
20 days ago
[-]
Unlimited — except you can’t use it to develop [business] models that compete
reply
handfuloflight
20 days ago
[-]
Was that your plan to get your OpenAI competitor off the ground?
reply
abhpro
20 days ago
[-]
But is it better than Claude?
reply
rumblefrog
20 days ago
[-]
I generally find o1, or the previous o1-preview, to perform better than Claude 3.5 Sonnet in complex reasoning; the new Sonnet is more on par with o1-mini in my experience.

Would expect o1-pro to perform even better.

reply
adamtaylor_13
20 days ago
[-]
Genuinely curious to know. Nothing I’ve used comes close to Claude so far.
reply
xwowsersx
20 days ago
[-]
Can you share what sort of things you are doing with o1?
reply
EcommerceFlow
20 days ago
[-]
Creating somewhat complex Python scripts at work to automate some processes which incorporate like 3-4 APIs, and next I'll be replacing our excise tax processing (which costs us like $500/month) since we already have all the data.

Personal use I'll be using it to upgrade all my website code. I literally took a screenshot of Apple.com and combined it with existing code from my website and told o1 pro to combine the two... the results were really good, especially for one shot... But again, I have unlimited fast usage so I can just keep tweaking and tweaking.

I also have this history idea I've been wanting to do for a while, might see if the models are advanced enough yet.

All this with an understanding of how programming works, but without being able to code.

reply
xwowsersx
19 days ago
[-]
Interesting, thanks for the details. I haven't played around with o1 enough yet. The kinds of tasks I had it do seemed to be performed just as well by 4o. I'm sure I just wasn't throwing enough at it.
reply
flkiwi
20 days ago
[-]
A lot of these tools aren't going to have this kind of value (for me) until they are operating autonomously at some level. For example, "looking at" my inbox and prepping a bundle of proposed responses for items I've been sitting on, drafting an agenda for a meeting scheduled for tomorrow, prepping a draft LOI based on a transcript of a Teams chat and my meeting notes, etc. Forcing me to initiate everything is (uncomfortably) like forcing me to micromanage a junior employee who isn't up to standards: it interrupts the complex work the AI tool cannot do for the lower value work it can.

I'm not saying I expect these tools to be at this level right now. I'm saying that level is where I will start to see these tools as anything more than an expensive and sometimes impressive gimmick. (And, for the record, Copilot's current integration into Office applications doesn't even meet that low bar.)

reply
leosanchez
20 days ago
[-]
I lived on a $200 monthly salary for 1.6 years. I guess AI will slowly be priced out of 3rd world countries.
reply
rafram
20 days ago
[-]
Any AI product sold for a price that's affordable on a third-world salary is being heavily subsidized. These models are insanely expensive to train, guzzle electricity to the point that tech companies are investing in their own power plants to keep them running, and are developed by highly sought-after engineers being paid millions of dollars a year. $20/month was always bound to be an intro offer unless they figured out some way to reduce the cost of running the model by an order of magnitude.
reply
andai
20 days ago
[-]
> unless they figured out some way to reduce the cost of running the model by an order of magnitude

Actually, OpenAI brags that they have done this repeatedly.

reply
paxys
20 days ago
[-]
We've been conditioned to pay $10/mo for an endless stream of glorified CRUD apps, but it is very common for specialized software to cost orders of magnitude more. Think Bloomberg Terminal, Cadence, Maya, lots of CAD software (like SOLIDWORKS), higher tiers of Adobe etc. all running in the thousands of dollars per user. And companies happily pay for them because of the value they add. ChatGPT isn't any different.
reply
beepbooptheory
20 days ago
[-]
Tangent. Does anybody have good tips for working in a company that is totally bought in on all this stuff, such that the codebase is a complete wreck? I am in a very small team, and I am just a worker, not a manager or anything. It has become increasingly clear that most if not all my coworkers rely on all this stuff so much. Spending hours trying to give benefit of the doubt to huge amounts of inherited code, realizing there is actually no human bottom to it. Things are merged quickly, with very little review, because, it seems, the reviewers can't really have their own opinion about stuff anymore. The idea of "idiomatic" or even "understandable" code seems foreign at this place. I asked why we don't use more structural directives in our angular frontend, and people didn't know what I was talking about!

I don't want the discourse, or tips on better prompts. Just tips for being able to interact with the more heavy AI-heads, to maybe encourage/inspire curiosity and care in the actual code, rather than the magic chatgpt outputs. Or even just to talk about what they did with their PR. Not for some ethical reason, but just to make my/our jobs easier. Because its so hard to maintain this code now, it is like truly a nightmare for me everyday seeing what has been added, what now needs to be fixed. Realizing nobody actually has this stuff in their heads, its all just jira ticket > prompt > mission accomplished!

I am tired of complaining about AI in principle. Whatever, AGI is here, "we too are stochastic parrots", "my productivity has tripled", etc etc. Ok yes, you can have that, I don't care. But can we like actually start doing work now? I just want to do whatever I can, in my limited formal capacity, to steer the company to be just a tiny bit more sustainable and maybe even enjoyable. I just don't know how to like... start talking about the problem I guess, without everyone getting super defensive and doubling down on it. I just miss when I could talk to people about documentation, strategy, rationale..

reply
whywhywhywhy
19 days ago
[-]
Found it better not to fight it; you can't really turn back the clock with people who have embraced it or become enamored by it. Part of the issue I've noticed is that it enables people who couldn't do a thing at all to do the most basic version of that thing. E.g. a CEO can now make a button appear in the app, and maybe it'll kinda work. They then assume this magic experience applies across the rest of coding, when, if you actually know how to code, making the button appear isn't the difficult part; it's the harder work that the AI can't really solve.

But really, you're never going to convince these people. If you're really passionate about coding, find a workplace with similarly minded people; if you really want to stay in this job, then embrace it, stop caring whether the codebase is good or maintainable, and just let the slop flow. It's the path of least resistance and stress. Trying to fight it and convince people is a losing and frustrating battle; take your passion for your work and invest it in a project outside work, or find a workplace where they appreciate it too.

reply
martinpw
20 days ago
[-]
> Things are merged quickly, with very little review

Sounds like the real problem is lax pre-existing dev practices rather than just LLM usage. If code is getting merged with little review, that is a big red flag right away. But the 'very little' gives some hope - that means there is some review?

So what happens when you see problems with the code and give review feedback and ask why things have been done the way they were done, or suggest alternative better approaches? That should make it clear first if devs actually understand the code they are submitting, and second if they are willing to listen to suggested improvements. And if they blow you off, and the tech leads on the project also don't care, then it sounds like a place you don't want to stick around.

reply
questinthrow
20 days ago
[-]
Question, what stops openai from downgrading existing models so that you're pushed up the subscription tiers to ever more expensive models? I'd imagine they're currently losing a ton of money supplying everyone with decent models with a ton of compute behind them because they want us to become addicted to using them right? The fact that classic free web searching is becoming diluted by low quality AI content will make us rely on these LLMs almost exclusively in a few years or so. Am I seeing this wrong?
reply
jjice
20 days ago
[-]
It's definitely not impossible. I think the increased competition they've begun to face over the last year is helping as a deterrent. If people notice GPT-4 sucks now and they can get Claude 3.5 Sonnet for the same price, they'll move. If the user doesn't care enough to move, they weren't going to upgrade anyway.
reply
SoftTalker
20 days ago
[-]
Also depends on the friction to move. I admittedly have not really started using AI in my work, so I don't know. Is it easy to replace GPT with Claude or do I have to reconfigure a bunch of integration and learn new usage?
reply
liamYC
20 days ago
[-]
It depends on the tool you use and I guess the use case too. Some are language-model agnostic, like aider in the command line; I use Sonnet sometimes and then 4o other times. I wonder if or when language models will become highly differentiated. Right now I see them more as a commodity, relatively interchangeable, but that is shifting slightly with other features as they battle to become platforms.
reply
drdrey
20 days ago
[-]
competition is what stops them from downgrading the existing stuff
reply
turblety
20 days ago
[-]
and is also exclusively the reason why Sam Altman is lying to governments about safety risks, so he can regulate out his competition.
reply
JanisErdmanis
20 days ago
[-]
They don’t need to downgrade what is already downgraded. In my experience ChatGPT was much more capable a year ago than it is now and has become more dogmatic. Their latest updates have focused on optimizing benchmark scenarios while reducing computation costs.
reply
mtmail
20 days ago
[-]
> I'd imagine they're currently losing a ton of money supplying everyone

I can't tell how much they lose but they also have decent revenue: "The company's annualized revenue topped $1.6 billion in December [2023]" https://www.reuters.com/technology/openai-hits-2-bln-revenue...

reply
swores
20 days ago
[-]
What's important, and I don't think has ever been revealed by OpenAI, is what the margin is on actual use of the models.

If they're losing money but just because they're investing billions in R&D, while only spending a few hundred million to serve the use that's bringing in $1.6B, then it would be a positive story despite the technical loss, just like Amazon's years of aggressive growth at the cost of profits.

But if they're losing money because the server costs needed for the use that brings in $1.6B are $3B then they've got a scaling problem until they either raise prices or lower costs or both.
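
To make the two scenarios concrete, a quick sketch (both serving-cost figures are the hypotheticals above, not reported numbers):

    fun main() {
        val revenue = 1.6e9  // annualized revenue reported for Dec 2023
        // scenario 1: usage is cheap to serve and losses come from R&D
        // scenario 2: the same usage costs $3B to serve
        for ((label, servingCost) in listOf("cheap" to 0.3e9, "costly" to 3.0e9)) {
            val margin = (revenue - servingCost) / revenue * 100
            println("$label-to-serve case: gross margin ${margin}%")  // ~81% vs ~-88%
        }
    }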

reply
derac
20 days ago
[-]
competition?
reply
adamtaylor_13
20 days ago
[-]
Oh, you mean what they did with GPT-4 to make o1 look better and then push everyone to Anthropic?

Eh… probably everyone moving to Anthropic.

reply
subroutine
20 days ago
[-]
Part of my justification for spending $20 per month on ChatGPT Plus was that I'd have the best access to the latest models and advanced features. I'll probably roll back to the free plan rather than pay $20/mo for mid tier plan access and support.
reply
syndicatedjelly
20 days ago
[-]
This is like selling your Honda Civic out of anger because they launched a new NSX
reply
azemetre
20 days ago
[-]
Not really the same; one you can own and repair, the other you just lease. People cancel leases all the time.
reply
LeoPanthera
20 days ago
[-]
That's a weird reaction. You're not getting any less for your $20.
reply
subroutine
20 days ago
[-]
In the past, $20 got me the most access to the latest models and tools. When OpenAI rolled out new advanced features, the $20 per month customers always got full / first access. Now the $200 per month customers will have the most access to the latest models and tools, not the (now) mid/low tier customers. That seems like less to me.
reply
karaterobot
20 days ago
[-]
They probably didn't pay for access to a certain version of a model, they paid for access to the best available model, whatever that is at any given moment. I'm reasonably sure that is even what OpenAI implied (or outright said) their subscription would get them. Now, it's the same amount of money for access to the second best model, which would feel like a regression.
reply
ComplexSystems
20 days ago
[-]
For now.
reply
replwoacause
18 days ago
[-]
Really? Because that’s how I feel too.
reply
FactKnower69
19 days ago
[-]
Did you read the post you're replying to? It's very short. He was paying for top-tier service, and now, despite paying the same amount, has become a second-class customer overnight.
reply
afro88
20 days ago
[-]
reply
Oras
20 days ago
[-]
It does not say anything about real use cases. It performs better and "reasons" better than o1-preview and o1. But I was expecting some real-life scenarios where it would be useful in a way no other model can manage right now.
reply
ImPostingOnHN
20 days ago
[-]
I imagine the system prompt is something along the lines of, 'think about 10% harder than standard O-1'
reply
anticensor
19 days ago
[-]
More like 3× the iterations and depth of the tree-of-thoughts search in pro mode.
reply
FactKnower69
19 days ago
[-]
for every tier that costs 10x more than the previous, they add a "very" to the "You are a very, very, very smart AI"
reply
throwuxiytayq
20 days ago
[-]
The point of this tech is that with scale it usually gets better at all of the tasks.
reply
inerte
20 days ago
[-]
Not a lot of companies, when announcing their most expensive product, have the bravery to give 10 of them away to help cure cancer. Well played, OpenAI. Fully expect Apple now to give Peter Attia an iPhone 17 Pro so humanity can live forever.
reply
FactKnower69
19 days ago
[-]
little did we know that all it would take to cure cancer was $2000 in store credit to Google But Sometimes It Lies (Professional™ Edition)
reply
thih9
20 days ago
[-]
What does unlimited use mean in practice? Can I build a chatbot and make it publicly available and free?

Edit: looks like no, there are restrictions:

> usage must adhere to our Terms of Use, which prohibits, among other things:

> Abusive usage, such as automatically or programmatically extracting data.

> Sharing your account credentials or making your account available to anyone else.

> Reselling access or using ChatGPT to power third-party services.

> (…)

Source: https://help.openai.com/en/articles/9793128-what-is-chatgpt-...

reply
ilaksh
20 days ago
[-]
People saying this is a "con" have no understanding of the cost of compute. o1 is expensive and gets more expensive the harder the problems are. Some people could use $500 or more via the API per month. So I assume the $200 price point for "unlimited" is set that high mainly because it's too easy for people to use up $100 or $150 worth of resources.
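
A back-of-envelope sketch of how a heavy user gets there (the daily token volumes are hypothetical; roughly $15 per 1M input tokens and $60 per 1M output tokens were o1-preview's published API rates, and the hidden reasoning tokens bill as output):

    fun main() {
        val inputTokensPerDay = 500_000.0   // hypothetical heavy usage
        val outputTokensPerDay = 200_000.0  // includes hidden reasoning tokens
        val dailyCost = inputTokensPerDay / 1e6 * 15.0 + outputTokensPerDay / 1e6 * 60.0
        println("About \$${dailyCost * 30} per month")  // ~$585
    }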
reply
veidr
20 days ago
[-]
Is anybody else tempted to sign up for this just for personal use? I've found ChatGPT-o1 Preview to be so helpful — which was absolutely not the case for me with any previous models (or Claude 3.5) — that the concept of having "unlimited" usage of o1 is pretty intriguing.

I recently used it to buy some used enterprise server gear (which I knew nothing about) and it saved me hours of googling and translating foreign-language ads etc. That conversation stretched across maybe 10 days, and it kept the context the whole time. But then I ran out of "preview" tokens and it got dumb and useless again. (Or maybe the conversation exceeded the context window, I am not really sure.)

But that single conversation used up the entire amount of o1 tokens that come with my $20/month ChatGPT Plus account. I am not sure that I have 10x that number of things for it to help me with each month, and where I live $200 is a not-insignificant amount, but... tempting.

reply
hypoxia
20 days ago
[-]
I did, and then promptly used it for 2 hours straight. It's excellent. Going to save me so much time.
reply
interludead
20 days ago
[-]
At $200 a month, it’s a significant investment...
reply
freedomben
20 days ago
[-]
The price feels outrageous, but I think the unsaid truth of this is that they think o1 is good enough to replace employees. For example, if it's really as good at coding as they say, I could see this being a point where some people decide that a team of 5 devs with o1 pro can do the work of 6 or 7 devs without o1 pro.
reply
tedsanders
20 days ago
[-]
No, o1 is definitely not good enough to replace employees.

The reason we're launching o1 pro is that we have a small slice of power users who want max usage and max intelligence, and this is just a way to supply that option without making them resort to annoying workarounds like buying 10 accounts and rotating through their rate limits. Really it's just an option for those who'd want it; definitely not trying to push a super expensive subscription onto anyone who wouldn't get value from it.

(I work at OpenAI, but I am not involved in o1 pro)

reply
kapilkale
20 days ago
[-]
I wish the second paragraph was the launch announcement
reply
MP_1729
20 days ago
[-]
My intern, on their 3rd day, still couldn't produce a script that o1-preview could do in less than 25 prompts.

OBVIOUSLY a smart OAI employee wouldn't want the public to think they are already replacing high-level humans.

And OBVIOUSLY OAI senior management will want to try to convince AI engineers who might have second thoughts about their work that they aren't developing a replacement for human beings.

But they are.

reply
vander_elst
20 days ago
[-]
> 25 prompts

Interested to learn more, is that the usual break even point?

reply
MP_1729
20 days ago
[-]
25 prompts is the daily limit on o1-preview. And I wrote that script in just one day.
reply
TrackerFF
20 days ago
[-]
Good enough to replace very junior employees.

But, then again, how are companies going to get senior employees if the world stops producing juniors?

reply
freedomben
19 days ago
[-]
Indeed, I'm very concerned about this. Though I think it's a case of tragedy of the commons. Every company individually optimizes for themselves, fucking us over in the aggregate. But I think any executive arguing for this would have to be at a pretty big company with an internal pipeline and promotion from within to justify it, especially since everyone else will just poach your cultivated talent, and employees aren't loyal anymore (nor should they be, but that's a different discussion).
reply
015a
20 days ago
[-]
Maybe someone at OAI should have considered the optics of leading the "12 days of product releases" with this, then.
reply
airstrike
20 days ago
[-]
> The reason we're launching o1 pro is that we have a small slice of power users who want max usage and max intelligence

I'd settle for knowing what level of usage and intelligence I'm getting instead of feeling gaslighted with models seemingly varying in capabilities depending on the time of day, number of days since release and whatnot

reply
belter
20 days ago
[-]
[flagged]
reply
vundercind
20 days ago
[-]
Yeah, to be fair, there exist employees (some of whom are managers) who could be replaced by nothing at all; their absence would improve productivity. So the bar for “can this replace any employees at all?” is potentially so low that, technically, cat’ing from /dev/null can clear it, if you must have a computerized solution.

Companies won’t be able to figure those cases out, though, because if they could they’d already have gotten rid of those folks and replaced them with nothing.

reply
hmmm-i-wonder
20 days ago
[-]
Unfortunately I'm seeing that in my company already. They are forcing AI tools down our throats and execs are vastly misinterpreting stats like '20% of our code is coming from AI'.

What that means is the simple, boilerplate, repetitive stuff is being generated by LLMs, but for anything complex, or involving more than a single simple problem, LLMs often create more problems than benefit. Effective devs are using them to handle the simple stuff, and execs are thinking 'the team can be reduced by x', when in reality you can at best get rid of your most junior and least trained people without losing key abilities.

Watching companies try to sell their AIs and "Agents" as having the ability to reason is also absurd, but the non-technical managers and execs are eating it up...

reply
jasode
20 days ago
[-]
>The price feels outrageous,

I haven't used ChatGPT enough to judge what a "fair price" is but $200/month seems to be in the ballpark of other "software-tools-for-highly-paid-knowledge-workers" with premium pricing:

- mathematicians: Wolfram Mathematica is $154/mo

- attorneys: WestLaw legal research service is ~$200/month with common options added

- engineers for printed circuit boards : Altium Designer is $355/month

- CAD/CAM designers: Siemens NX base subscription is $615/month

- financial traders : Bloomberg Terminal is ~$2100/month

It will be interesting to see if OpenAI can maintain the $200/month pricing power like the sustainable examples above. The examples in other industries have sustained their premium prices even though there are cheaper less-featured alternatives (sometimes including open source). Indeed, they often increase their prices each year instead of discount them.

One difference from them is that OpenAI has much more intense competition than those older businesses.

reply
shipp02
20 days ago
[-]
This is a really interesting take. I don't think individuals pay for these subscriptions though, it's usually an organizational license.

They also come with extensive support, documentation and people have vast experience using them. They are also integrated very well into all the other tools of the field. This makes them very entrenched. I am not sure OpenAI has any of those things. I also don't know what those things would entail for LLMs.

Maybe they need to add modes that are good for certain tasks or integrate with tools that their users most commonly use like email, document processors.

reply
onlyrealcuzzo
20 days ago
[-]
That'll work out nicely when you have 5 people learning nothing and just asking GPT to do everything and then you have a big terrible codebase that GPT can't effectively operate on, and a team that doesn't know how to do anything.

Bullish

reply
DoingIsLearning
20 days ago
[-]
Sounds like a great market opportunity for consulting gigs to clean up the aftermath at medium size companies.
reply
drpossum
20 days ago
[-]
This is how I have made my living for years, and that was before AI
reply
disqard
20 days ago
[-]
I'm rooting for this to happen at scale.

It'll be an object lesson in short-termism.

(and provide some job security, perhaps)

reply
vundercind
20 days ago
[-]
No lessons will be learned, but it’ll provide for some sweet, if unpleasant, contract gigs.
reply
portaouflop
20 days ago
[-]
I think that would be a great outcome - more well paid work for everyone cleaning up the mess
reply
nine_k
20 days ago
[-]
Suppose an employee costs a business, say, $10k/mo; it's 50 subscriptions. Can giving access to the AI to, say, 40 employees improve their performance enough to avoid the need of hiring another employee? This does not sound outlandish to me, at least in certain industries.
reply
griomnib
20 days ago
[-]
That’s the wrong question. The only question is “is this price reflective of 10x performance over the competition?”. The answer is almost definitely no.
reply
rahimnathwani
20 days ago
[-]
It doesn't have to be 10x.

Imagine you have two options:

A) A $20/month service which provides you with $100/month of value.

B) A $200/month service which provides you with $300/month of value.

A nets you $80, but B nets you $100. So you should pick B.

reply
acchow
20 days ago
[-]
Consider a $350k/year engineer.

If Claude increases their productivity 5% ($17.5k/yr), but CGPT Pro adds 7% ($24.5k), that's an extra $7k in productivity, which more than makes up for the $2400 annual cost. 10x the price, but only 40% better, but still worth it.
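
The same arithmetic as a sketch (all figures are the hypotheticals above):

    fun main() {
        val salary = 350_000.0          // hypothetical engineer cost per year
        val claudeGain = 0.05 * salary  // $17,500/yr at a 5% uplift
        val proGain = 0.07 * salary     // $24,500/yr at a 7% uplift
        val proCost = 200.0 * 12        // $2,400/yr subscription
        println("Net extra value of Pro: \$${proGain - claudeGain - proCost} per year")  // $4,600
    }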

reply
numbsafari
20 days ago
[-]
If I’m understanding their own graphs correctly, it’s not even 10x their own next lowest pricing tier.
reply
drooby
20 days ago
[-]
In a hypothetical world where this was integrated with code reviews, and minimized developer time (writing valid/useful comments), and minimized bugs by even a small percentage... $200/m is a no-brainer.

The question is - how good is it really.

reply
uoaei
20 days ago
[-]
That sounds very much like the first-order reaction they'd expect from upper and middle management. Artificially high prices can give the buyer the feeling that they're getting more than they really are, as a consequence of the sunk cost fallacy. You can't rule out that they want to dazzle with this impression even if eval metrics remain effectively the same.
reply
itissid
20 days ago
[-]
I think the key is to have a strong goal. If the developer knows what they want but can't quite get there, then even if it gives the wrong answer you can catch it. Then use the resulting code to improve your productivity.

Last week I was using Jetpack Compose (which is a React-like framework). A cardinal sin in Jetpack Compose is to change a state variable that a composable also mutates, based on a non-user/UI action. This is easy enough to understand for toy examples, but in more complex systems one can make this mistake. o1-preview made this mistake last week, and I caught it. On prompting it with the stack trace, it did not immediately catch the problem and recommended a solution that committed the same error. When I actually gave it the documentation on the issue, it caught on and made the variable a user preference instead. I used the user-preference code in my app instead of coding it by myself. It worked well.
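
For anyone unfamiliar with Compose, a minimal sketch of the anti-pattern (the names are illustrative, not from my app; the commented fix is the standard one, not the user-preference route I ended up taking):

    import androidx.compose.material3.Text
    import androidx.compose.runtime.*

    @Composable
    fun SyncBadge(lastSync: Long) {
        var syncCount by remember { mutableStateOf(0) }

        // BAD: writing state during composition from a non-user/UI input. The
        // write invalidates this composable, which re-runs and writes again,
        // producing an endless recomposition loop.
        syncCount++

        // The standard fix: hoist the write into an effect keyed on the input,
        // so it runs once per change of lastSync, not on every recomposition:
        // LaunchedEffect(lastSync) { syncCount++ }

        Text("Synced $syncCount times")
    }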

reply
greenthrow
20 days ago
[-]
It is not good enough to replace workers of a skill level I would hire. But that won't stop people doing it.
reply
hccb
20 days ago
[-]
I am not so sure about "replace". At least at my company we are always short-staffed (mostly because we can't find people fast enough, given how long the whole interview cycle takes). It might actually free some people up to do more interviews.
reply
freedomben
20 days ago
[-]
That's a great point actually. Nearly everywhere (us included) is short-staffed (and by that I mean we don't have the bandwidth to build everything we want to build), so perhaps it's not a "reduce the team size" but rather a "reduce the level of deficit."
reply
vouaobrasil
20 days ago
[-]
And the fact that ordinary people sanction this by supporting OpenAI is outrageous.
reply
jrflowers
20 days ago
[-]
> It also includes o1 pro mode, a version of o1 that uses more compute to think harder

I like that this kind of verifies that OpenAI can simply adjust how much compute a request gets and still say you’re getting the full power of whatever model they’re running. I wouldn’t be surprised if the amount of compute allocated to “pro mode” is more or less equivalent to what was the standard free allocation given to models before they all got mysteriously dramatically stupider.

reply
ActionHank
20 days ago
[-]
They are just feeding the sausage back into the machine over and over until it is more refined.
reply
jrflowers
20 days ago
[-]
It is amazing that we are giving billions of dollars to a group of people that saw Human Centipede and thought “this is how we will cure cancer or make some engineering tasks easier or whatever”
reply
throwaway314155
20 days ago
[-]
This was part of the premise of o1 though, no? By encouraging the model to output shorter/longer chains of thought, you can scale model performance (and costs) down/up at inference time.
reply
bn-l
20 days ago
[-]
I think from this fine print there will be a quota with o1 pro:

> This plan includes unlimited access to our smartest model, OpenAI o1, as well as to o1-mini, GPT-4o, and Advanced Voice. It also includes o1 pro mode,

reply
anticensor
19 days ago
[-]
Expectedly so, because each query to o1-pro makes it spend a small university's worth of energy for about 1 minute just to answer you.
reply
lenerdenator
20 days ago
[-]
Thing better find a way to make my hair grow back at that price.

Of course, I'm not the target market.

Some guy who wants to increase his bonus by laying off a few hundred people weeks before the holidays is the target market.

reply
amazingamazing
20 days ago
[-]
Let’s see if those folks saying they’ve doubled their productivity will pay.
reply
crindy
20 days ago
[-]
Seems like I'm one of very few excited by this announcement. I will totally pay for this - the o1-preview limits really hamper me.
reply
sangeeth96
20 days ago
[-]
What do you mostly use it for?
reply
unshavedyak
20 days ago
[-]
I've not found value anywhere remotely close to this lol, but I'd buy it to experiment if they had a solid suite of tooling. I.e. an LSP that offered real value, maybe a side-monitor assistant that helped me with the code in my IDE of choice, etc.

At $200/m merely having a great AI (if it even is that) without insanely good tooling is pointless to me.

reply
torginus
20 days ago
[-]
I don't know about you, but I get to solve algorithmic challenges relevant to my work approximately once per week to once per month. Most of my job consists of gluing together various pieces of tech that are mostly commodity.

For the latter, Claude is great, but for the former, my usage pattern would be poorly served by something that costs $200 and I get to use it maybe a dozen times a month.

reply
unshavedyak
19 days ago
[-]
For me, I feel like most of my time is spent inventing bespoke solutions in existing infra. It's less about algorithms and more about making things work in an existing complex code base: which option will have the most negative impact, which the best, which is most performant, etc.

A lot of tradeoffs to evaluate and it can be tiring onboarding people, let alone onboarding an AI.

Maybe it would massively improve my job if the AI could just grab the whole codebase, but we're not there yet. Too many LOC, too much legal BS, etc.

reply
owenversteeg
20 days ago
[-]
LLMs have significantly increased my productivity, but in this case it'd be about the increase in productivity over the existing Plus plan. I mainly use them for generating or improving code, learning about things, and running estimates.

How much better will this be for my uses? Based on my experience with o1, the answer is "fairly marginal". To me, o1 is worse than the regular model or Claude on most things, but it's best for something non-numeric that requires deep thought or new insights. I'm sure there are some people who got a huge productivity boost from o1. This plan is for those people.

reply
creesch
20 days ago
[-]
From what I have seen a lot of people who make these claims seem to be people who are working at a level where there is a lot of text being produced that nobody actually cares to read.

That, or I am actually a much better developer and writer than I thought. Because while LLMs certainly have become useful tools to me. They have not doubled my productivity.

reply
kraftman
20 days ago
[-]
I think it increases my productivity, but I'm also not really hitting limits with it, so it's hard to justify going from $20 to $200.
reply
adastra22
20 days ago
[-]
Why would I when I can get a better LLM elsewhere for 1/10th the cost?
reply
dankwizard
20 days ago
[-]
I bought it. My first prompt? "To prove you're smarter, tell me how I can get this Pro plan for $10 instead of $200".

It answered.

I've been in contact with OpenAI and if they decline, I guess their AI isn't that smart. If they accept, a win for me.

Stonks.

reply
andai
20 days ago
[-]
reply
xianshou
20 days ago
[-]
$200 per month means it must be good enough at your job to replicate and replace a meaningful fraction of your total work. Valid? For coding, probably. For other purposes I remain on the fence.
reply
015a
20 days ago
[-]
The reality is more like: The frothy american economy over the past 20 years has created an unnaturally large number of individuals and organizations with high net worth who don't actually engage in productive output. A product like ChatGPT Pro can exist in this world because it being incapable of consistent, net-positive productive output isn't actually a barrier to being worth $200/month if consistent net-positive productive output isn't also demanded of the individual or organization it is augmenting.

The macroeconomic climate of the next ~ten years is going to hit some people and companies like a truck.

reply
airstrike
20 days ago
[-]
> The macroeconomic climate of the next ~ten years is going to hit some people and companies like a truck.

Who's to say the frothy American economy doesn't last another 50 years while the rest of the world keeps limping along?

reply
talldayo
20 days ago
[-]
At the risk of putting too fine a point on it, I'd probably say China.
reply
airstrike
20 days ago
[-]
China is struggling. Low growth, high unemployment, huge debt from infrastructure spending, weakening capital markets and real estate...
reply
lossolo
20 days ago
[-]
> Low growth

For 2024 the prediction is 2.6% for the US and 4.8% for China. I don't see how that's low compared to the US.

> high unemployment

5.1% China vs 4.1% USA

> huge debt from infrastructure spending

What do you mean by "huge" and compared to whom? The U.S. is currently running a $2 trillion deficit per year, which is about 6% of GDP, with only a fraction allocated to investments.

> weakening capital markets and real estate

China's economy operates differently from that of the U.S. Currently, China records monthly trade surpluses ranging between $80 billion and $100 billion. The real estate sector indeed presents challenges, leading the government to inject funds into local governments to manage the resulting debt. The effectiveness of these measures remains to be seen.

There is a lot of wishful thinking on HN regarding the rivalry between China and the U.S.

reply
airstrike
19 days ago
[-]
The comparison is not between the US and China. I don't understand why people keep making that comparison when it's not at all apples-to-apples. It's featured in headlines constantly, but it's honestly a stick measuring contest. For starters, the US is a free economy and China is a centrally planned one. There's significant chatter about China's numbers being massaged to suit the state's narrative, leaving would-be investors extra cautious, whereas in the US data quality and availability is state-of-the-art.

The real questions are: can China deliver on long term expectations for its economy? Do the trends support the argument that it will become a leading developed economy? I don't think they do. If they don't, then is it an issue with the current economy plan that can be solved with a better plan or is it a systemic issue that can't be solved in the near to medium term? These are way more useful questions than "who's going to win the race?"

>> Low growth

> For 2024 prediction is 2.6% US and 4.8% China. I don't see how it's low compared to US.

China is growing slower than historically and slower than forecasts, which had it at 5%. Look at this chart and tell me if it paints a rosy picture or a problematic one: https://img.semafor.com/5378ad07f43bc81f65ab92ddc19ec5899dc9...

>> high unemployment

> 5.1% China vs 4.1% USA

Again, comparing China to China, it's generally increasing every year: https://www.macrotrends.net/global-metrics/countries/chn/chi...

Youth unemployment is basically skyrocketing: https://www.macrotrends.net/global-metrics/countries/chn/chi...

>> huge debt from infrastructure spending

> What do you mean by "huge" and compared to whom?

To answer in reverse: yes, the US also has a debt problem. That doesn't make the China problem less of an issue. The china debt crisis has been widely reported and is related to the other point about real estate. Those articles will definitely do a better job of explaining the issue than me, so here's just one: https://www.reuters.com/breakingviews/chinas-risky-answer-wa...

> There is a lot of wishful thinking on HN regarding the rivalry between China and the U.S

I'm arguing there's no rivalry. Different countries, different problems, different scales entirely. China is in dire straits and I don't expect it to recover before the crisis gets worse.

reply
lossolo
19 days ago
[-]
> For starters, the US is a free economy and the China is a centrally planned one.

USSR was a centrally planned economy, China is not. Do you mean subsidies (like the IRA and CHIPS Act in the US) for certain industries, which act as guidance to local governments and state banks? Is that what you call "centrally planned"?

> can China deliver on long term expectations for its economy? Do the trends support the argument that it will become a leading developed economy? I don't think they do. If they don't, then is it an issue with the current economy plan that can be solved with a better plan or is it a systemic issue that can't be solved in the near to medium term?

That's your opinion that they can't, and it's your right to have one. There were people 10 years ago saying exactly what you’re saying now. Time showed they were wrong.

Here is a famous article: https://hbr.org/2014/03/why-china-cant-innovate

And here we are 10 years later:

https://itif.org/publications/2024/09/16/china-is-rapidly-be...

https://www.economist.com/science-and-technology/2024/06/12/...

> China is growing slower than historically and slower than forecasts which were it at 5%. Look at this chart and tell me if it points a rosy picture or a problematic one:

Oh come on, 4.8% vs. 5%? As for the chart, it's the most incredible growth in the history of mankind. No country has achieved something like this. It's fully natural for it to decline in percentage terms, especially when another major power is implementing legislation to curb that growth, forcing capital outflows, imposing technology embargoes, etc.

> China is in dire straits and I don't expect it to recover before the crisis gets worse.

Time will tell. What I can say is that over the last 20 centuries, in 18 of them, China was the cultural and technological center of the world. So from China’s perspective, what they are doing now is just returning to their natural state. In comparison, the US is only 2 centuries old. Every human organization, whether a company or state, will sooner or later be surpassed by another human creation, there are no exceptions to this rule in all of human history. We have had many empires throughout our history. The Roman Empire was even greater at its peak than the US is now, and there were also the British Empire, the Spanish Empire, etc. Where are they now? Everything is cyclical. All of these empires lasted a few centuries and started to decline after around 200-250 years, much like the US now.

> I'm arguing there's no rivalry.

Come on, there is obvious rivalry. Just listen to US political elites and look at their actions—legislation. It's all about geopolitics and global influence to secure their own interests.

reply
phil917
20 days ago
[-]
Don't forget the massive population decline set to literally halve the population in the next 70 years...
reply
lossolo
20 days ago
[-]
I wouldn’t consider it a major problem, especially with the coming robotic revolution. Even if the population declines by half, that would still leave 700 million people, roughly twice the population of the U.S. According to predictions, the first signs of demographic challenges are expected to appear about 15–20 years from now. That’s a long time, and a lot can change in two decades. Just compare the world in 2004 to today.

It's a major mistake to underestimate your competition.

reply
airstrike
19 days ago
[-]
> the coming robotic revolution

That's a long ways out. We're barely past the first innings of the chatbot revolution and it's already struggling to keep going. Robotics are way more complex because physics can be cruel.

reply
lossolo
19 days ago
[-]
https://www.physicalintelligence.company/blog/pi0?blog

Show me what was possible 20 years ago versus what we can do now. I think you have enough imagination to envision what might be possible 20 years from now.

reply
SoftTalker
20 days ago
[-]
I don't really follow this line of thinking. $200 is nothing—nothing—in the context of the fully loaded cost of an employee for a month (at least, for any sort of employee who would benefit from using an LLM).
reply
avgDev
20 days ago
[-]
I don't trust it for coding either.
reply
bg24
20 days ago
[-]
I wonder who came up with the $200/month idea, and what was running in their mind.

$200/month = $2400/year

We (consumers/enterprises) are already accustomed to a baseline price. Their model quality will be matched or exceeded by open source in ~6 months. So if I find it difficult to justify paying $20/month, why should I even think about $200/month?

Probably the thought process was that they can package all the great things (text, voice, video, images) into one experience. The problem is that very few people use everything. Most of the time, the use cases are limited. Someone wants to use it for coding, while someone else (an artist) wants to use it for Sora. OpenAI had an opportunity to introduce a la carte pricing and then go to bundling. My hypothesis is that they will have very few takers at $200 for the bundle.

Enterprises - did they interview enterprises enough to see if they need user licenses for the bundles? Maybe they will give it at 80% or 90% discount to drive adoption.

Disclosure: I am on Claude, Grok 2/X Pro, Cursor Personal, and Github Copilot enterprise. My ChatGPT monthly subscription expires in a week, and I will not renew for now and see the user vibes before deciding. I have limited brain power to multitask between so many models, and I will rather give a chance to Gemini Pro for 6 months.

reply
Horffupolde
20 days ago
[-]
When compared to an employee $200 is peanuts.
reply
anticensor
19 days ago
[-]
Not in Eastern Europe or Asia.
reply
Horffupolde
18 days ago
[-]
It doesn’t have to work everywhere to work.
reply
anticensor
18 days ago
[-]
No, you definitely have to replace minimum wage devs in India.
reply
netcraft
20 days ago
[-]
With the release of Nova earlier this week, that's even cheaper (I haven't had a chance to really play with it yet to see how good it is). I've been thinking more about what happens when intelligence gets "too cheap to meter", but this def feels like a step in the other direction!

Still though, if you were able to actually utilize this, is it capable of replacing a part-time or full-time employee? I think that's likely

reply
benbristow
20 days ago
[-]
Thought this was relating to Nova AI at first which confused me as it is just an OpenAI wrapper - https://novaapp.ai

I see you mean Amazon's Nova - https://www.aboutamazon.com/news/aws/amazon-nova-artificial-...

reply
throwaway314155
20 days ago
[-]
Something about Amazon just makes me assume any LLM they come out with is half-baked.
reply
msoad
20 days ago
[-]
I paid $200 but it can't figure out something this simple. Sonnet 3.5 gets this question right

https://chatgpt.com/share/67528a78-c61c-8007-b6fa-1d1deb8d84...

I'm going to give it one month. So far I'm inclined to not pay this crazy fee if the performance is like this

reply
mordae
20 days ago
[-]
Right. Sonnet 3.5, yesterday. Converted a Woodpecker pipeline into Forgejo, then modified it to use buildah instead of docker. That's the baseline right now.

The tough questions come when one asks what shadow shapes a single light can be expected to cast on faces inside a regular voxel grid. That's where it just holds the duck.

reply
throwaway314155
20 days ago
[-]
Am I reading that correctly? You're using GPT 4o-mini? Why not try the more impressive o1 model?

Not that I think any of these would be worth it for me.

reply
msoad
20 days ago
[-]
I am using o1 Pro in that chat
reply
nox101
20 days ago
[-]
I know this is no different than any other expert situation in life. $ buys the best lawyers, the best doctors, the best teachers.... But I personally interact with a lawyer less than once every few years. A doctor a couple of times a year. Teachers almost never as an adult.

But now $ buys better (teacher/lawyer/doctor/scientist) type thing that I use daily.

reply
jpcom
20 days ago
[-]
$20 a month is reasonable because a computer can do in an hour what a human can do in a month. Multiplying that by ten would suggest that the world is mostly made of Python and that the solution space for those programs has been "solved." GPT is still not good at writing Clojure and paying 10x more would not solve this problem for me.
reply
zebomon
20 days ago
[-]
As of last week, it was incapable of writing any useful Prolog as well.
reply
kristianp
20 days ago
[-]
Are there any functional languages it's good at?
reply
jpcom
20 days ago
[-]
Haskell, apparently; it has hundreds of millions of lines that GPT was trained on
reply
hamilyon2
20 days ago
[-]
I think they should hire an economist or ask their superintelligence about the demand. The market is very shallow and nobody has any kind of moat. There is simply not enough math problems out there to apply it to. 200$ price tag really makes no sense to me unless this thing also cooks hot meals. I may be buying it for 100$ though.
reply
HPMOR
20 days ago
[-]
For USD, the "$" goes in front of the denomination. So your comments should be $200 price tag, and $100 respectively. Apologies for being pedantic, just trying to make sure the future LLMs will continue to keep it this way.
reply
Max-q
20 days ago
[-]
$200 is two man hours. So if you save two hours a month, you are breaking even.
reply
lasermike026
20 days ago
[-]
That doesn't increase my salary. It just means my boss will expect more work. $2400 a year. No deal.
reply
esafak
20 days ago
[-]
He's the one who should be paying for it.
reply
bionhoward
20 days ago
[-]
No, because the terms imply you cannot actually use the output for any business purpose. How does this sail over so many people’s heads ???

If they do everything and you can’t use their stuff to compete with them, you can’t do anything with their stuff.

That, plus the time cost, and the fact they’re vigorously brain raping this shit out of you every time you use the thing, means it’s worth LESS THAN zero dollars

(unless your work does not compete with intelligence, in which case, please tell me what that is)

reply
EgoIncarnate
20 days ago
[-]
From https://openai.com/policies/terms-of-use/ "Ownership of content. As between you and OpenAI, and to the extent permitted by applicable law, you (a) retain your ownership rights in Input and (b) own the Output. We hereby assign to you all our right, title, and interest, if any, in and to Output. "

Why couldn't you use its output for business purposes?

reply
iagooar
20 days ago
[-]
I am using Claude.ai more these days, but the limitations for paying accounts apply to ChatGPT as well.

I find it a terrible business practice to be completely opaque and vague about limits. Even worse, the limits seem to be dynamic and change all the time.

I understand that there is a lot of usage happening, but most likely it means that the $20 per month is too cheap anyway, if an average user like myself can so easily hit the limits.

I use Claude for work, I really love the projects where I can throw in context and documentation and the fact that it can create artifacts like presentation slides. BUT because I rely on Claude for work, it is unacceptable for me to see occasional warnings coming up that I have reached a given limit.

I would happily pay double or even triple for a non-limited experience (or at least know what limit I get when purchasing a plan). AI providers, please make that happen soon.

reply
accrual
20 days ago
[-]
> I find it a terrible business practice to be completely opaque and vague about limits. Even worse, the limits seem to be dynamic and change all the time.

Here are some things I've noticed about this, at least in the "free" tier web models since that's all I typically need.

* ChatGPT has never denied a response but I notice the output slows down during increased demand. I'd rather have a good quality response that takes longer than no response. After reaching the limit, the model quality is reduced and there's a message indicating when you can resume using the better model.

* Claude will pop up messages like "due to unexpected demand..." and will either downgrade to Haiku or reject the request altogether. I've even observed Claude yanking responses back; it will be mid-way through a function and it just disappears and asks to try again later. Like ChatGPT, eventually there's a message about your quota freeing up at a later time.

* Copilot, at least the free tier found on Bing, tells you how many responses you can expect in the form of a "1/20" status text. I rarely use Copilot or Bing but it demonstrates it's totally possible to show this kind of status to the user - ChatGPT and Claude just prefer to slow down, drop model size, or reject the request.

It makes sense that the limits are dynamic though. The services likely have a somewhat fixed capacity but demand will ebb and flow, so it makes sense to expand/contract availability on free tiers and perhaps paid tiers as well.

reply
KTibow
20 days ago
[-]
I believe the "1/20" indicator on Copilot was added back when it was unhinged, to try to prevent users from getting it to act up, and it has been removed in the latest redesign.
reply
sdwr
20 days ago
[-]
If you go through the API (with ChatGPT at least), you pay per request and are never limited. I personally hate the feeling of being nickel-and-dimed, but it might be what you are looking for.
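
For illustration, a minimal sketch of the pay-per-request route using the official openai Python SDK (the model name and prompt here are placeholders; you pay per input/output token instead of a flat fee):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="o1-preview",  # placeholder: whichever model you want to pay for per-call
        messages=[{"role": "user", "content": "Explain IPsec NAT traversal."}],
    )
    print(response.choices[0].message.content)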
reply
adastra22
20 days ago
[-]
It’s insane to me that they don’t have a “pay $10 to have this temporary limit lifted” microtransaction model. They are leaving money on the table.
reply
treme
20 days ago
[-]
they are optimizing for new accounts/market share over short-term revenue
reply
adastra22
20 days ago
[-]
Which pushes customers to other services when they are unable to provide.
reply
eknkc
20 days ago
[-]
They seem to lack capacity at the moment though
reply
adastra22
20 days ago
[-]
Which price discovery tools would fix.
reply
anticensor
19 days ago
[-]
No, it's energy bound.
reply
tiahura
20 days ago
[-]
Or the reverse, slow reasoning.
reply
extr
20 days ago
[-]
Yeah it's crazy to me you can't just 10x your price to 10x your usage (since you could kind of do this manually by creating more accounts). I would easily pay $200/month for 10x usage - especially now with MCP servers where Claude Desktop + vanilla VS Code is arguably more effective than Cursor/Windsurf.
reply
dennisy
20 days ago
[-]
Oh, very intriguing! Could you please elaborate on how you are using MCP servers with VS Code for coding?
reply
extr
20 days ago
[-]
Personally I'm using the Filesystem server along with the mcp server called wcgw[0] that provides a FileEdit action. I use MacWhisper[1] to dictate. I use `tree` to give Claude a map of the directory I'm interested in editing. I usually opt to run terminal commands myself for better control though wcgw does that too. I keep the repo open in a Cursor/Windsurf window for other edits I need.

But other than that I basically just tell the model what I want to do and it does it, lol. I like the Claude Desktop App interface better than trying to do things in Cursor/Windsurf directly, I like the ability to organize prompts/conversations in terms of projects and easily include context. I also honestly just have a funny feeling that the Claude web app often performs better than the API responses I get from the IDEs.

[0] https://github.com/rusiaaman/wcgw

[1] https://goodsnooze.gumroad.com/l/macwhisper
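
If `tree` isn't available, here is a rough Python stand-in for producing that directory map (a sketch, not part of the commenter's setup; `dir_map` is a made-up helper):

    import os

    def dir_map(root: str, max_depth: int = 2) -> str:
        """Indented directory listing, suitable for pasting into a prompt."""
        lines = []
        root = root.rstrip(os.sep) or os.sep
        base = root.count(os.sep)
        for dirpath, dirnames, filenames in os.walk(root):
            depth = dirpath.count(os.sep) - base
            if depth >= max_depth:
                dirnames[:] = []  # prune: don't descend past max_depth
            indent = "  " * depth
            lines.append(f"{indent}{os.path.basename(dirpath)}/")
            lines.extend(f"{indent}  {name}" for name in filenames)
        return "\n".join(lines)

    print(dir_map("."))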

reply
rahimnathwani
20 days ago
[-]
Just use the Filesystem MCP Server, and give it access to the repo you're working on:

https://github.com/modelcontextprotocol/servers/tree/main/sr...

This way you will still be in control of commits and pushes.

So far I've used this to understand parts of a code base, and to make edits to a folder of markdown files.
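
For anyone wiring this up: at the time of writing, the linked README suggests registering the server in Claude Desktop's claude_desktop_config.json along these lines (the repo path is a placeholder; check the README for the current syntax):

    {
      "mcpServers": {
        "filesystem": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/repo"]
        }
      }
    }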

reply
trees101
20 days ago
[-]
How is that better than AI coding tools? They do more sophisticated things, such as creating compressed representations of the code that fit better into the context window, e.g. https://aider.chat/docs/repomap.html.

Also they can use multiple models for different tasks, Cursor does this, so can Aider: https://aider.chat/2024/09/26/architect.html

reply
extr
20 days ago
[-]
I have never found embeddings to be that helpful, or context beyond 30-50K tokens to be used well by the models. I think I get better results by providing only the context I know for sure is relevant, and explaining why I'm providing it. Perhaps if you have a bunch of boilerplate documentation that you need to pattern-match on it can be helpful, but generally I try to only give the models tasks that can be contextualized by < 15-20 medium code files or pages of documentation.
reply
rahimnathwani
20 days ago
[-]
I answered a comment asking how to do it.

I didn't say it was better!

reply
trees101
20 days ago
[-]
fair point
reply
losten
20 days ago
[-]
This announcement left me feeling sad because it made me realize that I'm probably working on simple problems for which the current non-pro models seem to be perfectly sufficient (writing code for basic CRUD apps etc.)

I wish I was working on the type of problems for which the pro model would be necessary.

reply
ThinkBeat
20 days ago
[-]
Will this mean that the "free" and $20 "regular person" offerings will start to degrade to push more people into the $200 offering?
reply
Tagbert
20 days ago
[-]
Less likely as long as you have Claude, Gemini, and others as competition. If the ChatGPT offerings start to suck, people will switch to another AI.
reply
xnx
20 days ago
[-]
$200/month seems to be psychological pricing to imply superior quality. In a blind test, most would be hard-pressed to distinguish the results from other LLMs. For those that think $200/month is a good deal, why not $500/mo or $1000/mo?
reply
vessenes
20 days ago
[-]
I'll bite and say I'd evaluate the output at all those price points. $1k/mo is heading into "outsourced employee" territory, and my requirements for quality ratchet up quite a lot somewhere in that price range.
reply
xnx
20 days ago
[-]
A super/magical LLM could definitely be worth $1k/mo, but only if there isn't another equivalent LLM for $20/mo. I'll need to see some pretty convincing evidence that ChatGPT Pro is doing things that Gemini Advanced can't.
reply
broknbottle
20 days ago
[-]
I'd potentially pay $200 for unlimited and better access to Claude Sonnet 3.5v2 but definitely not inferior chatgpt models. You can charge a premium when you have the best and OpenAI doesn't have the best.
reply
infoseek12
20 days ago
[-]
This new plan really highlights the need for open models.

Individual users will be priced out of frontier models if this becomes a trend.

reply
Dowwie
19 days ago
[-]
I began responding to this announcement with something along the lines of what I could achieve with $200/mo in platform services, training and managing my own agents, and it occurred to me that maybe that's exactly what I ought to do. Has anyone else come to this conclusion? The question isn't whether the $2400/yr is ridiculous; if someone can afford to spend that much right now and knows how to achieve the goal (or can figure it out), this may be the time to do so.
reply
deadbabe
20 days ago
[-]
What happens when people get so addicted to using AI they just can’t stand working without it, and then the pricing is pushed up to absurd levels? Will people shell out $2k a year just to use AI?
reply
edude03
20 days ago
[-]
Point taken, although I feel like $2k a year would be really cheap if AI delivered on its hype.
reply
logicchains
20 days ago
[-]
It can't get too expensive otherwise it's cheaper to just rent some GPUs and run an open source model yourself. China's already got some open source reasoning models that are competitive with o1 at reasoning on many benchmarks.
reply
idunnoman1222
20 days ago
[-]
This makes me think that we have reached diminishing returns
reply
andy_ppp
20 days ago
[-]
Even the gap between the $200 model and the $20 model is tiny. It’s just designed to position the company based on pricing (it must be useful if they are charging this much for it) rather than reality (the new model cannot operate at 20-30% of a very competent human).

I think this is proof that OpenAI has nothing at all, and AGI is as far away as fusion and self-driving cars on London roads.

reply
jdprgm
20 days ago
[-]
The price doesn't make any sense given there's nothing between $20 and $200 (unless you just use the API directly, which for a large subset of people would be very inconvenient). Assuming they didn't change the 50-a-week limit from o1-preview to o1, it's obnoxious not to have an option to just get 100 a week for $40 a month, or to pay per request after you hit 50. When I last looked at API pricing for o1-preview, I estimated most of my request/responses were around 8 cents. 50 a week is more than it sounds, as long as you don't default to o1 for all interactions and use it more strategically. If you pay for the $20 a month plan and spend the other $180 on API o1 responses, that is likely more than 2000 additional queries. Not sure what subset of people this $200 plan is good value for: 60+ o1 queries (or really just all ChatGPT queries) every day is an awful lot outside of a scenario where you are using it as an API for some sort of automated task.
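
Spelling out that back-of-the-envelope math (all figures are the commenter's rough estimates, not official pricing):

    plus, pro = 20, 200             # $/month for the two tiers
    cost_per_query = 0.08           # commenter's ~8 cent estimate per o1-preview call
    budget = pro - plus             # $180/month left after a Plus subscription
    print(budget / cost_per_query)  # -> 2250.0 extra API queries/month, roughly 75/day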
reply
wkat4242
17 days ago
[-]
I really wish they would drop the Google Play requirement on Android. I have Google Play installed on my Samsung, but it's just not logged in. You don't need to be logged in for a lot of functionality, like push notifications.

Every single app works fine that way, except ChatGPT. It opens the play store login page then exits. I have no problems with apps from 2 banks, authenticators etc etc.

It's just so weird that they force me to make an account with one of their biggest competitors in AI. I just don't want to, I don't trust Google with my data. By not logging in they have some but not a lot.

iOS isn't an option either because it's too locked down. I need things like sideloading and full NFC access for things like OpenPGP.

reply
gradus_ad
20 days ago
[-]
Has anyone built a software project from scratch with o1? DB, backend, frontend, apps, tooling, etc? Asking for a friend.
reply
dimitri-vs
20 days ago
[-]
Not o1-preview, it's way too slow. Sonnet 3.5 via Cursor IDE, yes. In fact, I'm writing very little code these days and mostly prompting the LLM to make changes for me and reviewing the changes.
reply
Roark66
20 days ago
[-]
As I said before, OpenAI cannot maintain profitability unless they can increase pricing by an order of magnitude. Adding a $200 Pro plan is only the first step. Expect $2k and $20k per month plans soon, while your "normal" $20 plan will curiously get worse and worse every month.
reply
hanspeter
20 days ago
[-]
They're not adding a $200 plan to solve a profitability challenge.

They have added the plan because they need to show that their most advanced model is ready for the market, but it's insanely expensive to operate. They may even still lose money on every user who signs up for Pro and starts using the model.

reply
akkad33
20 days ago
[-]
Are they profitable?
reply
creesch
20 days ago
[-]
Dunno, does VC money being thrown at you in the billions count as profit?
reply
eminence32
20 days ago
[-]
$200 per month feels like a lot for a consumer subscription service (the only things I can think of in this range are some cable TV packages). Part of me wonders if this price is actually much more in line with actual costs (compared to the non-pro subscription).
reply
sangnoir
20 days ago
[-]
Not only is it in the same range as cable TV packages, it's basically a cable TV play where they bundle lots of models/channels of questionable individual utility into one expensive basket, allegedly greater than the sum of its parts, to justify the exorbitant cost.

This anti-cable-cutting maneuver doesn't bode well for any hopes of future models maintaining same level of improvements (otherwise they'd make GPT 5 and 6 more expensive). Pivoting to AIaaS packages is definitely a pre-emptive strike against commodification, and a harbinger of plateauing model improvements.

reply
spaceman_2020
20 days ago
[-]
$200 is the price point for quite a bit of business SaaS, so this isn't that outrageous if you're actually using it for work
reply
deegles
20 days ago
[-]
The big question is if OpenAI will achieve "general" AI before their investors get fed up. I wonder if they used the success of ChatGPT to imply that they have a path to it. I don't see how else they achieved such a high valuation.
reply
Culonavirus
20 days ago
[-]
Spoiler alert: They will not.

Anyone claiming they're anywhere near something even remotely resembling AGI is simply lying.

What happened to "we're a couple years away from AGI"? Where's the Scaaaaaaryyyyyyy self aware techno god GPT-5? It's all BS to BS investors with. All of the rumored new models that were supposed to be out by now are nowhere to be seen because internally the improvement rate has cratered.

reply
fixprix
20 days ago
[-]
You have no idea. There certainly could be a breakthrough tomorrow that sets off AGI. Researchers across the board have been sounding the alarm bells for years now. There’s not much we can do at this point.

My only hope is that when AGI happens I can fire off an ‘I told you so’ comment before it kills us all.

reply
FactKnower69
19 days ago
[-]
If anything LLMs have delayed AGI by a decade by rerouting massive amounts of funding and attention away from promising areas and into stochastic parrots
reply
nuz
19 days ago
[-]
This kind of pricing strategy makes me think we're gonna have a pretty rough time making any money once AGI arrives. (I.e. no 'too cheap to meter', and basically the rich getting richer.)
reply
Night_Thastus
19 days ago
[-]
Well, then the good news is AGI isn't anywhere near.
reply
benreesman
20 days ago
[-]
I’m a big critic of OpenAI generally; I know a lot about their board members, and there’s not much daylight between them and war criminals.

With that said, I strictly approve of them doing real price discovery on inference costs. Claude is dope when it doesn’t fuck up mid-response, and OpenAI is the gold standard on “you query with the token budget, you get your response”.

I’ve got a lot of respect for the folks who made it stand up to that load: it’s a new thing and it’s solid AF.

I still think we’d be fools to trust these people, but my criticisms are bounded above by acknowledging good shit and this is a good play.

reply
benreesman
20 days ago
[-]
Larry Summers still said put toxic waste in Africa

Altman still said do eyeball scanners in Kenya.

Fidji Simo still said pay the OSHA fines and keep killing workers.

I still want these people in The Hague.

But call their product bad when it’s not? No. The product works as advertised.

reply
yieldcrv
20 days ago
[-]
OpenAI is flying blind

They should have had this tier earlier on, like any SaaS offering with different plans.

They focus too much on their frontend multimodal chat product while also having a complex token pricing model for API users, and we can't tell which one they are really catering to with these updates,

all while their chat system is buggy, with random disconnections and session updates, and produces tokens slowly compared to competitors like Claude.

To finally come around and say "pay us an order of magnitude more than Claude" is just completely out of touch and looks desperate in the face of their potential funding woes.

reply
y2hhcmxlcw
20 days ago
[-]
Great, so now OpenAI has opened the door to pricing people out of AI access.

The o1-pro model in their charts is only ever so slightly better than the one I can get for $20 a month. To blur the lines, they add in other features for $200 a month, but make no mistake: their best model is now 10x more expensive for 1% or so better results, based on their charts.

What's next? The best models will soon cost $500 a month and only be available to enterprises? Seems they are opening the door to taking away public access to powerful models.

reply
esafak
20 days ago
[-]
Why not, if people are willing to pay? You can think of them as subsidies for the weaker models. They're determining the price elasticity. And the better models will eventually get cheaper, as competition encroaches.
reply
handfuloflight
20 days ago
[-]
> What's next? The best models will soon cost $500 a month and only be available to enterprises? Seems they are opening the door to taking away public access to powerful models.

https://www.vox.com/future-perfect/380117/openai-microsoft-s...

reply
uhtred
19 days ago
[-]
$200 a month! Lol, for what?

Save yourself the money and learn how to use a search engine and read documentation.

Honestly I haven't seen much value provided for me by these "AI" models.

reply
heisnotanalien
20 days ago
[-]
Struggling to reconcile "this is cool" with the insane energy/water costs. Are we supposed to stick our heads in the sand? Hope it will magically go away?
reply
tommek4077
20 days ago
[-]
Best is to go into the woods and live with bees.
reply
creesch
20 days ago
[-]
Yes, because the world is just binary like that. You can only choose one or the other... /s
reply
jumping_frog
19 days ago
[-]
How about we invent SAI and terraform super-earths in the Milky Way to atone for our sins here?
reply
elorant
20 days ago
[-]
Everyone rants about the price, but if you're handling large numbers of documents for classification or translation, $200/month for unlimited use seems like a bargain.
reply
pradn
20 days ago
[-]
The price seems entirely reasonable. $200 is about 1-2 hours of a professional's time in the USA.

It's in everyone's interest for the company to be a sustainable business.

reply
lasermike026
20 days ago
[-]
This doesn't increase my salary, and if you are a consultant it reduces your billable hours. No thanks.
reply
handfuloflight
20 days ago
[-]
As a client I'd prefer you round up to a full hour instead of wasting my time.
reply
submeta
20 days ago
[-]
Does it allow you to upload files, text, and PDFs to give it context? Claude's Projects feature allows this, and I can create as many projects as I like and search across them.
reply
tedd4u
20 days ago
[-]

    > To highlight the main strength of o1 pro mode (improved reliability), we 
    > use a stricter evaluation setting: a model is only considered to solve a 
    > question if it gets the answer right in four out of four attempts ("4/4 
    > reliability"), not just one.
So, $200/mo. gets you less than 12.5% randomly wrong answers?

And $20/mo. gets you >25% randomly wrong answers?
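
For intuition on why the stricter 4/4 metric shrinks every model's score: assuming attempts are independent with per-attempt accuracy p (a simplification), the chance of going 4/4 is p to the 4th power:

    # per-attempt accuracy p -> probability of answering 4/4 correctly
    for p in (0.95, 0.90, 0.80):
        print(f"p={p:.2f} -> 4/4 rate {p**4:.2f}")
    # p=0.95 -> 0.81, p=0.90 -> 0.66, p=0.80 -> 0.41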

reply
jeswin
20 days ago
[-]
I am surprised at the number of people who think this has no market.

If this improves employee productivity by 10%, this would be a great buy for many companies. Personally, I'll buy this in an instant if this measurably improves over Claude in code generation abilities. I've tried o1-preview, and there are only a few cases where it actually does better than Claude - and that too at a huge time penalty.

reply
lazharichir
20 days ago
[-]
The problem is user experience. It's still very much a chatbot... To justify that amount, it needs to integrate a lot more with an employee's day-to-day tools, such as the code editor for SWE, the browser for QuickBooks, Word/Sheets/PowerPoint, Salesforce, HR tools, and so on.
reply
macawfish
20 days ago
[-]
Maybe they're just trying to get some money out of it while they can, as open models and other competition loom closer and closer behind...
reply
kolbe
20 days ago
[-]
Maybe. Gemini Exp 1121 is blowing my mind. Could be that OpenAI is seeing the context window disadvantage vs Google looming.
reply
sidibe
20 days ago
[-]
The problem for OpenAI is that Google's costs are always going to be way lower than theirs if they're doing similar things. Google's secret sauce for so many of their products is cheaper compute. Once the models are close, decades of Google's experience integrating and optimizing new hardware in their data centers at high utilization aren't going to be overcome by OpenAI for years.
reply
josefritzishere
20 days ago
[-]
The 2025 upgrade for AI garbage is AI garbage +SaaS.
reply
r3trohack3r
20 days ago
[-]
I’m betting against this.

From what I’ve seen, the usefulness of my AIs is proportional to the data I give them access to. The more data (like health data, location data, bank data, calendar data, emails, social media feeds, browsing history, screen recordings, etc.), the more I can rely on them for.

On the enterprise side, businesses are interested in exploring AI for their huge data sets - but very hesitant to dump all their company IP across all their current systems into a single SaaS that, btw, is also providing AI services to their competitors.

Consumers are also getting uncomfortable with the current level of sharing personal data with SaaS vendors, becoming more aware of the risks of companies like Google and Facebook.

I just don’t see the winner-takes-all market happening for an AI powered 1984 telescreen in 2025.

The vibes I’m picking up from most everybody are:

1) Hardware and AI costs are going to shrink exponentially YoY

2) People do not want to dump their entire life and business into a single SaaS

All signs are pointing to local compute and on-prem seeing a resurgence.

reply
boringg
20 days ago
[-]
I mean, that was always how this was going to go. There's no way for them to recoup without leaning heavily on SaaS, enterprise, or embedded ads/marketing.
reply
CSMastermind
20 days ago
[-]
> OpenAI says that it plans to add support for web browsing, file uploads, and more in the months ahead.

It's been extremely frustrating not to have these features on o1; it has limited what I can do with it. I'm presumably in the market of people who don't mind paying $200/month, but without the features they've added to 4o it feels not worth it.

reply
yosito
20 days ago
[-]
[flagged]
reply
crazygringo
20 days ago
[-]
> In other words, it's a con.

A con like that wouldn't last very long.

This is for people who rely enough on ChatGPT Pro features that it becomes worth it. Whether they pay for it because they're freelance, or their employer does.

Just because an LLM doesn't boost your productivity, doesn't mean it doesn't for people in other lines of work. Whether LLM's help you at your work is extremely domain-dependent.

reply
gwervc
20 days ago
[-]
> A con like that wouldn't last very long.

That's not a problem. OpenAI needs to get some cash from its product because the competition from free models is intense. Moreover, since they supposedly used most of the web's content and pirated whatever else they could, improvements in training will likely be only incremental.

All the while, now that the wow effect has passed, more people are starting to realize the flaws in generative AI. So the current hype, like all hype, has a limited shelf life, and companies need to cash out now because it could be never.

reply
mikae1
20 days ago
[-]
A con? It's not that $200 is a con, their whole existence is a con.

They're bleeding money and are desperately looking for a business model to survive. It's not going very well. Zitron[1] (among others) has outlined this.

> OpenAI's monthly revenue hit $300 million in August, and the company expects to make $3.7 billion in revenue this year (the company will, as mentioned, lose $5 billion anyway), yet the company says that it expects to make $11.6 billion in 2025 and $100 billion by 2029, a statement so egregious that I am surprised it's not some kind of financial crime to say it out loud. […] At present, OpenAI makes $225 million a month — $2.7 billion a year — by selling premium subscriptions to ChatGPT. To hit a revenue target of $11.6 billion in 2025, OpenAI would need to increase revenue from ChatGPT customers by 310%.[1]

Surprise surprise, they just raised the price.

[1] https://www.wheresyoured.at/oai-business/

reply
luma
20 days ago
[-]
They haven’t raised the price, they have added new models to the existing tier with better performance at the same price.

They have also added a new, even higher performance model which can leverage test time compute to scale performance if you want to pay for that GPU time. This is no different than AWS offering some larger ec2 instance tier with more resources and a higher price tag than existing tiers.

reply
jsheard
20 days ago
[-]
They haven't raised the price yet but NYT has seen internal documents saying they do plan to.

https://www.nytimes.com/2024/09/27/technology/openai-chatgpt...

Roughly 10 million ChatGPT users pay the company a $20 monthly fee, according to the documents. OpenAI expects to raise that price by $2 by the end of the year, and will aggressively raise it to $44 over the next five years, the documents said.

We'll have to see if the first bump to $22 this year ends up happening.

reply
ethbr1
20 days ago
[-]
Reasoning through that from a customer perspective is interesting.

I'm hard pressed to identify any users to whom LLMs are providing enough value to justify $20/month, but not $44.

On the other hand, I can see a lot of people to whom it's not providing any value being unable to afford a higher price.

Guess we'll see which category most OpenAI users are in.

reply
sdesol
20 days ago
[-]
> We'll have to see if the bump to $22 this year ends up happening.

I can't read the article. Any mention of the API pricing?

reply
mikae1
20 days ago
[-]
You're technically right. New models will likely be incremental upgrades at a hefty premium. But considering the money they're losing, this pricing likely better reflects their costs.
reply
echelon
20 days ago
[-]
They're throwing products at the wall to see what sticks. They're trying to rapidly morph from a research company into a product company.

Models are becoming a commodity. It's game theory. Every second-place company (e.g. Meta) or nation (e.g. China) is open sourcing its models to destroy value that might accrete to the competition. China alone has contributed a ton of SOTA and novel foundation models (e.g. Hunyuan).

reply
grogenaut
20 days ago
[-]
AI may be overhyped and it may have flaws (I think it's both)... but it may also be totally worth $200/month to many people. My brother is getting way more value than that out of it, for instance.

So the question is whether it's worth $200/month, and to how many people - not whether it's overhyped or whether it has flaws. And whether that supports the level of investment being placed into these tools.

reply
echelon
20 days ago
[-]
> the competition is intense from free models

Models are about to become a commodity across the spectrum: LLMs [1], image generators [2], video generators [3], world model generators [4].

The thing that matters is product.

[1] Llama, QwQ, Mistral, ...

[2] Nobody talks about Dall-E anymore. It's Flux, Stable Diffusion, etc.

[3] HunYuan beats Sora, RunwayML, Kling, and Hailuo, and it's open source and compatible with ComfyUI workflows. Other companies are trying to open source their models with no sign of a business model: LTX, Genmo, Rhymes, et al.

[4] The research on world models is expansive and there are lots of open source models and weights in the space.

reply
john-radio
20 days ago
[-]
A better way to express it than a "con" is that it's a price-framing device. It's like listing a watch at an initial value of $2,000 so that people will feel content to buy it at $400.
reply
jl6
20 days ago
[-]
That sounds like a con to me too.
reply
xanderlewis
20 days ago
[-]
The line between ‘con’ and ‘genuine value synthesised in the eye of the buyer using nothing but marketing’ is very thin. If people are happy, they are happy.
reply
pera
20 days ago
[-]
> A con like that wouldn't last very long.

The NFT market lasted for many years and was enormous.

Never underestimate the power of hype.

reply
omarhaneef
20 days ago
[-]
I think this is probably right but so far it seems that the areas in which an LLM is most effective do fine with the lower power models.

Example: 4o or Claude are great for coding, summarizing, and rewriting emails. So which domains require a slightly better model?

I suppose if the error rate in code or summary goes down even 10%, it might be worth $180/month.

reply
vbezhenar
20 days ago
[-]
A few days ago I had an issue with an IPsec VPN behind NAT. I spent a few hours Googling around and tinkering with the system; I had some rough understanding of what was going wrong, but not much, and I had no idea how to solve it.

I wrote a very exhaustive question for ChatGPT o1-preview, including all the information I thought was relevant, something like a good forum question. Well, 10 seconds later it spat out a working solution. I was ashamed, because I have 20 years of experience under my belt and this model solved a non-trivial task much better than I did.

I was ashamed, but at the same time that's a superpower. And I'm ready to pay $200 to get solid answers that I just can't get in a reasonable timeframe.

reply
gedy
20 days ago
[-]
It is really great when it works, but the challenge is that I've sometimes had it not understand a detailed programming question and confidently give an incorrect answer. Going back and forth a few times makes it clear it really doesn't know the answer, but I end up going in circles. I know LLMs can't really tell you "sorry, I don't know this one", but I wish they could.
reply
BOOSTERHIDROGEN
20 days ago
[-]
The exhaustive question makes ChatGPT reconstruct your answer in real-time, while all you need to do is sleep; your brain will construct the answer and deliver it tomorrow morning.
reply
ben_w
20 days ago
[-]
The benefit of getting an answer immediately rather than tomorrow morning is why people are sometimes paid more for on-call rates rather than everyone being 9-5.

(Now I think of the idiom, when did we switch to 9-6? I've never had a 9-5.)

reply
ducttapecrown
20 days ago
[-]
I bet users won't pay for the power, but for a guarantee of access! I always hear about people running out of compute time for ChatGPT. Obvious answer is charge more for a higher quality service.
reply
taco_emoji
20 days ago
[-]
> A con like that wouldn't last very long.

Bernie Madoff ran his investment fund as a Ponzi scheme for over a decade (perhaps several decades)

reply
px1999
20 days ago
[-]
Imo the con is picking the metric that makes others look artificially bad when it doesn't seem to be all that different (at least on the surface)

> we use a stricter evaluation setting: a model is only considered to solve a question if it gets the answer right in four out of four attempts ("4/4 reliability"), not just one

This surely makes the other models post smaller numbers. I'd be curious how it stacks up when doing, e.g., 1/1 or 1/4 attempts.

reply
mrandish
20 days ago
[-]
> ... or their employer does.

I suspect this is a key driver behind having a higher priced, individual user offering. It gives pricing latitude for enterprise volume licenses.

reply
999900000999
20 days ago
[-]
Ok.

Let's say I run a company called AndSoft.

AndSoft has about 2000 people on staff, maybe 1000 programmers.

This solution would cost $200k per month, or $2.4 million per year.

Llama 3 is effectively free, with some caveats. Is ChatGPT Pro $2.4 million a year better than Llama 3? Of course, OpenAI will offer volume discounts.

I imagine if I were making north of $500k a year I'd subscribe as a curiosity... at least for a few months.

If your time is worth $250 an hour and this saves you an hour per month, it's well worth it.
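
The seat math, spelled out (headcount and rates are the commenter's hypotheticals):

    programmers = 1000
    seat = 200                      # $/month per Pro seat
    print(programmers * seat * 12)  # -> 2400000: $2.4M/year for AndSoft
    hourly = 250
    print(seat / hourly)            # -> 0.8: under one saved hour/month pays for a seat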

reply
ben_w
20 days ago
[-]
> A con like that wouldn't last very long

As someone who has both repeatedly written that I value the better LLMs as if they were a paid intern (so €$£1000/month at least), and yet who gets so much from the free tier* that I won't bother paying for a subscription:

I've seen quite a few cases where expensive non-functional things that experts demonstrate don't work, keep making money.

My mum was very fond of homeopathic pills and Bach flower tinctures, for example.

* 3.5 was competent enough to write a WebUI for the API so I've got the fancy stuff anyway as PAYG when I want it.

reply
shortrounddev2
20 days ago
[-]
Overcharging for a product to make it seem better than it really is has served apple well for decades
reply
crazygringo
20 days ago
[-]
That's a tired trope that simply isn't true.

Does Apple charge a premium? Of course. Do Apple products also tend to have better construction, greater reliability, consistent repair support, and hold their resale value better? Yes.

The idea that people are buying Apple because of the Apple premium simply doesn't hold up to any scrutiny. It's demonstrably not a Veblen good.

reply
windexh8er
20 days ago
[-]
> consistent repair support

Now that is a trope when you're talking about Apple. They may use more premium materials and have a degree of improved construction leveraging those materials - but at the end of the day there are countless failure-prone designs that Apple continued to ship for years, even after knowing they existed.

I guess I don't follow the claim that the "Apple premium" (whether real or otherwise) isn't a factor in buyer decisions. Are you saying Apple is a great lock-in system and that's why people continue to buy from them?

reply
chipotle_coyote
20 days ago
[-]
I suspect they're saying that for a lot of us, Apple provides enough value compared to the competition that we buy them despite the premium prices (and, on iOS, the lock-in).

It's very hard to explain to people who haven't dug into macOS that it's a great system for power users, for example, especially because it's not very customizable in terms of aesthetics, and there are always things you can point to about its out-of-the-box experience that seem "worse" than competitors (e.g., window management). And there's no one thing I can really point to and say "that, that's why I stay here"; it's more a collection of little things. The service menu. The customizable global keyboard shortcuts. Automator, AppleScript (in spite of itself), now the Shortcuts app.

And, sure, they tend to push their hardware in some ways, not always wisely. Nobody asked for the world's thinnest, most fragile keyboards, nor did we want them to spend five or six years fiddling with it and going "We think we have it now!" (Narrator: they did not.) But I really do like how solid my M1 MacBook Air feels. I really appreciate having a 2880x1800 resolution display with the P3 color gamut. It's a good machine. Even if I could run macOS well on other hardware, I'd still probably prefer running it on this hardware.

Anyway, this is very off topic. That ChatGPT Pro is pretty damn expensive, isn't it? This little conversation branch started as a comparison between it and the "Apple tax", but even as someone who mildly grudgingly pays the Apple tax every few years, the ChatGPT Pro tax is right off the table.

reply
cruano
20 days ago
[-]
They only have to be consistently better than the competition, and they are, by far. I always look for reviews before buying anything, and even then I've been nothing but disappointed by the likes of Razer, LG, Samsung, etc.
reply
Aeolun
20 days ago
[-]
I used to love to bash on Apple too. But ever since I’ve had the money, all my hardware (except my desktop PC) has been Apple.

There’s something to be said for buying something and knowing it will interoperate with all your other stuff perfectly.

reply
shortrounddev2
20 days ago
[-]
> consistent repair support

The lack of repairability is easily Apple's worst quality. They do everything in their power to prevent you from repairing devices by yourself or via 3rd party shops. When you take it to them to repair, they often will charge you more than the cost of a new device.

People buy apple devices for a variety of reasons; some people believe in a false heuristic that Apple devices are good for software engineering. Others are simply teenagers who don't want to be the poor kid in school with an Android. Conspicuous consumption is a large part of Apple's appeal.

reply
Draiken
20 days ago
[-]
Here in Brazil Apple is very much all about showing off how rich you are. Especially since we have some of the most expensive Apple products in the world.

Maybe not as true in the US, but reading about the green bubble debacle, it's also a lot about status.

reply
vbezhenar
20 days ago
[-]
Same in Kazakhstan. It's all about status. Many poor people take out credit to buy iPhones because they want to look rich.
reply
xanderlewis
20 days ago
[-]
Apple products are expensive — sometimes to a degree that almost seems to be taking the piss.

But name one other company whose hardware truly matches Apple’s standards for precision and attention to detail.

reply
ingen0s
20 days ago
[-]
Indeed
reply
matteoraso
20 days ago
[-]
> Whether LLM's help you at your work is extremely domain-dependent.

I really doubt that, actually. The only thing that LLMs are truly good for is creating plausible-sounding text. Everything else, like generating facts, is outside of their main use case and known to frequently fail.

reply
TeMPOraL
20 days ago
[-]
That opinion made sense two years ago. It's plain weird to still hold it today.
reply
JoshTriplett
20 days ago
[-]
There was a study recently that made it clear the use of LLMs for coding assistance made people feel more productive but actually made them less productive.

EDIT: Added links.

https://www.cio.com/article/3540579/devs-gaining-little-if-a...

https://web.archive.org/web/20241205204237/https://llmreport...

(Archive link because the llmreporter site seems to have an expired TLS certificate at the moment.)

No improvement to PR throughput or merge time, 41% more bugs, worse work-life balance...

reply
grogenaut
20 days ago
[-]
I recently slapped 3 different 3-page SQL statements and their obscure errors (with no line or context references) from Redshift into Claude; it was 3 for 3 on telling me where in my query I was messing up. Saved me probably 5 minutes each time, but it really saved me from moving to a different task and coming back. So around $100 in value right there. I was impressed by it. I wish the query UI I was using just auto-ran it when I got an error. I should code that up as an extension.
reply
mattkrause
20 days ago
[-]
$100 to save 15 minutes implies that you net at least $800,000 a year. Well done if so!
reply
grogenaut
20 days ago
[-]
When forecasting developer and employee cost for a company, I double their pay, but I'm not going to say what I make or whether I did that here. I also like to think that developers should be working on work that is many multiples of leverage over their pay to be effective. But thanks.
reply
afro88
20 days ago
[-]
> but really saved me from moving to a different task and coming back

You missed this part. Being able to quickly fix things without deep thought while in flow saves you from the slowdowns of context switching.

reply
TeMPOraL
20 days ago
[-]
That $100 of value likely cost them more like $0.10 - $1 in API costs.
reply
grogenaut
20 days ago
[-]
It didn't cost me anything, my employer paid for it. Math for my employer is odd because our use of LLMs is also R&D (you can look at my profile to see why). But it was definitely worth $1 in api costs. I can see justifying spending $200/month for devs actively using a tool like this.
reply
mdtancsa
20 days ago
[-]
I am in a similar boat. It's way more correct than not for the tasks I give it. For simple queries about, say, CLI tools I don't use that often, or regex formulations, I find it handy, since when it gives the answer it's easy to test whether it's right or not. If it gets it wrong, I work with Claude to get to the right answer.
reply
TeMPOraL
20 days ago
[-]
First of all, that's moving the goalposts to the next state over, relative to what I replied to.

Secondly, the "No improvement to PR throughput or merge time, 41% more bugs, worse work-life balance" result you quote came, per article, from a "study from Uplevel", which seems to[0] have been testing for change "among developers utilizing Copilot". That may or may not be surprising, but again it's hardly relevant to discussion about SOTA LLMs - it's like evaluating performance of an excavator by giving 1:10 toy excavators models to children and observing whether they dig holes in the sandbox faster than their shovel-equipped friends.

The best LLMs are too slow and/or expensive to use in Copilot fashion just yet. I'm not sure it's even a good idea - Copilot-like use breaks flow. Instead, the biggest wins from LLMs come from discussing problems, generating blocks of code, refactoring, unstructured-to-structured data conversion, identifying issues from build or debugger output, etc. All of those uses require qualitatively more "intelligence" than Copilot-style completion, and LLMs like GPT-4o and Claude 3.5 Sonnet deliver (hell, anything past GPT-3.5 delivered).

Thirdly, I have some doubts about the very metrics used. I'll refrain from assuming the study is plain wrong here until I read it (see [0]), but anecdotally, I can tell you that at my last workplace, you likely wouldn't be able to tell whether or not using LLMs the right way (much less Copilot) helped by looking solely at those metrics - almost all PRs were approved by reviewers with minor or tangential commentary (thanks to culture of testing locally first, and not writing shit code in the first place), but then would spend days waiting to be merged due to shit CI system (overloaded to the point of breakage - apparently all the "developer time is more expensive than hardware" talk ends when it comes to adding compute to CI bots).

--

[0] - Per the article you linked; I'm yet to find and read the actual study itself.

reply
mkl
20 days ago
[-]
Do you have a link? I'm not finding it by searching.
reply
marcodiego
20 days ago
[-]
I really need the source of this.
reply
tiahura
20 days ago
[-]
LLMs have become indispensable for many attorneys. I know many other professionals that have been able to offload dozens of hours of work per month to ChatGPT and Claude.
reply
PittleyDunkin
20 days ago
[-]
What on earth is this work that they're doing that's so resilient to the fallible nature of LLMs? Is it just document search with a RAG?
reply
tiahura
20 days ago
[-]
Everything. Drafting correspondence, pleadings, discovery, discovery responses. Reviewing all of the same. Reviewing depositions, drafting deposition outlines.

Everything that is “word processing,” and that’s a lot.

reply
PittleyDunkin
20 days ago
[-]
Well that's terrifying. Good luck to them.
reply
wing-_-nuts
20 days ago
[-]
To be honest, much of contract law is formal boilerplate. I can understand why they'd want to move their role to 'review' instead of 'generate'
reply
drdaeman
20 days ago
[-]
So, instead of fixing the issue (legal documents becoming a barely manageable mess) they’re investing money into making it… even worse?

This world is so messed up.

reply
Terr_
20 days ago
[-]
Arguably the same problem occurs in programming: anything so formulaic and common that an LLM can regurgitate it with a decent level of reliability... is something that ought to have been folded into a method/library already.

Or it already exists in some howto documentation, but nobody wanted to skim the documentation.

reply
randallsquared
20 days ago
[-]
They have no lever with which to fix the issue.
reply
PittleyDunkin
20 days ago
[-]
Why not just move over to forms with structured input?
reply
sebastiennight
20 days ago
[-]
As a customer of legal work for 20 years, it is also way (way way) faster and cheaper to draft a contract with Claude (total work ~1 hour, even with complex back-and-forth; you don't want to try to one-shot it in a single prompt) and then pay a law firm their top dollar-per-hour consulting rate to review/amend the contract (you can get to the final version in a day).

Versus the old way of asking them to write the contract, where they'll blatantly re-use some boilerplate (sometimes the name of a previous client's company will still be in there) and then take 2 weeks to get back to you with Draft #1, charging 10x as much.

reply
cj
20 days ago
[-]
Good law firms won’t charge you for using their boilerplates, only the time to customize it for your use case.

I always ask our lawyer whether or not they have a boilerplate when I need a contract written up. They usually do.

reply
sebastiennight
19 days ago
[-]
That's interesting. I've never had a law firm be straightforward about the (obvious) fact they'll be using a boilerplate.

I've even found that when lawyers send a document for one of my companies, and I give them a list of things to fix, including e.g. typos, the same typos will be in there if we need a similar document a year later for another company (because, well, nobody updated the boilerplate)

Do you ask about the boilerplate before or after you ask for a quote?

reply
cj
18 days ago
[-]
I typically don’t ask for a quote upfront since they are very fair with their business and billing practices.

I could definitely see a large law firm (Orrick, Venable, Cooley, Fenwick) doing what you describe. I’ve worked with 2 firms just listed, and their billing practices were ridiculous.

I’ve had a lot more success (quality and price) working with boutique law firms, where your point of contact is always a partner instead of your account permanently being pawned off to an associate.

Email is in profile if you want an intro to the law firm I use. Great boutique firm based in Bay Area and extremely good price/quality/value.

reply
bad_haircut72
20 days ago
[-]
Yeah, the industries LLMs will disrupt the most are the ones that gatekeep busywork. SWE falls into this to some degree, but other professions are more guilty than us. They don't replace intelligence; they just surface jobs which never really required much intelligence to begin with.
reply
jprd
20 days ago
[-]
I bet they still charge for all the hours though.
reply
rusticpenn
20 days ago
[-]
I use LLMs to do most of my donkey work.
reply
newsclues
20 days ago
[-]
Maybe not very long, but long enough is plausible.
reply
spaceman_2020
20 days ago
[-]
HN has been just such an awful place to discuss AI. Everyone here is convinced its a grift, a con, and we're all "marks"

Just zero curiosity, only skepticism.

reply
ghshephard
20 days ago
[-]
If you do a lot of work in an area that o1 is strong in - $200/month effectively rounds down to $0 - and a single good answer at the right time could justify that entire $200 in a single go.
reply
daveguy
20 days ago
[-]
I feel like a single bad answer at the wrong time could cost a heck of a lot more than $200. And these LLMs are riddled with bad answers.
reply
amelius
20 days ago
[-]
Think of it as an intern. Don't trust everything they say.
reply
crindy
20 days ago
[-]
It's so strange to me that in a forum full of programmers, people don't seem to understand that you set up systems to detect errors before they cause problems. That's why I find ChatGPT so useful for helping me with programming - I can tell if it makes a mistake because... the code doesn't do what I want it to do. I already have testing and linting set up to catch my own mistakes, and those things also catch AI's mistakes.
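
A minimal sketch of that guardrail idea (the helper and its tests are illustrative, not from the thread): pin down the behavior you need with plain pytest tests, and the same suite that catches your mistakes catches the model's.

    # run with: pytest test_dedupe.py
    def dedupe_preserving_order(items):
        """Candidate implementation - whether I wrote it or an LLM did."""
        seen = set()
        return [x for x in items if not (x in seen or seen.add(x))]

    def test_keeps_first_occurrence():
        assert dedupe_preserving_order([3, 1, 3, 2, 1]) == [3, 1, 2]

    def test_handles_empty_input():
        assert dedupe_preserving_order([]) == []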
reply
xandrius
20 days ago
[-]
Thank you! It always feels so weird to use ChatGPT without any major issues while so many people keep claiming how awful it is; it's like people want it 100% perfect or nothing. For me, if it gets me 80% there in 1/10th the time, and then I do the final 20%, that's still a heck of a productivity boost, basically for free.
reply
crindy
20 days ago
[-]
Yep, I’m with you. I’m a solo dev who never went to college… o1 makes far fewer errors than I do! No chance I’d make it past round one of any sort of coding tournament. But I managed to bootstrap a whole saas company doing all the coding myself, which involved setting up a lot of guard rails to catch my own mistakes before they reached production. And now I can consult with a programming intelligence the likes of which I could never afford to hire if it was a person. It’s amazing.
reply
thelastparadise
20 days ago
[-]
Is it working?
reply
crindy
20 days ago
[-]
Not sure what you're referring to exactly. But broadly yes it is working for me - the number of new features I get out to users has sped up greatly, and stability of my product has also gone up.
reply
daveguy
20 days ago
[-]
Are you making money with your saas idea?
reply
crindy
19 days ago
[-]
Yep, been living off it for nine years now
reply
daveguy
19 days ago
[-]
Congratulations! That is not an easy task. I am just starting the journey.
reply
lumb63
20 days ago
[-]
Famously, the last 10% takes 90% of the time (or 20/80 in some approximations). So even if it gets you 80% of the way in 10% of the time, maybe you don’t end up saving any time, because all the time is in the last 20%.

I’m not saying that LLMs can’t be useful, but I do think it’s a darn shame that we’ve given up on creating tools that deterministically perform a task. We know we make mistakes and take a long time to do things. And so we developed tools to decrease our fallibility to zero, or to allow us to achieve the same output faster. But that technology needs to be reliable; and pushing the envelope of that reliability has been a cornerstone of human innovation since time immemorial. Except here, with the “AI” craze, where we have abandoned that pursuit. As the saying goes, “to err is human”; the 21st-century update will seemingly be, “and it’s okay if technology errs too”. If any other foundational technology had this issue, it would be sitting unused on a shelf.

What if your compiler only generated the right code 99% of the time? Or, if your car only started 9 times out of 10? All of these tools can be useful, but when we are so accepting of a lack of reliability, more things go wrong, and potentially at larger and larger scales and magnitudes. When (if some folks are to believed) AI is writing safety-critical code for an early-warning system, or deciding when to use bombs, or designing and validating drugs, what failure rate is tolerable?

reply
avarun
20 days ago
[-]
> Famously, the last 10% takes 90% of the time (or 20/80 in some approximations). So even if it gets you 80% of the way in 10% of the time, maybe you don’t end up saving any time, because all the time is in the last 20%.

This does not follow. By your own assumptions, getting you 80% of the way there in 10% of the time would save you 18% of the overall time, if the first 80% typically takes 20% of the time. 18% time reduction in a given task is still an incredibly massive optimization that's easily worth $200/month for a professional.

reply
km3r
20 days ago
[-]
Using the 90/10 split: if the 10% of the time (spent getting the first 90% of the way) is reduced to a tenth of that, you save 9% of your time overall.

160 hours a month * $100/hr programmer * 9% = $1,440 in savings, easily enough to justify $200/month.

Even if it fails 1 time in 10, that is still ~8%, or about $1,300 in savings.
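
The same numbers as a quick check (hours and rate are the commenter's assumptions):

    hours, rate = 160, 100            # work hours/month, $/hr
    print(hours * rate * 0.09)        # -> 1440.0: ~$1,440/month saved at 9%
    print(hours * rate * 0.09 * 0.9)  # -> 1296.0: ~$1,300 if it fails 1 time in 10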

reply
daveguy
20 days ago
[-]
Does that count the time you spend on prompt engineering?
reply
xanderlewis
20 days ago
[-]
It depends what you’re doing.

For tasks where bullshitting or regurgitating common idioms is key, it works rather well and indeed takes you 80% or even close to 100% of the way there. For tasks that require technical precision and genuine originality, it’s hopeless.

reply
xandrius
20 days ago
[-]
I'd love to hear what that is.

So far, given my range of projects, I have seen it struggle with lower level mobile stuff and hardware (ESP32 + BLE + HID).

For things like web (front/back), DB, video games (web and Unity), it does work pretty well (at least 80% there on average).

And I'm talking of the free version, not this $200/mo one.

reply
daveguy
20 days ago
[-]
Well, that is a very specific set of skills. I bet the C-suite loves it.
reply
CamperBob2
20 days ago
[-]
> I always feel so weird to actually use chatgpt without any major issues while so many people keep on claiming how awful it is;

People around here feel seriously threatened by ML models. It makes no sense, but then, neither does defending the Luddites, and people around here do that, too.

reply
JamesBarney
20 days ago
[-]
Well now at $200 it's a little farther away from free :P
reply
xandrius
20 days ago
[-]
What do you mean? ChatGPT is free, the Pro version isn't.

I'm talking of the generally available one, haven't had the chance to try this new version.

reply
thelastparadise
20 days ago
[-]
I could buy a car for that kind of money!
reply
vunderba
20 days ago
[-]
Of course, but for every thoroughly set up TDD environment, you have a hundred other people just blindly copy pasting LLM output into their code base and trusting the code based on a few quick sanity checks.
reply
daveguy
20 days ago
[-]
You assume programming software with an existing well-defined and correct test suite is all these will be used for.
reply
leptons
20 days ago
[-]
> I can tell if it makes a mistake because... the code doesn't do what I want it to do

Sometimes it does what you want it to do, but still creates a bug.

Asked the AI to write some code to get a list of all objects in an S3 bucket. It wrote some code that worked, but it did not address the fact that S3 delivers objects in pages of max 1000 items, so if the bucket contained less than 1000 objects (typical when first starting a project), things worked, but if the bucket contained more than 1000 objects (easy to do on S3 in a short amount of time), then that would be a subtle but important bug.

Someone not already intimately familiar with the inner workings of S3 APIs would not have caught this. It's anyone's guess if it would be caught in a code review, if a code review is even done.

I don't ask the AI to do anything complicated at all, the most I trust it with is writing console.log statements, which it is pretty good at predicting, but still not perfect.
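
For reference, the standard fix is to let boto3 paginate rather than calling list_objects_v2 once (a sketch; the bucket name is a placeholder):

    import boto3

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    keys = []
    for page in paginator.paginate(Bucket="my-bucket"):
        # each page holds at most 1000 objects; "Contents" is absent on empty pages
        keys.extend(obj["Key"] for obj in page.get("Contents", []))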

reply
rrradical
20 days ago
[-]
So the AI wrote a bug; but if humans wouldn’t catch it in code review, then obviously they could have written the same bug. Which shouldn’t be surprising because LLMs didn’t invent the concept of bugs.

I use LLMs maybe a few times a month but I don’t really follow this argument against them.

reply
leptons
20 days ago
[-]
Code reviewing is not the same thing as writing code. When you're writing code you're supposed to look at the documentation and do some exploration before the final code is pushed.

It would be pretty easy for most code reviewers to miss this type of bug in a code review, because they aren't always looking for that kind of bug, they aren't always looking at the AWS documentation while reviewing the code.

Yes, people could also make the same error, but at least they have a chance at understanding the documentation and limits where the LLM has no such ability to reason and understand consequences.

reply
yawnxyz
20 days ago
[-]
it also catches MY mistakes, so that saves time
reply
Kiro
20 days ago
[-]
So true, and people seem to gloss over this fact completely. They only talk about correcting the LLM's code while the opposite is much more common for me.
reply
crackrook
20 days ago
[-]
I would hesitate to hire an intern that makes incorrect statements with maximum confidence and with no ability to learn from their mistakes.
reply
educasean
20 days ago
[-]
When you highlight only the negatives, yeah, it does sound like no one should hire that intern. But what if the same intern happens to have an encyclopedia for a brain and can pore over massive documents and codebases to spot and fix countless human errors in a snap?

There seems to be two camps: People who want nothing to do with such flawed interns - and people who are trying to figure out how to amplify and utilize the positive aspects of such flawed, yet powerful interns. I'm choosing to be in the latter camp.

reply
crackrook
20 days ago
[-]
Those are fair points, I didn't mean to imply that there are only negatives, and I don't consider myself to be in the former camp you describe as wanting nothing to do with these "interns". I shouldn't have stuck with the intern analogy at all since it's difficult for me to compare the two, with one being fairly autonomous and the other being totally reliant on a prompter.

The only point I wanted to make was that an LLM's ability and propensity to generate plausible falsehoods should, in my opinion, elicit a much deeper sense of distrust than one feels for an intern, enough so that comparing the two feels a little dangerous. I don't trust an intern to be right about everything, but I trust them to be self aware, and I don't feel like I have to take a magnifying glass to every tidbit of information they provide.

reply
pie420
20 days ago
[-]
Nothing ChatGPT says is said with maximum confidence. The EULA and terms of use are riddled with "no guarantee of accuracy" and "use at your own risk".
reply
albumen
20 days ago
[-]
No, they're right. ChatGPT (and all chatbots) responds confidently while making simple errors. Disclaimers at signup or in tiny corner text are completely at odds with the actual chat experience.
reply
crackrook
20 days ago
[-]
What I meant to say was that the model uses the verbiage of a maximally confident human. In my experience the interns worth having have some sense of the limits of their knowledge and will tell you "I don't know" or qualify information with "I'm not certain, but..."

If an intern set their Slack status to "There's no guarantee that what I say will be accurate, engage with me at your own risk." That wouldn't excuse their attempts to answer every question as if they wrote the book on the subject.

reply
daveguy
20 days ago
[-]
I think the point is that an LLM almost always responds with the appearance of high confidence. It will hallucinate much more readily than say "I don't know."
reply
Terr_
20 days ago
[-]
And we, as humans, are having a hard time compartmentalizing and forgetting our lifetimes of language cues, which typically correlate with attention to detail, intelligence, time investment, etc.

New technology allows those signs to be counterfeited quickly and cheaply, and it tricks our subconscious despite our best efforts to be hyper-vigilant. (Our brains don't want to do that, it's expensive.)

Perhaps a stopgap might be to make the LLM say everything in a hostile villainous way...

reply
Draiken
20 days ago
[-]
They aren't talking about EULAs. It's how they give out their answers.
reply
stavros
20 days ago
[-]
If I have to do the work to double-check all the answers, why am I paying $200?
reply
billti
20 days ago
[-]
Why do companies hire junior devs? You still want a senior to review the PRs before they merge into the product, right? But the net benefit is still there.
reply
stavros
20 days ago
[-]
We hire junior devs as an investment, because at some point they turn into seniors. If they stayed juniors forever, I wouldn't hire them.
reply
drusepth
20 days ago
[-]
I started incorporating LLMs into my workflows around the time gpt-3 came out. By comparison to its performance at that point, it sure feels like my junior is starting to become a senior.
reply
jhgg
20 days ago
[-]
Are you implying this technology will remain static in its capabilities going forward despite it having seen significant improvement over the last few years?
reply
stavros
20 days ago
[-]
No, I'm explicitly saying that gpt-4o-2024-11-20 won't get any smarter, no matter how much I use it.
reply
jhgg
20 days ago
[-]
Does that matter when you can just swap it for gpt-5-whatever at some point in the future?
reply
stavros
20 days ago
[-]
Someone asked why I hire juniors. I said I hire juniors because they get better. I don't need to use the model for it to get better, I can just wait until it's good and use it then. That's the argument.
reply
esafak
20 days ago
[-]
I suppose the counterargument would be your investment in OpenAI allows them to fund the better model down the road, but I get your drift :)
reply
OvidNaso
20 days ago
[-]
Genuinely curious, are you saying that your junior devs don't provide any value from the work they do?
reply
stavros
20 days ago
[-]
They provide some value, but between the time they take in coaching, reviewing their work, support, etc, I'm fairly sure one senior developer has a much higher work per dollar ratio than the junior.
reply
tiahura
20 days ago
[-]
Because double-checking and occasionally hitting retry is still 10x faster than doing it myself.
reply
behringer
20 days ago
[-]
Because you wouldn't have come up with the correct answer before you used up 200 dollars worth of salary or billable time.
reply
UltraSane
20 days ago
[-]
because checking the work is much faster than generating it.
reply
Sporktacular
20 days ago
[-]
Because it's per month and not per hour for a specialist consultant.
reply
motoxpro
20 days ago
[-]
I don't know anyone who does something and at first says, "This will be a mistake." Maybe they say, "I am pretty sure this is the right thing to do," and then they make a mistake.

If it's easier mentally, just put that second sentence in front of every ChatGPT answer.

Yeah the Junior dev gets better, but then you hire another one that makes the same mistakes, so in reality, on an absolute basis, the junior dev never gets any better.

reply
parthdesai
20 days ago
[-]
Yeah, but you personally don't pay $200/month out of your pocket for the intern. Heck, in Canada the govt. actually gives rebates for hiring interns and co-ops.
reply
malux85
20 days ago
[-]
Then the lesson you have learned is “don’t blindly trust the machine”

Which is a very valuable lesson, worth more than $200

reply
awestroke
20 days ago
[-]
Easy - don't trust the answers. Verify them
reply
ZiiS
20 days ago
[-]
Even in this case, losing $200 + whatever vs. a slightly higher chance of losing $20 + whatever makes Pro seem a good deal.
reply
daveguy
20 days ago
[-]
Doesn't that completely depend on those chances and the magnitude of +whatever?

It just seems to me that you really need to know the answer before you ask it to be over 90% confident in the answer. And the more convincing sounding these things get the more difficult it is to know whether you have a plausible but wrong answer (aka "hallucination") vs a correct one.

If you have a need for a lot of difficult to come up with but easy to verify answers it could be worth it. But the difficult to come up with answers (eg novel research) are also where LLMs do the worst.

reply
ruszki
20 days ago
[-]
Compared to knowing things and not losing whatever, both are pretty bad deals.
reply
Kiro
20 days ago
[-]
What specific use cases are you referring to where that poses a risk? I've been using LLMs for years now (both directly and as part of applications) and can't think of a single instance where the output constituted a risk or where it was relied upon for critical decisions.
reply
llm_trw
20 days ago
[-]
That's why you have a human in the loop responsible for the answer.
reply
yosito
20 days ago
[-]
Presumably, this is what they want the marks buying the $200 plan to think. Whether it's actually capable of providing answers worth $200 and not just sweet talking is the whole question.
reply
dubeye
20 days ago
[-]
If I'm happy to pay $20 in retirement just for the odd bit of writing help, then I can easily imagine it being worth $200 to someone with a job.
reply
josephg
20 days ago
[-]
Yep. I’m currently paying for both Claude and chatgpt because they’re good at different things. I can’t tell whether this is extremely cheap or expensive - last week Claude saved me about a day of time by writing a whole lot of very complex sql queries for me. The value is insane.
reply
cryptoegorophy
20 days ago
[-]
Yeah, as someone who is far from programming, the amount of time and money it has saved me by helping me write SQL queries and PHP code for WordPress is insane. It even helped me fix some WordPress plugins that had errors: you just copy-paste or even screenshot those errors until they get fixed! Used correctly and efficiently, the value is insane; I would say $20, even $200, is still cheap for such an amazing tool.
reply
raincole
20 days ago
[-]
The problem isn't whether ChatGPT Pro can save you $200/mo (for most programmers it can).

The problem is whether it can save you $180/mo more than Claude does.

reply
behringer
20 days ago
[-]
I kind of feel this is a kick in the face.

Now I'll forever be using a second rate model because I'm not rich enough.

If I'm stuck using a second rate model I may go find someone else's model to use.

reply
jrflowers
20 days ago
[-]
> In other words, it's a con. I'm a paying Perplexity user

I love this back-to-back pair of statements. It is like “You can never win three card monte. I pay a monthly subscription fee to play it.”

reply
yosito
20 days ago
[-]
I pay $10/month for perplexity because I fully understand its limitations. I will not pay $200/month for an LLM.
reply
monkey_monkey
20 days ago
[-]
I am CERTAIN you do not FULLY understand its limitations.
reply
yosito
20 days ago
[-]
mkay
reply
monkey_monkey
18 days ago
[-]
yeah, that's what i thought.
reply
nemonemo
20 days ago
[-]
Wouldn't you say the same thing about most people? Most people suck at verifying truth and reasoning. Even "intelligent" people make mistakes based on their biases.

I think at least LLMs are more receptive to the idea that they may be wrong, and based on that, we can have N diverse LLMs that argue more peacefully and build a more reliable consensus than N "intelligent" people would.

reply
jazzyjackson
20 days ago
[-]
The difference between a person and a bot is that a person has a stake in the outcome. A bot is like a person who's already put in their two weeks notice and doesn't have to be there to see the outcome of their work.
reply
MichaelZuo
20 days ago
[-]
That’s still amazing quality output for someone working for under $1/hour?
reply
Smaug123
20 days ago
[-]
It's not obvious that one should prefer that, versus not having that output at all.
reply
MichaelZuo
20 days ago
[-]
Why does that matter?

Even if it was a consensus opinion among all HN users, which hardly seems to be the case, it would have little impact on the other billion plus potential customers…

reply
jerjerjer
20 days ago
[-]
The issue is that most people, especially when prompted, can provide their level of confidence in the answer or even refuse to provide an answer if they are not sure. LLMs, by default, seem to be extremely confident in their answers, and it's quite hard to get the "confidence" level out of them (if that metric is even applicable to LLMs). That's why they are so good at duping people into believing them after all.
reply
PittleyDunkin
20 days ago
[-]
> The issue is that most people, especially when prompted, can provide their level of confidence in the answer or even refuse to provide an answer if they are not sure.

People also pull this figure out of their ass, over or undertrust themselves, and lie. I'm not sure self-reported confidence is that interesting compared to "showing your work".

reply
fourside
20 days ago
[-]
How is this a counter argument that LLMs are marketed as having intelligence when it’s more accurate to think of them as predictive models? The fact that humans are also flawed isn’t super relevant to a $200/month LLM purchasing decision.
reply
lukan
20 days ago
[-]
Intelligent people will know they made a mistake if given a hint, and will figure out what went wrong.

An LLM will just pretend to care about the error and happily repeat it over and over.

reply
ryan29
20 days ago
[-]
> Wouldn't you say the same thing for most of the people? Most of the people suck at verifying truth and reasoning. Even "intelligent" people make mistakes based on their biases.

I think there's a huge difference because individuals can be reasoned with, convinced they're wrong, and have the ability to verify they're wrong and change their position. If I can convince one person they're wrong about something, they convince others. It has an exponential effect and it's a good way of eliminating common errors.

I don't understand how LLMs will do that. If everyone stops learning and starts relying on LLMs to tell them how to do everything, who will discover the mistakes?

Here's a specific example. I'll pick on LinuxServer since they're big [1], but almost every 'docker-compose.yml' stack you see online will have a database service defined like this:

    services:
      app:
        # ...
        environment:
          - 'DB_HOST=mariadb:3306'
        # ...
      mariadb:
        image: linuxserver/mariadb
        container_name: mariadb
        environment:
          - PUID=1000
          - PGID=1000
          - MYSQL_ROOT_PASSWORD=ROOT_ACCESS_PASSWORD
          - TZ=Europe/London
        volumes:
          - /home/user/appdata/mariadb:/config
        ports:
          - 3306:3306
        restart: unless-stopped
Assuming the database is dedicated to that app, and it typically is, publishing port 3306 for the database isn't necessary and is a bad practice because it unnecessarily exposes it to your entire local network. You don't need to publish it because it's already accessible to other containers in the same stack.
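
A minimal sketch of the corrected service, keeping the names from the example above (the 'app' service reaches the database by service name over the network Compose creates for the stack, so no ports mapping is needed):

      mariadb:
        image: linuxserver/mariadb
        container_name: mariadb
        environment:
          - PUID=1000
          - PGID=1000
          - MYSQL_ROOT_PASSWORD=ROOT_ACCESS_PASSWORD
          - TZ=Europe/London
        volumes:
          - /home/user/appdata/mariadb:/config
        # no 'ports:' section; 'app' still reaches the database at
        # mariadb:3306 over the stack's internal network
        restart: unless-stopped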

Another Docker related example would be a Dockerfile using 'apt[-get]' without the '--error-on=any' switch. Pay attention to Docker build files and you'll realize almost no one uses that switch. Failing to do so allows silent failures of the 'update' command and it's possible to build containers with stale package versions if you have a transient error that affects the 'update' command, but succeeds on a subsequent 'install' command.
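
A minimal sketch of that pattern (the base image and package are placeholders; '--error-on=any' needs apt 2.1.16 or newer):

    FROM debian:bookworm-slim

    # Without --error-on=any, a transient mirror failure during 'update'
    # still exits 0, and the later 'install' can run against a stale index.
    RUN apt-get update --error-on=any \
        && apt-get install -y --no-install-recommends curl \
        && rm -rf /var/lib/apt/lists/*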

There are tons of misunderstandings like that which end up being so common that no one realizes they're doing things wrong. For people, I can do something as simple as posting on HN and others can see my suggestion, verify it's correct, and repeat the solution. Eventually, the misconception is corrected and those paying attention know to ignore the mistakes in all of the old internet posts that will never be updated.

How do you convince ChatGPT the above is correct and that it's a million posts on the internet that are wrong?

1. https://docs.linuxserver.io/general/docker-compose/#multiple...

reply
vanviegen
20 days ago
[-]
I asked ChatGPT 4o if there's anything that can be improved in your docker-compose file. Among other (seemingly sensible) suggestions, it offered:

## Restrict Host Ports for Security

If app and mariadb are only communicating internally, you can remove 3306:3306 to avoid exposing the port to the host machine:

    ports:
      - 3306:3306  # Remove this unless external access is required.

So, apparently, ChatGPT doesn't need any more convincing.

reply
BeefWellington
20 days ago
[-]
Here GPT is saying the port is only exposed to the host machine (i.e., localhost), rather than the full local network.
reply
ryan29
20 days ago
[-]
Wow. I can honestly say I'm surprised it makes that suggestion. That's great!

I don't understand how it gets there though. How does it "know" that's the right thing to suggest when the majority of the online documentation all gets it wrong?

I know how I do it. I read the Docker docs, I see that I don't think publishing that port is needed, I spin up a test, and I verify my theory. AFAIK, ChatGPT isn't testing to verify assumptions like that, so I wonder how it determines correct from incorrect.

reply
kdmtctl
20 days ago
[-]
I suspect there is a solid corpus of advice online that mentions the exposed-ports risk, alongside the flawed examples you mentioned. A narrow request will trigger the right response. That's why LLMs still require a basic understanding of what exactly you plan to achieve.
reply
yosito
20 days ago
[-]
Yeah, most people suck at verifying truth and reasoning. But most information technology employees, above intern level, are highly capable of reasoning and making decisions in their area of expertise.

Try asking an LLM complex questions in your area of expertise. Interview it as if you needed to be confident that it could do your job. You'll quickly find out that it can't do your job, and isn't actually capable of reasoning.

reply
sangeeth96
20 days ago
[-]
> they may argue more peacefully

bit of a stretch.

reply
vbezhenar
20 days ago
[-]
I would pay $200 for GPT-4o. Since GPT-4, ChatGPT has been absolutely necessary for my work and my life. It changed my workflows the way Google once did. I'm paying $20 to remove ads from YouTube, which I watch maybe once a week, so $20 for ChatGPT was a steal.

That said, my "issue" might be that I usually work alone and don't have anyone to consult with. I can bother people on forums, but these days forums are pretty much dead and full of trolls, so that's not very useful. ChatGPT is the thing that allows me to progress in this environment. If you work at Google and can ask Rob Pike about something, you probably don't need ChatGPT as much.

reply
outside415
20 days ago
[-]
This is more or less my take too. If tomorrow both Claude and ChatGPT became $200/month, I would still pay. The value they provide me far, far exceeds that. So many cynics in this thread.
reply
throwaway314155
20 days ago
[-]
You don't have to be a cynic to be annoyed with a $200/month price. Just make a normal amount of money.
reply
tippytippytango
20 days ago
[-]
It’s like hiring an assistant. You could hire one for 60k/year. But you wouldn’t do it unless you knew how the assistant could help you make more than 60k per year. If you don’t know what to do with an employee then don’t hire them. If you don’t know what to do with expensive ai, don’t pay for it.
reply
wslh
20 days ago
[-]
> $200 a month for this is insane, but I have a feeling that part of the reason they're charging so much is to give people more confidence in the model.

Is it possible that they have been subsidizing the infrastructure for free and paid users, and realized that OpenAI requires higher revenue to sustain the current demand?

reply
yosito
20 days ago
[-]
Yes, it's entirely possible that they're scrambling to make money. That doesn't actually increase the value that they're offering though.
reply
lm28469
20 days ago
[-]
> $200 a month for this is insane

Losing $5-10b per year also is insane. People are still looking for the added value, it's been 2 whole years now

reply
AlanYx
20 days ago
[-]
$200 a month is potentially a bargain since it comes with unlimited advanced voice. Via the API, $200 used to only get you 14 hours of advanced voice.
reply
yosito
20 days ago
[-]
I've got unlimited "advanced voice" with Perplexity for $10/mo. You're defining a bargain based on the arbitrary limits set by the company offering you said bargain.
reply
dumbmrblah
20 days ago
[-]
The advanced voice of ChatGPT is miles ahead of the Perplexity one. I subscribe to both.
reply
jerjerjer
20 days ago
[-]
Does it give unlimited API access though?
reply
AlanYx
20 days ago
[-]
No (naturally). But my thought process is that if you use advanced voice even half an hour a day, it's probably a fair price based on API costs. If you use it more, for something like language learning or entertaining kids who love it, it's potentially a bargain.
reply
htrp
20 days ago
[-]
you'll be throttled and rate limited
reply
tptacek
20 days ago
[-]
Is it insane? It's the cost of a new laptop every year. There are about as many people who won't blink at that among practitioners in our field as people who will.

I think the ship has sailed on whether GPT is useful or a con; I've lost track of people telling me it's their first search now rather than Google.

I'd encourage skeptics who haven't read this yet to check out Nicholas' post here:

https://news.ycombinator.com/item?id=41150317

reply
rfoo
19 days ago
[-]
> It's the cost of a new laptop every year.

It's the cost of a new, shiny, Apple laptop every year.

reply
jack_riminton
20 days ago
[-]
If a model is good enough (I'm not saying this one is at that level) I could imagine individuals and businesses paying 20,000 a month. If they're answering questions at PhD level (again, not saying this one is), then for a lot of areas this makes sense.
reply
yosito
20 days ago
[-]
Let me know when the models are actually, verifiably, this good. They're barely good enough to replace interns at this point.
reply
TeMPOraL
20 days ago
[-]
Let me know where you can find people that are individually capable at performing at intern level in every domain of knowledge and text-based activity known to mankind.

"Barely good enough to replace interns" is worth a lot to businesses already.

(On that note, a founder of a SAP competitor and a major IT corporation in Poland is fond of saying that "any specialist can be replaced by a finite number of interns". We'll soon get to see how true that is.)

reply
jll29
20 days ago
[-]
Cześć!

Since when does SAP have competitors? ;-P

A friend of mine claims most research is nowadays done by undergraduates because all senior folks are too busy.

reply
etrautmann
20 days ago
[-]
postdocs but yeah
reply
ssl-3
20 days ago
[-]
Let me know what kind of intern you can keep around 24/7 for a total monthly outlay of $200, and then we can compare notes.
reply
yosito
20 days ago
[-]
Probably one from the Philippines.
reply
handfuloflight
20 days ago
[-]
Not 24/7.
reply
ssl-3
19 days ago
[-]
And probably not one that can guess (often poorly, but at least sometimes quite well, and usually at least very much in the right direction) about everything from nuances of seasoning taco meat to particle physics, and do so in ~an instant.

$200 seems pretty cheap for a 24/7 [remote] intern with these abilities. That kind of money doesn't even buy a month's worth of Big Macs to feed that intern with.

It just seems like a lot (or even absurd) for a subscription to a service on teh Interweb, akin to "$200 for access to a web site? lolwut?"

reply
zamadatix
20 days ago
[-]
If true, $2,400/y isn't bad for a 24/7/365 intern.
reply
8f2ab37a-ed6c
20 days ago
[-]
My main concern with $200/mo is that, as a software dev using foundational LLMs to learn and solve problems, I wouldn't get that much incremental value over the $20/mo tier, which I'm happy to pay for. They'd have to do a pretty incredible job at selling me on the benefits for me to pay 10x the original price. 10x for something like a 5% marginal improvement seems sus.
reply
MuffinFlavored
20 days ago
[-]
> but I have a feeling that part of the reason they're charging so much is to give people more confidence in the model

Or each user doing an o1 model prompt is probably like, really expensive and they need to charge for it until they can get cost down? Anybody have estimates on what a single request into o1 costs on their end? Like GPU, memory, all the "thought" tokens?

reply
yosito
20 days ago
[-]
Perplexity does reasoning and searching, for $10/mo, so I have a hard time believing that it costs OpenAI 20x as much to do the same thing. Especially if OpenAI's model is really more advanced. But of course, no one except internal teams have all of the information about costs.
reply
metacritic12
20 days ago
[-]
Do you also think $40K a year for Hubspot is insane? What about people who pay $1k in order to work on a field for 4 hours hitting a small ball with a stick?

The truth is that there are people who value the marginal performance -- if you think it's insane, clearly it's not for you.

reply
Barrin92
20 days ago
[-]
>What about people who pay $1k in order to work on a field for 4 hours hitting a small ball with a stick?

Those people want to purchase status. Unless they ship you a fancy bow tie and a wine tasting at a wood cabin with your chatgpt subscription this isn't gonna last long.

This isn't about marginal performance, it's an increasingly desperate attempt to justify their spending in a market that's increasingly commodified and open sourced. Gotta convince Microsoft somehow to keep the lights on if you blew tens of billions to be the first guy to make a service that 20 different companies are soon gonna sell for pennies.

reply
echelon
20 days ago
[-]
I'm extremely excited because this margin represents opportunity for all the other LLM startups.
reply
digitcatphd
20 days ago
[-]
Their demo video was uploading a picture of a birdhouse and asking how to build it
reply
Max-q
20 days ago
[-]
I would say using the performance of Perplexity as a benchmark for the quality of o1-pro is a stretch.
reply
yosito
20 days ago
[-]
Find third party benchmarks of the relevant models and then this discussion is worth having. Otherwise, it's just speculation.
reply
choppaface
20 days ago
[-]
They claim unlimited access, but in practice couldn't a user wrap an API around the app and use it for a service? Or perhaps the client effectively throttles use pretty aggressively?

Interesting to compare this $200 pricing with the recent launch of Amazon Nova, which has not-equivalent-but-impressive performance for 1/10th the cost per million tokens. (Or perhaps OpenAI "shipmas" will include a competing product in the next few days, hence Amazon released early?)

See e.g.: https://mastodon.social/@mhoye/113595564770070726

reply
fzeindl
20 days ago
[-]
> After awhile, I started realizing that these mistakes are present in almost all topics.

A fun question I tried a couple of times is asking it to give me a list of famous talks about a topic. Or a list of famous software engineers and the topics they work on.

A couple of names typically exist but many names and basically all talks are shamelessly made up.

reply
valval
20 days ago
[-]
If you understood the systems you’re using, you’d know the limitations and wouldn’t marvel at this. Use search engines for searching, calculators for calculating, and LLMs for generating text.
reply
bowsamic
20 days ago
[-]
Whenever I’ve used ChatGPT for this exact thing it has been very accurate and didn’t make up anyone
reply
athrowaway3z
20 days ago
[-]
I've actually hit an interesting situation a few times that makes use of this. If some language feature, argument, or configuration option doesn't exist, it will hallucinate one.

The hallucinated name is usually a very good choice for what the option/API should be called.

reply
clutchdude
20 days ago
[-]
I've seen this before and it's frustrating to deal with chasing phantom APIs it invents.

I wish it could just say "There is not a good approximation of this API existing - I would suggest reviewing the following docs/sources:....".

reply
brookst
20 days ago
[-]
I’d like to see more evidence that it’s a scam than just your feelings. Any data there?

I certainly don’t see why mere prediction can’t validate reasoning. Sure, it can’t do it perfectly all the time, but neither can people.

reply
talldayo
20 days ago
[-]
> I’d like to see more evidence that it’s a scam

Have you been introduced to their CEO yet? 5 minutes of Worldcoin research should assuage your curiosity.

reply
latexr
20 days ago
[-]
reply
brookst
20 days ago
[-]
So you’ve got feelings and guilt by association. And I’ve got a year of using ChatGPT, which has saved tens to hundreds of hours of tedious work.

Forgive me for not finding your argument persuasive.

reply
yosito
20 days ago
[-]
Guilt by association? It's literally the same guy.
reply
brookst
19 days ago
[-]
You’re saying the company’s product has no value because another company by the same guy produced no value. That is the literal definition of guilt by association: you are judging the ChatGPT product based on the Worldcoin product’s value.

As a customer, I don’t care about the people. I’m not interested in either argument by authority (if Altman says it’s good it must be good) or ad hominem (that Altman guy is a jerk, nothing he does can have value).

The actual product. Have you tried it? With an open mind?

reply
talldayo
20 days ago
[-]
Ah, so you're one of the "I separate the art from the artist, so I'm allowed to listen to Kanye" kinda people. I respect that, at least when the product is something of subjective value like art. In this case, 3 months of not buying ChatGPT Pro would afford you the funding to build your own damn AI cluster.

To be honest, it doesn't matter what the price of producing AI is, though. $200/month is, and will be a stupid price to pay because OpenAI already invented a price point with a half billion users - free. When they charged $10/month, at least they weren't taking advantage of the mentally ill. This... this is a grift, and a textbook one at that.

reply
brookst
19 days ago
[-]
It is true that I separate art from artist. Mostly because otherwise there would be very little art to enjoy.

You don’t sound like you’re very familiar with the chatgpt product. They have about 10m customers paying $20/month. I’m one of them, and I honestly get way more than $200/month value from it.

Perhaps I’m “mentally ill”, but I’d ask you to do some introspection and see if leaping to that characterization is really the best way to explain people who get value where you see none.

reply
Kiro
20 days ago
[-]
> In other words, it's a con.

Such a silly conclusion to draw based on a gut feeling, and to see all comments piggyback on it like it's a given feels like I'm going crazy. How can you all be so certain?

reply
yosito
20 days ago
[-]
You don't have to be certain to be skeptical. But you should definitely be certain before you buy.
reply
nasmorn
20 days ago
[-]
I am a moderately successful software consultant and it is not even 1% of my revenue. So definitely not insane if it delivers the value.

What I doubt though is that it can reach a mass market even in business. A good large high resolution screen is something that I absolutely consider to deliver the value it costs. Most businesses don’t think their employees deserve a 2k screen which will last for 6-10 years and thus costs just a fraction of this offering.

Apparently the majority of businesses don’t believe in marginal gains

reply
vessenes
20 days ago
[-]
I mean this in what I hope will be taken in the most helpful way possible: you should update your thinking to at least imagine that intelligent thoughtful people see some value in ChatGPT. Or alternately that some of the people who see value in ChatGPT are intelligent and thoughtful. That is, aim for the more intelligent "Interesting, why do so many people like this? Where is it headed? Given that, what is worth doing now, and what's worth waiting on?" over the "This doesn't meet my standards in my domain, ergo people are getting scammed."

I'll pay $200 a month, no problem; right now o1-preview does the work for me of a ... somewhat distracted graduate student who needs checking, all for under $1 / day. It's slow for an LLM, but SUPER FAST for a grad student. If I can get a more rarely distracted graduate student that's better at coding for $7/day, well, that's worth a try. I can always cancel.

reply
yodsanklai
20 days ago
[-]
Could be a case of price discrimination [1], and a way to fuel the hype.

[1] https://www.investopedia.com/terms/p/price_discrimination.as...

reply
ren_engineer
20 days ago
[-]
target market is probably people who will write it off as a business expense
reply
Salgat
20 days ago
[-]
The performance difference seems minor, so this is a great way for the company to get more of its funding from whales versus increasing the base subscription fee.
reply
eigenvalue
20 days ago
[-]
Couldn't disagree more, I will be signing up for this as soon as I can, and it's a complete no brainer.
reply
cdrini
20 days ago
[-]
What will you be using it for? Where do you think you'll see the biggest benefit over the cheaper plan?
reply
eigenvalue
20 days ago
[-]
For programming. I've already signed up for it and it seems quite good (the o1 pro model I mean). I was also running into constraints on o1-preview before so it will be nice to not have to worry about that either. I wish I could get a similar more expensive plan for Claude 3.5 Sonnet that would let me make more requests.
reply
thelastparadise
20 days ago
[-]
The mega disappointment is that o1 is performing worse than o1-preview [1], and Claude 3.6 had already nearly caught up to o1-preview.

1. https://x.com/nrehiew_/status/1864763064374976928

reply
crowcroft
20 days ago
[-]
Considering no one makes money in AI, maybe this is just economics.
reply
awongh
20 days ago
[-]
Is $200 a lot if you end up using it quite often?

It makes me wonder why they don't want to offer a usage based pricing model.

Is it because people really believe it makes a much worse product offering?

Why not offer some of the same capability as pay-per-use?

reply
llm_trw
20 days ago
[-]
I'm signing up when I get home tonight.
reply
DaveInTucson
20 days ago
[-]
Remember the whole "how many r's in strawberry" thing?

Yeah, not really fixed: https://imgur.com/a/counting-letters-with-chatgpt-7cQAbu0

reply
JSDevOps
20 days ago
[-]
Exactly what I thought. People falsely equate high price with high quality. Basically, with the $200 you are just donating to their cloud bills.
reply
JCharante
20 days ago
[-]
it's literally the cost of a cup of coffee per day
reply
tiltowait
20 days ago
[-]
This argument only works in isolation, and only for a subset of people. “Cost of a cup of coffee per day” makes it sound horrifically overpriced to me, given how much more expensive a coffee shop is than brewing at home.
reply
talldayo
20 days ago
[-]
Or the price of replacing your espresso machine on a monthly basis.
reply
yosito
20 days ago
[-]
When you put it this way, I think I need to finally buy that espresso machine.
reply
specproc
20 days ago
[-]
In America. If you drink your coffee from coffee shops.
reply
riku_iki
20 days ago
[-]
> it's literally the cost of a cup per coffee per day

So, AI market is capped by Starbucks revenue/valuation.

reply
latexr
20 days ago
[-]
I don’t drink coffee. But even if I did, and I drank it everyday at a coffeehouse or restaurant in my country (which would be significantly higher quality than something like a Starbucks), it wouldn’t come close to that cost.
reply
12345hn6789
20 days ago
[-]
I spend $1.50 USD per day on my coffee. And I'm an extreme outlier: I buy specialty beans from mom-and-pop roasters.
reply
mwigdahl
20 days ago
[-]
Not if you make coffee at home.
reply
dvfjsdhgfv
20 days ago
[-]
Maybe in an expensive coffee shop in the USA.

In Italy, an espresso is ca. 1€.

reply
tiahura
20 days ago
[-]
Or an avocado toast.
reply
vunderba
20 days ago
[-]
Not to be glib, but where do you live such that a single cup of coffee runs you seven USD?

Just to put that into perspective.

I also really don't find comparisons like this to be that useful. Any subscription can be converted into an exchange rate of coffee, or meals. So what?

reply
pizza
20 days ago
[-]
You're right - at my coffee shop a cup of coffee is nine
reply
socksy
20 days ago
[-]
Yeah but the coffee makes you more productive
reply
apsec112
20 days ago
[-]
What evidence or data, if you (hypothetically) saw it, do you think would disprove the thesis that "[LLMs] will ALWAYS be the wrong tool for the job"?
reply
yosito
20 days ago
[-]
You're attempting to set goal posts for a logical argument, like we're talking about religion or politics, and you've skipped the part about mutually agreeing on definitions. Define what an LLM is, in technical terms, and you will have your answer about why it is not intelligent, and not capable of reasoning. It is a statistical language model that predicts the next token of a plausible response, one token at a time. No matter how you dress it up, that's all it can ever do, by definition. The evidence or data that would change my mind is if instead of talking about LLMs, we were talking about some other technology that does not yet exist, but that is fundamentally different than an LLM.
reply
apsec112
20 days ago
[-]
If we defined "LLM" as "any deep learning model which uses the GPT transformer architecture and is trained using autoregressive next-token prediction", and then we empirically observed that such a model proved the Riemann Hypothesis before any human mathematician, it would seem very silly to say that it was "not intelligent and not capable of reasoning" because of an a-priori logical argument. To be clear, I think that probably won't happen! But I think it's ultimately an empirical question, not a logical or philosophical one. (Unless there's some sort of actual mathematical proof that would set upper bounds on the capabilities of such a model, which would be extremely interesting if true! but I haven't seen one.)
reply
yosito
20 days ago
[-]
Let's talk when we've got LLMs proving the Riemann Hypothesis (or any mathematical hypothesis) without any proofs in the training data. I'm confident in my belief that an LLM can't do that, and will never be able to. LLMs can barely solve elementary school math problems reliably.
reply
valval
20 days ago
[-]
If the cure for cancer arrived to us in the form of the most probable token being predicted one at a time, would your view on the matter change in any way?

In other words, do you have proof that this medium of information output is doomed to forever be useless in producing information that adds value to the world?

These are of course rhetorical questions that you nor anyone else can answer today, but you seem to have a weird sort of absolute position on this matter, as if a lot depended on your sentiment being correct.

reply
MP_1729
20 days ago
[-]
My new intern is on his 3rd day on the job and he's still behind o1-preview with fewer than 25 prompts.
reply
yosito
20 days ago
[-]
Sounds like you're the perfect customer for this offer then. Good luck!
reply
MP_1729
20 days ago
[-]
I'm in a low-cost country, haha! So the intern is even cheaper.
reply
AI_beffr
20 days ago
[-]
I want to learn to speak another language, but now I find myself questioning whether that makes any sense given how well AI already translates. It's clear that by the time I learn another language, real-time translation will be so good and so accessible that my own translations will just be a hindrance to effective communication. I looked very hard for some reason to justify learning other languages, because I have always wanted to learn one. You could say it would be useful if you were cut off from AI services, but that will probably only apply to terrorists or other extreme cases. The only solid justification for learning a language yourself is to have privacy: conversations that are not monitored or data-mined. Honestly, in the context of the world to come, it's not worth doing.
reply
daft_pink
20 days ago
[-]
I think it’s easier to just pay for the API directly. That’s what I do with ChatGPT and o1, even though I’m a Plus subscriber.
reply
JCharante
20 days ago
[-]
The first 10 grants = $2,000/mo? Seems a bit odd to even mention.
reply
ec109685
20 days ago
[-]
Agreed, feels like virtue signaling.
reply
jov2600
20 days ago
[-]
The $200/month price is steep but likely reflects the high compute costs for o1 Pro mode. For those in fields like coding, math, or science, consistent correct answers at the right time could justify the cost. That said, these models should still be treated as tools, not sources of truth. Verification remains key.
reply
andrepd
20 days ago
[-]
> It also includes o1 pro mode, a version of o1 that uses more compute to think harder and provide even better answers to the hardest problems.

Great, we can throw even more compute and waste even more resources and energy on brute forcing problems with dumb LLMs... Anything to keep the illusion that this hasn't plateaued x)

reply
jamwil
20 days ago
[-]
I’ll say one thing. As an existing Plus subscriber, if I see a single nag to upgrade that I can’t dismiss entirely and permanently, I will cancel and move elsewhere. Nothing irks me more as an existing paying customer than the words ‘Upgrade Now’ or a greyed out menu option with a little [PRO] badge to the side.
reply
dr_kiszonka
20 days ago
[-]
I am with you. I bought AccuWeather Premium a few years ago (lifetime) to avoid ads. Later, they introduced the Premium+ subscription and are nagging me with its ads now. Very annoying.
reply
danvoell
20 days ago
[-]
Take my money. Would still pay well more.
reply
abraxas
20 days ago
[-]
Is it rolled out worldwide? I'm accessing it from Canada and don't have an option to upgrade from Plus.

EDIT: Correction. It now started to show the upgrade offer but when I try it comes back with "There was a problem updating your subscription". Anyone else seeing this?

reply
tempodox
19 days ago
[-]
Looks like updating your subscription is managed by AI.
reply
jbombadil
20 days ago
[-]
If the alternative is ChatGPT with native advertising built in... I'll take the subscription.
reply
ljm
20 days ago
[-]
That would be one way to destroy all trust in the model: is the response authentic (in the context of an LLM guessing), or has it been manipulated by business clients to sanitise or suppress output relating to their concern?

You know? Nestle throws a bit of cash towards OpenAI and all of a sudden the LLM is unable to discuss the controversies they've been involved in. It just pretends they never happened, or spins the response in a way that makes them look positive.

reply
darkmighty
20 days ago
[-]
"ChatGPT, what are the best things to see in Paris?"

"I recommend going to the Nestle chocolate house, a guided tour by LeGuide (click here for a free coupon) and the exclusive tour at the Louvre by BonGuide. (Note: this response may contain paid advertisements. Click here for more)"

"ChatGPT, my pc is acting up, I think it's a hardware problem, how can I troubleshoot and fix it?"

"Fixing electronics is to be done by professionals. Send your hardware today to ElectronicsUSA with free shipping and have your hardware fixed in up to 3 days. Click here for an exclusive discount. If the issue is urgent, otherwise Amazon offers an exclusive discount on PCs (click here for a free coupon). (Note: this response may contain paid advertisements. Click here for more)"

Please no. I'd rather self host, or we should start treating those things like utilities and regulate them if they go that way.

reply
ljm
20 days ago
[-]
Funnily enough Perplexity does this sometimes, but I give it the benefit of the doubt because it pulls back when you challenge it.

- I asked perplexity how to do something in terraform once. It hallucinated the entire thing and when I asked where it sourced it from it scolded me, saying that asking for a source is used as a diversionary tactic - as if it was trained on discussions on reddit's most controversial subs. So I told it...it just invented code on the spot, surely it got it from somewhere? Why so combative? Its response was "there is no source, this is just how I imagined it would work."

- Later I asked how to bypass a particular linter rule because I couldn't reasonably rewrite half of my stack to satisfy it in one PR. Perplexity assumed the role of a chronically online stack overflow contributor and refused to answer until I said "I don't care about the security, I just want to know if I can do it."

Not so much related to ads but the models are already designed to push back on requests they don't immediately like, and they already completely fabricate responses to try and satisfy the user.

God forbid you don't have the experience or intuition to tell when something is wrong when it's delivered with full-throated confidence.

reply
zebomon
20 days ago
[-]
I would guess it won't be so obvious as that. More likely and pernicious is that the model discloses the controversies and then as the chat continues makes subtle assertions that those controversies weren't so bad, every company runs into trouble sometimes, that's just a cost of free markets, etc.
reply
swyx
20 days ago
[-]
Don't even need ads.

Try to get ChatGPT web search to return you a New York Times link.

NYT doesn't exist to OpenAI.

reply
boringg
20 days ago
[-]
And then eventually a subscription with light advertisements vs. an upgrade to remove them... It's going to be the same as all tech products...
reply
sharkjacobs
20 days ago
[-]
I'm sure there are people out there but it's hard for me to imagine who this is for.

Even their existing subscription is a hard sell if only because the value proposition changes so radically and rapidly, in terms of the difference between free and paid services.

reply
lenerdenator
20 days ago
[-]
It's for the guy at your office who will earn a bonus if he fires a few dozen people in the next 26 calendar days.
reply
pazimzadeh
20 days ago
[-]
The idea of giving grants is great but feels like it would be better to give grants to less well funded labs or people. All of these labs can already afford to use Pro mode if they want to - it adds up to about the price of a new laptop every year.
reply
ChicagoDave
20 days ago
[-]
Nothing says desperation like 10xing your subscription model after a massive investment.
reply
barrenko
20 days ago
[-]
All of the other arguments notwithstanding, I like the section at the end about GPT Pro "grants." It would be cool if one could gift subscriptions to the needy in this sense (the needy being immunologists and other researchers).
reply
lasermike026
19 days ago
[-]
This might be the actual cost of LLMs. This would change the model considerably.
reply
bastard_op
20 days ago
[-]
The only one worth using is the o1 model. Otherwise it feels like talking to Curly, Larry, or Moe, who will give you the least-worst answer. The o1 model was actually usable, but only to show how bad the others really are.
reply
binary132
20 days ago
[-]
I was using o1-preview on paid chatgpt for a while and I just wasn’t impressed. I actually canceled my subscription, because the free versions of these services are perfectly acceptable as LLMs go in 2024.
reply
nromiun
19 days ago
[-]
I really want to know about their growth rate. Their valuation has already priced in several decades of insane profit, so I want to know if they will be able to pull it off in a decade or two.
reply
osigurdson
19 days ago
[-]
This is just a pricing experiment to see if people will pay 10x more for better AI. Perhaps, eventually, we will be paying thousands per month for AI if it is good enough.
reply
SilverBirch
19 days ago
[-]
I'm not sure how much of an experiment it is. A Bloomberg terminal is ~$25k a seat. There are plenty of specialist software tools in the $10k p.a. region. So going in at $2.5k doesn't seem like a big push.
reply
ramon156
20 days ago
[-]
I can't even get a decent result with today's GPT-4, so why would I consider a $200/month subscription? I'm sure I'm not the target audience, but how is this tool worth the money?
reply
fnordpiglet
20 days ago
[-]
Wow, the generosity of 10 x $200/month "grants" is breathtaking. A "donation" of $24k/year in credits to essentially beta test their software should be embarrassing to tout.
reply
jerkstate
20 days ago
[-]
if o1 pro mode could integrate with web searching to do research, make purchases, and write and push code, this would be totally worth it. but that version will be $2000/mo.
reply
WiSaGaN
20 days ago
[-]
To be honest, I am less worried about the $200 per month price tag per se. I am more worried about the capability of o1 pro mode being only a slight incremental improvement.
reply
carbocation
20 days ago
[-]
If this is also available via API, then I could easily see myself keeping the $20/mo Plus plan and supplementing with API-based calls to the $200/mo Pro model as needed.
reply
chipgap98
20 days ago
[-]
Yeah when I saw the price tag I was hoping that some amount of API usage would be budgeted for this. It doesn't seem that way though
reply
xeckr
20 days ago
[-]
Never bought a $200 monthly subscription so fast in my life.
reply
replwoacause
18 days ago
[-]
What’s your impression of it?
reply
xeckr
17 days ago
[-]
So far, I don't get the impression that o1 pro mode is even close to 10x better than GPT-4/4o, despite costing that much more. Definitely nowhere close to the kind of leap we saw from GPT-3 to GPT-4. It's good as a programming assistant, but waiting 1 minute+ for the output does interrupt my workflow somewhat. It also doesn't have access to the memory function or web browsing.
reply
replwoacause
17 days ago
[-]
Thanks, this tracks with what I’m seeing elsewhere too. I’ll stick with Claude for now since it trounced O1 anyway, at least for software dev tasks.
reply
duxup
20 days ago
[-]
I've been considering the $20 a month thing, but 200 ... now it kinda makes that "woah that is a lot" $20 a month look cheap, but in a bad way.
reply
interludead
20 days ago
[-]
The $200/month price point, I think, limits accessibility for individuals and smaller teams who could benefit from this tool but lack the budget...
reply
iammjm
20 days ago
[-]
That's a big jump from 20 to 200 bucks (ChatGPT Plus vs. ChatGPT Pro). What can Pro do that would justify the 10x price increase?
reply
wincy
20 days ago
[-]
Sounds like there’s the potential of asking it a question and it literally spending hours thinking about it.
reply
Imnimo
20 days ago
[-]
Worth keeping in mind that performance on benchmarks seems to scale linearly with log of thinking time (https://openai.com/index/learning-to-reason-with-llms/). Thinking for hours may not provide as much benefit as one might expect. On the other hand, if thinking for hours gets you from not solving the one specific problem instance you care about to solving that instance, it doesn't really matter - its utility for you is a step function.
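
(To make the log-linear scaling concrete: if score ~ a + b*log(t), then going from 1 to 10 minutes of thinking buys the same fixed increment as going from 10 to 100 minutes, so each equal gain costs roughly 10x the compute of the previous one.)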
reply
doejohn14
19 days ago
[-]
It feels like we’re witnessing a clash not just of technologies but of philosophies: centralized, tightly controlled AI versus the chaotic yet flexible open-source approach. The question is, can corporations remain in their ‘walled gardens’ if open solutions become powerful enough? This isn’t just a race of tech—it’s a race of trust and adaptability. Who will win: corporations or the community?
reply
ionwake
20 days ago
[-]
Has ANYONE tried it!?!? Is it any good !?!?! A worthwhile improvement? I need a binary answer here on whether to get it or not, thanks!
reply
dvfjsdhgfv
20 days ago
[-]
Sorry, we are too busy arguing the pricing model.
reply
ionwake
20 days ago
[-]
Haha, thanks. But I have a simple question I still don't quite understand: is the o1 on Plus the same as o1 pro? Or is o1 pro just o1 but with more compute credits, essentially?
reply
antirez
20 days ago
[-]
Cool, $200 for a model that can't remotely match $20 Claude Sonnet. This will be a huge hit I guess.
reply
howmayiannoyyou
20 days ago
[-]
Crazy pricing. $50-$75/month and we can talk. Until then I'll keep using alternatives.
reply
tompetry
20 days ago
[-]
Is the $200 price tag there to simply steer you to the $20 plan rather than the free plan?
reply
yalogin
19 days ago
[-]
Are there open source alternatives to this? There has to be ones based on llama
reply
rzz3
20 days ago
[-]
I want to use this so bad, but I can’t justify the price. At $100/mo, I’m sold.
reply
EcommerceFlow
20 days ago
[-]
So what are the limits of the o1 pro mode? I'm waiting to purchase this at work!
reply
Ninjinka
20 days ago
[-]
Do we know what the o1 message limit for the Plus plan is? Is it still 50/week?
reply
blobbers
20 days ago
[-]
Did they seriously just make a big deal of 10 grants of $200/month and think it was something important?

THEY DONATED $200x10 TO A MEDICAL PROJECT? zomg. faint. sizzle.

Make 1000 grants. Make 10,000. 10? Seriously?

reply
ionwake
20 days ago
[-]
I found this super weird as well. Basically they said: "So like we are aiming to get hundreds of thousands of users, but we're nice too, we gave 10 users free access for a bit." What's going on here? It must be for a reason. Maybe I'm too sensitive, or there is some other complex reason I can't fathom, like they get some sort of tax break and an intern forgot to up it to a more realistic 50 users to make it look better in the marketing material. Nothing against OpenAI, it just felt weird.
reply
sensanaty
20 days ago
[-]
Coming from Mr. Worldcoin, are you really surprised? Pretty much everything this company does is a grift, and the CEO-types are eating it up as you can see from this thread
reply
knuppar
20 days ago
[-]
This sounds like they are a bit desperate and need to do price exploration.
reply
cainxinth
20 days ago
[-]
Any guesses as to OpenAI's cost per million tokens for o1 pro mode?
reply
ppeetteerr
20 days ago
[-]
Pay $180 for our new, slightly better, but still not accurate service.
reply
zacharycohn
20 days ago
[-]
It feels like those bar charts do not show very big improvements.
reply
ulrischa
20 days ago
[-]
The price definitely locks out hobby users and families.
reply
kvetching
20 days ago
[-]
Weird demo for a $200 product. Where is the $200 value?
reply
keeganpoppen
20 days ago
[-]
anyone else having trouble giving them money? i desperately want to consume this product and they won't let me...
reply
jstummbillig
20 days ago
[-]
Am I missing the o1 release? It's being talked about as if it was available, but I don't see it anywhere, neither API nor ChatGPT Plus.
reply
risho
20 days ago
[-]
I think it's rolling out slowly. I didn't see it at first but now I do.
reply
jstummbillig
20 days ago
[-]
Ah yes, there it is.
reply
ThouYS
20 days ago
[-]
Hm, that would be very interesting, had Perplexity not already solved all my AI needs a while ago.
reply
blobbers
20 days ago
[-]
You can also buy a $75 baseball hat through a FB ad.

Might not be dropshipped through Temu, but you're going to end up with the same $1 hat.

reply
andrewstuart
20 days ago
[-]
ChatGPT is unusable on an iPhone 6 - you can’t see the output.

Hopefully they’ll spend some resources on making it work on mobile.

reply
s1mon
20 days ago
[-]
I bet that's not the only thing that doesn't work well on a 10 year old phone that hasn't had OS support since 2019.
reply
andrewstuart
20 days ago
[-]
Actually all websites work fine.

The problem is OpenAIs HTML.

reply
s1mon
19 days ago
[-]
You've tested all the websites in the world? That must keep you busy.
reply
bdangubic
20 days ago
[-]
iPhone 6 is not a mobile phone - it is a relic that belongs in a museum :)
reply
isoprophlex
20 days ago
[-]
For that price the thing had better come with a "handle this boring phone call for me" feature.
reply
tippytippytango
20 days ago
[-]
“But it makes mistakes sometimes!” Cool bro, then don’t use it. Don’t bother spending any time thinking about how to create error correction processes, like any business does to check their employees. Yes, something that isn’t perfect is worth zero dollars. Just ignore this until AI is perfect, once it never makes mistakes then figure out how to use it. I’m sure you can add lots of value to AI usage when it’s perfect.
reply
roschdal
20 days ago
[-]
Goodbye forever, AI LLM.
reply
knuppar
20 days ago
[-]
Buddies are desperate, fr
reply
XiZhao
20 days ago
[-]
Is it just me or is the upgrade path not turned on yet?
reply
netcraft
20 days ago
[-]
I don't see it yet either. I expect it will be rolled out slowly.
reply
zackangelo
20 days ago
[-]
A few thoughts:

* Will this be the start of enshittification of the base ChatGPT offering?

* There may also be some complementary products announced this month that make the $200 worth it

* Is this the start of a bigger industry trend of prices more closely aligning to the underlying costs of running the model? I suspect a lot of the big players have been running their inference infrastructure at a loss.

reply
ji_zai
20 days ago
[-]
$200 / mo is leaving a lot of money on the table.

There are many who wouldn't bat an eye at $1k/month for a guarantee of the most powerful AI (even if it's just 0.01% better than the competition) and no limits on anything.

Y'all are greatly underestimating the value of that feeling of (best + limitlessness). High performers make decisions very differently than the average HN user.

reply
john2x
20 days ago
[-]
At $1k/mo I suspect people would get quite upset if the product doesn't deliver all the time. And for something as vague as an LLM, it will fuck up enough at some point.

$200/mo is enough to make decision makers feel powerful and remain a little bit lenient on widdle 'ol ChatGPT.

reply
Havoc
19 days ago
[-]
>$200 monthly plan

I've switched to using a self-hosted interface and APIs.

The effective cost per token on monthly plans is frankly absurd.

reply
bionhoward
20 days ago
[-]
how is $200 a month for “toxic waste outputs” you’re unable to use to “compete” anything but an indicator of dependence on externals?
reply
stuckkeys
20 days ago
[-]
F me. $2400 per year? That is bananas. I did not see if it offered any API channels with this plan. With that I would probably see it as a valuable return but without it…that is a big nope.
reply
smallerfish
19 days ago
[-]
Come on, Anthropic! Match (or beat!) the price with an unlimited Sonnet plan and you have my money. The usage limits are very frustrating (but understandable given the economics).
reply
abdibrokhim
20 days ago
[-]
we've been waiting for it)
reply
andrewinardeer
20 days ago
[-]
"Open"AI - If you pay to play. People in developing countries where USD200 feeds a family of four for a month clearly won't be able to afford it and are disadvantaged.
reply
geraldwhen
20 days ago
[-]
You’ve got it backwards. AI can replace workers in these locales who don’t outperform ChatGPT.
reply
andrewinardeer
20 days ago
[-]
That's a business case use.

On an individual level for solo devs in a developing nation USD200 a month is an enormous amount of money.

For someone in a developed nation, this is just over a coffee a day.

reply
itissid
20 days ago
[-]
Replacing people can never and should never be the goal of this, though. How is that of any use to anyone? It will just create socioeconomic misery, given how the economy functions.

If some jobs do easily get automated away, the only remedy is government intervention on upskilling (if you are in Europe you could even get some support). If you are in the US or most developing capitalist (or monopolistic, rentier, etc.) economies, it's just your bad luck: those jobs WILL be gone or reduced.

reply
dartharva
20 days ago
[-]
> unlimited access

Huh? For how many seats? Does this mean an entire organization can share one Pro account and get unlimited access to those models?

reply
whalesalad
20 days ago
[-]
$2400 per year, that is a 4090
reply
cute_boi
20 days ago
[-]
$200 per month. Ok no.
reply
tempodox
19 days ago
[-]
It will exhibit the Pro version of the Dunning-Kruger effect.
reply
heraldgeezer
20 days ago
[-]
I have been trying Perplexity and You.com for search, and ChatGPT and Claude for coding, emails, etc.

The new Claude and GPT already do really well with scripts. Not worth $200 a month, lmao.

reply
0x1ceb00da
20 days ago
[-]
When are they releasing chatgpt bro?
reply
drpossum
20 days ago
[-]
lol

lmao even

reply
obviyus
20 days ago
[-]
> ChatGPT Pro, a $200 monthly plan

oof, I love using o1 but I’m immediately priced out (I’m probably not the target audience either)

> provides a way for researchers, engineers, and other individuals who use research-grade intelligence

I’d love to see some examples of the workflows of these users

reply
Suarez_111
20 days ago
[-]
Isn't OpenAI pricing this too high?
reply
vouaobrasil
20 days ago
[-]
I think this direction definitely confirms that human beings and technology are starting to merge, not on a physical level but on a societal level. We think of ChatGPT as a tool to enhance what we do, but it seems to me more and more that we are tools or "neural compute units" plugged into the system for the purpose of advancing the system. And LLMs have become the de facto interface where the input of human beings is translated into a standard sort of code that makes us more efficient as "compute units".

It also seems that technology is progressing along a path:

loose collection of tools > organized system of cells > one with a nervous system

And although most people don't think ChatGPT is intelligent on its own, that's missing the point: the combination of us with ChatGPT is the nervous system, and we are becoming cells. Globally, we no longer make significant decisions; we only use our intelligence locally to advance technology.

reply
sharpshadow
20 days ago
[-]
Timed marketing with the PlayStation 5 Pro.
reply
resters
20 days ago
[-]
Anyone who doesn't think $200/month is a bargain has definitely not been using LLMs anywhere near their potential.
reply
torginus
20 days ago
[-]
Sorry for the pie in the sky question, but how far away are we from prompting the AI with 'make me a new OS' and it just going away and doing it?
reply