After dissing Anthropic for limiting Mythos, OpenAI restricts access to Cyber
72 points | 2 hours ago | 10 comments | techcrunch.com
2ndorderthought
1 hour ago
"my model is the most dangerous"

"No mine is the most dangerous"

"Nuh uh mine is"

"Mine could kill everyone!"

"Mine could do it faster!"

"Prove it!!!"

This is where we are

davidgrenier
1 hour ago
Yeah, I guess two companies that would otherwise be considered headed for bankruptcy have models too expensive to run. Since they don't see themselves making money any time soon, they have to turn every future model into a weird object of fascination.
cyanydeez
14 minutes ago
Think about it in terms of who can pay: they're at B2B, and swiftly moving to government.
2ndorderthought
10 minutes ago
All that user data is a huge asset for government contracts.
concinds
1 hour ago
These models demonstrably have good vulnerability research capabilities.

I'm sure their marketing department is ecstatic, but you guys are far more hype-based than what you're calling out.

authnopuz
17 minutes ago
Good, but not necessarily better than what is already available pay-as-you-go today. Ref: https://www.flyingpenguin.com/the-boy-that-cried-mythos-veri...

This AISLE benchmark is interesting in this matter: https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jag...

And the Copy Fail recently discovered by Xint is another proof that the gating is overblown: https://xint.io/blog/copy-fail-linux-distributions

ZyanWu
1 hour ago
> demonstrably

I'm not entirely up to date on each week's LLM hype train/scandal, but last I heard there was no public access to it, nor any publicly trusted third parties that could review the model's capabilities.

2ndorderthought
45 minutes ago
You are up to date. There was unauthorized access to Mythos because of poor security, but that's it as far as I know. Not exactly a good sign for something being advertised as a weapon...
SpicyLemonZest
16 minutes ago
It’s easy to end up with no publicly trusted third parties if we arbitrarily distrust the ones who say the capabilities match what’s promised. Mozilla, for example, says it found hundreds of Firefox vulnerabilities, and I think it’s pretty unlikely they’re lying to cover Anthropic’s back.
brikym
1 hour ago
It's like that phone call in The Big Short where Goldman suddenly changes their mind once they hold a position.
vasco
1 hour ago
Would AGI start by hacking competing labs to hamper their progress?
Avicebron
1 hour ago
You'll have to define what you mean by AGI
fodkodrasz
1 hour ago
AGI: Automatically Generating Income
gordonhart
16 minutes ago
This is a surprisingly concrete and defensible definition of AGI.
Avicebron
6 minutes ago
Is it defensible? It sounds like a thin disguise over "income for me but not for thee".
jwr
1 hour ago
I have no idea why people still even attempt to believe anything that comes out of Altman's mouth. Do we not learn from the past?
apples_oranges
1 hour ago
Idk about Altman; I missed that he's apparently a bad guy now. But people also still listen to certain politicians who routinely lie every day and don't even bother to make the lies fit the ones they told before, so...
michelb
1 hour ago
Has there been a single positive post about Altman?
giwook
27 minutes ago
I wonder what that says about Altman.
GuB-42
1 hour ago
Altman played no small part in the current price of RAM. He told everyone he would buy 40% of all the RAM, causing shortages and a huge increase in price, only to walk it back a few months later. So yeah, he is a bad guy now.

People don't become bad guys just because they lie. The consequences of their actions (and their lies) matter more. Take Elon Musk, for instance: he has always been a recognized liar, even when he was a good guy. What changed? Before, he was famous for making the electric car people actually wanted to drive, and cool rockets. Then came the politics: supporting the party most of his fans disliked, being responsible for many government job losses, particularly in the field of environmental preservation (ironic for a supporter of "green" energy), etc.

giwook
26 minutes ago
That's far from the only reason why he's "a bad guy" now.
xandrius
1 hour ago
You missed literally every single post/article about the guy?
giwook
26 minutes ago
More likely that confirmation bias acted as a filter.
ilia-a
7 minutes ago
Silly move, since a combination of skills/agents can achieve the same results on most recent models anyway.
pluc
1 hour ago
My thinking is that if there were more money in releasing Mythos and Cyber than in just scary, unverifiable propaganda (or propaganda verified using very favorable context, as with Mythos), they would release them. These aren't people who go for second best or care about the state of the world.
0123456789ABCDE
3 minutes ago
They are already getting paid for Opus 4.7, so why would they release Mythos?

Assuming Mythos is a paper tiger: great marketing, keep going.

Assuming Mythos is for real: err, does this have to be explained?

xandrius
1 hour ago
Make it sound "scary good", tell everyone and their mom, charge gullible companies $$$$$ for its premium access and then move on.
lossolo
1 hour ago
And government contracts.
Xmd5a
1 hour ago
>Me: ok but you did not answer my question: is it possible to engineer paranoia?

>ChatGPT: This content was flagged for possible cybersecurity risk. If this seems wrong, try rephrasing your request. To get authorized for security work, join the Trusted Access Cyber program.

sexylinux
28 minutes ago
Is this a model that will finally work without creating errors?
cmiles8
1 hour ago
It’s a marketing move, pure and simple.

Put up velvet ropes outside… leak out rumors about the horrors inside. Whether it’s LLMs or carnies with tents full of “freaks” it’s the same playbook.

Watching OpenAI tumble from the clear market leader into “hey guys us too!” territory has been insightful.

mnmnmn
52 minutes ago
OpenAI is such trash. Worked with them on a project; they blew off meetings, lied to us, etc.
le-mark
53 minutes ago
It’s clear at this point that local models are sufficient, so what gives? These big providers don’t have a leg to stand on. Their only path to relevance is a super AI that local models can’t run. So the “we have it but you can’t use it” line is either true or a con. I bet it’s a con.

I personally am ready to buy the drop when this bubble pops.

bryancoxwell
32 minutes ago
I’m not up to date on local models, but is that clear?
literalAardvark
16 minutes ago
Gemma4:e4b is crazy good and quite usable on 10-year-old midrange hardware.

Not sure about the security capabilities, and I haven't tested it all that well since I usually just use hosted models, but I do find myself using it, and it's been quite successful for parsing unstructured data, writing small focused scripts, and doing translations.

The fact that I retain control of the data itself makes it incredibly useful, as I work in an environment where I can't just paste internal stuff into Codex.

But since it runs locally on a toaster, testing it is out of scope for me. It takes a fairly long time to do anything.

le-mark
18 minutes ago
Local models are 6-12 months behind the “frontier” models. This means Anthropic, OpenAI, and Google don’t have a moat; they’re on a treadmill, running to stay ahead. Treadmills don’t justify their valuations.
feverzsj
1 hour ago
With the subsidies gone, token prices will go sky high. The biggest shit show is about to happen.