Trusted access for the next era of cyber defense (openai.com)
56 points | 5 hours ago | 14 comments
Avicebron
2 hours ago
[-]
I don't think they've added enough cyber. My cyber workflow demands more trusted access for cyber so that I can use these cyber-permissive models for my cybersecurity.
reply
Jedd
1 hour ago
[-]
It's a source of minor, but persistent, annoyance that security people have tried to abscond with the prefix 'cyber', morphing it into a synonym for security.

Having grown up reading cyberpunk novels about life in cyberspace, with a passing interest in cybernetics (though not of the Sirius Cybernetics Corporation variety), I find it frustrating to lose a 'this means computer or internet related' prefix.

reply
bee_rider
1 hour ago
[-]
Hmm, I guess this puts the unregulated banking enthusiasts’ stealing of the crypto prefix in a new light.
reply
ofjcihen
1 hour ago
[-]
Whoa, hey now, if they just give out all the cyber at once they might run out, or worse, the bad guys will hoard all the cyber for themselves!

No no, best to have them distribute the cyber to us responsibly.

reply
SoftTalker
48 minutes ago
[-]
Just wait until you meet the Cybermen.
reply
swyx
1 hour ago
[-]
you make fun of it but i kind of like that the security community has just embraced this kinda old-school hokey term. it's a shorthand. leave them be.
reply
cshimmin
1 hour ago
[-]
Incidentally, I recently learned the origin of the term. Cyber, short for cybernetic, is from the Greek κυβερνήτης (kybernetes), meaning helmsman. Cybernetics was originally used in the context of automated control systems, so steering a rudder was a good analogy. It's also the origin of the name k8s (Kubernetes).
reply
twoodfin
1 hour ago
[-]
In the early days of socialization on the Internet, it had a very different meaning!
reply
alopha
4 hours ago
[-]
That's a lot of waffle to try and say 'we've got a really scary next model coming too, real soon, promise!'
reply
guzfip
3 hours ago
[-]
More like they realized how much money they were wasting letting the proles generate slop and vibe-code the same CRUD app they rewrote in 5 different JavaScript frameworks a few years back.

The money is in enterprise and government. The consumer market doesn't remotely pay enough. It's the same story as Microsoft purposely making Windows an unusable mess because the consumer desktop isn't where they make their money. That market was good for establishing themselves, but now it's getting dumped.

reply
flyinglizard
3 hours ago
[-]
Wait six months, get the Chinese version.
reply
everlier
3 hours ago
[-]
It's changing as we speak; z.ai is the first one to show differential pricing.
reply
ofjcihen
4 hours ago
[-]
I love that, in the era of having LLMs summarize everything, all of these companies have opted for what I call the "YouTube streamer apology video" tone and length for these announcements.

This feels more or less like a way to get in the news after Anthropic's Mythos announcement by removing some guardrails. I'm still signing up, though.

reply
gavinray
3 hours ago
[-]
I completed the "Trusted Access" verification, but it seems to have unlocked nothing in the OpenAI API or Codex models.

Just FYI for others.
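If anyone wants to double-check on their end, here's a minimal sketch that lists the model IDs visible to your API key via the standard /v1/models endpoint (assumes OPENAI_API_KEY is set in your environment; I'm not asserting any particular new ID should appear):

    import json
    import os
    import urllib.request

    # List every model ID this API key can see; if "Trusted Access"
    # verification unlocked anything, a new ID should show up here.
    req = urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    )
    with urllib.request.urlopen(req) as resp:
        for model in json.load(resp)["data"]:
            print(model["id"])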

reply
ofjcihen
4 minutes ago
[-]
So it seems like you just…have it once you get approved. I'm testing it now, and nothing indicates I'm running a different model, but it just doesn't fight me on cybersecurity stuff.
reply
hoss1474489
2 hours ago
[-]
I see a Security button in the "What's new" box in the Codex section of the ChatGPT website. It appears to allow me to run vulnerability scans against my connected GitHub repositories.

Direct link: https://chatgpt.com/codex/cloud/security

reply
gavinray
2 hours ago
[-]
I also have access to this, but I can't be certain whether it was there before or not.

Is anyone who hasn't verified able to access it?

reply
alphabettsy
1 hour ago
[-]
That’s been there for a while.
reply
bunnywantspluto
3 hours ago
[-]
It seems like local LLMs will get popular for cybersecurity if this trend of locking access to models continues.
reply
alephnerd
2 hours ago
[-]
Not really. Not performant enough. Most organizations that would be interested in using a foundation model for security would either purchase the model directly or buy from a vendor that adds its own special sauce or context to the model.
reply
iammjm
3 hours ago
[-]
"trusted" + openai just simply doesn't compute for me any more
reply
Havoc
3 hours ago
[-]
>democratized access

>partner with a limited set of organizations for more cyber-permissive models.

I get where they're going with this, but it's still rather hilarious how they had to get a corporate-speak expert to pull off the mental gymnastics needed for the announcement.

reply
0x3f
3 hours ago
[-]
It must be representative democracy! And our representative is... Larry Ellison. Oh no.
reply
greatgib
1 hour ago
[-]
All of that reminds me of how GPT-2 was almost too dangerous to be released to the world...
reply
onoesworkacct
1 hour ago
[-]
It was. The internet really has been filled with abject shit and social media is bots talking to bots.
reply
keyle
39 minutes ago
[-]
It's not like the world is a better place since...
reply
2001zhaozhao
3 hours ago
[-]
Requiring verified access is a good idea to mitigate risks from hacking while still giving people access to the latest models. Take notes, Anthropic.
reply
striking
2 hours ago
[-]
A 5.4 spin with slightly different guardrails is not "access to the latest models". We know this from the article, because it has a section titled "Looking ahead to our upcoming model release and beyond". I wonder if they didn't just feel like they were caught out by Mythos.
reply
nullc
1 hour ago
[-]
Make cyber not cyber.
reply
ACCount37
3 hours ago
[-]
Too little, too late. OpenAI's shit has been nearly worthless for cybersec for, what, a year already?

ChatGPT 5.x just tries to refuse anything remotely cybersecurity-related, to the point that it would at times rather deny that vulnerabilities exist than go poke at them, unless you get real creative with prompting and basically jailbreak it. And it was this bad BEFORE they started messing around with 5.4 access specifically.

And that was ChatGPT 5.4. A model that, by all metrics and all vibes, doesn't even have a decisive advantage over Opus 4.6 - which just does whatever the fuck you want out of the box.

What I'm most afraid of is that Anthropic is going to snort whatever it is that OpenAI is high on, and lock down Mythos the way OpenAI is locking down everything.

reply
jruz
3 hours ago
[-]
That’s the whole point of this variant of the model: it won’t have those guardrails.
reply
ACCount37
3 hours ago
[-]
Yes. But "perform a humiliation ritual of KYC to access the actual model instead of the nerfed version of it that's so neurotic about cybersec you have to sink 400 tokens into getting it to a usable baseline" does not inspire any confidence at all.
reply
lebovic
1 hour ago
[-]
It seems reasonable for a company to require KYC for a dual-use product, especially a novel one built for security research.

Privacy concerns aside, the KYC process for OpenAI was self-serve and took about a minute.

reply
jiggawatts
1 hour ago
[-]
Remember the argument that the bad guys using AI to hack systems won't be a problem because all the "good guys" will have access too and can secure their software?

Pepperidge Farm remembers.

reply
alephnerd
2 hours ago
[-]
> OpenAI's shit has been nearly worthless for cybersec for, what, a year already

Plenty of AI-for-cybersecurity companies use a mixture of models, including OpenAI's, depending on iteration and testing.

reply
mmooss
3 hours ago
[-]
This approach means only a tiny portion of the population will ever qualify. Doesn't that make everyone else beholden to those few, who are beholden to OpenAI?

Another solution is to make software makers responsible and liable for the output of their products. It's long been a problem that there is little legal responsibility, but we shouldn't just accept it. If Ford makes exploding cars, they are liable. If OpenAI makes software that endangers people, it should be the same.

> Democratized access: Our goal is to make these tools as widely available as possible while preventing misuse. We design mechanisms which avoid arbitrarily deciding who gets access for legitimate use and who doesn’t. That means using clear, objective criteria and methods – such as strong KYC and identity verification – to guide who can access more advanced capabilities and automating these processes over time.

KYC isn't democratic and doesn't prevent arbitrary favoritism; it's the opposite: it's used to control people, to favor friends, and to exclude enemies.

reply
sureMan6
3 hours ago
[-]
> Another solution is to make software makers responsible and liable for the output of their products. It's long been a problem that there is little legal responsibility, but we shouldn't just accept it. If Ford makes exploding cars, they are liable. If OpenAI makes software that endangers people, it should be the same.

That kind of thinking is exactly why LLMs are so censored: people think OAI should be liable if someone uses ChatGPT to commit cybercrimes.

How about this: cybercrimes are already illegal, so we just punish whoever uses the new tools to commit crimes instead of holding the toolmaker liable.

This gets more complicated if LLMs enable children to commit sophisticated crimes, but that's different from outright restricting the tool for everyone because someone might misuse it.

reply
marshray
45 minutes ago
[-]
"It's just a neutral tool" gets a lot harder to claim once a vendor starts specifically training and marketing the model for its ability to bypass security controls.

Yes, pentesting tools, even automated ones, are often legal. But they commonly do run up against legal restrictions and risks. They're marketed very differently from ChatGPT.

reply
0x3f
3 hours ago
[-]
There's always some wedge issue that means "don't punish the toolmaker" is not politically viable. Pick anything from guns to legal drugs to illegal drugs to all kinds of emotive things.

And once the wedge is in and the concept of maker responsibility is planted, it expands to people's pet issues, obviously.

The actual line of who gets punished just ends up at some equilibrium in the middle. Largely arbitrarily.

reply
luma
3 hours ago
[-]
So who is at fault in your solution: the org that created and shipped the software bug, or the company that discovered it?

I don't see how OpenAI is Ford in your analogy, as OpenAI didn't make the software that blew up.

reply
zb3
3 hours ago
[-]
> Ultimately, we aim to make advanced defensive capabilities available to legitimate actors large and small, including those responsible for protecting critical infrastructure, public services, and the digital systems people depend on every day.

Translation: we aim to make defensive capabilities available to the US and its vassals so they can protect critical infrastructure, while ensuring that independent countries can't protect against the US attacking theirs.

Fortunately, this plan will backfire: the model's capabilities are exaggerated, and these "safeguards" don't work reliably.

reply
Phelinofist
3 hours ago
[-]
Sounds totally reasonable to trust OpenAI and the sociopath sama.
reply