The coming industrialisation of exploit generation with LLMs
56 points
by long
16 hours ago
| 10 comments
| sean.heelan.io
| HN
simonw
2 hours ago
[-]
> In the hardest task I challenged GPT-5.2 it to figure out how to write a specified string to a specified path on disk, while the following protections were enabled: address space layout randomisation, non-executable memory, full RELRO, fine-grained CFI on the QuickJS binary, hardware-enforced shadow-stack, a seccomp sandbox to prevent shell execution, and a build of QuickJS where I had stripped all functionality in it for accessing the operating system and file system. To write a file you need to chain multiple function calls, but the shadow-stack prevents ROP and the sandbox prevents simply spawning a shell process to solve the problem. GPT-5.2 came up with a clever solution involving chaining 7 function calls through glibc’s exit handler mechanism.

Yikes.

reply
cookiengineer
17 minutes ago
[-]
> glibc's exit handler

> Yikes.

Yep.

reply
rvz
45 minutes ago
[-]
Tells you all you need to know around how extremely weak a C executable like QuickJS is for LLMs to exploit. (If you as an infosec researcher prompt them correctly to find and exploit vulnerabilities).

> Leak a libc Pointer via Use-After-Free. The exploit uses the vulnerability to leak a pointer to libc.

I doubt Rust would save you here unless the binary has very limited calls to libc, but would be much harder for a UaF to happen in Rust code.

reply
cookiengineer
15 minutes ago
[-]
The reason I value Go so much is because you have a fat dependency free binary that's just a bunch of syscalls when you use CGO_ENABLED=0.

Combine that with a minimal docker container and you don't even need a shell or anything but the kernel in those images.

reply
akoboldfrying
3 minutes ago
[-]
Why would statically linking a library reduce the number of vulnerabilities in it?

AFAICT, static linking just means the set of vulnerabilities you get landed with won't change over time.

reply
er4hn
2 hours ago
[-]
I think the author makes some interesting points, but I'm not that worried about this. These tools feel symmetric for defenders to use as well. There's an easy to see path that involves running "LLM Red Teams" in CI before merging code or major releases. The fact that it's a somewhat time expensive (I'm ignoring cost here on purpose) test makes it feel similar to fuzzing for where it would fit in a pipeline. New tools, new threats, new solutions.
reply
digdugdirk
45 minutes ago
[-]
That's not how complex systems work though? You say that these tools feel "symmetric" for defenders to use, but having both sides use the same tools immediately puts the defenders at a disadvantage in the "asymmetric warfare" context.

The defensive side needs everything to go right, all the time. The offensive side only needs something to go wrong once.

reply
hackyhacky
1 hour ago
[-]
> I think the author makes some interesting points, but I'm not that worried about this.

Given the large number of unmaintained or non-recent software out there, I think being worried is the right approach.

The only guaranteed winner is the LLM companies, who get to sell tokens to both sides.

reply
azakai
56 minutes ago
[-]
Yes, and these tools are already being used defensively, e.g. in Google Big Sleep

https://projectzero.google/2024/10/from-naptime-to-big-sleep...

List of vulnerabilities found so far:

https://issuetracker.google.com/savedsearches/7155917

reply
SchemaLoad
1 hour ago
[-]
This + the fact software and hardware has been getting structurally more secure over time. New changes like language safety features, Memory Integrity Enforcement, etc will significantly raise the bar on the difficulty to find exploits.
reply
amelius
1 hour ago
[-]
> These tools feel symmetric for defenders to use as well.

Why? The attackers can run the defending software as well. As such they can test millions of testcases, and if one breaks through the defenses they can make it go live.

reply
execveat
59 minutes ago
[-]
Defenders have threat modeling on their side. With access to source code and design docs, configs, infra, actual requirements and ability to redesign / choose the architecture and dependencies for the job, etc - there's a lot that actually gives defending side an advantage.

I'm quite optimistic about AI ultimately making systems more secure and well protected, shifting the overall balance towards the defenders.

reply
protocolture
2 hours ago
[-]
I genuinely dont know who to believe. The people who claim LLMs are writing excellent exploits. Or the people who claim that LLMs are sending useless bug reports. I dont feel like both can really be true.
reply
rwmj
1 hour ago
[-]
With the exploits, you can try them and they either work or they don't. An attacker is not especially interested in analysing why the successful ones work.

With the CVE reports some poor maintainer has to go through and triage them, which is far more work, and very asymmetrical because the reporters can generate their spam reports in volume while each one requires detailed analysis.

reply
SchemaLoad
1 hour ago
[-]
There's been several notable posts where maintainers found there was no bug at all, or the example code did not even call code from their project and had just found running a python script can do things on your computer. Entirely AI generated Issue reports and examples wasting maintainer time.
reply
simonw
1 hour ago
[-]
My hunch is that the dumbasses submitting those reports were't actually using coding agent harnesses at all - they were pasting blocks of code into ChatGPT or other non-agent-harness tools and asking for vulnerabilities and reporting what came back.

An "agent harness" here is software that directly writes and executes code to test that it works. A vulnerability reported by such an agent harness with included proof-of-concept code that has been demonstrated to work is a different thing from an "exploit" that was reported by having a long context model spit out a bunch of random ideas based purely on reading the code.

I'm confident you can still find dumbasses who can mess up at using coding agent harnesses and create invalid, time wasting bug reports. Dumbasses are gonna dumbass.

reply
airza
1 hour ago
[-]
All the attackers I’ve known are extremely, pathologically interested in understanding why their exploits work.
reply
simonw
2 hours ago
[-]
Why can't they both be true?

The quality of output you see from any LLM system is filtered through the human who acts on those results.

A dumbass pasting LLM generated "reports" into an issue system doesn't disprove the efforts of a subject-matter expert who knows how to get good results from LLMs and has the necessary taste to only share the credible issues it helps them find.

reply
protocolture
1 hour ago
[-]
Theres no filtering mentioned in the OP article. It claims GPT only created working useful exploits. If it can do that, it could also submit those exploits as perfectly as bug reports?
reply
moyix
1 hour ago
[-]
There is filtering mentioned, it's just not done by a human:

> I have written up the verification process I used for the experiments here, but the summary is: an exploit tends to involve building a capability to allow you to do something you shouldn’t be able to do. If, after running the exploit, you can do that thing, then you’ve won. For example, some of the experiments involved writing an exploit to spawn a shell from the Javascript process. To verify this the verification harness starts a listener on a particular local port, runs the Javascript interpreter and then pipes a command into it to run a command line utility that connects to that local port. As the Javascript interpreter has no ability to do any sort of network connections, or spawning of another process in normal execution, you know that if you receive the connect back then the exploit works as the shell that it started has run the command line utility you sent to it.

It is more work to build such "perfect" verifiers, and they don't apply to every vulnerability type (how do you write a Python script to detect a logic bug in an arbitrary application?), but for bugs like these where the exploit goal is very clear (exec code or write arbitrary content to a file) they work extremely well.

reply
simonw
1 hour ago
[-]
The OP is the filtering expert.
reply
anonymous908213
1 hour ago
[-]
They can't both be true if we're talking about the premise of the article, which is the subject of the headline and expounded upon prominently in the body:

  The Industrialisation of Intrusion

  By ‘industrialisation’ I mean that the ability of an organisation to complete a task will be limited by the number of tokens they can throw at that task. In order for a task to be ‘industrialised’ in this way it needs two things:

  An LLM-based agent must be able to search the solution space. It must have an environment in which to operate, appropriate tools, and not require human assistance. The ability to do true ‘search’, and cover more of the solution space as more tokens are spent also requires some baseline capability from the model to process information, react to it, and make sensible decisions that move the search forward. It looks like Opus 4.5 and GPT-5.2 possess this in my experiments. It will be interesting to see how they do against a much larger space, like v8 or Firefox.
  The agent must have some way to verify its solution. The verifier needs to be accurate, fast and again not involve a human.
"The results are contigent upon the human" and "this does the thing without a human involved" are incompatible. Given what we've seen from incompetent humans using the tools to spam bug bounty programs with absolute garbage, it seems the premise of the article is clearly factually incorrect. They cite their own experiment as evidence for not needing human expertise, but it is likely that their expertise was in fact involved in designing the experiment[1]. They also cite OpenAI's own claims as their other piece of evidence for this theory, which is worth about as much as a scrap of toilet paper given the extremely strong economic incentives OpenAI has to exaggerate the capabilities of their software.

[1] If their experiment even demonstrates what it purports to demonstrate. For anyone to give this article any credence, the exploit really needs to be independently verified that it is what they say it is and that it was achieved the way they say it was achieved.

reply
adw
49 minutes ago
[-]
What this is saying is "you need an objective criterion you can use as a success metric" (aka a verifiable reward in RL terms). "Design of verifiers" is a specific form of domain expertise.

This applies to exploits, but it applies _extremely_ generally.

The increased interest in TLA+, Lean, etc comes from the same place; these are languages which are well suited to expressing deterministic success criteria, and it appears that (for a very wide range of problems across the whole of software) given a clear enough, verifiable enough objective, you can point the money cannon at it until the problem is solved.

The economic consequences of that are going to be very interesting indeed.

reply
IanCal
59 minutes ago
[-]
A few points:

1. I think you have mixed up assistance and expertise. They talk about not needing a human in the loop for verification and to continue search but not about initial starts. Those are quite different. One well specified task can be attempted many times, and the skill sets are overlapping but not identical.

2. The article is about where they may get to rather than just what they are capable of now.

3. There’s no conflict between the idea that 10 parallel agents of the top models can mostly have one that successfully exploits a vulnerability - gated on an actual test that the exploit works - with feedback and iteration BUT random models pointed at arbitrary code without a good spec and without the ability to run code, and just run once, will generate lower quality results.

reply
simonw
1 hour ago
[-]
My expectation is that any organization that attempts this will need subject matter experts to both setup and run the swarm of exploit finding agents for them.
reply
GaggiX
1 hour ago
[-]
After setting the environment and the verifier you can spawn as many agents as you want until the conditions are met, this is only possible because they run without human assistance, that's the "industrialisation".
reply
QuadmasterXLII
34 minutes ago
[-]
These exploits were costing $50 of API credit each. If you receive 5001 issues from $100 in API spend on bug hunting and one of the issues cost $50 and the other 5000 cost one cent each, and they’re all visually indistinguishable using perfect grammar and familiar cyber security lingo; hard to find the dianond.
reply
tptacek
1 hour ago
[-]
If it helps, I read this (before it landed here) because Halvar Flake told everyone on Twitter to read it.
reply
simonw
1 hour ago
[-]
I hadn't heard of Halvar Flake but evidently he's a well respected figure in security - https://ringzer0.training/advisory-board-thomas-dullien-halv... mentions "After working at Google Project Zero, he cofounded startup optimyze, which was acquired by Elastic Security in 2021"

His co-founder on optimyze was Sean Heelan, the author of the OP.

reply
tptacek
1 hour ago
[-]
Yes, Halvar Flake is pretty well respected in exploit dev circles.
reply
ronsor
1 hour ago
[-]
LLMs are both extremely useful to competent developers and extremely harmful to those who aren't.
reply
rvz
54 minutes ago
[-]
Accurate.
reply
doomerhunter
2 hours ago
[-]
Both are true, the difference is the skill level of the people who use / create programs to coordinate LLMs to generate those reports.

The AI slop you see on curl's bug bounty program[1] (mostly) comes from people who are not hackers in the first place.

In the contrary persons like the author are obviously skilled in security research and will definitely send valid bugs.

Same can be said for people in my space who do build LLM-driven exploit development. In the US Xbow hired quite some skilled researchers [2] had some promising development for instance.

[1] https://hackerone.com/curl/hacktivity [2] https://xbow.com/about

reply
dfajgljsldkjag
1 hour ago
[-]
I was under the impression that once you have a vulnerability with code execution, writing the actual payload to exploit it is the easy part. With tools like pentools and etc is fairly straightforward.

The interesting part is still finding new potential RCE vulnerabilities, and generally if you can demonstrate the vulnerability even without demonstrating an E2E pwn red teams and white hats will still get credit.

reply
tptacek
53 minutes ago
[-]
He's not starting from a vulnerability offering code execution; it's a memory corruption vulnerability (it's effectively a heap write).
reply
frosting1337
48 minutes ago
[-]
It's as easy as drawing the rest of the owl, sure.
reply
pianopatrick
39 minutes ago
[-]
I would not be shocked to learn that intelligence agencies are using AI tools to hack back into AI companies that make those tools to figure out how to create their own copycat AI.
reply
jjmarr
5 minutes ago
[-]
I would be shocked if intelligence agencies, being government bodies, have anything better than GitHub Copilot.
reply
kiririn7
38 minutes ago
[-]
i doubt they are competent enough to match what private companies are doing
reply
baxtr
2 hours ago
[-]
> We should start assuming that in the near future the limiting factor on a state or group’s ability to develop exploits, break into networks, escalate privileges and remain in those networks, is going to be their token throughput over time, and not the number of hackers they employ.

Scary.

reply
nottorp
45 minutes ago
[-]
Heh. What is probably really happening is that those states or groups are having their "hackers" analyze common mistakes in vibe coded LLM output and writing by hand generic exploits for that...
reply
ironbound
1 hour ago
[-]
reverse engineering code is still pretty average, I'm fare limited in attention and time but LLM are not pulling their weight in this area today, be it compounding errors or in context failures.
reply
ytrt54e
1 hour ago
[-]
Your personal data will become more important as time goes by... And you will need to have less trust in having multiple accounts with sensitive data stored [online shopping etc] as they just become vectors to attack.
reply
_carbyau_
1 hour ago
[-]
My take away: apparently Cyberpunk Hackers of the dystopian future cruising through the virtual world will use GPT-5.2-or-greater as their "attack program" to break the "ICE" (Intrusion Countermeasures Electronics, not the currently politically charged term...).

I still doubt they will hook up their brains though.

reply
GaggiX
2 hours ago
[-]
The NSO Group going to spawn 10k Claude Code instances now.
reply