Hardening Firefox with Anthropic's Red Team
468 points
13 hours ago
| 25 comments
| anthropic.com
The bugs are the ones that say "using Claude from Anthropic" here: https://www.mozilla.org/en-US/security/advisories/mfsa2026-1...

https://blog.mozilla.org/en/firefox/hardening-firefox-anthro...

https://www.wsj.com/tech/ai/send-us-more-anthropics-claude-s...

gzoo
2 minutes ago
[-]
This resonates. I just open-sourced a project and someone on Reddit ran a full security audit using Claude and found 15 issues across the codebase, including FTS injection, LIKE wildcard injection, missing API auth, and privacy enforcement gaps I'd missed entirely. What surprised me was how methodical it was. Not just "this looks unsafe": it categorized by severity, cited exact file paths and line numbers, and identified gaps between what the docs promised and what the code actually implemented. The "spec vs reality" analysis was the most useful part.

Makes me think the biggest impact of LLM security auditing isn't finding novel zero-days; it's the mundane stuff that humans skip because it's tedious. Checking every error handler for information leakage, verifying that every documented security feature is actually implemented, scanning for injection points across hundreds of routes. That's exactly the kind of work that benefits from tireless pattern matching.
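As a minimal sketch of the LIKE wildcard issue (hypothetical code using Python's sqlite3, not from the audited project): parameterization alone doesn't stop it, because a user-supplied "%" or "_" inside the parameter still behaves as a wildcard, so those characters have to be escaped separately.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")
conn.executemany("INSERT INTO notes VALUES (?)", [("alpha",), ("a_b",)])

def search(term: str):
    # Parameterization alone doesn't help here: a user-supplied "%" or
    # "_" inside the parameter still acts as a LIKE wildcard.
    escaped = (term.replace("\\", "\\\\")
                   .replace("%", "\\%")
                   .replace("_", "\\_"))
    return conn.execute(
        "SELECT body FROM notes WHERE body LIKE ? ESCAPE '\\'",
        (f"%{escaped}%",),
    ).fetchall()

print(search("_"))  # matches only the row with a literal underscore
```

Without the escaping step, searching for "_" would match every row, which is exactly the kind of mundane, easy-to-miss bug described above.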

reply
tabbott
5 hours ago
[-]
I recommend that anyone responsible for maintaining the security of an open-source software project ask Claude Code to do a security audit of it. I imagine that might not work that well for Firefox without a lot of care, because it's a huge project.

But for most other projects, it probably only costs $3 worth of tokens. So you should assume the bad guys have already done it to your project looking for things they can exploit, and it no longer feels responsible to not have done such an audit yourself.

Something that I found useful when doing such audits for Zulip's key codebases is to ask the model to carefully self-review each finding; that removed the majority of the false positives. Most of the rest we addressed by adding comments that would help developers (or a model) casually reading the code understand what the intended security model is for that code path... And indeed most of those did not show up on a second audit done afterwards.

reply
Analemma_
4 hours ago
[-]
I'm curious: has someone done a lengthy write-up of best practices to get good results out of AI security audits? It seems like it can go very well (as it did here) or be totally useless (all the AI slop submitted to HackerOne), and I assume the difference comes down to the quality of your context engineering and testing harnesses.

This post did a little bit of that but I wish it had gone into more detail.

reply
j-conn
1 hour ago
[-]
OpenAI just released “codex security”, worth trying (along with other suggestions) if your org has access https://openai.com/index/codex-security-now-in-research-prev...
reply
simonw
3 hours ago
[-]
The HackerOne slop is because there's a financial incentive (bug bounties) involved, which means people who don't know what they are doing blindly submit anything that an LLM spots for them.

If you're running the security audit yourself you should be in a better position to understand and then confirm the issues that the coding agents highlight. Don't treat something as a security issue until you can confirm that it is indeed a vulnerability. Coding agents can help you put that together but shouldn't be treated as infallible oracles.

reply
hansvm
2 minutes ago
[-]
That sounds like the same problem (a deluge of slop) with a different interface (eating straight from the trough rather than waiting for someone to put a bow on it and stamp their name to it)?
reply
johannes1234321
2 hours ago
[-]
The question still is: will enough useful stuff be included to make it worth digging through the slop? And how do you tune the prompt to get better results?
reply
simonw
2 hours ago
[-]
Best way to figure that out is to try it and see what happens.
reply
Groxx
1 hour ago
[-]
[claimed common problem exists, try X to find it] -> [Q about how to best do that] -> "the best way to do it is to do it yourself"

Surely people have found patterns that work reasonably well, and it's not "everyone is completely on their own"? I get that the scene is changing fast, but that's ridiculous.

reply
simonw
1 hour ago
[-]
There's so much superstition and outdated information out there that "try it yourself" really is good advice.

You can do that in conjunction with trying things other people report, but you'll learn more quickly from your own experiments. It's not like prompting a coding agent is expensive or time consuming, for the most part.

reply
nl
1 hour ago
[-]
/security-review really is pretty good.

But your codebase is unique. Slop in one codebase is very dangerous in another.

reply
bluGill
2 hours ago
[-]
That depends on how the tool is used. People who ask for a security vulnerability get slop. People who ask for deeper analysis often get something useful - but it isn't always a vulnerability.
reply
unethical_ban
1 hour ago
[-]
I assume it's just like asking for help refactoring, just targeting specific kinds of errors.

I ran a small python script that I made some years ago through an LLM recently and it pointed out several areas where the code would likely throw an error if certain inputs were received. Not security, but flaws nonetheless.

reply
ronsor
2 hours ago
[-]
You're either digging through slop or digging through your whole codebase anyway.
reply
lmeyerov
4 hours ago
[-]
We split our work:

* Specification extraction. We have security.md and policy.md, often per module. Threat model, mechanisms, etc. This is collaborative and gets checked in for ourselves and the AI. Policy is often tricky & malleable product/business/ux decision stuff, while security is technical layers more independent of that or broader threat model.

* Bug mining. It is driven by the above. It is iterative, where we keep running it to surface findings, adversarially analyze them, and prioritize them. We keep repeating until diminishing returns wrt priority levels. Likely leads to policy & security spec refinements. We use this pattern not just for security, but general bugs and other iterative quality & performance improvement flows - it's just a simple skill file with tweaks like parallel subagents to make it fast and reliable.

This lets the AI drive itself more easily, and in ways you explicitly care about vs noise.

reply
ares623
4 hours ago
[-]
No mention of the quality of the engineers reviewing the result?
reply
152334H
29 minutes ago
[-]
Impressive work. Few understand the absurd complexity implied by a browser pwn problem. Even the 'gruntwork' of promoting the most conveniently contrived UAF to wasm shellcode would take me days to work through manually.

The AI Cyber capabilities race still feels asleep/cold, at the moment. I think this state of affairs doesn't last through to the end of the year.

> When we say “Claude exploited this bug,” we really do mean that we just gave Claude a virtual machine and a task verifier, and asked it to create an exploit.

I've been doing this too! kctf-eval works very well for me, albeit with much less than 350 chances ...

> What’s quite interesting here is that the agent never “thinks” about creating this write primitive. The first test after noting “THIS IS MY READ PRIMITIVE!” included both the `struct.get` read and the `struct.set` write.

And this bit is a bit scary. I can read all the (summarized) CoT I want, but it's never quite clear to me what a model understands/feels innately, versus pure cheerleading for the sake of some unknown soft reward.

reply
mmsc
11 hours ago
[-]
It's cool that Mozilla updated https://www.mozilla.org/en-US/security/advisories/mfsa2026-1... because we were all wondering who had found 22 vulnerabilities in a single release (their findings were originally not attributed to anybody).
reply
himata4113
2 hours ago
[-]
Use After Free Use After Free Use After Free Use After Free Use After Free Use After Free Use After Free.

I would be more satisfied if they gave a proper explanation of what these could have led to, rather than a "well, maybe a 0.001% chance to exploit this". They did vaguely go over how "two" exploits managed to drop a file, but how impactful is that? Dropping a file in abcd with custom contents in some folder relative to the user profile is not that impactful, other than corrupting data, poisoning cache, or injecting some JavaScript. Now, reading session data from other sites, that I would find interesting.

reply
hedora
55 minutes ago
[-]
If you can poison cache, you can probably use that as a stepping stone to read session data from other sites.
reply
dmix
2 hours ago
[-]
Looks like a lot of the usual suspects
reply
fcpk
12 hours ago
[-]
The fact that there is no mention of what the bugs were is a little odd. It'd really be nice to see whether these are "weird, never-happening edge cases" or actual issues. LLMs have an uncanny ability to identify failure patterns they have seen before, but those are not necessarily meaningful.
reply
iosifache
12 hours ago
[-]
reply
larodi
11 hours ago
[-]
The fact that some of the Claude-discovered bugs were quite severe is also a little more than something to brush off as "yeah, LLM, whatever". The list reads as quite meaningful to me, but I'm not a security expert anyways.
reply
jandem
12 hours ago
[-]
Here's a write-up for one of the bugs they found: https://red.anthropic.com/2026/exploit/
reply
deafpolygon
12 hours ago
[-]
I’m guessing it might be some of these: https://www.mozilla.org/en-US/security/advisories/mfsa2026-1...
reply
muizelaar
12 hours ago
[-]
Yeah, the ones reported by Evyatar Ben Asher et al.
reply
robin_reala
11 hours ago
[-]
I correctly misread that as “et AI”.
reply
moffkalast
6 hours ago
[-]
We can put that one next to the Weird AI Yankovic music generator.
reply
deafpolygon
11 hours ago
[-]
“et AI, Brutus!"
reply
tclancy
11 hours ago
[-]
Yon Claude has a lean and hungry look.
reply
nervysnail
1 hour ago
[-]
He computes too much.
reply
deafpolygon
10 hours ago
[-]
An LLM by any other name would hallucinate the same
reply
tclancy
8 hours ago
[-]
Anyone still reading down here will appreciate this https://bsky.app/profile/simeonthefool.bsky.social/post/3kbk...
reply
tclancy
6 hours ago
[-]
Hang on, someone downvoted me for a horrific pun? GOOD.
reply
deafpolygon
3 hours ago
[-]
I upvoted, so maybe that restored the balance.
reply
pjmlp
12 hours ago
[-]
Indeed; without it, this looks like a fluffy marketing piece.
reply
tptacek
9 hours ago
[-]
And now that you know that it isn't, do you feel differently about the logic you used to write this comment?
reply
john_strinlai
9 hours ago
[-]
i am curious, what are you hoping to get out of this comment? will you feel better if they say yes? what is your plan if they say no?
reply
tptacek
8 hours ago
[-]
I genuinely want to understand how they arrived at the claim that this was a fluffy marketing piece. Like, if you said on a different thread, "the Linux kernel is probably mostly written in Pascal", I would really want to understand how it was you got to that idea.
reply
JumpCrisscross
8 hours ago
[-]
> what are you hoping to get out of this comment?

Rando here. It gives a signal on the account’s other comments, as well as the value of the original comment (as a hypothesis, albeit a wrong one, versus blind raging).

reply
john_strinlai
7 hours ago
[-]
>"It gives a signal on the account's other comments,"

fair enough. i typically use karma as a rough proxy for that, especially when the user has a lot of it (like, in this case, where the poster is #17 on the leaderboard with 100,000+ karma). you don't get that much karma if you are consistently posting bad takes.

>as well as the value of the original comment (as a hypothesis, albeit a wrong one, versus blind raging).

i don't see, in this case anyways, how or why that distinction would matter or change anything (in this case specifically, what would you change or do differently if it was a hypothesis or simple "raging"?), but i'm probably just thinking about it incorrectly.

reply
tptacek
6 hours ago
[-]
I think a lot of people are overreading this and really all that's happened here is that I was out at a show last night and was really foggy when I woke up and asked a question clumsily. It happens!
reply
john_strinlai
6 hours ago
[-]
yeah, absolutely, i was not intending to start some big inquisition against you or anything.

just like you were genuinely trying to understand where pjmlp was coming from, i was genuinely trying to understand what you would get out of an answer to your question (or, like, what the next reply could even be other than "ok, cool").

reply
tptacek
6 hours ago
[-]
Oh, yeah, no, you're fine, this is on me.
reply
TheBicPen
6 hours ago
[-]
> you dont get that much karma if you are consistently posting bad takes.

I wonder how true that is. While this site doesn't incentivize engagement-maximizing behaviour (posting ragebait) like some other sites do, I would imagine that simply posting more is the best way to accrue karma long-term.

reply
john_strinlai
6 hours ago
[-]
>I would imagine that simply posting more is the best way to accrue karma long-term.

i definitely agree, which is why i use it as a rough proxy rather than ground truth, but i have my doubts that you can casually "post more" your way into the top 20 karma users of all time.

reply
pjmlp
7 hours ago
[-]
Do I?
reply
tptacek
6 hours ago
[-]
I don't know. I'm really asking. I have you bucketed in my head in the cohort of "HN commenters who write lots of assembly", so the mismatch between your prediction and the outcome is just really interesting to me.
reply
staticassertion
12 hours ago
[-]
I've had mixed results. I find that agents can be great for:

1. Producing new tests to increase coverage. Migrating you to property testing. Setting up fuzzing. Setting up more static analysis tooling. All of that would normally take "time" but now it's a background task.

2. They can find some vulnerabilities. They are "okay" at this, but if you are willing to burn tokens then it's fine.

3. They are absolutely wrong sometimes about something being safe. I have had Claude very explicitly state that a security boundary existed when it didn't. That is, it appeared to exist in the same way that a chroot appears to confine, and it was intended to be a security boundary, but it was not a sufficient boundary whatsoever. Multiple models not only identified the boundary and stated it exists but referred to it as "extremely safe" or other such things. This has happened to me a number of times and it required a lot of nudging for it to see the problems.

4. They often seem to do better with "local" bugs. Often something that has the very obvious pattern of an unsafe thing. Sort of like "that's a pointer deref" or "that's an array access" or "that's `unsafe {}`" etc. They do far, far worse the less "local" a vulnerability is. Product features that interact in unsafe ways when combined, that's something I have yet to have an AI be able to pick up on. This is unsurprising - if we trivialize agents as "pattern matchers", well, spotting some unsafe patterns and then validating the known properties of that pattern is not so surprising, but "your product has multiple completely unrelated features, bugs, and deployment properties, which all combine into a vulnerability" is not something they'll notice easily.

It's important to remain skeptical of safety claims by models. Finding vulns is huge, but you need to be able to spot the mistakes.
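As a toy illustration of the sort of "local" pattern that's easy to flag (hypothetical code, not from any real report): both the red flag and its fix are visible in a single line, which is exactly where pattern matching shines.

```python
import shlex

HOSTILE = "example.com; rm -rf /"

def build_cmd_bad(host: str) -> str:
    # "Local" red flag: user input interpolated straight into a shell string.
    return f"ping -c 1 {host}"

def build_cmd_good(host: str) -> str:
    # One-line fix, equally local: quote the argument before interpolation.
    return f"ping -c 1 {shlex.quote(host)}"

print(build_cmd_bad(HOSTILE))   # ping -c 1 example.com; rm -rf /
print(build_cmd_good(HOSTILE))  # ping -c 1 'example.com; rm -rf /'
```

A cross-feature vulnerability, by contrast, has no single line to point at, which matches the observation above about non-local bugs.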

reply
mozdeco
11 hours ago
[-]
[work at Mozilla]

I agree that LLMs are sometimes wrong, which is why this new method here is so valuable - it provides us with easily verifiable testcases rather than just some kind of analysis that could be right or wrong. Purely triaging through vulnerability reports that are static (i.e. no actual PoC) is very time consuming and false-positive prone (same issue with pure static analysis).

I can't really confirm the part about "local" bugs anymore though, but that might also be a model thing. When I did experiments longer ago, this was certainly true, esp. for the "one shot" approaches where you basically prompt it once with source code and want some analysis back. But this actually changed with agentic SDKs where more context can be pulled together automatically.

reply
kwanbix
2 hours ago
[-]
Please implement "name window" natively in Firefox.

I have to use Chrome because of the lack of it.

reply
nitwit005
4 hours ago
[-]
I've seen fairly poor results from people asking AI agents to fill in coverage holes. Too many tests that either don't make sense, or add coverage without meaningfully testing anything.

If you're already at a very high coverage, the remaining bits are presumably just inherently difficult.

reply
rithdmc
10 hours ago
[-]
Security has had pattern matching in traditional static analysis for a while. It wasn't great.

I've personally used two AI-first static analysis security tools and found great results, including interesting business logic issues, across my employers SaaS tech stack. We integrated one of the tools. I look forward to getting employer approval to say which, but that hasn't happened yet, sadly.

reply
StilesCrisis
8 hours ago
[-]
This description is also pretty accurate for a lot of real-world SWEs, too. Local bugs are just easier to spot. Imperfect security boundaries often seem sufficient at first glance.
reply
delaminator
4 hours ago
[-]
But you're not a member of Anthropic's Red Team, with access to a specialist version of Claude.
reply
g947o
11 hours ago
[-]
> Firefox was not selected at random. It was chosen because it is a widely deployed and deeply scrutinized open source project — an ideal proving ground for a new class of defensive tools.

What I was thinking was, "Chromium team is definitely not going to collaborate with us because they have Gemini, while Safari belongs to a company that operates in a notoriously secretive way when it comes to product development."

reply
jeffbee
3 hours ago
[-]
I would have started with Firefox, too. It is every bit as complex as Chromium, but as a project it has far fewer resources.
reply
vorticalbox
11 hours ago
[-]
It's just a different attack surface for Safari; they would need to black-box attack the browser, which is much harder than what they did here.
reply
rs_rs_rs_rs_rs
10 hours ago
[-]
What? The js engine in Safari is open source, they can put Claude to work on it any time they want.
reply
runjake
8 hours ago
[-]
Here's a rough breakdown, formatted as best I can for HN:

  Safari (closed source)
   ├─ UI / tabs / preferences
   ├─ macOS / iOS integration
   └─ WebKit framework (open source) ~60%
        ├─ WebCore (HTML/CSS/DOM)
        ├─ JavaScriptCore (JS engine)
        └─ Web Inspector
reply
hu3
9 hours ago
[-]
There's much more to a browser than the JS engine.

They picked the most open-source one.

reply
SahAssar
8 hours ago
[-]
WebKit is not open source?

Sure, there are closed source parts of Safari, but I'd guess at least 90% of Safari's attack surface is in WebKit and its parts.

reply
Normal_gaussian
7 hours ago
[-]
In many cases, the difference between a bug and an attack vector lies in the closed source areas.

This is going to be the case automating attack detection against most programs where a portion is obscured.

reply
rs_rs_rs_rs_rs
6 hours ago
[-]
>In many cases, the difference between a bug and an attack vector lies in the closed source areas.

You say many cases; let's see some examples in Safari.

reply
dwaite
7 hours ago
[-]
However, Firefox also needs to use the closed source OS when running on Windows or macOS.

There are also WebKit-based Linux browsers, which obviously do not use closed-source OS interfaces.

My pessimistic guess on reasoning is that they suspected Firefox to have more tech debt.

reply
g947o
7 hours ago
[-]
Apple is not the kind of company that typically does these things, even if the entire Safari is open source.
reply
stuxf
12 hours ago
[-]
It's interesting that they counted these as security vulnerabilities (from the linked Anthropic article)

> “Crude” is an important caveat here. The exploits Claude wrote only worked on our testing environment, which intentionally removed some of the security features found in modern browsers. This includes, most importantly, the sandbox, the purpose of which is to reduce the impact of these types of vulnerabilities. Thus, Firefox’s “defense in depth” would have been effective at mitigating these particular exploits.

reply
kingkilr
12 hours ago
[-]
[Work at Anthropic, used to work at Mozilla.]

Firefox has never required a full chain exploit in order to consider something a vulnerability. A large proportion of disclosed Firefox vulnerabilities are vulnerabilities in the sandboxed process.

If you look at Firefox's Security Severity Rating doc: https://wiki.mozilla.org/Security_Severity_Ratings/Client what you'll see is that vulnerabilities within the sandbox, and sandbox escapes, are both independently considered vulnerabilities. Chrome considers vulnerabilities in a similar manner.

reply
stuxf
12 hours ago
[-]
Makes sense, thank you!
reply
bell-cot
11 hours ago
[-]
If only this attitude was more common. All security is, ultimately, multi-ply Swiss cheese and unknown unknowns. In that environment, patching holes in your cheese layers is a critical part of statistical quality control.
reply
lostmsu
4 hours ago
[-]
Semi-on topic. When will Anthropic make decisions on Claude Max for OSS maintainers? I would like to run this on my projects and some of my high-profile dependencies, but there was no update on the application.
reply
halJordan
9 hours ago
[-]
I don't think it's appropriate to neg these vulnerabilities because another part of the system works. There are plenty of sandbox escapes. No one says don't fix the sandbox because you'll never get to the point of interrogation with the sandbox. Same here. Don't discount bugs just because a sandbox exists.
reply
nottorp
3 hours ago
[-]
But doesn't this come from the company that said they had the "AI" write a compiler that could compile "Linux", but which couldn't compile a hello world in reality?
reply
Analemma_
10 hours ago
[-]
It's important to fix vulnerabilities even if they are blocked by the sandbox, because attackers stockpile partial 0-days in the hopes of using them in case a complementary exploit is found later. i.e. a sandbox escape doesn't help you on its own, but it's remotely possible someone was using one in combination with one of these fixed bugs and has now been thwarted. I consider this a straightforward success for security triage and fixing.
reply
tclancy
11 hours ago
[-]
Part of that caught my eye. As yet another person who’s built a half-ass system of AI agents running overnight doing stuff, one thing I’ve tasked Claude with doing (in addition to writing tests, etc) is using formal verification when possible to verify solutions. It reads like that may be what Anthropic is doing in part.

And this is a good reminder for me to add a prompt about property testing being preferred over straight unit tests and maybe to create a prompt for fuzz testing the code when we hit Ready state.
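A hand-rolled sketch of the property-vs-example distinction (hypothetical round-trip code; in practice a library like Hypothesis would generate and shrink the cases for you):

```python
import random

def run_length_encode(s: str) -> list:
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return runs

def run_length_decode(runs: list) -> str:
    return "".join(ch * n for ch, n in runs)

# Unit test: one hand-picked example.
assert run_length_encode("aab") == [["a", 2], ["b", 1]]

# Property test: decode(encode(s)) == s for *any* generated input,
# not just the inputs the author happened to think of.
random.seed(0)
for _ in range(1000):
    s = "".join(random.choice("ab") for _ in range(random.randrange(20)))
    assert run_length_decode(run_length_encode(s)) == s
```

The round-trip property covers edge cases (empty strings, long runs) that straight unit tests tend to skip, which is the point of preferring it.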

reply
devin
10 hours ago
[-]
Can you give me an example (real or imagined) where you're dipping into a bit of light formal verification?

I don't think the problems I work on require the weight of formal verification, but I'm open to being wrong.

reply
tclancy
9 hours ago
[-]
To be clear, almost (all?) of mine do not either and it's partially due to the fact I have been really interested in formal methods thanks to Hillel Wayne, but I don't seem to have the math background for them. To the man who has seen a fancy new hammer but cannot afford it, every problem looks like a nail.

The origin of it is a hypothesis I can get better quality code out of agents by making them do the things I don't (or don't always). So rather than quitting at ~80% code coverage, I am asking it to cover closer to 95%. There's a code complexity gate that I require better grades on than I would for myself because I didn't write this code, so I can't say "Eh, I know how it works inside and out". And I keep adding little bits like that.

I think the agents have only used it 2 or 3 times. The one that springs to mind is a site I am "working" on where you can only post once a day. In addition, there's an exponential backoff system for bans to fight griefers. If you look at them at the same time, they're the same idea for different reasons, "User X should not be able to post again until [timestamp]" and there's a set of a dozen or so formal method proofs done in z3 to check the work that can be referenced (I think? god this all feels dumb and sloppy typed out) at checkpoints to ensure things have not broken the promises.
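For what it's worth, here is the kind of invariant such a proof can pin down, sketched as a bounded check in plain Python (the names and the 24-hour cooldown are hypothetical; with z3 you'd assert the negation of the equivalence and expect `unsat` over all integers at once):

```python
# Hypothetical rule from the spec: user X may not post again before
# max(last_post + COOLDOWN, ban_until).
COOLDOWN = 24  # hours; an assumed value for illustration

def may_post(now: int, last_post: int, ban_until: int) -> bool:
    # The two separate checks an implementation might perform.
    return now >= last_post + COOLDOWN and now >= ban_until

def may_post_spec(now: int, last_post: int, ban_until: int) -> bool:
    # The single "not before [timestamp]" rule from the written policy.
    return now >= max(last_post + COOLDOWN, ban_until)

# Bounded model check: implementation and spec agree on a small domain.
for now in range(-20, 45):
    for last_post in range(-20, 21):
        for ban_until in range(-20, 21):
            assert may_post(now, last_post, ban_until) == \
                   may_post_spec(now, last_post, ban_until)
```

The value of writing both forms down is exactly the "spec vs reality" check: the two different reasons ("daily limit" and "ban") collapse into one provable timestamp rule.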

reply
devin
5 hours ago
[-]
I guess my feeling is that formal verification _even in the LLM era_ still feels heavy-handed/too expensive for too little value for a lot of the problems I'm working on.
reply
est31
9 hours ago
[-]
I suppose eventually we'll see something like Google's OSS-Fuzz for core open source projects, maybe replacing bug bounty programs a bit. Anthropic already hands out Claude access for free to OSS maintainers.

LLMs made it harder to run bug bounty programs where anyone can submit stuff, and where a lot of people flooded them with seemingly well-written but ultimately wrong reports.

On the other hand, the newest generation of these LLMs (in their top configuration) finally understands the problem domain well enough to identify legitimate issues.

I think a lot of judging of LLMs happens on the free and cheaper tiers, and quality on those tiers is indeed bad. If you set up a bug bounty program, you'll necessarily get bad quality reports (as cost of submission is 0 usually).

On the other hand, if instead of a bug bounty program you have a "top tier LLM bug searching program", then the quality bar can be ensured, and maintainers will be getting high quality reports.

Maybe one can save bug bounty programs by requiring a fee to be paid, idk, or by using an LLM there, too.

reply
mccr8
7 hours ago
[-]
Google already has an AI-powered security vulnerability project, called Big Sleep. It has reported a number of issues to open source projects: https://issuetracker.google.com/savedsearches/7155917?pli=1
reply
sigmar
9 hours ago
[-]
>where a lot of people flooded them with seemingly well-written but ultimately wrong reports.

are there any projects to auto-verify submitted bug reports? perhaps by spinning up a VM and then having an agent attempt to reproduce the bug report? that would be neat.

reply
suddenlybananas
9 hours ago
[-]
> Anthropic already hands out Claude access for free to OSS maintainers.

Free for 6 months after which it auto-renews if I recall correctly.

reply
mceachen
8 hours ago
[-]
No mention of auto renewal is made as far as I (and Claude) could determine.

Their OSS offer is first-hit-is-free.

reply
hinkley
6 hours ago
[-]
At this point about 80% of my interaction with AI has been reacting to an AI code review tool. For better or worse it reviews all code moves and indentations, which means all the architecture work I’m doing is kicking asbestos dust everywhere. It’s harping on a dozen misfeatures that look like bugs, but some needed either tickets or documentation, and that’s been handled now. It’s also found about half a dozen bugs I didn’t notice, in part because the tests were written by an optimist, and I mean that as a dig.

That’s a different kind of productivity but equally valuable.

reply
driverdan
11 hours ago
[-]
Anthropic's write up[1] is how all AI companies should discuss their product. No hype, honest about what went well and what didn't. They highlighted areas of improvement too.

1: https://www.anthropic.com/news/mozilla-firefox-security

reply
dang
5 hours ago
[-]
Thanks! Since it has more technical info, I switched the URL to that from https://blog.mozilla.org/en/firefox/hardening-firefox-anthro... and put the latter in the top text.

I couldn't bring myself to switch to the (even) more press-releasey title.

reply
shevy-java
8 hours ago
[-]
Reads like a promo.
reply
mentalgear
12 hours ago
[-]
That's one good use of LLMs: fuzz testing / attack.
reply
nz
11 hours ago
[-]
Not contradicting this (I am sure it's true), but why is using an LLM for this qualitatively better than using an actual fuzzer?
reply
azakai
7 hours ago
[-]
1. This is a kind of fuzzer. In general it's just great to have many different fuzzers that work in different ways, to get more coverage.

2. I wouldn't say LLMs are "better" than other fuzzers. Someone would need to measure findings/cost for that. But many LLMs do work at a higher level than most fuzzers, as they can generate plausible-looking source code.

reply
hrmtst93837
3 hours ago
[-]
Fuzzers and LLMs attack different corners of the problem space, so asking which is "qualitatively better" misses the point. Fuzzers like AFL or libFuzzer with AddressSanitizer excel at coverage-driven, high-volume byte mutations and parsing-crash discovery, while an LLM can generate protocol-aware, stateful sequences, realistic JavaScript and HTTP payloads, and user-like misuse patterns that exercise logic and feature-interaction bugs a blind mutational fuzzer rarely reaches.

I think the practical move is to combine them: have an LLM produce multi-step flows or corpora and seed a fuzzer with them, or use the model to script Playwright or Puppeteer scenarios that reproduce deep state transitions and then let coverage-guided fuzzing mutate around those seeds. Expect tradeoffs, though: LLM outputs hallucinate plausible but untriggerable exploit chains and generate a lot of noisy candidates, so you still need sanitizers, deterministic replay, and manual validation, while fuzzers demand instrumentation and long runs to actually reach complex stateful behavior.
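A toy version of that seeding idea (hypothetical target and seeds, stdlib only; a real setup would hand the seeds to libFuzzer or AFL instead):

```python
import random

def parse_header(data: bytes) -> str:
    # Hypothetical target: a length-prefixed string parser.
    n = data[0]
    return data[1:1 + n].decode("utf-8")

# Stand-ins for LLM-produced seeds: plausible, protocol-aware inputs.
seeds = [bytes([5]) + b"hello", bytes([3]) + b"abc"]

def mutate(data: bytes) -> bytes:
    # The coverage-guided fuzzer's job, reduced to one random byte flip.
    out = bytearray(data)
    out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

random.seed(0)  # deterministic for the example
crashes = []
for _ in range(2000):
    candidate = mutate(random.choice(seeds))
    try:
        parse_header(candidate)
    except (IndexError, UnicodeDecodeError):
        crashes.append(candidate)

print(f"found {len(crashes)} crashing inputs out of 2000 mutations")
```

The seeds get the mutator past the "valid-looking input" barrier for free, which is the part blind mutation is worst at.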

reply
saagarjha
11 hours ago
[-]
Presumably because people have used actual fuzzers and not found these bugs.
reply
utopiah
9 hours ago
[-]
I didn't even read the piece, but my bet is that fuzzers are typically limited to inputs, whereas relying on LLMs is also about finding text patterns in the code base, a bit more loosely than before while still being statistically relevant.
reply
mmis1000
6 hours ago
[-]
It's not really a question of better or worse, though. It's more directed than other fuzzers: it can craft a payload that triggers a flaw in a deep code path, but it could also miss some obvious pattern that normal people don't expect to be a problem (which is what most fuzzers currently test).
reply
nullbyte
1 hour ago
[-]
I always enjoy reading Anthropic's blogposts, they often have great articles
reply
amelius
10 hours ago
[-]
Perhaps I missed it but I don't see any false positives mentioned.
reply
mozdeco
10 hours ago
[-]
[working for Mozilla]

That's because there were none. All bugs came with verifiable testcases (crash tests) that crashed the browser or the JS shell.

For the JS shell, similar to fuzzing, a small fraction of these bugs were bugs in the shell itself (i.e. testing only) - but according to our fuzzing guidelines, these are not false positives and they will also be fixed.

reply
sfink
5 hours ago
[-]
> For the JS shell, similar to fuzzing, a small fraction of these bugs were bugs in the shell itself (i.e. testing only)

There's some nuance here. I fixed a couple of shell-only Anthropic issues. At least mine were cases where the shell-only testing functions created situations that are impossible to create in the browser. Or at least, after spending several days trying, I managed to prove to myself that it was just barely impossible. (And it had been possible until recently.)

We do still consider those bugs and fix them one way or the other -- if the bug really is unreachable, then the testing function can be weakened (and assertions added to make sure it doesn't become reachable in the future). For the actual cases here, it was easier and better to fix the bug and leave the testing function in place.

We love fuzz bugs, so we try to structure things to make invalid states as brittle as possible so the fuzzers can find them. Assertions are good for this, as are testing functions that expose complex or "dangerous" configurations that would otherwise be hard to set up just by spewing out bizarre JS code or whatever. It causes some level of false positives, but it greatly helps the fuzzers find not only the bugs that are there, but also the ones that will be there in the future.
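A hedged sketch of that "make invalid states brittle" idea, with invented names (not SpiderMonkey code): a release build silently tolerates an odd state, a debug assertion turns it into a loud, fuzzable failure, and a shell-style testing-only function jumps straight into a configuration that would be hard to reach from ordinary inputs:

```python
DEBUG = True  # fuzzing builds run with assertions enabled

class Cache:
    """Toy object with one invariant: no writes once frozen."""
    def __init__(self):
        self.entries = {}
        self.frozen = False

    def put(self, key, value):
        if DEBUG:
            # Brittle on purpose: any code path that writes after freezing
            # fails loudly under the fuzzer instead of corrupting state.
            assert not self.frozen, "write to frozen cache"
        if self.frozen:
            return  # release builds silently ignore the write
        self.entries[key] = value

def testing_only_freeze(cache):
    """Shell-only testing hook: exposes a 'dangerous' configuration
    directly, so the fuzzer doesn't need elaborate input to set it up."""
    cache.frozen = True
```

As the comment notes, such hooks can create shell-only "bugs" that are unreachable in the browser proper, which is the false-positive cost you accept for making future invariant violations easy for fuzzers to trip.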

(Apologies for amusing myself with the "not only X, but also Y" writing pattern.)

reply
shevy-java
8 hours ago
[-]
I guess it is good when bugs are fixed, but are these real bugs or contrived ones? Is anyone doing quality assessment of the bugs here?

I think it was curl that closed its bug bounty program due to AI spam.

reply
mozdeco
8 hours ago
[-]
The bugs are at least of the same quality as our internal fuzzing bugs. They are either crashes or assertion failures, both of which we consider bugs. They do of course vary in value: not every single assertion failure is ultimately a high-impact bug, and some have no impact on the user at all. But the same applies to fuzzing bugs; there is really no difference here. And ultimately we want to fix all of these, because assertions have the potential to find very complex bugs, but only if you keep your software "clean" with respect to assertion failures.

The curl situation was completely different because, as far as I know, those bugs were not filed with actual testcases. They were purely static bugs, and those kinds of reports eat up a lot of valuable resources to validate.

reply
mccr8
8 hours ago
[-]
The bugs that were issued CVEs (the Anthropic blog post says there were 22) were all real security bugs.

The level of AI spam for Firefox security submissions is a lot lower than the curl people have described. I'm not sure why that is. Maybe the size of the code base and the higher bar to submitting issues plays a role.

reply
amelius
9 hours ago
[-]
Sounds good.

Did you also test on old source code, to see if it could find the vulnerabilities that were already discovered by humans?

reply
ycombinete
8 hours ago
[-]
Isn’t that this from the (Anthropic) article:

“Our first step was to use Claude to find previously identified CVEs in older versions of the Firefox codebase. We were surprised that Opus 4.6 could reproduce a high percentage of these historical CVEs”

https://www.anthropic.com/news/mozilla-firefox-security

reply
rcxdude
8 hours ago
[-]
Anthropic mention that they did this beforehand, and it was the good performance there that led to them looking for new bugs (since, for the known CVEs, they couldn't be sure it wasn't just memorising vulnerabilities that had already been published).
reply
Quarrel
9 hours ago
[-]
I really like this as a suggestion, but getting opensource code that isn't in the LLMs training data is a challenge.

Then, with each model having a different training cutoff, you end up with no useful comparison to decide if new models are improving the situation. I don't doubt they are, I'm just not sure this is a way to show it.

reply
amelius
9 hours ago
[-]
Yes, but perhaps being trained on a piece of code doesn't have that large an impact on being able to find bugs in it. You could do a bunch of experiments to find out, and that would be interesting in itself.
reply
anonnon
4 hours ago
[-]
Any particular reason why the number of vulnerabilities fixed in Feb. was so high? Even subtracting the count of Anthropic's submissions, from the graph in their blog post, that month still looks like an outlier.
reply
cubefox
6 hours ago
[-]
Interesting end of the Anthropic report:

> Opus 4.6 is currently far better at identifying and fixing vulnerabilities than at exploiting them. This gives defenders the advantage. And with the recent release of Claude Code Security in limited research preview, we’re bringing vulnerability-discovery (and patching) capabilities directly to customers and open-source maintainers.

> But looking at the rate of progress, it is unlikely that the gap between frontier models’ vulnerability discovery and exploitation abilities will last very long. If and when future language models break through this exploitation barrier, we will need to consider additional safeguards or other actions to prevent our models from being misused by malicious actors.

> We urge developers to take advantage of this window to redouble their efforts to make their software more secure. For our part, we plan to significantly expand our cybersecurity efforts, including by working with developers to search for vulnerabilities (following the CVD process outlined above), developing tools to help maintainers triage bug reports, and directly proposing patches.

reply
chill_ai_guy
2 hours ago
[-]
Terrible day to be a Hackernews doomer who is still hanging on to "LLM bad code". AI will absolutely eat your lunch soon unless you get on the ship right now
reply
ilioscio
8 hours ago
[-]
Anthropic continues to pull ahead of the other AI companies in terms of 'trustworthiness'. If they want to really test their red team, I hope they look at CUPS.
reply
LtWorf
7 hours ago
[-]
A bit of an easy target no?
reply
sfink
5 hours ago
[-]
As someone who saw a bunch of these bugs come in (and fixed a few), I'd say that Anthropic's associated writeup at https://www.anthropic.com/news/mozilla-firefox-security undersells it a bit. They list the primary benefits as:

    1. Accompanying minimal test cases
    2. Detailed proofs-of-concept
    3. Candidate patches
This is most similar to fuzzing, and in fact could be considered another variant of fuzzing, so I'll compare to that. Good fuzzing also provides minimal test cases. The Anthropic ones were not only minimal but well-commented with a description of what it was up to and why. The detailed descriptions of what it thought the bug was were useful even though they were the typical AI-generated descriptions that were 80% right and 20% totally off base but plausible-sounding. Normally I don't pay a lot of attention to a bug filer's speculations as to what is going wrong, since they rarely have the context to make a good guess, but Claude's were useful and served as a better starting point than my usual "run it under a debugger and trace out what's happening" approach. As usual with AI, you have to be skeptical and not get suckered in by things that sound right but aren't, but that's not hard when you have a reproducible test case provided and you yourself can compare Claude's explanations with reality.

The candidate patches were kind of nice. I suspect they were more useful for validating and improving the bug reports (and these were very nice bug reports). As in, if you're making a patch based on the description of what's going wrong, then that description can't be too far off base if the patch fixes the observed problem. They didn't attempt to be any wider in scope than they needed to be for the reported bug, so I ended up writing my own. But I'd rather them not guess what the "right" fix was; that's just another place to go wrong.

I think the "proofs-of-concept" were the attempts to use the test case to get as close to an actual exploit as possible? I think those would be more useful to an organization that is doubtful of the importance of bugs. Particularly in SpiderMonkey, we take any crash or assertion failure very seriously, and we're all pretty experienced in seeing how seemingly innocuous problems can be exploited in mind-numbingly complicated ways.

The Anthropic bug reports were excellent, better even than our usual internal and external fuzzing bugs and those are already very good. I don't have a good sense for how much juice is left to squeeze -- any new fuzzer or static analysis starts out finding a pile of new bugs, but most tail off pretty quickly. Also, I highly doubt that you could easily achieve this level of quality by asking Claude "hey, go find some security bugs in Firefox". You'd likely just get AI slop bugs out of that. Claude is a powerful tool, but the Anthropic team also knew how to wield it well. (They're not the only ones, mind.)

reply
lostmsu
4 hours ago
[-]
Missed a chance to take on Google by naming this effort Anthropic Project Zero
reply
BloondAndDoom
6 hours ago
[-]
I wonder what the prompt and approach were. Anthropic's own blog doesn't really give any details. Was it just "here is the area to focus on, find vulnerabilities, make no mistakes"?
reply
delaminator
4 hours ago
[-]
I thought Mozilla Foundation were protecting us from AI.

Turns out it's the other way around - AI is protecting the Mozilla Foundation from us.

reply
semiquaver
8 hours ago
[-]
It’s just a stochastic parrot! Somehow all these vulnerabilities were in the training data! Nothing ever happens!

(/s if it’s not clear)

reply
applfanboysbgon
3 hours ago
[-]
What an irritating comment. Identifying bugs in code is, in fact, exactly something a stochastic parrot could do. Vulnerability research is already a massively automated industry, and there's even a very well-established term -- "script kiddies" -- for malicious teenagers who run scripts that automatically find vulnerabilities in existing services without any knowledge of how they work. Having a new form of automation can certainly be a useful tool, but is still in no way an indication of "intelligence" or any deviation from the expected programming of next token prediction guided by statistical probability.
reply
semiquaver
2 hours ago
[-]
Thank you very much for acting as a useful foil and proving my point.
reply
applfanboysbgon
1 hour ago
[-]
You didn't make a point, and still haven't. You screeched a bunch of buzzphrases sarcastically as if that were equivalent to making a point, which is about par for the course for the level of reasoning (ie. none) shown by people with the position you hold. You seem to take it for granted that just by asserting that LLMs aren't next-token-prediction-programs, that must be factually true, without making any kind of argument or reasoning for why that is the case. Of course, any attempt to reason at that position falls apart under trivial scrutiny, so it's no wonder you're averse to reasoning about it and settle for trite assertions.
reply
lloydatkinson
12 hours ago
[-]
Anthropic feels like they are flailing around constantly trying to find something to do. A C compiler that didn't work, a browser that didn't work, and now solving bugs in Firefox.
reply
gehsty
12 hours ago
[-]
This makes sense: they are demonstrating the capability of their core product by doing so. They don't make browsers or C compilers; they sell AI + dev tools.
reply
jdiff
11 hours ago
[-]
Seems like a poor advertisement for their product if their shining example of utility is a broken compiler that doesn't function as the README indicates.
reply
gehsty
9 hours ago
[-]
Impressive that it made a C compiler, though? Or do we judge all programmers by their documentation now?
reply
delfinom
11 hours ago
[-]
Capability of a product that makes non-working outputs at a premium?

I can hire an intern for that.

reply
gehsty
9 hours ago
[-]
Will cost you a lot more ;)
reply
manbash
11 hours ago
[-]
I think it's a nice break from vibe-coding. It feels like a good direction in terms of use cases for LLM.
reply
simonw
10 hours ago
[-]
What was Anthropic's "browser that didn't work"?
reply
utopiah
9 hours ago
[-]
I think they meant Cursor, cf https://news.ycombinator.com/item?id=46646777
reply
ferguess_k
9 hours ago
[-]
However, the shape is there. And no one knows how good the thing is going to be after X months. We are measuring in months here, not even years.

I believe there is a theoretical cap on the capability of LLMs. I'm wondering what it looks like.

reply
mmis1000
6 hours ago
[-]
If it explores all these cases after a few months and makes the tool itself obsolete, that sounds like a total win to me?

That won't happen unless Firefox just stops developing, though. New code comes with new bugs, and there must be some people or some tool to find them.

reply
saagarjha
11 hours ago
[-]
Solving bugs in Firefox is quite impressive.
reply
Analemma_
10 hours ago
[-]
I think OpenAI is flailing around too-- we're making an AI-generated shortform video app, we're rescinding restrictions on porn, we're making a... something... with Jony Ive-- but only Anthropic is flailing in a way beneficial to society instead of becoming a trillion dollar heroin dealer.
reply
dartharva
6 hours ago
[-]
That's what people back then must have said about small offshoots like Google and Microsoft, back when Silicon Valley was nascent.
reply
shevy-java
8 hours ago
[-]
Mozilla betting on AI.

I am concerned.

reply