We see this firsthand at Prismor with auto generated security fixes. Even with the best LLMs, validating fixes is the real bottleneck our pipeline struggles to exceed 70% on an internal golden dataset (which itself is somewhat biased).
Many patches technically fix the vulnerability but introduce semantic regressions or architectural drift. Passing tests is a weak signal and proving a fix is truly safe to merge is much harder
A benchmark that checks CI pass/fail captures the first part. It cannot capture the second. An agent that makes CI green by weakening an assertion or bypassing a check will score well here but create a time bomb.
The monorepo point from yuyuqueen hits this. When the agent can see the full dependency graph, it is less likely to fix something locally while breaking a downstream assumption. The biggest maintenance failures I have seen are not wrong logic. They are fixes that are locally correct but violate an unwritten contract between components.
This may also be the limit to the quality of an automated port to another language. What isn't encoded as automated tests or manual test procedure cannot be verified.
So often I'm amazed at what it's possible to accomplish from a prompt that's certainly insufficient with insufficient context. "It should have been necessary to specify more context there," or "I would have thought that it wasn't possible to do that without reading in more context than just one source code file," and then a few prompts later, "there's where we failed for trying to skimp on context"
To prevent architectural rework as a human developer also requires substantial ahead-of-time codebase review.
Are AGENTS.md files the best place to summarize more comprehensive codebase review and useful dense context like guidelines for testing and architectural components in order to avoid rework?
* Claude Opus 4.6 : 0.71
* Claude Opus 4.5 : 0.51
* KIMI-K2.5 : 0.37
* GLM-5 : 0.36
* GPT-5.2 : 0.23
Note: later GPT versions seem to be only available within openAi's proprietary codex cli, so can't be tested - and if tested via the codex cli "harness" it wouldn't be a pure model-to-model comparison any more.
---
Of course, the interesting follow-up question is: How well perform these models with added agent tooling ("harness") ?
Maybe someone has tokens to burn and can run a matrix of agent tools over the top models and provide the results?
https://github.com/openai/codex
You can definitely access the latest models via the API. That's how Codex CLI works.
But it was in codex
5.3-codex is only available via the Responses API, not the Completions API. Two different APIs for model access. If you were using Completions you have to port to Responses. It's not that hard. I did this for my own agent the other week. I think it might be like that for all their new models from now on. Responses is a much more powerful API. It's more like a front to ChatGPT than the underlying models.
Well that's already not a very fair comparison, we've known for years (one of the early-ish LLM papers, maybe someone knows which one) that prompting makes an enormous difference on agent performance, and most strikingly, the same prompt that massively boosts performance on one model, can massively reduce performance on another.
So you already need to fine-tune the prompts for the model, if you want anything approaching best results.
Now what's really amusing is that if you run models without their official harness, they can actually do way better on some benchmarks! [0] e.g. On Terminal Bench 2, Claude Opus 4.6 goes from #33 (Claude Code) to #5 (custom harness). Similar results for Codex.
Now, this is "for this one very specific benchmark", but I still thought it was funny, since you'd expect "the harness made by the same company" to be the best for all tasks, but that's clearly not the case. (For specific tasks, it's actually quite trivial to outperform a general purpose harness.)
But the interesting comparison when evaluating coding agent capabilities is to evaluate the offerings given to users.
So this means comparing Claude Code to Codex to whatever CLI tools Kimi, GLM, and others give you.
And it might mean throwing Cursor, OpenCode, Amp, Pi, mini-swe-agent, etc into the mix
https://developers.openai.com/api/docs/models/gpt-5.3-codex
IMHO The harness must be used when running these experiments. The model vendors know best on giving the best harness with gpt 5.4 and codex or Claude code with opus 4.6 which makes a big difference if you are running any kind of agentic coding tasks.
I see both Claude and gpt to be neck and neck in coding. Every other model+harness is definitely 3-6 months behind. Right now codex seems to be the best in terms of solving complex bugs, long running tasks, much higher limits and even speed while Claude seems to do well in front end and their cli ux seems nice! Codex app is very good though (wish it wasn’t electron as a memory hog but it’s good)
This was only true for Claude Code for a while. Codex was poor and Gemini was unusable.
Since then Codex has gotten quite good.
This seems like a really cool thing to benchmark! Technically it'd be possible to take GitHub repos that the AI orgs probably already have, cross-reference the code against the issues and regressions, and train/validate on that.
The dataset would need to be way bigger to get close to the likes of SWE-bench: https://www.swebench.com/original.html
"Vibe coded stuff gets hard to maintain and will end up buggy." Yeah, so make models that deal with that better, optimize for maintainability and consistency.
Cool to see Claude doing decently though!
The scales do seem to be tipped in its favor (cf: my other comment in this thread).
Something hard to capture in benchmarks: project-level conventions. A well-maintained CLAUDE.md at the repo root — describing architecture, naming patterns, test conventions — gives the agent context it internalizes before touching code. My regression rate dropped noticeably once I started maintaining that kind of project metadata. Model choice is only half the equation — the other half is how well you've structured the information environment the agent works in.
I can't help but notice that they're benchmarking Opus 4.6 (Anthropic's latest and greatest model) against GPT-5.2 (which is three generations behind OpenAI's latest coding models: GPT-5.2-Codex, GPT-5.3-Codex and the latest GPT-5.4).
We are talking about regressions, what once worked no longer does, and should be measured in 9s