StepFun 3.5 Flash is #1 cost-effective model for OpenClaw tasks (300 battles)
155 points
14 hours ago
| 15 comments
| app.uniclaw.ai
| HN
james2doyle
9 hours ago
[-]
None of the Qwen 3.5 models seem present? I’ve heard people are pretty happy with the smaller 3.5 versions. I would be curious to see those too.

I would also be interested to see "KAT-Coder-Pro-V2" as they brag about their benchmarks in these bots as well

reply
Aerroon
6 hours ago
[-]
If they use OpenRouter pricing then the Qwen3.5 models are going to be poor value.

The Qwen3.5 27B model on OR is $1.56/million tokens out (it used to be $2.4/mil).

Meanwhile Minimax M2.7 (a much larger model) is $1.2/mil out.

The smaller and medium tier Qwen3.5 models are only really cost effective if you run them yourself.

reply
p1necone
4 hours ago
[-]
Is Minimax M2.7 better than Qwen3.5 27B, or is it just bigger?
reply
kdasme
2 hours ago
[-]
Minimax M2.7 is similar to Sonnet in my tests. This is the first non-OAI/Anthropic model I've used for coding. It does require more steering, though.
reply
wg0
1 hour ago
[-]
More steering than Sonnet? What is your experience?
reply
ipython
9 hours ago
[-]
I was excited to read through this to find out how these tasks are evaluated at scale. Lots of scary looking formulas with sigmas and other Greek letters.

Then I clicked on one task to see what it looks like “on the ground”: https://app.uniclaw.ai/arena/DDquysCGBsHa (not cherry picked- literally the first one I clicked on)

The task was:

> Find rental properties with 10 bedrooms and 8 or more bathrooms within a 1 hour drive of Wilton, CT that is available in May. Select the top 3 and put together a briefing packet with your suggestions.

Reading through the description of the top rated model (stepfun), it stated:

> Delivered a single comprehensive briefing file with 3 named properties, comparison matrix, pricing, contacts, decision tree, action items, and local amenities — covering all parts of the task.

Oh cool! Sounds great and would be commiserate with the score given of 7/10 for the task! However- the next sentence:

> Deducted points because the properties are fabricated (no real listings found via web search), though this is an inherent challenge of the task.

So…… in other words, it made a bunch of shit up (at least plausible shit! So give back a few points!) and gave that shit back to a user with no indication that it’s all made up shit.

Ok, closed that tab.

reply
skysniper
9 hours ago
[-]
I know, that was indeed a bad judge call. I've manually checked tens of tasks so far, and that one is one of the worst... I would say check a few more; the judge has some noise but in general did a good job IMO
reply
selcuka
1 hour ago
[-]
Reminded me of the XKCD [1] that points out the problem with average scores.

[1] https://xkcd.com/937/

reply
chrisweekly
8 hours ago
[-]
"commiserate" - did you mean "commensurate"?
reply
creationcomplex
6 hours ago
[-]
At that point commiserations were in order
reply
ipython
6 hours ago
[-]
Sorry, yes. I was typing quickly
reply
WhitneyLand
13 hours ago
[-]
StepFun is an interesting model.

If you haven’t heard of it yet there’s some good discussion here: https://news.ycombinator.com/item?id=47069179

reply
tarruda
13 hours ago
[-]
Since that discussion, they released the base model and a midtrain checkpoint:

- https://huggingface.co/stepfun-ai/Step-3.5-Flash-Base

- https://huggingface.co/stepfun-ai/Step-3.5-Flash-Base-Midtra...

I'm not aware of other AI labs that have released base checkpoints for models in this size class. Qwen released some base models for 3.5, but the biggest one is the 35B checkpoint.

They also released the entire training pipeline:

- https://huggingface.co/datasets/stepfun-ai/Step-3.5-Flash-SF...

- https://github.com/stepfun-ai/SteptronOss

reply
lostmsu
9 hours ago
[-]
Tuned Qwen 3.5 27B beats Step 3.5 on almost all benchmarks, so the point about the size class is moot.
reply
tempaccount420
9 hours ago
[-]
Benchmarks are not what's interesting in deciding the "size class". Bigger size means more knowledge. Also, Qwen 3.5 27B is a dense model with 27B active parameters, while StepFun 3.5 Flash has 11B active parameters.
reply
lostmsu
9 hours ago
[-]
> Bigger size means more knowledge.

Qwen 3.5 27B beats StepFun 3.5 Flash on GPQA Diamond too, so probably no.

reply
skysniper
13 hours ago
[-]
thanks for the info. before running the bench i only tried it in arena.ai type of tasks and it was not impressive. i didn't expect it to be that good at agentic tasks
reply
clausewitz
1 hour ago
[-]
I'm not seeing Deepseek mentioned very often, which I've been using for Openclaw, very cheaply I might add, with great success. I think I loaded $10 into my account 2 months ago and I still haven't needed to top up.
reply
wg0
1 hour ago
[-]
Which deepseek exactly and what do you use it for? Just curious.
reply
hadlock
14 hours ago
[-]
According to openrouter.ai it looks like StepFun 3.5 Flash is the most popular model at 3.5T tokens, vs GLM 5 Turbo at 2.5T tokens. Claude Sonnet is in 5th place with 1.05T tokens. Which isn't super surprising, as StepFun is about 5% the price of Sonnet.

https://openrouter.ai/apps?url=https%3A%2F%2Fopenclaw.ai%2F

reply
NitpickLawyer
13 hours ago
[-]
> the most popular model

It was free for a long time. That usually skews the statistics. It was the same with grok-code-fast1.

reply
MaxikCZ
13 hours ago
[-]
Exactly. When I read the headline I thought: "Ofc it is, it's free."
reply
skysniper
12 hours ago
[-]
I should have clarified I didn't use the free version...
reply
arjie
3 hours ago
[-]
I used to use these various models for my claw-like, and they had a habit of taking way more agent rounds and way more tokens to produce something that Sonnet would produce from far less. My total cost to do useful things ended up being the same.
reply
skysniper
14 hours ago
[-]
the really surprising part to me is that, despite being the cheapest model on the board, stepfun often manages to score high on pure performance. Other models in the same price range (e.g. kimi) fail to do that.
reply
gunalx
9 hours ago
[-]
GLM also has their subscription, which I would assume heavy users use.
reply
dmazin
13 hours ago
[-]
why do half the comments here read like ai trying to boost some sort of scam?
reply
Capricorn2481
8 hours ago
[-]
Because there's absolutely nothing stopping that from happening. There are bots on Reddit, and of course there are bots on here too, a VPN-friendly site where you don't even need an email. But a lot of people don't want to admit it.
reply
grimm8080
12 hours ago
[-]
Yet when I tried it, it did abysmally compared to Gemini 2.5 Flash
reply
skysniper
12 hours ago
[-]
what kind of tasks did you try?
reply
smallerize
14 hours ago
[-]
It looks like Unsloth had trouble generating their dynamic quantized versions of this model, deleted the broken files, then never published an update.
reply
azmenak
9 hours ago
[-]
This model is free to use, and has been for quite some time on OpenRouter. $0 is pretty hard to beat in terms of cost effectiveness.
reply
skysniper
8 hours ago
[-]
yeah but i'm not using the free version for the benchmark...
reply
mgw
11 hours ago
[-]
Missing from the comparison is MiMo V2 Flash (not Pro), which I think could put up a good fight against Step 3.5 Flash.

Pricing is essentially the same:

- MiMo V2 Flash: $0.09/M input, $0.29/M output

- Step 3.5 Flash: $0.10/M input, $0.30/M output

MiMo has 41 vs 38 for Step on the Artificial Analysis Intelligence Index, but it's 49 vs 52 for Step on their Agentic Index.

reply
skysniper
11 hours ago
[-]
I will try and add it. But I doubt it will do well, because Mimo V2 Pro is beaten by stepfun even on the performance leaderboard (where price is not a factor), so I expect MiMo V2 Flash to perform even worse.
reply
ygouzerh
4 hours ago
[-]
Mimo V2 Pro seems to be quite widely used, per OpenRouter's stats (second after Stepfun), so it could indeed be interesting to see the difference!

https://openrouter.ai/apps?url=https%3A%2F%2Fopenclaw.ai%2F

reply
nl
3 hours ago
[-]
MiMo Flash matched MiMo Pro on https://sql-benchmark.nicklothian.com/?#all-data at double the speed and for $0.003 instead of $0.07
reply
skysniper
13 hours ago
[-]
another thing from the bench I didn't expect: gemini 3.1 pro is very unreliable at using skills. sometimes it just reads the skill and decides to do nothing, while opus/sonnet 4.6 and gpt 5.4 never have this issue.
reply
zhangchen
5 hours ago
[-]
this tracks with what i've seen too. gemini tends to 'overthink' tool calls - it'll reason about whether to use a tool instead of just using it. in my experience the models that are best at agentic tasks are the ones that commit to a tool call quickly and recover from failures, not the ones that deliberate forever and sometimes bail. would be interesting to see if the benchmark captures retry behavior, since that's where cost-effectiveness really diverges.
reply
sunaookami
12 hours ago
[-]
Tried the free version on OpenRouter with pi.dev. It's competent at tool calling, and its creative writing is "good enough" for me (more "natural Claude-level" and not robotic GPT-slop level), but it makes some grave mistakes (had some Hanzi in the output once, and typos in words). So it may be good for "simple" agentic workflows, but it's definitely not made for programming nor for long-form writing.
reply
admiralrohan
9 hours ago
[-]
What kind of creative writing are you doing? Fiction or non-fiction like blog posts?
reply
skysniper
11 hours ago
[-]
it's actually pretty good at openclaw-type tasks for non-technical users: lots of tool calls, some simple programming
reply
sunaookami
10 hours ago
[-]
Yeah this kind of stuff. I have no experience with OpenClaw though.
reply
grigio
11 hours ago
[-]
i like StepFun 3.5 Flash, a good tradeoff
reply
yieldcrv
10 hours ago
[-]
people aren't just using Claude models any more? that's nice to see
reply
skysniper
10 hours ago
[-]
well, I still want to use it, but the first day i tried openclaw + opus, it cost me ~$500...
reply
skysniper
14 hours ago
[-]
I ran 300+ benchmarks across 15 models in OpenClaw and published two separate leaderboards: performance and cost-effectiveness.

The two boards look nothing alike. Top 3 performance: Claude Opus 4.6, GPT-5.4, Claude Sonnet 4.6. Top 3 cost-effectiveness: StepFun 3.5 Flash, Grok 4.1 Fast, MiniMax M2.7.

The most dramatic split: Claude Opus 4.6 is #1 on performance but #14 on cost-effectiveness. StepFun 3.5 Flash is #1 on cost-effectiveness and #5 on performance.

Other surprises: GLM-5 Turbo, Xiaomi MiMo v2 Pro, and MiniMax M2.7 all outrank Gemini 3.1 Pro on performance.

Rankings use relative ordering only (not raw scores) fed into a grouped Plackett-Luce model with bootstrap CIs. Same principle as Chatbot Arena — absolute scores are noisy, but "A beat B" is reliable. Full methodology: https://app.uniclaw.ai/arena/leaderboard/methodology?via=hn
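
A rough sketch of that fitting step, in case it helps (a minimal Plackett-Luce fit in Python; the model names and battle orderings are made up, and the grouping and bootstrap parts of the real pipeline are omitted):

    # Fit a latent strength per model from relative orderings only, then map
    # to an Elo-like number purely for display. Made-up models and battles.
    import numpy as np
    from scipy.optimize import minimize

    models = ["stepfun-3.5-flash", "grok-4.1-fast", "minimax-m2.7", "opus-4.6"]
    idx = {m: i for i, m in enumerate(models)}

    # Each battle is an ordering from best to worst (no raw scores).
    battles = [
        ["opus-4.6", "stepfun-3.5-flash", "minimax-m2.7"],
        ["stepfun-3.5-flash", "grok-4.1-fast", "minimax-m2.7", "opus-4.6"],
        ["stepfun-3.5-flash", "minimax-m2.7", "grok-4.1-fast"],
    ]

    def neg_log_likelihood(theta):
        # Plackett-Luce: P(ordering) = prod_k exp(theta_k) / sum(exp(theta_j))
        # over the models not yet ranked at step k.
        nll = 0.0
        for ranking in battles:
            ids = [idx[m] for m in ranking]
            for k in range(len(ids) - 1):
                remaining = theta[ids[k:]]
                nll -= theta[ids[k]] - np.log(np.exp(remaining).sum())
        return nll

    fit = minimize(neg_log_likelihood, np.zeros(len(models)), method="L-BFGS-B")
    strength = fit.x - fit.x.mean()   # identifiable only up to an additive constant
    elo_like = 1200 + 400 * strength  # arbitrary monotone map, just for readability

    for m in sorted(models, key=lambda m: -strength[idx[m]]):
        print(f"{m:20s} strength={strength[idx[m]]:+.2f} elo~{elo_like[idx[m]]:.0f}")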

I built this as part of OpenClaw Arena — submit any task, pick 2-5 models, a judge agent evaluates in a fresh VM. Public benchmarks are free.

reply
vessenes
11 hours ago
[-]
Cheapest just isn't a very useful metric. Can I suggest a Pareto-curve type representation? Cost / request vs ELO would be useful and you have all the data.
reply
skysniper
11 hours ago
[-]
TBH that was my initial thought too, but I found some problems with this approach:

Essentially I'm using the relative rank in each battle to fit a latent strength for each model, and then using a nonlinear function to map the latent strength to Elo, just for human readability. The mapping function is actually arbitrary: any monotonically increasing function works, since it preserves the rank. The only reliable result (invariant to the choice of function) is the relative rank of the models.

That being said, if I use score/cost as the metric, the rank completely depends on the function I choose: I could pick a more super-linear function to make high-performance models rank higher on the score/cost board, or a more sub-linear one to make low-performance models rank higher.

That's why I eventually tried another (the current) approach: let the judge rank the models directly on cost-effectiveness (considering both performance and cost), and compute the cost-effectiveness leaderboard from those ranks, so the score-mapping function does not affect the leaderboard at all.
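
A tiny toy example of that second point (made-up numbers, just to show why the score/cost order depends on the mapping function while the pure-performance order does not):

    # The performance order is invariant to any monotone map from latent
    # strength to a display score, but the score/cost order is not.
    import numpy as np

    latent = {"model_a": 3.0, "model_b": 1.0}   # model_a is stronger
    cost   = {"model_a": 4.0, "model_b": 1.0}   # model_a is 4x more expensive

    for label, f in [("sub-linear (log1p)", np.log1p), ("super-linear (exp)", np.exp)]:
        score = {m: float(f(latent[m])) for m in latent}
        per_dollar = {m: score[m] / cost[m] for m in latent}
        order = sorted(per_dollar, key=per_dollar.get, reverse=True)
        print(f"{label}: score/cost order = {order}")

    # log1p puts model_b first on score/cost, exp puts model_a first, yet both
    # maps agree that model_a > model_b on pure performance.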

reply
johndough
12 hours ago
[-]
Could you add a column for time or number of tokens? Some models take forever because of their excessive reasoning chains.
reply
skysniper
12 hours ago
[-]
both are shown on the battle detail page already. Time is shown in the Scores table. Number of tokens is shown in Cost details at the bottom of the Scores. (I thought most people just want to see cost in USD, so I put token details at the bottom.)
reply
johndough
10 hours ago
[-]
I would have liked aggregated results instead. Expanding 300 tables is a bit tiresome. But I guess that is easy with AI now. Here is a scatter plot of quality vs duration

https://i.imgur.com/wFVSpS5.png

and quality vs cost

https://i.imgur.com/fqM4edw.png

But I just noticed that my plot is meaningless because it conflates model quality with provider uptime.

Claude Haiku has a higher average quality than Claude Opus, which does not make sense. The explanation is that network errors were credited with a quality score of 0, and there were _a lot_ of network errors.

reply
skysniper
10 hours ago
[-]
> The explanation is that network errors were credited with a quality score of 0, and there were _a lot_ of network errors.

all network errors, provider errors, and openclaw errors are actually excluded from the ranking calculation, so that is not the reason.

Real reason:

The absolute score is not consistent across tasks and cannot be directly added/averaged, whether the judge is a human or an LLM. But the relative rank is stable (model A is better than B). That is exactly why Chatbot Arena only uses the relative rank of models in each battle in the first place, and why we follow that approach.

a concrete example of why scores across tasks cannot be added/averaged directly: people tend to try haiku on easier tasks and compare it with T2 models, and try opus on harder tasks and compare it with better models.

another example: judges (human or llm) tend to change scores based on the opponents, e.g. Sonnet might get 10/10 if all other opponents are Haiku-level, but 8/10 if the opponents include Opus/gpt-5.4.

So if you want to make the plot, you should plot the elo score (from the leaderboard) vs average cost per task. But note: the average cost has a similar issue; people naturally use smaller models to run simpler tasks, so a smaller model's lower cost comes from two factors: lower unit cost and simpler tasks.
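
If you want to redo the chart that way, this is roughly what I mean (the Elo and cost numbers below are placeholders, not actual leaderboard data):

    # Leaderboard Elo vs. average cost per task, instead of averaging raw
    # per-task scores. Placeholder numbers only.
    import matplotlib.pyplot as plt

    leaderboard = {  # model: (elo from the leaderboard, avg USD per task)
        "claude-opus-4.6":   (1350, 4.20),
        "claude-sonnet-4.6": (1300, 1.10),
        "stepfun-3.5-flash": (1280, 0.12),
        "minimax-m2.7":      (1260, 0.35),
    }

    fig, ax = plt.subplots()
    for name, (elo, cost) in leaderboard.items():
        ax.scatter(cost, elo)
        ax.annotate(name, (cost, elo), textcoords="offset points", xytext=(5, 5))
    ax.set_xscale("log")  # per-task costs span orders of magnitude
    ax.set_xlabel("average cost per task (USD)")
    ax.set_ylabel("leaderboard Elo")
    plt.show()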

methodology page contains more details if you are interested.

reply
johndough
10 hours ago
[-]
I agree. If humans are allowed to pick the models, there will be an inherent bias. This would be much easier if the models were randomized.
reply
skysniper
8 hours ago
[-]
i added a native plot and stats for aggregated results on the arena page. please check it out!
reply
esafak
7 hours ago
[-]
The second chart depicts StepFun > Sonnet > Opus in quality?
reply
skysniper
6 hours ago
[-]
check out my reply; his chart is plotting the wrong metric (average quality score)
reply
hadlock
11 hours ago
[-]
some kind of top-level metric like avg tokens/task would be useful. e.g. yes, stepfun is 5% the price of sonnet, but does it use 1x, 10x, or 1000x more tokens (median per task) to accomplish similar tasks? for example, I am willing to eat a 20% quality dive from sonnet if the token use is < 10% more than sonnet's. if token use is 1000x then that's something I want to know.
reply
skysniper
9 hours ago
[-]
added https://app.uniclaw.ai/arena/model-stats

also added per battle stats in battle detail page

reply
refulgentis
14 hours ago
[-]
Please don’t use AI to write comments, it cuts against HN guidelines.
reply
skysniper
14 hours ago
[-]
sorry didn't know that. Here is my hand-written tl;dr:

gemini is very unreliable at using skills, often it just reads the skill and decides to do nothing.

stepfun leads cost-effectiveness leaderboard.

ranking really depends on the tasks, so better to try your own task.

reply
refulgentis
13 hours ago
[-]
It’s too late once it’s happened. I was curious, then when I saw the site looked vibecoded and you’re commenting with AI, I decided to stop trying to reason through the discrepancies between what was claimed and what’s on the site (ex. 300 battles vs. only a handful in site data).
reply
rat9988
13 hours ago
[-]
Too late for what? For you? Maybe. There are many others who are okay with it, and it doesn't diminish the quality of the work. Props to the author.
reply
refulgentis
13 hours ago
[-]
> Too late for what? For you? maybe.

Maybe? :)

> There are many others that are okay with it

Correct.

> and it doesn't disminish the quality of the work.

It does affect incoming people hearing about the work.

I applaud your instinct to defend someone who put in effort. It's one of the most important things we can do.

Another important thing we can do for them is be honest about our own reactions. It's not sunshine and rainbows on its face, but it is generous. Mostly because A) it takes time and B) other people might see red and harangue you for it.

reply
skysniper
13 hours ago
[-]
all 300+ battles' data are available at https://app.uniclaw.ai/arena/battles; every single battle is shown with raw conversation history, produced files, the judge's verdict, and final scores
reply
refulgentis
13 hours ago
[-]
Thanks! Is the judge an LLM? There are lots of references to "just like LMArena", but LMArena is human evaluated?
reply
skysniper
13 hours ago
[-]
> Is the judge an LLM?

Yes, the judge is one of opus 4.6, gpt 5.4, or gemini 3.1 pro (the submitter can choose). Self-judging (where the judge model is also one of the participants) is excluded when computing the ranking.

> There's lot of references to "just like LMArena", but LMArena is human evaluated?

Yeah, LMArena is human evaluated, but here i found it impractical to gather enough human evaluation data, because the effort it takes to compare the results is much higher:

- for code, the judge needs to read through it to check code quality, and actually run it to see the output

- when producing a webpage or a document, the judge needs to check the content and layout visually

- when anything goes wrong, the judge needs to read the execution log to see whether partial credit should be granted

if you look at the cost details of each battle (available at the bottom of the battle detail page), the judge typically costs more than any participant model.

if we evaluated with humans, i would say each evaluation could easily take ~5-10 min

reply
refulgentis
12 hours ago
[-]
Fair enough, yeah, agent evals are hard especially across N models :/

Thanks for replying btw, didn't mean any disrespect, good on you for not getting aggro about feedback

reply
skysniper
12 hours ago
[-]
I appreciate honest feedback, best way to learn :)
reply
citizenpaul
12 hours ago
[-]
>Other surprises: GLM-5 Turbo, Xiaomi MiMo v2 Pro, and MiniMax M2.7 all outrank Gemini 3.1 Pro on performance

This has also been my subjective experience, but it has also been objectively true in terms of cost.

reply