I'm not sure how these new models compare to the biggest and baddest models, but if price, speed, and reliability are a concern for your use cases I cannot recommend Mistral enough.
Very excited to try out these new models! To be fair, mistral-3-medium-0525 still occasionally produces gibberish in ~0.1% of my use cases (vs gpt-5's 15% failure rate). Will report back on whether that goes up or down with these new models
On the API side of things my experience is that the model behaving as expected is the greatest feature.
There I also switched to OpenRouter instead of paying directly, so I can use whichever model fits best.
The recent buzz about ad-based chatbot services is probably because the companies no longer have an edge despite what the benchmarks say, and users are noticing it and cancelling paid plans. Just today OpenAI offered me a 1-month free trial as if I hadn't been using it two months ago. I guess they hope I'll forget to cancel.
It was really plug and play. There are still small nuances to each one, but compared to a year ago prompts are much more portable.
Business model of most subscription based services.
Does Mistral even have a Tool Use model? That would be awesome to have a new coder entrant beyond OpenAI, Anthropic, Grok, and Qwen.
I then tried multiple models, and they all failed in spectacular ways. Only Grok and Mistral had an acceptable success rate, although Grok did not follow the formatting instructions as well as Mistral.
Phrasing is a language learning application, so the formatting is very complicated, with multiple languages and multiple scripts intertwined with markdown formatting. I do include dozens of examples in the prompts, but it's something many models struggle with.
This was a few months ago, so to be fair, it's possible gpt-5.1 or gemini-3 or the new deepseek model may have caught up. I have not had the time or need to compare, as Mistral has been sufficient for my use cases.
I mean, I'd love to get that 0.1% error rate down, but there have always been more pressing issues XD
I tried using it for a very small, quick summarization task that needed low latency, and any reasoning level above minimal took several seconds to get a response. Using minimal brought that down significantly.
Weirdly, gpt-5's reasoning levels don't map cleanly to the reasoning-effort levels in the OpenAI API.
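For reference, a minimal sketch of what dialing the effort knob down looks like. This assumes the Responses-style `reasoning.effort` parameter; the model name and prompt are placeholders:

```python
# Sketch of a low-latency summarization request with reasoning effort
# set to "minimal". The payload shape assumes the OpenAI Responses
# API's reasoning.effort parameter; model name and prompt are
# placeholders, not from the original post.

def build_request(text: str, effort: str = "minimal") -> dict:
    # effort is one of "minimal" | "low" | "medium" | "high";
    # higher values trade latency for more deliberation.
    return {
        "model": "gpt-5",
        "reasoning": {"effort": effort},
        "input": f"Summarize in one sentence: {text}",
    }

# With the official client this would be sent roughly as:
#   from openai import OpenAI
#   resp = OpenAI().responses.create(**build_request("..."))
```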
These are screenshots from that week: https://x.com/barrelltech/status/1995900100174880806
I'm not going to share the prompt because (1) it's very long (2) there were dozens of variations and (3) it seems like poor business practices to share the most indefensible part of your business online XD
It's a good thing that open source models use the best arch available. K2 does the same but at least mentions "Kimi K2 was designed to further scale up Moonlight, which employs an architecture similar to DeepSeek-V3".
---
vllm/model_executor/models/mistral_large_3.py
```python
# vLLM's implementation: Mistral Large 3 is a direct subclass of the
# DeepSeek-V3 model class, i.e. the architecture is reused wholesale.
from vllm.model_executor.models.deepseek_v2 import DeepseekV3ForCausalLM


class MistralLarge3ForCausalLM(DeepseekV3ForCausalLM):
    pass
```
"Science has always thrived on openness and shared discovery." btw
Okay I'll stop being snarky now and try the 14B model at home. Vision is good additional functionality on Large.
Do things ever work that way? What if Google did open source Gemini, would you say the same? You never know. There's no "supposed to" or "purpose" like that.
Granted, this is a subject that is very well present in the training data but still.
Mistral had the best small models on consumer GPUs for a while, hopefully Ministral 14B lives up to their benchmarks.
Had they gone to the EU, Mistral would have gotten a minuscule grant from the EU to train their AI models.
2. Did ASML invest in Mistral in their first round of venture funding or was it US VCs all along that took that early risk and backed them from the very start?
Risk aversion is in the DNA and in almost every plot of land in Europe, such that US VCs saw something in Mistral before even European giants like ASML did.
ASML would have passed on Mistral from the start and Mistral would have instead begged to the EU for a grant.
2. ASML was propped up by ASM and Philips, stepping in as "VCs"
Isn't that then a chicken and egg?
Mistral Large 3 is ranked 28th, behind all the other major SOTA models. The gap between Mistral and the leader is only 73 Elo points though (1418 vs. 1491). I *think* that means the difference is relatively small.
Does that also mean that Gemini 3 (the top-ranked model) loses to Mistral Large 3 40% of the time?
Does that make Gemini 1.5x better, or mistral 2/3rd as good as Gemini, or can we not quantify the difference like that?
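Roughly, yes to the first question. Arena rankings use an Elo-style scale, where a rating gap maps to an expected win probability rather than a "1.5x better" multiplier. A quick sketch of the arithmetic, using the standard Elo expectation and the leaderboard numbers quoted above:

```python
# Standard Elo expected-score formula: the probability that the
# lower-rated model wins a head-to-head comparison.
def expected_score(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Mistral Large 3 (1418) vs. the leader (1491): a 73-point gap.
p = expected_score(1418, 1491)
print(f"{p:.3f}")  # ~0.396, i.e. Mistral expected to win ~40% of matchups
```

So a 73-point gap means the leader is expected to win about 60% of head-to-head votes; Elo quantifies relative win odds, not an absolute quality ratio.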
Why would they? They know they can't compete against the heavily closed-source models.
They are not even comparing against GPT-OSS.
That is absolutely and shockingly bearish.
The company I work for, for example, a mid-sized tech business, is currently investigating local hosting options for LLMs. So Mistral will certainly be an option, alongside the Qwen family and DeepSeek.
Mistral is positioning themselves for that market, not the one you have in mind. Comparing their models with Claude etc. would mean associating themselves with the data leeches, which they probably try to avoid.
There are also plenty of reasons not to use proprietary US models for comparison: The major US models haven't been living up to their benchmarks; their releases rarely include training & architectural details; they're not terribly cost effective; they often fail to compare with non-US models; and the performance delta between model releases has plateaued.
A decent number of users in r/LocalLlama have reported that they've switched back from Opus 4.5 to Sonnet 4.5 because Opus' real world performance was worse. From my vantage point it seems like trust in OpenAI, Anthropic, and Google is waning and this lack of comparison is another symptom.
We’re actually at a unique point right now where the gap is larger than it has been in some time. Consensus since the latest batch of releases is that we haven’t found the wall yet. 5.1 Max, Opus 4.5, and G3 are absolutely astounding models and unless you have unique requirements some way down the price/perf curve I would not even look at this release (which is fine!)
- Mistral Large 3 is comparable with the previous Deepseek release.
- Ministral 3 LLMs are comparable with older open LLMs of similar sizes.
To be fair, we don’t even know if the closed models are even LLMs. They could be doing all manner of tool use behind the scenes.
I think that Qwen3 8B and 4B are SOTA for their size. The GPQA Diamond accuracy chart is weird: both Qwen3 8B and 4B have higher scores, so they used this weird chart where the x-axis shows the number of output tokens. I don't see the point of this.
Why should they compare apples to oranges? Mistral Large 3 costs ~1/10th of Sonnet 4.5. They clearly target different users. If you want a coding assistant you probably wouldn't choose this model for various reasons. There is room for more than just the benchmark king.
DeepMind is not a UK company; it's Google, i.e. US.
Mistral is a real EU based company.
Google DeepMind does exist.
And no, it's not only Americans. I keep hearing this from people living in Europe as well (or rather, in the EU). I also very often hear phrases like "Switzerland is not in Europe" to indicate that the country is not part of the European Union.
Open weight means secondary sales channels like their fine tuning service for enterprises [0].
They can't compete with large proprietary providers but they can erode and potentially collapse them.
Open weights and open research build on themselves, advancing all participants and creating an environment that has a shot at competing with proprietary services.
Transparency, control, privacy, cost etc. do matter to people and corporations.
gpt-oss is killing the ongoing AIMO 3 competition on Kaggle. They're using a hidden, new set of problems, IMO-level, handcrafted to be "AI hardened". And gpt-oss submissions are at ~33/50 right now, two weeks into the competition. The benchmarks (at least for math) were not gamed at all. It is really good at math.
The next "public" model is qwen30b-thinking at 23/50.
Competition is limited to 1 H100 (80GB) and 5h runtime for 50 problems. So larger open models (deepseek, larger qwens) don't fit.
[1] https://www.kaggle.com/competitions/ai-mathematical-olympiad...
Releasing a near state-of-the-art open model instantly catapults companies to a valuation of several billion dollars, making it possible to raise money to acquire GPUs and train more SOTA models.
Now, what happens if such a business model does not emerge? I hope we won't find out!
- Gemini 3.0 Pro : 84.8
- DeepSeek 3.2 : 83.6
- GPT-5.1 : 69.2
- Claude Opus 4.5 : 67.4
- Kimi-K2 (1.2T) : 42.0
- Mistral Large 3 (675B) : 41.9
- Deepseek-3.1 (670B) : 39.7
The 14B, 8B & 3B models are SOTA though, and do not have Chinese censorship like Qwen3.
Maybe an architectural leap?
https://huggingface.co/mistralai/Ministral-3-14B-Instruct-25...
The unsloth quants are here:
https://huggingface.co/unsloth/Ministral-3-14B-Instruct-2512...
https://huggingface.co/collections/mistralai/mistral-large-3
I guess it says a bit about the state of European AI
I also think most people do not consider open weights as OSS.
https://www.llama.com/docs/how-to-guides/vision-capabilities...
Most likely reason is that the instruct model underperforms compared to the open competition (even among non-reasoners like Kimi K2).
Like how does 14B compare to Qwen30B-A3B?
(Which I think is a lot of people's go-to, or its instruct/coding variant, from what I've seen in local model circles)
My guess is the vast scale of google data. They've been hoovering data for decades now, and have had curation pipelines (guided by real human interactions) since forever.
Went to actually use it, and got a message saying that I had missed a payment 8 months previously and thus wasn't allowed to use Pro, despite having paid for Pro for the previous 8 months. The support agent I contacted simply told me to pay the outstanding balance. You would think a missed payment would relate only to the month that was missed, not all subsequent months.
Utterly ridiculous that one missed payment can justify not providing the service (otherwise paid for in full) at all.
Basically, if you find yourself in this situation you're actually better off deleting the account and re-signing up under a different email.
We really need to get our shit together in the EU on this sort of stuff, I was a paying customer purely out of sympathy but that sympathy dried up pretty quick with hostile customer service.