Mistral 3 family of models released (mistral.ai)
826 points | 4 months ago | 35 comments | HN
barrell
4 months ago
[-]
I use large language models in http://phrasing.app to format data I can retrieve in a consistent, skimmable manner. I switched to mistral-3-medium-0525 a few months back after struggling to get gpt-5 to stop producing gibberish. It's been insanely fast, cheap, and reliable, and it follows formatting instructions to the letter. I was (and still am) super super impressed. Even if it does not hold up in benchmarks, it still outperforms in practice.

I'm not sure how these new models compare to the biggest and baddest models, but if price, speed, and reliability are a concern for your use cases I cannot recommend Mistral enough.

Very excited to try out these new models! To be fair, mistral-3-medium-0525 still occasionally produces gibberish in ~0.1% of my use cases (vs gpt-5's 15% failure rate). Will report back if that goes up or down with these new models.

reply
mrtksn
4 months ago
[-]
Some time ago I canceled all my paid subscriptions to chatbots because they are interchangeable, so I just rotate between Grok, ChatGPT, Gemini, Deepseek and Mistral.

On the API side of things my experience is that the model behaving as expected is the greatest feature.

There I also switched to Openrouter instead of paying directly so I can use whatever model fits best.

The recent buzz about ad-based chatbot services is probably because the companies no longer have an edge despite what the benchmarks say; users are noticing it and cancelling paid plans. Just today OpenAI offered me a 1-month free trial as if I wasn’t using it two months ago. I guess they hope I forget to cancel.

reply
barrell
4 months ago
[-]
Yep, I spent 3 days optimizing my prompt trying to get gpt-5 to work. Tried a bunch of different models (some via Azure, some via OpenRouter) and got a better success rate with several others without any tailoring of the prompt.

It was really plug and play. There are still small nuances to each one, but compared to a year ago prompts are much more portable.

reply
distalx
4 months ago
[-]
What tools or process do you use to optimize your prompts?
reply
amy_petrik
4 months ago
[-]
I usually either use Grok to optimize a Mistral prompt, or Gemini to optimize a ChatGPT prompt. It's best to keep those pairs of AIs and not cross the streams!
reply
barbazoo
4 months ago
[-]
> I guess they hope I forget to cancel.

Business model of most subscription based services.

reply
viking123
4 months ago
[-]
For me it's just that I am too lazy to switch away from my GPT subscription. I use it with codex and it's very good for my use-case, and the price, at least here in Asia, is not expensive at all for the plus tier. The token quota is so generous that I usually cannot even spend the weekly allowance, although I use context smartly and know my codebase, so I can always point it to the right place right away.

I feel like at least for normies if they are familiar with ChatGPT, it might be hard to make them switch especially if they are subscribed.

reply
b3ing
4 months ago
[-]
I'd estimate about 10% of meetups run like that
reply
acuozzo
4 months ago
[-]
> because they are interchangeable

What is your use-case?

Mine is: I use "Pro"/"Max"/"DeepThink" models to iterate on novel cross-domain applications of existing mathematics.

My interaction is: I craft a detailed prompt in my editor, hand it off, come back 20-30 minutes later, review the reply, and then repeat if necessary.

My experience is that they're all very, very different from one another.

reply
mrtksn
4 months ago
[-]
My use case is Google replacement: things that I can do by myself so I can verify them, and things that are not important so I don’t have to verify.

Sure, they produce different output, so sometimes I will run the same thing on a few different models when I'm not sure or not happy, but I don't delegate the thinking part; I always give a direction in my prompts. I don't see myself running 30-minute queries because I will never trust the output and will have to do all the work myself. Instead I like to go step by step together.

reply
giancarlostoro
4 months ago
[-]
Maybe give Perplexity a shot? It has Grok, ChatGPT, Gemini, and Kimi K2. I don't think it has Mistral, unfortunately.
reply
mrtksn
4 months ago
[-]
I actually like Perplexity but haven't used it in some time. Maybe I should give it a go :)
reply
ecommerceguy
4 months ago
[-]
I use their browser, Comet, for finance-related research. Very nice. I use pretty much all of the main AIs (chat, deep, gem, claude); for each I have found a little niche use case that I'm sure will rotate at some point in an upgrade cycle. There are so many AIs that I don't see the point in paying for one. I'm convinced they will need ads to survive.

excited to add mistral to the rotation!

reply
giancarlostoro
4 months ago
[-]
Oh man, I use Comet nearly daily. I tried setting Perplexity as my new tab page on other browsers and for some reason it's not the same. I mostly use it that boring way too.
reply
VHRanger
4 months ago
[-]
Kagi has Mistral as well
reply
druskacik
4 months ago
[-]
This is my experience as well. Mistral models may not be the best according to benchmarks and I don't use them for personal chats or coding, but for simple tasks with pre-defined scope (such as categorization, summarization, etc.) they are the option I choose. I use mistral-small with batch API and it's probably the best cost-efficient option out there.
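For tasks like that, the harness around the model can stay tiny. A minimal sketch of a fixed-scope classifier, where the category list and prompt wording are hypothetical and the model call is left injectable, so any client (a batch API, an OpenAI-compatible endpoint, or a stub) can be plugged in:

```python
# Hypothetical category list; swap in your own labels.
CATEGORIES = ["billing", "bug report", "feature request", "other"]

def build_prompt(text: str) -> str:
    # Constrain the model to a closed set of answers.
    return (
        "Classify the following message into exactly one category.\n"
        f"Categories: {', '.join(CATEGORIES)}\n"
        "Answer with the category name only.\n\n"
        f"Message: {text}"
    )

def parse_category(reply: str) -> str:
    # Normalize the reply; anything off-list falls back to "other".
    answer = reply.strip().lower()
    return answer if answer in CATEGORIES else "other"

def classify(text: str, call_model) -> str:
    # call_model: prompt -> raw model reply (injected, so it is easy to stub).
    return parse_category(call_model(build_prompt(text)))
```

Keeping the parse step strict is what makes small models viable here: the scope is pre-defined, so a malformed reply degrades gracefully instead of leaking into downstream data.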
reply
leobg
4 months ago
[-]
Did you compare it to gemini-2.0-flash-lite?
reply
leobg
4 months ago
[-]
Answering my own question:

Artificial Analysis ranks them close in terms of price (both 0.3 USD/1M tokens) and intelligence (27 / 29 for gemini/mistral), but ranks gemini-2.0-flash-lite higher in terms of speed (189 tokens/s vs. 130).

So they should be interchangeable. Looking forward to testing this.

[0] https://artificialanalysis.ai/?models=o3%2Cgemini-2-5-pro%2C...

reply
druskacik
4 months ago
[-]
I did some vibe-evals only and it seemed slightly worse for my use case, so I didn't change it.
reply
mbowcut2
4 months ago
[-]
It makes me wonder about the gaps in evaluating LLMs by benchmarks. There is almost certainly overfitting happening, which could degrade other use cases. "In practice" evaluation is what inspired the Chatbot Arena, right? But then people realized that Chatbot Arena over-prioritizes formatting, and maybe sycophancy(?). Makes you wonder what the best evaluation would be. We probably need lots more task-specific models. That seems to have been fruitful for coding.
reply
pants2
4 months ago
[-]
The best benchmark is one that you build for your use-case. I finally did that for a project and I was not expecting the results. Frontier models are generally "good enough" for most use-cases but if you have something specific you're optimizing for there's probably a more obscure model that just does a better job.
reply
airstrike
4 months ago
[-]
If you and others have any insights to share on structuring that benchmark, I'm all ears.

There's a new model seemingly every week, so finding a way to evaluate them repeatedly would be nice.

The answer may be that it's so bespoke you have to hand-roll it every time, but my gut says there's a set of best practices that are generally applicable.

reply
pants2
4 months ago
[-]
Generally, the easiest:

1. Sample a set of prompts / answers from historical usage.

2. Run that through various frontier models again and if they don't agree on some answers, hand-pick what you're looking for.

3. Test different models using OpenRouter and score each along cost / speed / accuracy dimensions against your test set.

4. Analyze the results and pick the best, then prompt-optimize to make it even better. Repeat as needed.
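Step 3 can be sketched in a few lines. Everything here is illustrative: in practice `call_fn` would wrap an OpenRouter request, but it's left injectable so the harness works offline with a stub too.

```python
import statistics
from dataclasses import dataclass

@dataclass
class Score:
    model: str
    accuracy: float       # exact-match against the hand-picked answers
    avg_latency_s: float  # wall-clock seconds per completion

def benchmark(model: str, cases, call_fn) -> Score:
    # cases: list of (prompt, expected) pairs from historical usage (step 1).
    # call_fn(model, prompt) -> (answer, latency_seconds); real client or stub.
    correct, latencies = 0, []
    for prompt, expected in cases:
        answer, latency_s = call_fn(model, prompt)
        latencies.append(latency_s)
        if answer.strip().lower() == expected.strip().lower():
            correct += 1
    return Score(model, correct / len(cases), statistics.mean(latencies))
```

Run it once per candidate model over the same test set, then sort by whichever of the cost / speed / accuracy dimensions matters most for your use case.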

reply
dotancohen
4 months ago
[-]
How do you find and decide which obscure models to test? Do you manually review the model card for each new model on Hugging Face? Is there a better resource?
reply
pants2
4 months ago
[-]
Just grab the top ~30 models on OpenRouter[1] and test them all. If that's too expensive make a sample 'screening' benchmark that's just a few of the hardest problems to see if it's even worth the full benchmark.

1. https://openrouter.ai/models?order=top-weekly&fmt=table

reply
dotancohen
4 months ago
[-]
Thank you! I'll see about building a test suite.

Do you compare models' output subjectively, manually? Or do you have some objective measures? My use case would be to test diagnostic information summaries - the output is free text, not structured. The only way I can think to automate that would be with another LLM.

Advice welcome!

reply
pants2
3 months ago
[-]
Yeah - things are easy when you can objectively score an output; otherwise, as you said, you'll probably need another LLM to score it. For summaries you can try to make that somewhat more objective, like length and "8/10 key points are covered in this summary."

This is a real training method (like Group Relative Policy Optimization), so it's a legitimate approach.
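The "key points covered" score can even be approximated without a judge model, assuming each key point has a trigger phrase you can search for (a crude, hypothetical stand-in for an LLM grader):

```python
def key_point_coverage(summary: str, key_points: list[str]) -> float:
    # Fraction of key points whose trigger phrase appears in the summary.
    # Substring matching is deliberately crude; an LLM judge would handle
    # paraphrase, but this gives a cheap, deterministic lower bound.
    text = summary.lower()
    hits = sum(1 for kp in key_points if kp.lower() in text)
    return hits / len(key_points) if key_points else 0.0
```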

reply
dotancohen
3 months ago
[-]
Thank you. I will google Group Relative Policy Optimization to learn about that and the other training methods. If you have any resources handy that I should be reading, that would be appreciated as well. Have a great weekend.
reply
pants2
3 months ago
[-]
Nothing off the top of my head! If you find anything good let me know. GRPO is a training technique, likely not exactly what you'd do for benchmarking, but it's interesting to read about anyway. Glad I could help
reply
Legend2440
4 months ago
[-]
I don’t think benchmark overfitting is as common as people think. Benchmark scores are highly correlated with the subjective “intelligence” of the model. So is pretraining loss.

The only exception I can think of is models trained on synthetic data like Phi.

reply
pembrook
4 months ago
[-]
If the models from the big US labs are being overfit to benchmarks, then we also need to account for HN commenters overfitting positive evaluations to Chinese or European models based on their political biases (US big tech = default bad, anything European = default good).

Also, we should be aware of people cynically playing into that bias to try to advertise their app, like OP who has managed to spam a link in the first line of a top comment on this popular front page article by telling the audience exactly what they want to hear ;)

reply
astrange
4 months ago
[-]
Americans have an opposing bias via the phenomenon of "safe edgy", where for obvious reasons they're uncomfortable with being biased towards anyone who looks like a US minority, and redirect all that energy towards being racist to the French. So it's all balanced.
reply
mentalgear
4 months ago
[-]
Thanks for sharing your use case of the Mistral models, which are indeed top-notch! I had a look at phrasing.app, and while it's a nice website, I found the copy of "Hand-crafted. Phrasing was designed & developed by humans, for humans." somewhat of a false virtue given your statements here of advanced LLM usage.
reply
barrell
4 months ago
[-]
I don't see the contention. I do not use llms in the design, development, copywriting, marketing, blogging, or any other aspect of the crafting of the application.

I labor over every word, every button, every line of code, every blog post. I would say it is as hand-crafted as something digital can be.

reply
willlma
3 months ago
[-]
It's interesting. I've been tinkering with an article summarizing/highlighting browser extension, and realized that I don't want the end-user to have read AI-generated content because it's not as high-quality as I'd hoped. But on the flip side, I'm loving having the AI write most of the code for me.
reply
basilgohar
4 months ago
[-]
I admire and respect this stance. I have been very AI-hesitant and while I'm using it more and more, I have spaces that I want to definitely keep human-only, as this is my preference. I'm glad to hear I'm not the only one like this.
reply
barrell
4 months ago
[-]
Thank you :) and you're definitely not the only one.

Full transparency, the first backend version of phrasing was 'vibe-coded' (long before vibe coding was a thing). I didn't like the results, I didn't like the experience, I didn't feel good ethically, and I didn't like my own development.

I rewrote the application (completely, from scratch: new repo, new language, new framework) and all of a sudden I liked the results, I loved the process, I had no moral qualms, and I improved leaps and bounds in all areas I worked on.

Automation has some amazing use cases (I am building an automation product at the end of the day) but so does doing hard things yourself.

Although most important is just to enjoy what you do; or perhaps do something you can be proud of.

reply
metadat
4 months ago
[-]
Are you saying gpt-5 produces gibberish 15% of the time? Or are you comparing Mistral's gibberish production rate to gpt-5.1's complex task failure rate?

Does Mistral even have a tool use model? It would be awesome to have a new coder entrant beyond OpenAI, Anthropic, Grok, and Qwen.

reply
barrell
4 months ago
[-]
Yes. I spent about 3 days trying to optimize the prompt to get gpt-5 to not produce gibberish, to no avail. Completions took several minutes, had an above 50% timeout rate (with a 6 minute timeout mind you), and after retrying they still would return gibberish about 15% of the time (12% on one task, 20% on another task).

I then tried multiple models, and they all failed in spectacular ways. Only Grok and Mistral had an acceptable success rate, although Grok did not follow the formatting instructions as well as Mistral.

Phrasing is a language learning application, so the formatting is very complicated, with multiple languages and multiple scripts intertwined with markdown formatting. I do include dozens of examples in the prompts, but it's something many models struggle with.

This was a few months ago, so to be fair, it's possible gpt-5.1 or gemini-3 or the new deepseek model may have caught up. I have not had the time or need to compare, as Mistral has been sufficient for my use cases.

I mean, I'd love to get that 0.1% error rate down, but there have always been more pressing issues XD
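For what it's worth, a cheap first-pass filter for one common shape of gibberish, degenerate n-gram repetition, can be a plain token check (a heuristic sketch; the thresholds are arbitrary):

```python
def looks_degenerate(text: str, max_ngram: int = 5, min_repeats: int = 3) -> bool:
    # Flag output where some n-gram repeats back-to-back min_repeats times,
    # a common failure shape for "gibberish" completions. Note that legitimate
    # prose like "really really really" can trip the n=1 case.
    tokens = text.split()
    for n in range(1, max_ngram + 1):
        for i in range(len(tokens) - n * min_repeats + 1):
            chunk = tokens[i:i + n]
            if all(tokens[i + k * n:i + (k + 1) * n] == chunk
                   for k in range(1, min_repeats)):
                return True
    return False
```

A check like this can gate a retry before the output ever reaches a user; it obviously won't catch gibberish that doesn't repeat.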

reply
data-ottawa
4 months ago
[-]
With gpt5 did you try adjusting the reasoning level to "minimal"?

I tried using it for a very small and quick summarization task that needed low latency and any level above that took several seconds to get a response. Using minimal brought that down significantly.

Weirdly, gpt5's reasoning levels don't map to the OpenAI API's reasoning effort levels.

reply
barrell
4 months ago
[-]
Reasoning was set to minimal and low (and I think I tried medium at some point). I do not believe the timeouts were due to the reasoning taking too long, although I never streamed the results. I think the model just fails often. It stops producing tokens and eventually the request times out.
reply
barbazoo
4 months ago
[-]
Hard to gauge what gibberish is without an example of the data and what you prompted the LLM with.
reply
barrell
4 months ago
[-]
If you wanted examples, you needed only ask :)

These are screenshots from that week: https://x.com/barrelltech/status/1995900100174880806

I'm not going to share the prompt because (1) it's very long, (2) there were dozens of variations, and (3) it seems like poor business practice to share the most indefensible part of your business online XD

reply
barbazoo
4 months ago
[-]
Surely reads like someone's brain transformed into a tree :)

Impressive, I haven't seen that myself yet, I've only used 5 conversationally, not via API yet.

reply
barrell
4 months ago
[-]
Heh, it's a quote from Archer (FX), and admittedly a poor machine translation; it's a very old expression of mine.

And yes, this only happens when I ask it to apply my formatting rules. If you let GPT format itself, I would be surprised if this ever happens.

reply
sandblast
4 months ago
[-]
XD XD
reply
acuozzo
4 months ago
[-]
I have a need to remove loose "signature" lines from the last 10% of a tremendous e-mail dataset. Based on your experience, how do you think mistral-3-medium-0525 would do?
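A heuristic baseline is worth having alongside whatever model you score on this; a sketch, where the patterns and the five-line tail window are assumptions about what "loose signature lines" look like:

```python
import re

# Hypothetical patterns for loose signature lines; extend these from real data.
SIG_PATTERNS = [
    re.compile(r"^--\s*$"),  # conventional "--" signature delimiter
    re.compile(r"^(best|regards|cheers|thanks|sincerely)[,!.]?\s*$", re.IGNORECASE),
    re.compile(r"^sent from my \w+", re.IGNORECASE),
]

def strip_signature(body: str, tail_lines: int = 5) -> str:
    # Only scan the last few lines; signatures live at the end of the message.
    lines = body.splitlines()
    for i in range(max(0, len(lines) - tail_lines), len(lines)):
        if any(p.match(lines[i].strip()) for p in SIG_PATTERNS):
            return "\n".join(lines[:i]).rstrip()
    return body
```

Disagreements between a baseline like this and the LLM are then a cheap way to find the cases worth hand-reviewing.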
reply
barrell
4 months ago
[-]
What's your acceptable error rate? Honestly ministral would probably be sufficient if you can tolerate a small failure rate. I feel like medium would be overkill.

But I'm no expert. I can't say I've used mistral much outside of my own domain.

reply
acuozzo
4 months ago
[-]
I'd prefer the error rate to be as close to 0% as possible, under the strict requirement of having to use a local model. I have access to nodes with 8xH200, but I'd prefer not to tie those up with this task. I'd instead prefer to use a model I can run on an M2 Ultra.
reply
barrell
4 months ago
[-]
If I cannot tolerate a failure rate, I do not use LLMs (or any ML models).

But in that case, the larger the better. If Mistral Medium can run on your M2 Ultra then it should be up to the task. It should edge out Ministral and be just shy of the biggest frontier models.

But I wouldn’t even trust GPT-5 or Claude Opus or Gemini 3 Pro to get close to a zero percent error rate, and for a task such as this I would not expect Mistral Medium to outperform the big boys

reply
mackross
4 months ago
[-]
Cool app. I couldn’t see a way to report an error in one of the default expressions.
reply
msp26
4 months ago
[-]
The new large model uses DeepseekV2 architecture. 0 mention on the page lol.

It's a good thing that open source models use the best arch available. K2 does the same but at least mentions "Kimi K2 was designed to further scale up Moonlight, which employs an architecture similar to DeepSeek-V3".

---

vllm/model_executor/models/mistral_large_3.py

```
from vllm.model_executor.models.deepseek_v2 import DeepseekV3ForCausalLM

class MistralLarge3ForCausalLM(DeepseekV3ForCausalLM):
    pass
```

"Science has always thrived on openness and shared discovery." btw

Okay I'll stop being snarky now and try the 14B model at home. Vision is good additional functionality on Large.

reply
Jackson__
4 months ago
[-]
So they spent all of their R&D to copy deepseek, leaving none for the singular novel added feature: vision.

To quote the hf page:

>Behind vision-first models in multimodal tasks: Mistral Large 3 can lag behind models optimized for vision tasks and use cases.

reply
Ey7NFZ3P0nzAe
4 months ago
[-]
Well, behind "models", not "language models".

Of course models made purely for image tasks will completely outclass it. Vision language models are useful for their generalist capabilities.

reply
make3
4 months ago
[-]
Architecture differences wrt vanilla transformers, and between modern transformers, are a tiny part of what makes a model nowadays
reply
halJordan
4 months ago
[-]
I don't think it's fair to demand everything be open and then get mad when that openness is used. It's an obsessive and harmful double standard.
reply
simonw
4 months ago
[-]
The 3B vision model runs in the browser (after a 3GB model download). There's a very cool demo of that here: https://huggingface.co/spaces/mistralai/Ministral_3B_WebGPU

Pelicans are OK but not earth-shattering: https://simonwillison.net/2025/Dec/2/introducing-mistral-3/

reply
troyvit
4 months ago
[-]
I'm reading this post and wondering what kind of crazy accessibility tools one could make. I think it's a little off the rails but imagine a tool that describes a web video for a blind user as it happens, not just the speech, but the actual action.
reply
GaggiX
4 months ago
[-]
This is not local, but Gemini models can process very long videos and provide descriptions with timestamps if asked.

https://ai.google.dev/gemini-api/docs/video-understanding#tr...

reply
embedding-shape
4 months ago
[-]
Nor would it be describing things as they happen, but instead needing pre-processing, so in the end, very different :)
reply
user_of_the_wek
4 months ago
[-]
> The image depicts and older man...

Ouch

reply
mythz
4 months ago
[-]
Europe's bright star has been quiet for a while; great to see them back, and good to see them come back to the open-source light with Apache 2.0 licenses. They're too far from the SOTA pack for exclusive/proprietary models to work in their favor.

Mistral had the best small models on consumer GPUs for a while, hopefully Ministral 14B lives up to their benchmarks.

reply
rvz
4 months ago
[-]
All thanks to the US VCs that actually have money to fund Mistral's entire business.

Had they gone to the EU, Mistral would have gotten a minuscule grant from the EU to train their AI models.

reply
amarcheschi
4 months ago
[-]
Mistral's biggest investor is ASML, although it came in later than the other VCs
reply
crimsoneer
4 months ago
[-]
I mean, one is a government, the others are VCs (also, I would be shocked if there isn't some French gov funding somewhere in the massive Mistral pile).
reply
kergonath
4 months ago
[-]
> I would be shocked if there isn't some French gov funding somewhere in the massive mistral pile

There is a bit of it, yes, although how much exactly is difficult to know. It’s not all tax breaks and subsidies; several public agencies are using it, including the army, so finding out the details is not trivial.

reply
whiplash451
4 months ago
[-]
1. so what 2. asml
reply
rvz
4 months ago
[-]
1. It matters.

2. Did ASML invest in Mistral in their first round of venture funding or was it US VCs all along that took that early risk and backed them from the very start?

Risk aversion is in the DNA of almost every plot of land in Europe, such that US VCs saw something in Mistral before even European giants like ASML did.

ASML would have passed on Mistral from the start and Mistral would have instead begged to the EU for a grant.

reply
apexalpha
4 months ago
[-]
1. Big problem

2. ASML was propped up by ASM and Philips, stepping in as "VCs"

reply
didibus
4 months ago
[-]
For VC don't you need a lot of capital and people with too much money?

Isn't that then a chicken and egg?

reply
JumpCrisscross
4 months ago
[-]
> and people with too much money?

No. VC’s historical capital has come from institutional investors. Pensions. Endowments. Foundations.

reply
didibus
4 months ago
[-]
Interesting, is that still the case? And how is the decision to take those high risk investments made for things like pensions and such?
reply
timpera
4 months ago
[-]
Extremely cool! I just wish they would also include comparisons to SOTA models from OpenAI, Google, and Anthropic in the press release, so it's easier to know how it fares in the grand scheme of things.
reply
Youden
4 months ago
[-]
They mentioned LMArena, you can get the results for that here: https://lmarena.ai/leaderboard/text

Mistral Large 3 is ranked 28, behind all the other major SOTA models. The delta between Mistral and the leader is only 1418 vs. 1491 though. I *think* that means the difference is relatively small.

reply
jampekka
4 months ago
[-]
1491 vs 1418 ELO means the stronger model wins about 60% of the time.
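That follows from the standard Elo expected-score formula; a quick check:

```python
def elo_win_prob(rating_a: float, rating_b: float) -> float:
    # Expected score of A against B under the standard Elo logistic model.
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# A 73-point gap (1491 vs 1418) gives the leader roughly a 60% expected win rate.
```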
reply
supermatt
4 months ago
[-]
Probably naive questions:

Does that also mean that Gemini-3 (the top ranked model) loses to mistral 3 40% of the time?

Does that make Gemini 1.5x better, or mistral 2/3rd as good as Gemini, or can we not quantify the difference like that?

reply
esafak
4 months ago
[-]
Yes, of course.
reply
uejfiweun
4 months ago
[-]
Wow. If all the trillions only produce that small of a diff... that's shocking. That's the sort of knowledge that could pop the bubble.
reply
JustFinishedBSG
4 months ago
[-]
I wouldn't trust LMArena results much. They measure user preference and users are highly skewed by style, tone etc.

You can literally "improve" your model on LMArena just by adding a bunch of emojis.

reply
qznc
4 months ago
[-]
I guess that could be considered comparative advertising then and companies generally try to avoid that scrutiny.
reply
constantcrying
4 months ago
[-]
The lack of the comparison (which absolutely was done), tells you exactly what you need to know.
reply
bildung
4 months ago
[-]
I think people from the US often aren't aware how many companies from the EU simply won't risk losing their data to the providers you have in mind: OpenAI, Anthropic and Google. They simply are no option at all.

The company I work for, for example, a mid-sized tech business, is currently investigating its local hosting options for LLMs. So Mistral certainly will be an option, among the Qwen family and Deepseek.

Mistral is positioning themselves for that market, not the one you have in mind. Comparing their models with Claude etc. would mean associating themselves with the data leeches, which they probably try to avoid.

reply
adam_patarino
4 months ago
[-]
We're seeing the same thing for many companies, even in the US. Exposing your entire codebase to an unreliable third party is not exactly SOC / ISO compliant. This is one of the core things that motivated us to develop cortex.build so we could put the model on the developer's machine and completely isolate the code without complicated model deployments and maintenance.
reply
leobg
4 months ago
[-]
Does your company use Microsoft Teams?
reply
BoorishBears
4 months ago
[-]
Mistral is founded by multiple Meta engineers, no?

Funded mostly by US VCs?

Hosted primarily on Azure?

Do you really have to go out of your way to start calling their competition "data leeches" for out-executing them?

reply
sofixa
4 months ago
[-]
Mistral are mostly focusing on b2b, and on customers that want to self-host (banks and such). So their founders being from Meta, or where their cloud platform is hosted, is entirely irrelevant to the story.
reply
BoorishBears
4 months ago
[-]
The fact they would not exist without the leeches and built their business on the leeches is irrelevant.

Pan-nationalism is a hell of a drug: a company that does not know you exist puts out an objectively awful release, and people take frank discussion of it as a personal slight.

reply
Fnoord
4 months ago
[-]
Those who crawled the web without consent and then put their LLM in a black box without attribution, with a secret prompt and secret weights, i.e. all of this without giving back, while creating tons of CO2. Those are the leeches.
reply
BoorishBears
4 months ago
[-]
Ah, so "crawled the web without consent, and then put their LLM in a blackbox without attribution" is not being a leech once you release the weights of an underperforming model using someone else's arch.

I knew y'all's standards were lower but geez!

reply
Fnoord
4 months ago
[-]
At the very least it is a step in the right direction. Can't say the same for these proprietary models. And guess which country has all these proprietary models? USA.
reply
BoorishBears
4 months ago
[-]
Thank goodness for that, otherwise all we might have is useless copies of Deepseek.
reply
baq
4 months ago
[-]
If you want to allocate capital efficiently at planet scale, you have to ignore nations to the largest extent possible.
reply
sofixa
4 months ago
[-]
> The fact they would not exist without the leeches and built their business on the leeches is irrelevant.

How so?

reply
troyvit
4 months ago
[-]
It's wayyyy too early in the game to say who is out-executing whom.

I mean, why do you think those guys left Meta? It reminds me of a time ten years ago when I was sitting on a flight next to a guy who worked for the natural gas industry. I was (cough, still am) a pretty naive environmentalist, so I asked him what he thought of solar, wind, etc., and why we should be investing in natural gas when there are all these other options. His response was simple: natural gas can serve as a bridge from hydrocarbons to true green energy sources. Leverage that dense energy to springboard the other sources in the mix and you build a path forward to carbon-free energy.

I see Mistral's use of US VCs the same way. Those VCs are hedging their bets and maybe hoping to make a few bucks. A few of them are probably involved because they're buddies with the former Meta guys "back in the day." If Mistral executes on their plan of being a transparent b2b option with solid data protections then they used those VCs the way they deserve to be used and the VCs make a few bucks. If Europe ever catches up to the US in terms of data centers, would Mistral move off of Azure? I'd bet $5 that they would.

reply
bildung
4 months ago
[-]
I didn't mean to imply US bad, EU good. As such, this isn't about which passport the VCs have, but about local hosting and open-weight models. A closed model from a US company always comes with the risk of data exfiltration, either for training or thanks to the CLOUD Act etc. (i.e. industrial espionage).

And personally I don't care at all about the performance delta - we are talking about a difference of 6 to at most 12 months here, between closed source SOTA and open weight models.

reply
popinman322
4 months ago
[-]
They're comparing against open weights models that are roughly a month away from the frontier. Likely there's an implicit open-weights political stance here.

There are also plenty of reasons not to use proprietary US models for comparison: The major US models haven't been living up to their benchmarks; their releases rarely include training & architectural details; they're not terribly cost effective; they often fail to compare with non-US models; and the performance delta between model releases has plateaued.

A decent number of users in r/LocalLlama have reported that they've switched back from Opus 4.5 to Sonnet 4.5 because Opus' real world performance was worse. From my vantage point it seems like trust in OpenAI, Anthropic, and Google is waning and this lack of comparison is another symptom.

reply
kalkin
4 months ago
[-]
Scale AI wrote a paper a year ago comparing various models' performance on benchmarks to their performance on similar but held-out questions. Generally the closed-source models performed better, and Mistral came out looking pretty bad: https://arxiv.org/pdf/2405.00332
reply
extr
4 months ago
[-]
??? Closed US frontier models are vastly more effective than anything OSS right now; the reason they didn’t compare is that they’re in a different weight class (and therefore a different product) and it’s a bit unfair.

We’re actually at a unique point right now where the gap is larger than it has been in some time. Consensus since the latest batch of releases is that we haven’t found the wall yet. 5.1 Max, Opus 4.5, and G3 are absolutely astounding models and unless you have unique requirements some way down the price/perf curve I would not even look at this release (which is fine!)

reply
crimsoneer
4 months ago
[-]
If someone is using these models, they probably can't or won't use the existing SOTA models, so not sure how useful those comparisons actually are. "Here is a benchmark that makes us look bad from a model you can't use on a task you won't be undertaking" isn't actually helpful (and definitely not in a press release).
reply
constantcrying
4 months ago
[-]
Completely agree that there are legitimate reasons to prefer comparison to e.g. Deepseek models. But that doesn't change my point: we both agree that the comparisons would be extremely unfavorable.
reply
Lapel2742
4 months ago
[-]
> that the comparisons would be extremely unfavorable.

Why should they compare apples to oranges? Mistral Large 3 costs ~1/10th of Sonnet 4.5. They clearly target different users. If you want a coding assistant you probably wouldn't choose this model, for various reasons. There is a place for more than only the benchmark king.

reply
constantcrying
4 months ago
[-]
Come on. Do you just not read posts at all?
reply
esafak
4 months ago
[-]
Which lightweight models do these compare unfavorably with?
reply
tarruda
4 months ago
[-]
Here's what I understood from the blog post:

- Mistral Large 3 is comparable with the previous Deepseek release.

- Ministral 3 LLMs are comparable with older open LLMs of similar sizes.

reply
constantcrying
4 months ago
[-]
And implicit in this is that it compares very poorly to SOTA models. Do you disagree with that? Do you think these Models are beating SOTA and they did not include the benchmarks, because they forgot?
reply
saubeidl
4 months ago
[-]
Those are SOTA for open models. It's a separate league from closed models entirely.
reply
supermatt
4 months ago
[-]
> It's a separate league from closed models entirely.

To be fair, the SOTA models aren't even a single LLM these days. They are doing all manner of tool use and specialised submodel calls behind the scenes - a far cry from in-model MoE.

reply
tarruda
4 months ago
[-]
> Do you disagree with that?

I think that Qwen3 8B and 4B are SOTA for their size. The GPQA Diamond accuracy chart is weird: both Qwen3 8B and 4B have higher scores, so they used this weird chart where the x axis shows the number of output tokens. I missed the point of this.

reply
meatmanek
4 months ago
[-]
Generation time is more or less proportional to tokens * model size, so if you can get the same quality result with fewer tokens from the same size of model, then you save time and money.
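A rough sketch of that proportionality (the cost model and all numbers here are illustrative, not real pricing):

```python
def decode_cost(num_tokens: int, params_billions: float) -> float:
    """Toy decode cost model: cost scales with output tokens times
    parameter count (illustrative constant factor of 1)."""
    return num_tokens * params_billions

# Two hypothetical 14B models solving the same problem, one needing
# 3x as many "thinking" tokens as the other to reach the same answer.
verbose = decode_cost(6000, 14)
concise = decode_cost(2000, 14)
print(verbose / concise)  # → 3.0, i.e. the chattier model costs 3x more
```

Under this simple model, a token-efficient model is directly cheaper and faster at equal quality, which is presumably the point of the tokens-on-the-x-axis chart.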
reply
kergonath
4 months ago
[-]
Thanks. That was not obvious to me either.
reply
rvz
4 months ago
[-]
> I just wish they would also include comparisons to SOTA models from OpenAI, Google, and Anthropic in the press release,

Why would they? They know they can't compete against the heavily closed-source models.

They are not even comparing against GPT-OSS.

That is absolutely and shockingly bearish.

reply
yvoschaap
4 months ago
[-]
Upvoting for Europe's best efforts.
reply
sebzim4500
4 months ago
[-]
That's unfair to Europe. A bunch of AI work is done in London (Deepmind is based here for a start)
reply
p2detar
4 months ago
[-]
That's ok. How could they know that there are companies like Aleph Alpha, Helsing or the famous DeepL. European companies are not that vocal, but that doesn't mean they aren't making progress in the field.

edit: typos

reply
Glemkloksdjf
4 months ago
[-]
That's not the point.

DeepMind is not a UK company; it's Google, i.e. US.

Mistral is a real EU-based company.

reply
gishh
4 months ago
[-]
Using US VC dollars. Where their desks are isn’t really important.
reply
data-ottawa
4 months ago
[-]
Increasingly where the desks and servers are is critical.

The cloud act and the current US administration doing things like sanctioning the ICC demonstrate why the locations of those desks is important.

reply
cycomanic
4 months ago
[-]
That's such a silly argument. X, OpenAI and others have large Saudi investments. In the grand scheme of things the US is largely indebted to China and Japan.
reply
vintermann
4 months ago
[-]
Currency is interchangeable. Location might not be.
reply
Glemkloksdjf
4 months ago
[-]
An EU company pays taxes in the EU, has an EU mindset (worker laws etc.), and focuses more on the EU than on other countries.

And an EU company can't be forced by the US Gov to hand over data.

reply
GaggiX
4 months ago
[-]
London is not part of Europe anymore since Brexit /s
reply
ot
4 months ago
[-]
Is it so hard for people to understand that Europe is a continent, EU is a federation of European countries, and the two are not the same?
reply
usrnm
4 months ago
[-]
Europe isn't even a continent and has no real definition (none that would make any sense, anyway), so the whole thing is confusing by design
reply
rc1
4 months ago
[-]
If Europe isn’t a continent, on what continent are the EU member states sitting on?
reply
rkomorn
4 months ago
[-]
Eurasia is the widely accepted answer.
reply
denysvitali
4 months ago
[-]
I honestly think it is. The number of people who think Europe and the EU are the same thing is really concerning.

And no, it's not only Americans. I keep hearing this from people living in Europe as well (or rather, in the EU). I also very often hear phrases like "Switzerland is not in Europe" to mean that the country is not part of the European Union.

reply
MadDemon
4 months ago
[-]
Switzerland has such close ties to the EU that I would consider them half in.
reply
lostmsu
4 months ago
[-]
Isn't London on an island, Mr. Pedantic?
reply
TulliusCicero
4 months ago
[-]
So I guess Japan isn't Asian then?
reply
layer8
4 months ago
[-]
While Japan is part of Asia, and Asia is a continent, Japan is also separated from the Asian continent: https://en.wikipedia.org/wiki/Geography_of_Japan#Location
reply
lostmsu
4 months ago
[-]
What's more interesting is that the comment you are replying to mistakenly asked me instead of asking the parent.
reply
GaggiX
4 months ago
[-]
I think you missed the joke
reply
tmoravec
4 months ago
[-]
Drifted to the Caribbean.
reply
colesantiago
4 months ago
[-]
Deepmind doesn't exist anymore.

Google DeepMind does exist.

reply
LunaSea
4 months ago
[-]
Upvoting Windows 11 as the US's best effort at Operating Systems development.
reply
DarmokJalad1701
4 months ago
[-]
Wouldn't that be macOS? Or BSD? Or Unix? CentOS?
reply
LunaSea
4 months ago
[-]
What's the market share of those compared to Windows and Linux?
reply
DarmokJalad1701
4 months ago
[-]
"best effort at Operating Systems development" doesn't imply anything about the market share.
reply
mrinterweb
4 months ago
[-]
I don't like being this guy, but I think Deepseek 3.2 stole all the thunder yesterday. Notice that these comparisons are to Deepseek 3.1. Deepseek 3.2 is a big step up over 3.1, if benchmarks are to be believed. Just unfortunate timing of release. https://api-docs.deepseek.com/news/news251201
reply
hiddencost
4 months ago
[-]
Idk. They look like they're ahead on the saturated benchmarks and behind on the unsaturated ones. Looks more like they overfit to the benchmarks.
reply
simgt
4 months ago
[-]
I still don't understand what the incentive is for releasing genuinely good model weights. What makes sense however is OpenAI releasing a somewhat generic model like gpt-oss that games the benchmarks just for PR. Or some Chinese companies doing the same to cut the ground from under the feet of American big tech. Are we really hopeful we'll still get decent open weights models in the future?
reply
mirekrusin
4 months ago
[-]
Because there is no money in making them closed.

Open weight means secondary sales channels like their fine tuning service for enterprises [0].

They can't compete with large proprietary providers but they can erode and potentially collapse them.

Open weights and research build on each other, advancing all participants and creating an environment that has a shot at competing with proprietary services.

Transparency, control, privacy, cost etc. do matter to people and corporations.

[0] https://mistral.ai/solutions/custom-model-training

reply
talliman
4 months ago
[-]
Until there is a sustainable, profitable, moat-building business model for generative AI, the competition is not to have the best proprietary model, but rather to raise the most VC money and be well positioned when that business model does arise.

Releasing a near state-of-the-art open model instantly catapults a company to a valuation of several billion dollars, making it possible to raise money to acquire GPUs and train more SOTA models.

Now, what happens if such a business model does not emerge? I hope we won't find out!

reply
mirekrusin
4 months ago
[-]
Explained well in this documentary [0].

[0] https://www.youtube.com/watch?v=BzAdXyPYKQo

reply
simgt
4 months ago
[-]
I was fully expecting that but it doesn't get old ;)
reply
memming
4 months ago
[-]
It’s funny how future money drives the world. Fortunately it’s fueling progress this time around.
reply
NitpickLawyer
4 months ago
[-]
> gpt-oss that games the benchmarks just for PR.

gpt-oss is killing the ongoing AIME3 competition on kaggle. They're using a hidden, new set of problems, IMO level, handcrafted to be "AI hardened". And gpt-oss submissions are at ~33/50 right now, two weeks into the competition. The benchmarks (at least for math) were not gamed at all. They are really good at math.

reply
lostmsu
4 months ago
[-]
Are they ahead of all other recent open models? Is there a leaderboard?
reply
NitpickLawyer
4 months ago
[-]
There is a leaderboard [1], but we'll have to wait till April for the competition to end to know which models entrants are using. The current number 3 on there (34/50) has mentioned in discussions that they're using gpt-oss-120b. There were also some scores shared for gpt-oss-20b, in the 25/50 range.

The next "public" model is qwen30b-thinking at 23/50.

Competition is limited to 1 H100 (80GB) and 5h runtime for 50 problems. So larger open models (deepseek, larger qwens) don't fit.

[1] https://www.kaggle.com/competitions/ai-mathematical-olympiad...

reply
data-ottawa
4 months ago
[-]
I find the qwen3 models spend a ton of thinking tokens which could hamstring them on the runtime limitations. Gpt-oss 120b is much more focused and steerable there.

The token use chart in the OP release page demonstrates the Qwen issue well.

Token churn does help smaller models on math tasks, but for general purpose stuff it seems to hurt.

reply
prodigycorp
4 months ago
[-]
gpt-oss are really solid models. by far the best at tool calling, and performant.
reply
nullbio
4 months ago
[-]
Google games benchmarks more than anyone, hence Gemini's strong bench lead. In reality though, it's still garbage for general usage.
reply
tucnak
4 months ago
[-]
If the claims on multilingual and pretraining performance are accurate, this is huge! This may be the best-in-class multilingual release since the more recent Gemmas, which used to be unmatched. I know Americans don't care much about the rest of the world, but we're still using our native tongues, thank you very much; there is a huge issue with e.g. Ukrainian (as opposed to Russian) being underrepresented in many open-weight and weight-available models. Gemma used to be a notable exception; I wonder if that's still the case. On a different note: I wonder why the 14B model's TriviaQA score lags Gemma 12B's so much; that one is not a formatting-heavy benchmark.
reply
NitpickLawyer
4 months ago
[-]
> I wonder why scores on TriviaQA vis-a-vis 14b model lags behind Gemma 12b so much; that one is not a formatting-heavy benchmark.

My guess is the vast scale of google data. They've been hoovering data for decades now, and have had curation pipelines (guided by real human interactions) since forever.

reply
nullbio
4 months ago
[-]
Anyone else find that despite Gemini performing best on benches, it's actually still far worse than ChatGPT and Claude? It seems to hallucinate nonsense far more frequently than any of the others. Feels like Google just bench maxes all day every day. As for Mistral, hopefully OSS can eat all of their lunch soon enough.
reply
apexalpha
4 months ago
[-]
No, I've been using Gemini for help while learning / building my onprem k8s cluster and it has been almost spotless.

Granted, this is a subject that is very well present in the training data but still.

reply
Synthetic7346
4 months ago
[-]
I found gemini 3 to be pretty lackluster for setting up an onprem k8s cluster - sonnet 4.5 was more accurate from the get go, required less handholding
reply
mvkel
4 months ago
[-]
Open weight LLMs aren't supposed to "beat" closed models, and they never will. That isn’t their purpose. Their value is as a structural check on the power of proprietary systems; they guarantee a competitive floor. They’re essential to the ecosystem, but they’re not chasing SOTA.
reply
cmrdporcupine
4 months ago
[-]
This may be the case, but DeepSeek 3.2 is "good enough" that it competes well with Sonnet 4 -- maybe 4.5 -- for about 80% of my use cases, at a fraction of the cost.

I feel we're only a year or two away from hitting a plateau with the frontier closed models having diminishing returns vs what's "open"

reply
troyvit
4 months ago
[-]
I think you're right, and I feel the same about Mistral. It's "good enough", super cheap, privacy friendly, and doesn't burn coal by the shovel-full. No need to pay through the nose for the SOTA models just to get wrapped into the same SaaS games that plague the rest of the industry.
reply
barrell
4 months ago
[-]
I can attest to Mistral beating OpenAI in my use cases pretty definitively :)
reply
theshrike79
4 months ago
[-]
In my use cases Mistral has been next to useless.

Granted, my uses have been programming related. Mistral prints the answer almost immediately, but it's also completely and utterly hallucinating everything, producing something that merely looks like code but could never even compile...

reply
re-thc
4 months ago
[-]
> Open weight LLMs aren't supposed to "beat" closed models, and they never will. That isn’t their purpose.

Do things ever work that way? What if Google did Open source Gemini. Would you say the same? You never know. There's never "supposed" and "purpose" like that.

reply
lowkey_
4 months ago
[-]
Not the above poster, but:

OpenAI went closed (despite "open" literally being in the name) once they had the advantage. Meta is also going closed now that they've caught up.

Open-source makes sense to accelerate to catch up, but once ahead, closed will come back to retain advantage.

reply
mvkel
4 months ago
[-]
I continue to be surprised that the supposed bastion of "safe" AI, anthropic, has a record of being the least-open AI company
reply
pants2
4 months ago
[-]
> Their value is as a structural check on the power of proprietary systems

Unfortunately that doesn't pay the electricity bill

reply
array_key_first
4 months ago
[-]
It kind of does, because proprietary systems are unacceptable for many use cases precisely because they are proprietary.

There are a lot of businesses that do not want to hand over their sensitive data to hackers, employees of their competitors, and various world governments. There's inherent risk in choosing a proprietary option, and that doesn't just go for LLMs. You can get your feet swept out from underneath you.

reply
dchest
4 months ago
[-]
Nope, Gemini 3 is hallucinating less than GPT-5.1 for my questions.
reply
mrtksn
4 months ago
[-]
Yep, Gemini is my least favorite and I’m convinced that the hype around it isn’t organic because I don’t see the claimed “superiority”, quite the opposite.
reply
cmrdporcupine
4 months ago
[-]
I think a lot of the hype around Gemini comes down to people who aren't using it for coding but for other things maybe.

Frankly, I don't actually care about or want "general intelligence" -- I want it to make good code, follow instructions, and find bugs. Gemini wasn't bad at the last bit, but wasn't great at the others.

They're all trying to make general purpose AI, but I just want really smart augmentation / tools.

reply
erichocean
3 months ago
[-]
I exclusively use Gemini Pro for coding, and it's been writing ~100% of the code I produce since July.

It's great.

reply
tootie
4 months ago
[-]
No? My recent experience with Gemini was terrific. The last big test I gave Claude, it spun an immaculate web of lies before I forced it to confess.
reply
llm_nerd
4 months ago
[-]
What does your comment have to do with the submission? What a weird non-sequitur. I even went looking at the linked article to see if it somehow compares with Gemini. It doesn't, and only relates to open models.

In prior posts you oddly attack "Palantir-partnered Anthropic" as well.

Are things that grim at OpenAI that this sort of FUD is necessary? I mean, I know they're doing the whole code red thing, but I guarantee that posting nonsense like this on HN isn't the way.

reply
cmrdporcupine
4 months ago
[-]
I also had bad luck when I finally tried Gemini 3 in the gemini CLI coding tool. I am unclear if it's the model or their bad tooling/prompting. It had, as you said, hallucination problems, and it also had memory issues where it seemed to drop context between prompts here and there.

It's also slower than both Opus 4.5 and Sonnet.

reply
bluecalm
4 months ago
[-]
My experience is the opposite although I don't use it to write code but to explore/learn about algorithms and various programming ideas. It's amazing. I am close to cancelling my ChatGPT subscription (I would only use Open Router if it had nicer GUI and dark mode anyway).
reply
minimaxir
4 months ago
[-]
For noncoding tasks, Gemini at least allows for easier grounding with Google Search.
reply
alfalfasprout
4 months ago
[-]
If anything it's a testament to human intelligence that benchmarks haven't really been a good measure of a model's competence for some time now. They provide a relative sorting to some degree, within model families, but it feels like we've hit an AI winter.
reply
gunalx
4 months ago
[-]
Have used Gemini 3 to few-shot a few problems GPT-5 struggled on.
reply
moffkalast
4 months ago
[-]
Yes, and likewise with Kimi K2. Despite being on the top of open source benches it makes up more batshit nonsense than even Llama 3.

Trust no one, test your use case yourself is pretty much the only approach, because people either don't run benchmarks correctly or have the incentive not to.

reply
VeejayRampay
4 months ago
[-]
no, I find Gemini to be the best
reply
arnaudsm
4 months ago
[-]
Geometric mean of MMMLU + GPQA-Diamond + SimpleQA + LiveCodeBench :

- Gemini 3.0 Pro : 84.8

- DeepSeek 3.2 : 83.6

- GPT-5.1 : 69.2

- Claude Opus 4.5 : 67.4

- Kimi-K2 (1.2T) : 42.0

- Mistral Large 3 (675B) : 41.9

- Deepseek-3.1 (670B) : 39.7

The 14B, 8B & 3B models are SOTA though, and do not have Chinese censorship like Qwen3.
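For anyone who wants to redo this aggregate with different benchmarks, a geometric mean is a one-liner; the scores below are made up for illustration, not the actual leaderboard numbers above:

```python
from math import prod

def geometric_mean(scores):
    """Geometric mean of benchmark scores; a single weak benchmark drags
    the aggregate down harder than it would an arithmetic mean."""
    return prod(scores) ** (1 / len(scores))

# Hypothetical per-benchmark scores for one model:
# [MMMLU, GPQA-Diamond, SimpleQA, LiveCodeBench]
scores = [90.0, 85.0, 60.0, 80.0]
print(round(geometric_mean(scores), 1))  # → 77.8
```

Note the 60.0 outlier pulls the result well below the arithmetic mean of 78.75, which is exactly why the geometric mean is a reasonable choice for penalizing lopsided models.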

reply
jasonjmcghee
4 months ago
[-]
How is there such a gap between Gemini 3 vs GPT 5.1/Opus 4.5? What is Gemini 3 crushing the others on?
reply
arnaudsm
4 months ago
[-]
Could be optimized for benchmarks, but Gemini 3 has been stellar for my tasks so far.

Maybe an architectural leap?

reply
netdur
4 months ago
[-]
I believe it is the system instructions that make the difference for Gemini. I use Gemini in AI Studio with my own system prompts to get it to do what I need, which is not possible with gemini.google.com's Gems.
reply
gishh
4 months ago
[-]
Gamed tests?
reply
rdtsc
4 months ago
[-]
I always joke that Google pays for a dedicated developer to spend their full time just to make pelicans on bicycles look good. They certainly have the cash to do it.
reply
tootyskooty
4 months ago
[-]
Since no one has mentioned it yet: note that the benchmarks for Large are for the base model, not for the instruct model available in the API.

Most likely reason is that the instruct model underperforms compared to the open competition (even among non-reasoners like Kimi K2).

reply
esafak
4 months ago
[-]
Well done to France's Mistral team for closing the gap. If the benchmarks are to be believed, this is a viable model, especially at the edge.
reply
nullbio
4 months ago
[-]
Benchmarks are never to be believed, and that has been the case since day 1.
reply
hnuser123456
4 months ago
[-]
Looks like their own HF link is broken or the collection hasn't been made public yet. The 14B instruct model is here:

https://huggingface.co/mistralai/Ministral-3-14B-Instruct-25...

The unsloth quants are here:

https://huggingface.co/unsloth/Ministral-3-14B-Instruct-2512...

reply
janpio
4 months ago
[-]
reply
andhuman
4 months ago
[-]
This is big. The first really big open weights model that understands images.
reply
yoavm
4 months ago
[-]
How is this different from Llama 3.2 "vision capabilities"?

https://www.llama.com/docs/how-to-guides/vision-capabilities...

reply
Havoc
4 months ago
[-]
Guessing the GP commenter considers Apache more "open" than Meta's license. Which, to be fair, isn't terrible, but also not quite as clean as straight Apache.
reply
mesebrec
4 months ago
[-]
Llama's license explicitly disallows its usage in the EU.

If that doesn't even meet the threshold for "terrible", then what does?

reply
CamperBob2
4 months ago
[-]
Why does it disallow usage in the EU?
reply
Terretta
3 months ago
[-]
You'd have to ask EU's regulators why they wanted Meta to disallow it.

Much like you'd have to ask UK lawmakers why they wanted UK citizens to be unable to keep their own Apple iCloud backups secure.

reply
trvz
4 months ago
[-]
Sad to see they've apparently fully given up on releasing their models via torrent magnet URLs shared on Twitter; those will stay around long after Hugging Face is dead.
reply
ThrowawayTestr
4 months ago
[-]
How does HF manage to serve such big files?
reply
nikcub
4 months ago
[-]
reply
ThrowawayTestr
4 months ago
[-]
I meant more how do they pay for all that bandwidth. I can download a 20gb model in like 2 minutes
reply
accrual
4 months ago
[-]
Congrats on the release, Mistral team!

I haven't used Mistral much until today but am impressed. I normally use Gemma 3 27B locally, but after regenerating some responses with Mistral 3 14B, the output quality is very similar despite generating much faster on my hardware.

The vision aspect also worked fine, and actually was slightly better on the same inputs versus qwen3 VL 8B.

All in all impressive small dense model, looking forward to using it more.

reply
Tiberium
4 months ago
[-]
A bit interesting that they used Deepseek 3's architecture for their Large model :)
reply
RandyOrion
4 months ago
[-]
Thank you Mistral for releasing new small parameter-efficient (aka dense) models.
reply
lalassu
4 months ago
[-]
It's sad that they only compare to open weight models. I feel most users don't care much about OSS/not OSS. The value proposition is the quality of the generation for some use case.

I guess it says a bit about the state of European AI

reply
para_parolu
4 months ago
[-]
It’s not for users but for businesses. There is demand for in-house use with data privacy. Regular users can’t even run the large model due to lack of compute.
reply
troyvit
4 months ago
[-]
Glad I'm not most users. I'm down for 80% of the quality for an open weight model. Hell I've been using Linux for 25 years so I suppose I'm used to not-the-greatest-but-free.
reply
hopelite
4 months ago
[-]
It seems a reasonable comparison, since being open weight is the primary differentiating characteristic of the model. It’s also really common to see closed-weight/proprietary models compared only against each other, as if all of the non-American and open-weight models don’t even exist.

I also think most people do not consider open weights to be OSS.

reply
mortsnort
4 months ago
[-]
I use a small model as a chatbot of sorts in a game I'm making. I was hoping the 3b could replace qwen 4b, but it's far worse at following instructions and providing entertaining content. I suppose this is expected given smaller size and their own benchmarks that show Qwen beating it at instruct.
reply
dmezzetti
4 months ago
[-]
Looking forward to trying them out. Great to see they are Apache 2.0...always good to have easy-to-understand licensing.
reply
jasonjmcghee
4 months ago
[-]
I wish they showed how they compared to models larger/better and what the gap is, rather than only models they're better than.

Like how does 14B compare to Qwen30B-A3B?

(Which I think is a lot of people's go-to, or its instruct/coding variant, from what I've seen in local model circles)

reply
GaggiX
4 months ago
[-]
The small dense model seems particularly good for their small sizes, I can't wait to test them out.
reply
codybontecou
4 months ago
[-]
Do all of these models, regardless of parameters, support tool use and structured output?
reply
Y_Y
4 months ago
[-]
In principle any model can do these. Tool use is just detecting something like "I should run a db query for pattern X", and structured output is even easier: just reject output tokens that don't match the grammar. The only question is how well the models are trained for it, and how well your inference environment takes advantage.
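A toy sketch of that "reject disallowed tokens" idea; everything here (the token set, logit values, and function name) is illustrative, not any particular library's API:

```python
import math

def constrained_step(logits, allowed):
    """One grammar-constrained decoding step: mask out tokens the grammar
    disallows, softmax over the survivors, and take the argmax."""
    masked = {tok: lg for tok, lg in logits.items() if tok in allowed}
    z = sum(math.exp(lg) for lg in masked.values())
    probs = {tok: math.exp(lg) / z for tok, lg in masked.items()}
    return max(probs, key=probs.get)

# The raw model prefers the free-text token "hello", but a JSON grammar
# only permits '{' or '"' at this position, so "hello" is never emitted.
logits = {"hello": 2.5, '"': 1.2, "{": 0.8}
print(constrained_step(logits, allowed={'"', "{"}))  # → "
```

Real implementations apply this mask over the full vocabulary at every step, driven by a grammar or JSON-schema state machine, which is why any model can produce valid structured output even if a poorly trained one produces semantically worse content inside it.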
reply
Ey7NFZ3P0nzAe
4 months ago
[-]
Yes they all support tool use at least.
reply
domoritz
4 months ago
[-]
Ugh, the bar charts do not start at 0. That makes it impossible to compare across model sizes, and it's a pretty basic chart design principle. I hope they can fix it. At least give me consistent y scales!
reply
Frannky
4 months ago
[-]
I haven't tried a Mistral model in ages. Llama and Mistral feel like something I was using in another era. Are they good?
reply
RYJOX
4 months ago
[-]
I find that there are too many paid subscription models at the minute, without enough legitimate progress to warrant the money spent. Recently cancelled GPT.
reply
tmaly
4 months ago
[-]
I see several 3.x versions on Openrouter.ai, any idea which of those are the new models?
reply
PhilippGille
4 months ago
[-]
reply
Aissen
4 months ago
[-]
Anyone succeed in running it with vLLM?
reply
Patrick_Devine
4 months ago
[-]
The instruct models are available on Ollama (e.g. `ollama run ministral-3:8b`); however, the reasoning models are still a wip. I was trying to get them to work last night, and it works for single turn, but is still very flaky w/ multi-turn.
reply
dloss
4 months ago
[-]
Yes, the 3B variant, with vLLM 0.11.2. Parameters are given on the HF page. Had to override the temperature to 0.15 though (as suggested on HF) to avoid random-looking syllables.
reply
Aissen
4 months ago
[-]
It now seems to work with the latest vLLM git.
reply
pixel_popping
4 months ago
[-]
FYI Mistral admins: there are no dates showing on your article.
reply
s_dev
4 months ago
[-]
I was subscribing to these guys purely to support the EU tech scene. So I was on Pro for about 2 years while using ChatGPT and Claude.

Went to actually use it, got a message saying that I had missed a payment 8 months previously and thus wasn't allowed to use Pro, despite having paid for Pro for the previous 8 months. The lady I contacted in support simply told me to pay the outstanding balance. You would think a missed payment would relate to just the month that was missed, not all subsequent months.

Utterly ridiculous that one missed payment can justify not providing the service (otherwise paid for in full) at all.

Basically, if you find yourself in this situation you're actually better off deleting the account and signing up again under a different email.

We really need to get our shit together in the EU on this sort of stuff, I was a paying customer purely out of sympathy but that sympathy dried up pretty quick with hostile customer service.

reply
cycomanic
4 months ago
[-]
I'm not sure I understand you correctly, but it seems you had a subscription, missed one payment some time ago, and now expect your subscription to work because the missed month was in the past and "you paid for this month"?

This sounds like you expect your subscription to work as an on-demand service. It seems quite obvious that to use a service you need to be up to date on your payments; that would be no different in any other subscription/lease/rental agreement. Now, Mistral might look back at their records, see that you actually didn't use the service at all for the last few months, and waive the missed payment. That could be good customer service, but they might not even have records of whether you used it, or at least those records might not be available to the billing department.

reply
s_dev
4 months ago
[-]
>This sounds like the you expect your subscription to work as an on-demand service?

That's exactly what it is.

>I'm not sure I understand you correctly,

I understand perfectly well, I don't agree with that approach is the issue.

If I paid for 11/12 months I should get 11/12 months of subscription, not 1/12. They happily took a year's subscription and provided nothing in return. Even if I had settled the outstanding balance, they would have provided 2/12 months of service at a cost of 12/12 months of payment.

reply
shlomo_z
4 months ago
[-]
This seems like a legitimate complaint... I wonder why it's downvoted
reply
s_dev
4 months ago
[-]
My critique is more levelled at Mistral and not specifically what they've just released so it could be that some see what I have to say as off topic.

Also a lot of Europeans are upset at US tech dominance. It's a position we've roped ourselves in to so any commentary that criticises an EU tech success story is seen as being unnecessarily negative.

However I do mean it as a warning to others, I got burned even with good intentions.

reply
ThrowawayTestr
4 months ago
[-]
Awesome! Can't wait till someone abliterates them.
reply
RomanPushkin
4 months ago
[-]
Mistral presented DeepSeek 3.2
reply
another_twist
4 months ago
[-]
I am not sure why Meta paid 13B+ to hire some kid vs just hiring back or acquiring these folks. They'll easily catch up.
reply
Rastonbury
4 months ago
[-]
Age aside, not sure what Zuck was thinking, seeing as Scale AI was in data labelling, not training models; perhaps he thought he was getting a good operator? Then again, the talent scarcity is in scientists. There are many operators, let alone one worth $14B. Back to age: the people he is managing are likely all several years older than him and Meta long-timers, which would make it even more challenging.
reply
vintagedave
4 months ago
[-]
What is this referring to? I googled, and the company was founded in 2016. No one involved could be a “kid”?
reply
another_twist
4 months ago
[-]
True, no one involved in Scale AI right now is a kid. But their expertise is in data labelling, not cutting-edge AI. Compare that to the Mistral team. They launched a new LLM within 6 months of founding. They're also ex-Meta researchers. But they don't have the distribution because Europe. If we want to tout $13B acquisitions and $100M pay packages, Mistral is the perfect candidate. It's basically plug and play. Compare that to Scale and the shitshow that ensued. MSL lost talent and has to start from scratch, given that its head knows nothing about LLMs.
reply