MongoDB Server Security Update, December 2025
96 points | 14 hours ago | 7 comments | mongodb.com
kryogen1c
10 hours ago
[-]
>proactive [...] security program

Idk how proactive patching an exploited-in-the-wild unauth RCE is, but PR statements gonna PR, I guess.

>This [...] vuln is not a breach or compromise of MongoDB

IANAL, but this seems like a pretty strong stance to take? Who exactly are you blaming here?

>vulnerability was discovered internally
>detected the issue

Interesting choice of words. I wonder if their SIEM/SOC discovered a compromise, or if someone detected a tweet.

>December 12–14 – We worked continuously

It took 72 clock hours, presumably hundreds of man-hours, to fix a malloc use-after-free and cstring null-termination bug? Maybe the user input field length part was a major design point??

>Dec 12 "detect" the issue, Dec 19 CVE, Dec 23 first post

Boy, this sure seems like a long time for a first communication about a guaranteed-compromise-if-internet-facing bug.

Not sure there's a security tool in the world that would stop data exfiltration via protocol error logs.

reply
weinzierl
5 hours ago
[-]
" >proactive [...] security program Idk how proactive patching an exploited-in-the-wild unauth RCE is, but pr statements gonna pr i guess. "

If you follow their history, especially the Jepsen analyses and the whole back-and-forth, you will find a pattern.

reply
notepad0x90
10 hours ago
[-]
> IANAL, but this seems like a pretty strong stance to take? Who exactly are you blaming here?

It's a factual statement, unless you know of some information that indicates MongoDB was breached. I think you mistook "MongoDB" there for the software instead of the company. They meant the company: their systems and infrastructure were not compromised.

> Interesting choice of words. I wonder if their SIEM/SOC discovered a compromise, or if someone detected a tweet.

I highly doubt that. It could be a crash someone noticed, a code audit, an internal bug bounty, etc. Either way, I wouldn't ascribe deceit to them without proof; if it was an external source, give them the benefit of the doubt that they'd have said so.

> It took 72 clock hours, presumably hundreds of man-hours, to fix a malloc use-after-free and cstring null-termination bug? Maybe the user input field length part was a major design point??

You are familiar with things like SOC and SIEM, and you're confused by this? Are you familiar with Incident Response? The act of editing the code in a text editor and committing it to a branch isn't what took 72 hours.

> Boy, this sure seems like a long time for a first communication about a guaranteed-compromise-if-internet-facing bug.

It does not, far from it.

> Not sure there's a security tool in the world that would stop data exfiltration via protocol error logs.

Maybe not prevent, but detecting and attempting to interdict/stop is certainly possible. That's what SIEMs do if they're adequately configured. But the drawback might be a considerable volume of false hits. It might be better to simply reduce exposure to the internet, or remove it entirely. Just pointing out that detection, at least, is possible, even with 0-days like this.

reply
kryogen1c
4 hours ago
[-]
>I think you mistook "MongoDB"

I must have; the sentence does not make sense to me. Here it is, shortened: "this vuln in mongodb server does not impact mongodb, managed mongodb server, or our systems". If the first clause is referring to their systems, why do they say the same thing in the third clause?

Also, I just noticed: how come they say Atlas wasn't affected, but their timeline says they patched it?

>give them the benefit of the doubt that they'd have said so

Statements like this are basically legal admissions of guilt; I expect there to be as little truth in them as possible.

>You are familiar with things like SOC and SIEM, and you're confused by this?

I work in IT, I'm not a coder... so yes :) Hundreds of hours seems excessive. Remember, this isn't the safe-deployment or rollout plan; that's the next block of time. Hundreds of man-hours is more than one person's full month of work. Do you expect it to take you a whole dedicated month to fix one bug at a time?

>That's what SIEMs do if they're adequately configured.

This is a bit of a no true Scotsman. The intended error log is "error: {cstring payload nullterm} broke" and the mongobleed log is "error: {cstring payload MISSINGNULLTERM cstring payload nullterm} broke". Those two things look identical; how is any amount of configuration supposed to catch that?
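
For the curious, here's a toy model of the over-read itself -- nothing like Mongo's real parser, and all names invented:

    # Naive parse: scan for the NUL terminator with no bounds check
    # against the declared field length.
    def read_cstring(buf: bytes, start: int) -> bytes:
        end = buf.index(b"\x00", start)
        return buf[start:end]

    # Strip the terminator and the scan runs past the field into
    # whatever sits next in the buffer, which then gets echoed back
    # in the error message.
    adjacent = b"s3cr3t-password\x00"   # stand-in for adjacent server memory
    wire = b"payload" + adjacent        # note: no NUL after "payload"
    print(read_cstring(wire, 0))        # b'payloads3cr3t-password'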

reply
notepad0x90
1 hour ago
[-]
> Do you expect it to take you a whole dedicated month to fix one bug at a time?

Like I said, the bugfix is not what takes long. They have to figure out the extent of the vulnerability, do regression testing, and make sure they don't introduce more issues. And _then_ they can begin sending embargo notifications, let their customers prep, patch, etc., while in parallel they analyze in-the-wild exploitation. They have to support all the paying customers that are panicking and want answers. You're not the only one scrutinizing every word they say and demanding answers.

They talked to lawyers plenty during that time. If you know a legal admission of guilt is one of the things at stake, then you should know they're publicly traded, and SOX plus 8-K filings are a huge deal. Their CISO could literally end up in prison if he screws this up.

So yeah, it takes a couple of days. They (likely) have to have outside parties support their response. Even without that, "who did what", "what was affected", "how was it abused", "how can it be prevented" -- all of that needs to be answered, and then there is lots of back and forth on the specifics of the wording to the public/PR, what to tell investors, customers, etc.

> This is a bit of a no true Scotsman.

There are different detection strategies possible. Your approach could be done: when an error message that hasn't been seen previously suddenly shows up, it could be flagged for follow-up investigation, contact Mongo support, etc. That's not what I meant, though. You mentioned exfil; what I meant is that abnormal data transfers from 'mongod' could be caught. Most modern SIEMs do this out of the box if you feed them the right data.
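
Conceptually something like this (made-up numbers; real SIEMs express it in their own rule languages over flow or driver logs):

    from collections import defaultdict

    BASELINE_BYTES = 50_000_000  # assumed per-client norm per time window
    MULTIPLIER = 10              # lower = more sensitive, more false hits

    sent = defaultdict(int)      # client ip -> bytes served this window

    def record(client_ip: str, bytes_out: int) -> None:
        sent[client_ip] += bytes_out
        if sent[client_ip] > BASELINE_BYTES * MULTIPLIER:
            print(f"ALERT: {client_ip} pulled {sent[client_ip]} bytes from mongod")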

reply
jacquesm
3 hours ago
[-]
> Boy, this sure seems like a long time for a first communication about a guaranteed-compromise-if-internet-facing bug.

If you still run MongoDB facing the internet you have bigger problems.

reply
neandrake
10 hours ago
[-]
>>This [...] vuln is not a breach or compromise of MongoDB

>IANAL, but this seems like a pretty strong stance to take? Who exactly are you blaming here?

You elide the context that explains it. It's a vulnerability in their MongoDB Server product, not a result of MongoDB the company/services being compromised and secrets leaked.

reply
mmsc
6 hours ago
[-]
It wasn't an RCE.
reply
kryogen1c
4 hours ago
[-]
Oh goodness, where's my head at, thank you. Too late to edit, but you are correct: memory exfiltration, potentially containing passwords and secrets, leading to privilege escalation. Not an RCE.
reply
mmsc
1 hour ago
[-]
>Memory exfiltration, potentially containing passwords and secrets

And potentially not, too. Totally overhyped.

reply
anonnon
8 hours ago
[-]
> Idk how proactive patching an exploited-in-the-wild unauth RCE is, but PR statements gonna PR, I guess.

Describing their response as "proactive" is about what you'd expect from a company that famously used unacknowledged writes to game benchmarks during their peak hype phase. Ironically, Mongo has been slower than PostgreSQL for years at JSON queries, the very thing at which it's supposed to excel, and Postgres is the "boring," "antiquated" relic here, started all the way back in 1985.

The real head-scratcher here is who is still using MongoDB, and why? It got to a point years ago where even "I told you so" types (like me) found it no longer necessary to pile on, given the wave of buyer's remorse postmortems from devs who bought into MongoDB's hype.

reply
gberger
13 hours ago
[-]
Why did it take them 4 days between publishing a CVE for the vulnerability (Dec 19th) and posting a public patch (Dec 23rd)?
reply
joecool1029
12 hours ago
[-]
Had their hands full getting sued the same day: https://news.ycombinator.com/item?id=46403128
reply
cebert
13 hours ago
[-]
In the US, the last two weeks of December can be slow due to the holiday season. I wouldn’t be surprised if Mongo wasn’t as staffed as usual.
reply
tanduv
9 hours ago
[-]
should've spun up a few more AI agents
reply
theteapot
10 hours ago
[-]
Might not be how it appears. The CVE number can be reserved by the org and then "published" with only minimal info, then later updated with full details. Looking at the metadata, that's probably what happened here (not entirely sure what the update was, though):

    {
        "cveId": "CVE-2025-14847",
        "assignerOrgId": "a39b4221-9bd0-4244-95fc-f3e2e07f1deb",
        "state": "PUBLISHED",
        "assignerShortName": "mongodb",
        "dateReserved": "2025-12-17T18:56:21.301Z",
        "datePublished": "2025-12-19T11:00:22.465Z",
        "dateUpdated": "2025-12-29T23:20:23.813Z"
    }
reply
computerfan494
13 hours ago
[-]
That's a good question. I suppose that posting the commit makes it incredibly obvious how to exploit the issue, so maybe they wanted to wait a little bit longer for their on-prem users who were slow to patch?
reply
philipwhiuk
12 hours ago
[-]
Posting the CVE and then the patch is the reverse of this.
reply
computerfan494
12 hours ago
[-]
By "patch" I am talking about the public commit. Updated binaries were made available when the CVE was published.
reply
vivzkestrel
11 hours ago
[-]
If you are using MongoDB in 2026 you deserve everything headed in your direction.
reply
tgv
6 hours ago
[-]
And can you explain why? I think not. What's the superior alternative, for every use case?
reply
winrid
9 minutes ago
[-]
PG JSON write operations are document-level, whereas with MongoDB they're field-level.

Would you use a DB that only lets you write an entire row instead of setting a single field? Race conditions galore. Be very careful choosing PG for JSON in production systems...
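
A sketch of the difference (collection, field, and table names invented):

    from pymongo import MongoClient

    users = MongoClient()["app"]["users"]

    # Mongo addresses a single field inside the stored document:
    users.update_one({"_id": 1}, {"$set": {"profile.theme": "dark"}})

    # The tempting JSON-column pattern is read-modify-write of the
    # whole value, where two concurrent writers clobber each other:
    #   doc = SELECT data FROM users WHERE id = 1
    #   doc["profile"]["theme"] = "dark"
    #   UPDATE users SET data = <whole doc> WHERE id = 1
    # (Postgres does have jsonb_set for single-statement updates; the
    # race lives in the application-level pattern.)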

reply
cyberpunk
7 hours ago
[-]
Why? I felt the same for a while, but it's really massively improved over the years. Yes, this is a bad vuln, but anyone with even a tiny bit of brain is not running Mongo on the internet. I'm using Mongo very successfully at the moment in ways I could not use Postgres.
reply
wood_spirit
6 hours ago
[-]
Genuinely interested: what problems does mongo fit better than mainstream competitors these days? Why would you use it on a new project?
reply
tgv
6 hours ago
[-]
My application's primary task is to move JSON objects between storage and front-end. It does a lot more, but that's its primary task. So document storage is a logical choice. There are no real reasons to join records, although it sometimes is more efficient to do so. MongoDB's join operation has one advantage (for 1:N relations): it groups the joined records as an array in the document, instead of multiplying the result rows, so whatever function operates on the original data also works on the joined data. The data itself is hierarchical in nature, so back-end operations also preferably work on structured data instead of rows.
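
Roughly the shape I mean (collection and field names invented):

    from pymongo import MongoClient

    db = MongoClient()["app"]
    orders = db.orders.aggregate([
        {"$lookup": {
            "from": "items",            # the N side of the 1:N relation
            "localField": "_id",
            "foreignField": "order_id",
            "as": "items",              # joined records land here as an array
        }},
    ])
    # Each order comes back once with its items nested inside it,
    # instead of one flattened row per (order, item) pair.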

You can argue that you can imitate that in Postgres or even SQLite by storing in JSON fields, but there are things they can't do quite as efficiently (e.g. indexing array contents); storage itself isn't very efficient either. But ignoring that, there's no functional difference: it's document in, document out. So then the choice boils down to speed, memory usage, etc. One day I'm going to check if PostgreSQL offers a real performance advantage, but given the backlog, that may take a while. Until then, MongoDB just works.

reply
sandblast2
3 hours ago
[-]
I consult for a small company which feeds some of the largest market research companies. This company finds data providers for each country, collects the data monthly, and needs to massage it into a uniform structure before handing it over. I help them script this. I found that importing the monthly spreadsheets into MongoDB and querying the set can replace an awful lot of manual scripting work. That aggregator queries are a good fit for an aggregator company shouldn't be that big of a surprise, I guess.

The MongoDB instance is ephemeral, the database itself is ephemeral; both only exist while the script is running, which can be measured in seconds. The structure changes from month to month. All this plays to the strengths of MongoDB while avoiding the usual problems. For example, one stage of the aggregation pipeline can only use 100MB? A source CSV is a few megabytes at most.
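
The whole run is roughly this shape (paths and field names invented):

    import csv
    from pymongo import MongoClient

    client = MongoClient()                 # instance only lives for the run
    rows = client["scratch"]["rows"]

    with open("provider_2025_12.csv", newline="") as f:
        rows.insert_many(list(csv.DictReader(f)))

    report = list(rows.aggregate([
        {"$group": {"_id": "$country",
                    "units": {"$sum": {"$toInt": "$units"}}}},
    ]))
    client.drop_database("scratch")        # nothing persists past the script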

PS: no, Excel can't do it; I got involved with this when the complexity of doing it in Excel had become unbearable.

reply
cpursley
39 minutes ago
[-]
Postgres has jsonb helper functions for this.
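
e.g. something like this (psycopg; table and column names invented):

    import psycopg

    # ->> extracts a jsonb field as text; ::int casts it for the sum.
    with psycopg.connect("dbname=scratch") as conn:
        cur = conn.execute("""
            SELECT data->>'country'           AS country,
                   sum((data->>'units')::int) AS units
            FROM rows
            GROUP BY 1
        """)
        print(cur.fetchall())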
reply
cyberpunk
6 hours ago
[-]
To be honest, I don't think it was a stand-out 'it's better for X than Y because of Z' kind of choice for us. We are a bank, and so database options are quite limited (for certain applications it's Oracle or Mongo, essentially).

I have one application at the moment which needs to handle about 175k writes/second across AZs. We are not sharding yet, but probably will once scale requires it (we are getting close) -- so it's just one big replica set, and it's behaving really nicely. I tried to emulate this workload on Postgres (which is my favourite database of my entire career so far, many scars) and we couldn't get it to where Mongo was for this workload: multi-AZ is painful, automatic failover is still an unanswered question really, and I've tried all the 'right around the corner' multi-master Postgres options and none of them did anything other than make us sad.

From the developer standpoint it's very nice to use: I just throw documents at it and it saves them. If I want an extra field, I just add it. If I want an index on something, I also just add it. No big complicated schema migrations.
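
In driver terms that's literally the whole "migration" (names made up):

    from pymongo import MongoClient

    events = MongoClient()["app"]["events"]
    events.insert_one({"user": "a", "flagged": True})  # new field? just write it
    events.create_index("flagged")                     # new index? one call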

What especially helps is that we have absolutely incredible support from MongoDB. We have a _weekly_ call with a bunch of their senior engineers, who answer all our stupid questions and proactively look for things to improve.

The ops story is also good. We aren't using Atlas, but the on-prem Kubernetes setup, while a bit clunky, has enough CRDs and whatever to keep devops happy for weeks at a time.

tl;dr -- it's boring and predictable, and I rarely have to think about it, which is all I ever want from a database. I'm sure we could achieve the same results with other database technologies, but the ROI of even investigating them would not be worth it; at best I think we would end up in the same place we are now. People seem to have deeply religious feelings about databases, but I've never really been one of them.

I would not hesitate to use it on a new project.

reply
aschen
5 hours ago
[-]
> From the developer standpoint it's very nice to use: I just throw documents at it and it saves them. If I want an extra field, I just add it. If I want an index on something, I also just add it. No big complicated schema migrations.

This sentence summarizes all the issues developers working with Mongo will have: multiple versions of documents living in the same DB, and unpredictable structure.

The best things MongoDB has are definitely its marketing (making everyone think it's amazing to invest hundreds of millions to deliver an "OK"-tier database) and its customer support.

reply
cyberpunk
4 hours ago
[-]
Eh, not really. I've done both at considerable scale, and I don't hit these problems. Perhaps you need better developers? For sure, having your database enforce guardrails on what $thing should look like means your code can be lower quality, but you should pick the right tool for the job. For scenarios where I have one 'thing' that's not very relational, it works well. If your application dies because your $thing expects some field which isn't there, that's a you problem, not a storage problem.
reply
bethekidyouwant
13 hours ago
[-]
Who has mongo open to the internet?
reply
reassess_blind
2 hours ago
[-]
Many people who use MongoDB Atlas (or other hosted MongoDB services) alongside a PaaS like Heroku that doesn’t offer static IPs or ranges.
reply
Culonavirus
10 hours ago
[-]
Listen, I'm not saying the Venn diagram between people who use Mongo and people who would open it to the internet is a circle, but there is... ahem... a big overlap.
reply
matt3210
13 hours ago
[-]
Ubisoft does
reply
ctxc
10 hours ago
[-]
According to a comment I read elsewhere, it's in the thousands (Shodan result).
reply