Why the Sanitizer API is just `setHTML()`
129 points | 2 days ago | 12 comments | frederikbraun.de
brainbag
1 day ago
[-]
With context, this article is more interesting than the title might imply.

> The Sanitizer API is a proposed new browser API to bring a safe and easy-to-use capability to sanitize HTML into the web platform [and] is currently being incubated in the Sanitizer API WICG, with the goal of bringing this to the WHATWG.

This would replace the need for libraries like DOMPurify to sanitize user-entered content, by building the capability into the browser's API.

The proposed specification has additional information: https://github.com/WICG/sanitizer-api/

reply
crote
1 day ago
[-]
Yeah, I was expecting something closer to "because that's what people Google for".

A big part of designing a security-related API is making it really easy and obvious to do the secure thing, and hide the insecure stuff behind a giant "here be dragons" sign. You want people to accidentally do the right thing, so you call your secure and insecure functions "setHTML" and "setUnsafeHTML" instead of "setSanitizedHTML" and "setHTML".
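
In use, that split might look something like this (setHTML is the proposed API; setUnsafeHTML and the variable names here are just illustrative):

    const box = document.getElementById('comment');  // illustrative element
    box.setHTML(userSuppliedMarkup);        // sanitized by default, the obvious choice
    box.setUnsafeHTML(trustedTemplate);     // deliberately scary name for the raw path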

reply
guessmyname
23 hours ago
[-]
100%… it’s like Rust’s “unsafe” keyword, or the reqwest crate naming methods danger_accept_invalid_certs(true) and danger_accept_invalid_hostnames(true) → https://docs.rs/reqwest/latest/reqwest/struct.ClientBuilder....
reply
cess11
23 hours ago
[-]
get_magic_quotes_gpc() and mysql_real_escape_string() had quite a bit to teach in this area.
reply
some_furry
19 hours ago
[-]
Both of those functions were deprecated years ago.

mysql_real_escape_string() was removed in PHP 7.0.

get_magic_quotes_gpc() was removed in PHP 8.0.

https://www.php.net/mysql_real_escape_string

https://www.php.net/get_magic_quotes_gpc

The current minimum PHP version that is supported for security fixes by the PHP community is 8.1: https://www.php.net/supported-versions.php

If you're still seeing this in 2025 (going on 2026), there are other systemic problems at play besides the PHP code.

reply
garaetjjte
44 minutes ago
[-]
mysql_real_escape_string is only deprecated because there is mysqli_real_escape_string. I always wondered why it's "real"... like, is there a "fake" version of it?
reply
tacone
5 hours ago
[-]
Decades ago.
reply
cess11
9 hours ago
[-]
Hence why I chose "had" for my previous comment.
reply
mubou2
1 day ago
[-]
The author really needs to start with that. They say "the API that we are building" and assume I know who they are and what they're working on, all the way until the very bottom. I just assumed it's some open source library.

> HTML parsing is not stable and a line of HTML being parsed and serialized and parsed again may turn into something rather different

Are there any examples where the first approach (sanitize to string and set inner html) is actually dangerous? Because it's pretty much the only thing you can do when sanitizing server-side, which we do a lot.

Edit: I also wonder how one would add, for example, rel="nofollow noreferrer" to links using this. Some sanitizers have a "post process node" visitor function for this purpose (it already has to traverse the DOM tree anyway).
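
(With DOMPurify, that visitor step looks roughly like the sketch below; addHook and afterSanitizeAttributes are DOMPurify's names, the rest is made up.)

    DOMPurify.addHook('afterSanitizeAttributes', (node) => {
      // Runs for every node once its attributes have been sanitized.
      if (node.tagName === 'A') {
        node.setAttribute('rel', 'nofollow noreferrer');
      }
    });
    const clean = DOMPurify.sanitize('<a href="https://example.com">hi</a>');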

reply
crote
1 day ago
[-]
> Are there any examples where the first approach (sanitize to string and set inner html) is actually dangerous?

The article links to [0], which has some examples of instances in which HTML parsing is context-sensitive. The exact same string being put into a <div> might be totally fine, while putting it inside a <style> results in XSS.

[0]: https://www.sonarsource.com/blog/mxss-the-vulnerability-hidi...
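
A snippet in the spirit of those examples (illustrative, not taken verbatim from the article):

    const payload = '<a title="</style><img src=x onerror=alert(1)>">';

    // In a <div>, the payload is just an <a> with a scary-looking title attribute: inert.
    document.querySelector('div').innerHTML = payload;

    // In a <style>, the contents are parsed as raw text, so the "</style>" hidden in the
    // attribute value ends the element early and the <img onerror=...> becomes real,
    // executing markup.
    document.querySelector('style').innerHTML = payload;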

reply
tobr
1 day ago
[-]
> They say "the API that we are building" and assume I know who they are and what they're working on, all the way until the very bottom.

This is a common and rather tiresome critique of all kinds of blog posts. I think it is fair to assume the reader has a bit of contextual awareness when you publish on your personal blog. Yes, you were linked to it from a place without that context, but it’s readily available on the page, not a secret.

reply
mubou2
1 day ago
[-]
Well that's... certainly a take. But I have to disagree. Most traffic coming to blog posts is not from people who know you and are personally following your posts; it's from people who clicked a link to the article that someone shared, or who found it while googling something.

It's not hard to add one line of context so readers aren't lost. Here, take this for example, combining a couple parts of the GitHub readme:

> For those who are unfamiliar, the Sanitizer API is a proposed new browser API being incubated in the Sanitizer API WICG, with the goal of bringing this to the WHATWG.

Easy. Can fit that in right after "this blog post will explain why", and now everyone is on the same page.

reply
swiftcoder
1 day ago
[-]
> Most traffic coming to blog posts is not from people who know you and are personally following your posts

Do we have data to back that up? Anecdotally the blogs I have operated over the years tend to mostly sustain on repeat traffic from followers (with occasional bursts of external traffic if something trends on social media)

reply
rerdavies
16 hours ago
[-]
Your data sounds a bit anecdotal. :-P

Here's my anecdotal data. Number of blogs that I personally follow: zero. And yet, somehow, I end up reading a lot of blog posts (mostly linked from HN, but also from other places in my webosphere).

(More than a bit irritated by the "Do you have data to back that up" thing, given that you don't really have data to back up your position).

reply
swiftcoder
10 hours ago
[-]
> (More than a bit irritated by the "Do you have data to back that up" thing, given that you don't really have data to back up your position).

It wasn't necessarily a request for you personally to provide data. I'm curious if any larger blog operators have insight here.

"person who only reads the 0.001% of blog posts that reach the HN front page" is not terribly interesting as an anecdotal source on blog traffic patterns

reply
tobr
1 day ago
[-]
> It's not hard

It’s also not hard to look around for a few seconds to find that information, is my point.

reply
rerdavies
16 hours ago
[-]
What's hard in this case is that you end up making it 80% of the way through the article before you start to wonder what the heck this guy is talking about. So you have to click away to another page to figure out who the heck this guy is, then start again at the top of the article, reading it with that context in mind.

One word would have fixed the problem. "Why does the Mozilla API blah blah blah.". Perhaps "The Mozilla implementation used to...". Something like that.

THAT is not hard.

reply
LegionMammal978
1 day ago
[-]
They had a link in their post [0]: it seems like most of the examples involve HTML elements with wacky contextual parsing semantics, such as <svg> or <noscript>. Their recommendation for server-side sanitization is "don't, lol", and they don't offer much advice regarding it.

Personally, my recommendation in most cases would be "maintain a strict list of common elements/attributes to allow in the serialized form, and don't put anything weird in that list: if a serialize-parse roundtrip has the remote possibility of breaking something, then you're allowing too much". Also, "if you want to mutate something, then do it in the object tree, not in the serialized version".

[0] https://www.sonarsource.com/blog/mxss-the-vulnerability-hidi...
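
Concretely, that kind of strictness with DOMPurify might look like the following (an illustrative configuration, not a recommendation of these exact tags):

    const clean = DOMPurify.sanitize(untrustedInput, {
      ALLOWED_TAGS: ['p', 'b', 'i', 'em', 'strong', 'a', 'ul', 'ol', 'li', 'code', 'pre'],
      ALLOWED_ATTR: ['href'],
      ALLOW_DATA_ATTR: false,
    });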

reply
tlb
1 day ago
[-]
setHTML needs to support just about every element if it's going to be the standard way of rendering dynamic content. Certainly <svg> has to work or the API isn't useful.

SanitizeHTML functions in JS have had big security holes before, around edge cases like null bytes in values, or what counts as a space in Unicode. Browsers decided to be lenient in what they accept, so that means any serialize-parse chain creates some risk.

reply
LegionMammal978
23 hours ago
[-]
If you're rendering dynamic HTML, then either the source is authorized to insert arbitrary dynamic content onto the domain, or it isn't. And if it isn't, then you'll always have a hard time unless you're as strict as possible with your sanitization, given how many nonlocal effects can be embedded into an HTML snippet.

The more you allow, the less you know about what might happen. E.g., <svg> styling can very easily create clickjacking attacks. (If I wanted to allow SVGs at all, I'd consider shunting them into <img> tags with data URLs.) So anyone who does want to use these more 'advanced' features in the first place had better know what they're doing.
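
That shunting could be as simple as something like this (a sketch; assumes svgMarkup has already passed whatever checks you do apply, and container is wherever it should end up):

    // SVG rendered through <img> cannot run scripts or load external resources.
    const img = document.createElement('img');
    img.src = 'data:image/svg+xml,' + encodeURIComponent(svgMarkup);
    container.appendChild(img);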

reply
bffjjfjf
20 hours ago
[-]
That overly reductive thinking goes back to the 80s, before we had learned any lessons. There are degrees of trust. Binary thinking invites dramatic all-or-nothing failures.
reply
LegionMammal978
3 hours ago
[-]
And my point is that with HTML, there's always an extremely fine line between allowing "almost nothing" and "almost all of it" when it comes to sanitization. I'd love to live in a world where there are natural delineations of features that can safely be flipped on or off depending on how much control you want to give the source over the content, but in practice, there are dozens of HTML/CSS features (including everything in the linked article) that do wacky stuff that can cross over the lines.
reply
mubou2
1 day ago
[-]
Ah, I see what they're talking about. That's a good article; my brain totally skipped over that link. Thanks.
reply
rebane2001
11 hours ago
[-]
> Because it's pretty much the only thing you can do when sanitizing server-side

I'd suggest not sanitizing user-provided HTML on the server. It's totally fine to do if you're fully sanitizing it, but gets a little sketchy when you want to keep certain elements and attributes.

reply
masklinn
1 day ago
[-]
> Are there any examples where the first approach (sanitize to string and set inner html) is actually dangerous?

The term to look for is “mutation xss” (or mxss).

reply
bikeshaving
1 day ago
[-]
This is interesting. The argument which I’m gleaning from the essay is that the old proposed API of having an intermediary new Sanitizer() class with a sanitize(input) method which returns a string is actually insecure because of mutated XSS (MXSS) bugs.

The theory is that the parse->serialize->parse round-trip is not idempotent and that sanitization is element context-dependent, so having a pure string->string function opens a new class of vulnerabilities. Having a stateful setHTML() function defined on elements means the HTML context-specific rules for tables, SVG, MathML etc. are baked in, and eliminates double-parsing errors.
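
A quick illustration of the non-idempotence (hedged: this is what I'd expect from HTML's foster-parenting rules):

    const div = document.createElement('div');
    div.innerHTML = '<table><a>link</a></table>';
    // The <a> isn't allowed at that point inside a <table>, so the parser
    // foster-parents it out in front of the table:
    console.log(div.innerHTML);  // expected: "<a>link</a><table></table>"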

Are MXSS errors actually that common?

reply
jamesbvaughan
1 day ago
[-]
Aside from the article's content, I really like the inline exercise for the reader with the hidden/expandable answer section. It's fun, and it successfully got me to read the preceding section more closely than I would have otherwise.
reply
QuercusMax
13 hours ago
[-]
Same; it was a very easy question once you actually read the previous section instead of just skimming over it. :D
reply
cobbal
1 day ago
[-]
Makes sense. I think this is a variant of the "parse, don't validate" motto, but is more "parse, don't parse-serialize-parse" in the implementation.
reply
socketcluster
21 hours ago
[-]
This is a good API. I hope it gains adoption in at least one browser, that way other browsers which don't adopt it will be called 'insecure'... Which would be warranted IMO... People have been wanting the ability to inject safe HTML for almost as long as JavaScript existed.

Seriously, we got CSP before setHTML() WTF!

CSP is nasty. Removing essential functionality to mitigate possible security flaws, ignoring the developer's intent. CSP is like taping your mouth shut to lose weight... But you still sit through 3 meals a day... Basically smashing the food against your face.

reply
embedding-shape
18 hours ago
[-]
> CSP is nasty

Despite the very graphical description, I still don't understand why you don't like CSP. As the server owner, you set your own CSP rules, and if you don't want anything removed, don't configure it like that? It's all opt-in.

Obviously it doesn't fix all classes of potential security issues, but nothing else would either; it's just one piece of the puzzle.
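
For example, a site that only wants to restrict where scripts load from can ship just that one rule and nothing else (illustrative header):

    Content-Security-Policy: script-src 'self' https://cdn.example.com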

reply
cxr
22 hours ago
[-]
> This is pretty similar to the Sanitizer that I wanted to build into the browser: […] But that is NOT the Sanitizer we ended up with.¶ And the reason is essentially Mutated XSS (mXSS). To quickly recap, the idea behind mXSS is[…]

No, the reason is that the problem is underspecified and unsatisfiable.

The whole notion of HTML "sanitization" is the ultimate "just do what I mean". It's the customer who cannot articulate what they need. It's «Hey, how about if there were some sort of `import "nobugs"`?»

"HTML sanitization" is never going to be solved because it's not solvable.

There's no getting around knowing whether any arbitrary string is legitimate markup from a trusted source or some untrusted input that needs to be treated like text. This is a hard requirement. (And if you already have this information, then the necessary tools have been available for years—decades, even: `innerHTML` and `textContent`—or if you don't like the latter, then it's trivial to write your own `escapeText` subroutine that's correct, well-formed, and sound.) No new DOMPurify alternative or native API baked into the browser is going to change this, ever.
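
For what it's worth, such a subroutine really is only a handful of lines (a minimal sketch):

    function escapeText(s) {
      return String(s)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#39;');
    }
    // element.innerHTML = '<b>' + escapeText(untrustedText) + '</b>';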

reply
dec0dedab0de
20 hours ago
[-]
So is the use case for this that you save untrusted HTML from your user in your database, then send that untrusted HTML to your users, but on the front end parse it down to just the safe bits?

I think maybe a better api would be to add an unsafe html tag so it would look something like:

    <unsafe>
    all unsafe code here
    </unsafe>
Then if browsers do indeed support it, it would work even without JavaScript.

But in any case, you really should be validating everything server side.

reply
isaachinman
15 hours ago
[-]
So... An iframe?
reply
twodave
15 hours ago
[-]
Interesting, though not really a replacement for server-side sanitization. But as another layer of defense? Sure. I could see it being useful, especially in RTEs.
reply
jagged-chisel
19 hours ago
[-]
I think this API makes more sense from another standpoint as well.

You don’t want developers trying to rely on client-only sanitization for user input submitted to the server. Sanitizing while setting user-facing UI content makes sense.

reply
IshKebab
22 hours ago
[-]
> Traverse the HTML fragment and remove elements as configured.

Well this is clearly wrong isn't it? You need a whitelist of elements, not a blacklist. That lesson is at least 2 decades old.

reply
jkrems
21 hours ago
[-]
I mean... "as configured" can mean either an allowlist OR a denylist. That sentence doesn't really prescribe doing it one way or the other...? You have to parse the denylisted elements because they will affect the rest of the parse, so you _have_ to remove them afterwards in the general case.
reply
IshKebab
21 hours ago
[-]
Looks like it supports both actually: https://wicg.github.io/sanitizer-api/#sanitization

That's better than only supporting `removeElements`, but it really shouldn't support it at all.
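
If I'm reading that section right, the two styles would look roughly like this (a sketch; only removeElements is named above, the rest of the shape is my assumption):

    // Allowlist style: keep only these elements.
    el.setHTML(userInput, { sanitizer: { elements: ['p', 'a', 'ul', 'li'] } });

    // Denylist style, the part that arguably shouldn't exist: strip only these.
    el.setHTML(userInput, { sanitizer: { removeElements: ['script', 'style'] } });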

reply
philipwhiuk
1 day ago
[-]
The downside of a new method is that it leaves innerHTML as a source of future security issues.
reply
crote
1 day ago
[-]
Yes, but you can also easily lint on it: all uses of `context.innerHTML` are now suspect and should get a suggestion to use `context.setHTML` instead.

With `const clean = DOMPurify.sanitize(input); context.innerHTML = clean;` your linter suddenly needs to do complex code analysis and keep track of whether each variable passed to `context.innerHTML` is clean or tainted.

reply
wbobeirne
1 day ago
[-]
I feel like calling this a downside implies there's an alternative, but there's no way that `innerHTML`'s behavior could be changed. There are a lot of valid reasons for arbitrary HTML to be set, and changing that would break so many things.
reply
cortesoft
23 hours ago
[-]
There could be a better name for it, like `innerSanitizedHTML` or something, that makes it clear what the difference between the two calls is. There is nothing in the wording of setHTML that makes it clear it sanitizes where innerHTML doesn't.
reply
rictic
2 hours ago
[-]
You can disable it for your site using a trusted types content security policy.
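
i.e. a header along the lines of:

    Content-Security-Policy: require-trusted-types-for 'script'

With that in place, assigning a plain string to innerHTML (and similar sinks) throws unless the value comes from a Trusted Types policy.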
reply
cluckindan
1 day ago
[-]
Yes, one could simply make a setter for innerHTML which calls setHTML(). No code changes needed.
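
Roughly (an untested sketch, assuming setHTML() is available):

    const orig = Object.getOwnPropertyDescriptor(Element.prototype, 'innerHTML');
    Object.defineProperty(Element.prototype, 'innerHTML', {
      get() { return orig.get.call(this); },
      set(html) { this.setHTML(html); },  // route every assignment through the sanitizer
      configurable: true,
    });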
reply
masklinn
1 day ago
[-]
That breaks existing usages of innerHTML, which may legitimately need its more dangerous features.
reply
cxr
22 hours ago
[-]
It seems obvious enough that parent is talking about changing the behavior of innerHTML within their own application, not for browser makers to change the implementation. It's unfair to take the most uncharitable interpretation and upbraid the other commenter for being insufficiently defensive[1] when they pressed "reply".

1. <https://pchiusano.github.io/2014-10-11/defensive-writing.htm...>

reply
masklinn
21 hours ago
[-]
> It seems obvious

Doesn't seem obvious unless you're Dutch.

Especially as the first thing I would think obvious is: if breaking the behaviour of innerHTML is not a concern for your software, why keep it at all? Delete the property or make it readonly.

reply
cxr
20 hours ago
[-]
> Doesn't seem obvious unless you're Dutch.

I don't know what that means.

> if breaking the behaviour of innerHTML is not a concern for your software why keep it at all?

For the reason that they said.

reply
cluckindan
5 hours ago
[-]
Maybe I should have included the word "monkeypatch" in the comment.
reply
nayuki
1 day ago
[-]
> HTML parsing is not stable and a line of HTML being parsed and serialized and parsed again may turn into something rather different

This is why people should really use XHTML, the strict XML dialect of HTML, in order to avoid these nasty parsing surprises. It has the predictable behavior that you want.

In XHTML, the code does exactly what it says it does. If you write <table><a></a></table> like the example on the mXSS page, then you get a table element and an anchor child. As another example, if you write <table><td>xyz</td></table>, that's exactly what you get, and there are no implicit <tbody> or <tr> inserted inside.

It's just wild to keep watching the world double down, decade after decade, on HTML and all its weird parsing behavior. Furthermore, HTML's syntax is a unique snowflake, whereas XML is a standardized language that just so happens to be used in SVG, MathML, Atom, and other standards - no need to relearn syntax every single time.
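
The difference is easy to see with DOMParser (a sketch; exact serialization may vary by browser):

    const asHtml = new DOMParser()
      .parseFromString('<table><a></a></table>', 'text/html');
    // HTML parsing foster-parents the <a> out in front of the table.

    const asXhtml = new DOMParser()
      .parseFromString('<table xmlns="http://www.w3.org/1999/xhtml"><a/></table>',
                       'application/xhtml+xml');
    // XML parsing keeps the <a> exactly where it was written, inside the <table>.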

reply
bayesnet
1 day ago
[-]
I don’t think this is right. XHTML guarantees well-formedness (matched closing tags et al) but doesn’t do anything for validity. It’s not semantically valid for <td> to be a direct child of <table>, so the user agent has to make the call as to what to display regardless of the (X)HTML flavor. The alternative is parsing failure on improperly nested HTML which I don’t think is desirable.
reply
intrasight
1 day ago
[-]
> The alternative is parsing failure on improperly nested HTML which I don’t think is desirable.

It was that decision that resulted in the current mess. Browser vendors could have given us a grace period to fix HTML that didn't validate against the schema. Instead they said "there is no schema"

reply
bayesnet
23 hours ago
[-]
The issue as I see it is that XML schemas are fine[0] for immutable documents but not suited for dynamic content. As a user it would be extraordinarily frustrating for a site or web app to break midflow because of a schema validation failure after a setHTML call or something.

[0]: I’ve worked with XML schemas a lot and have grown to really dislike them actually but that’s neither here nor there

reply
favorited
20 hours ago
[-]
You might as well complain about Betamax. XHTML is not the future.
reply
recursive
20 hours ago
[-]
HTML is also a standardized language.
reply