--- main_before.go 2025-10-15 09:56:16.467115934 +0200
+++ main.go 2025-10-15 09:52:14.798134654 +0200
@@ -13,8 +13,10 @@
 	slog.Info("starting server on :4000")
+	csrfProt := http.NewCrossOriginProtection()
+
 	// Wrap the mux with the http.NewCrossOriginProtection middleware.
-	err := http.ListenAndServe(":4000", http.NewCrossOriginProtection(mux))
+	err := http.ListenAndServe(":4000", csrfProt.Handler(mux))
 	if err != nil {
 		slog.Error(err.Error())
 		os.Exit(1)
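For context, here is a minimal, self-contained sketch of how the fixed version fits together, assuming Go 1.25+ (the mux, handler, and port are illustrative placeholders rather than the post's actual code):

package main

import (
	"log/slog"
	"net/http"
	"os"
)

func main() {
	// Placeholder route; the real application registers its own handlers.
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})

	slog.Info("starting server on :4000")

	// NewCrossOriginProtection returns an *http.CrossOriginProtection value;
	// its Handler method wraps an existing handler with the CSRF checks.
	csrfProt := http.NewCrossOriginProtection()

	err := http.ListenAndServe(":4000", csrfProt.Handler(mux))
	if err != nil {
		slog.Error(err.Error())
		os.Exit(1)
	}
}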
I've never seen that before. All the other learning sources I have are just abandoned; often something breaks and you have to spend a good amount of time figuring out how to fix it, which can just discourage you from going on.
Kudos to Alex, that is how it should be done.
From https://developer.mozilla.org/en-US/docs/Web/Security/Attack...
In a cross-site request forgery (CSRF) attack, an attacker tricks the user or the browser into making an HTTP request to the target site from a malicious site. The request includes the user's credentials and causes the server to carry out some harmful action, thinking that the user intended it.
(Former CTF hobbyist here) You might be mixing up XSS and CSRF protections. Cookie protections are useful against XSS vulnerabilities because they make it harder for attackers to get hold of user sessions (often mediated through cookies). They don't really help against CSRF attacks, though. Say you visit attacker.com and it contains an auto-submitting form making a POST request to yourwebsite.com/delete-my-account. In that case, your cookies would be sent along, and if no CSRF protection is there (origin checks, tokens, ...) your account might end up deleted. I know it doesn't answer the original question, but hope it's useful information nonetheless!
SameSite=Lax (Chrome's default for cookies that don't set a SameSite attribute) will protect you against POST-based CSRF.
SameSite=Strict will also protect against GET-based CSRF (which shouldn't really exist, since GET is supposed to be a safe method that never triggers state changes, but in practice some applications do it). It does, however, also mean that users clicking a link to your page might not be logged in once they arrive, unless you implement other measures.
In practice, SameSite=Lax is appropriate and just works for most sites. A notable exception is POST-based SAML SSO flows, which might require a SameSite=None cookie just for the login flow.
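As a rough illustration of the cookie attributes being discussed, a session cookie with SameSite could be set like this in Go (the cookie name and the setSessionCookie helper are made up for the example):

package session

import "net/http"

// setSessionCookie is a sketch of issuing a session cookie with SameSite
// restrictions; the cookie name and the sessionID value are placeholders.
func setSessionCookie(w http.ResponseWriter, sessionID string) {
	http.SetCookie(w, &http.Cookie{
		Name:     "session_id",
		Value:    sessionID,
		Path:     "/",
		HttpOnly: true,                 // not readable from JavaScript
		Secure:   true,                 // only sent over HTTPS
		SameSite: http.SameSiteLaxMode, // swap in http.SameSiteStrictMode to also block cross-site GETs
	})
}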
You usually need another method as well
In practice, SameSite=Lax is already very effective at preventing _most_ CSRF attacks. However, I 100% agree with you that adding a second defense mechanism (such as a Sec-Fetch-Site check, a custom "Protect-Me-From-Csrf: true" header, or, if you have a really sensitive use case, cryptographically secure CSRF tokens) is a very good idea.
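A sketch of that custom-header second line of defense, reusing the joke header name from the comment above (the middleware is illustrative, not the article's implementation): browsers won't let a cross-site <form> attach arbitrary headers, so requiring one on unsafe methods blocks plain form-based CSRF, provided your own frontend adds the header to its requests.

package csrfdemo

import "net/http"

// requireCustomHeader rejects unsafe requests that lack a custom header.
// A cross-site <form> cannot set custom headers (and a fetch() that tries
// triggers a CORS preflight), so this blocks plain form-based CSRF as long
// as your own frontend code adds the header.
func requireCustomHeader(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		switch r.Method {
		case http.MethodGet, http.MethodHead, http.MethodOptions:
			// Safe methods pass through unchanged.
		default:
			if r.Header.Get("Protect-Me-From-Csrf") != "true" {
				http.Error(w, "missing CSRF header", http.StatusForbidden)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}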
If you can spoof the origin header of a second party when they navigate to a third party, a CSRF is a complete waste of whatever vulnerability you have found.
The point is that arbitrary users' browsers out in the world won't spoof the Origin header, which is what protects them from CSRF attacks.
It makes me wonder though: most browser APIs top out around 95% coverage on caniuse.com. What are these browsers / who are these people...? The modern web is very capable and can greatly simplify our efforts if we ignore the outliers. I'm inclined to do so, but I'm also open to counterarguments.
The technological counterargument, though, is that you should allow people to tinker and do weird shit. Once upon a time tech wasn't about maximizing stock value; it was about getting a Russian game crack to work and making Lara's boobs bigger. Allowing weird shit is a way to respect the roots of modern tech and to let hobbyists tinker.
I most definitely do not care about tinkerers, and in fact would generally classify them as something akin to attackers. I just want to allow as many people as possible to use the app, while keeping things simple and secure.
What shifts the discussion a bit is that many of the bottom 5% aren't lost customers. If your website doesn't work on my minority browser, smart TV, or PS Vita, I might be willing to just try it in Chrome instead.
It doesn't seem unreasonable to say to those folks "we're evidently not using the same web".
Also, Firefox exists, though they don't seem to care about privacy much anymore either.
And, of course, Safari, which is terrible in most regards.
Being chromium derivatives, they don't really have a say in what's included in “their” browser though.
> Also, Firefox exists
Well, you disregarded it as an "arcane browser" right above.
> From business perspective it makes a lot of sense to just drop that bottom 5%. Actually, many businesses support Chrome only, they don't even support Firefox.
I think I'll probably carry on with not supporting browsers that don't have Sec-Fetch-Site. The alternative, CSRF tokens, actually causes me immense issues (they make caching very difficult, if not impossible).
(and I say all of this as someone who is specifically building something for the poorest folks. I'm extremely aware of and empathetic to their situation).
Secondly, users should upgrade their devices to stay safe online, since vulnerable people are often scammed or tricked into downloading apps that contain malware.
So we should not cater to outdated browsers when they could pose a risk.
Have you tried old-school SSR? Even on modern devices, it's waaaayy faster and much more enjoyable than a JavaScript-bloated "modern" website.
As an example: my late grandfather, at 100 years old, kept records of his stamp collection in Excel. He used the computer for Wikipedia as well, but we didn't upgrade it because he was comfortable with it; upgrading to a later Windows just to run a newer browser would have been too much of a change and would rather have made him stop doing what brought him fun. The router blocked the worst places and frequent backups allowed restores, so the actual risk was low.
Anecdote aside: there are tons of those machines all over.
And then another big one: bots claiming to be something which they aren't.
So I wonder why the author didn't consider falling back to the Referer header, instead of relying on an unrelated feature like TLS 1.3. Checking the referrer on dangerous (POST) requests was indeed considered one way to block CSRF back in the day. Assuming all your pages are on the same https origin, is there an edge case that affects Referer but not the Origin header?
I've read in various places, though, that Referer has all sorts of issues and gotchas, such that it isn't really a reliable way of doing this.
https://security.stackexchange.com/questions/158045/is-check...
A missing Referer header probably doesn't mean much one way or another. But you can at least block requests with Referer pointing to URLs outside of your origin. This fallback would seem preferable to the fail-open policy described in the article (request always allowed if neither Sec-Fetch-Site nor Origin headers are present).
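A hedged sketch of that layered check in Go: prefer Sec-Fetch-Site, fall back to Origin, then to Referer, and only fail open when all three are absent. The checkCrossSite name and the trustedOrigin parameter (e.g. "https://example.com") are assumptions for illustration, not the article's code.

package csrfdemo

import (
	"net/http"
	"net/url"
)

// checkCrossSite reports whether a state-changing request appears to come
// from our own origin: it prefers Sec-Fetch-Site, falls back to Origin, then
// to Referer, and only fails open when all three headers are absent (the
// non-browser-client case the article describes).
func checkCrossSite(r *http.Request, trustedOrigin string) bool {
	switch r.Method {
	case http.MethodGet, http.MethodHead, http.MethodOptions:
		return true // safe methods are not CSRF targets
	}

	if site := r.Header.Get("Sec-Fetch-Site"); site != "" {
		// "none" means a user-initiated navigation (typed URL, bookmark).
		return site == "same-origin" || site == "none"
	}

	if origin := r.Header.Get("Origin"); origin != "" {
		return origin == trustedOrigin
	}

	if referer := r.Header.Get("Referer"); referer != "" {
		u, err := url.Parse(referer)
		if err != nil {
			return false
		}
		return u.Scheme+"://"+u.Host == trustedOrigin
	}

	// No Sec-Fetch-Site, Origin or Referer: likely not a browser, allow.
	return true
}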
Remember when you could trick a colleague into posting on Twitter, Facebook... just by sending them a link?
CSRF fixes are great for security, but they've definitely made some of the internet's harmless mischief more boring.
Rails uses a token-based check, and this article demonstrates a token-less approach.
Rails didn't solve CSRF btw, the technique was invented long before Rails came to life.
Indeed, CSRF tokens are an ancient concept. WordPress, for example, introduced nonces a couple of years before Rails. Though it does appear that Rails might have been the first to introduce CSRF protection in a seemingly automated way.
I believe the new technique is easier to use for SPA architectures because you no longer need to extract the token from a cookie before adding it to request headers.
And having TLS 1.3 should be a requirement: no HTTPS, no session, no auth, no form (or API), no cookie. HSTS, again, should be the default, but with encrypted connections and time-bounded CSRF cookies the threat window is very small.
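For what it's worth, both of those knobs are easy to turn in Go. Here is a sketch that sends an HSTS header on every response and refuses anything below TLS 1.3 (the max-age, address, and timeout values are arbitrary placeholders):

package httpsdemo

import (
	"crypto/tls"
	"net/http"
	"time"
)

// hsts adds a Strict-Transport-Security header to every response so browsers
// remember to use HTTPS; the two-year max-age is a common but arbitrary choice.
func hsts(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Strict-Transport-Security", "max-age=63072000; includeSubDomains")
		next.ServeHTTP(w, r)
	})
}

// newServer builds an HTTPS-only server that refuses anything below TLS 1.3.
// Start it with srv.ListenAndServeTLS("cert.pem", "key.pem").
func newServer(handler http.Handler) *http.Server {
	return &http.Server{
		Addr:              ":443",
		Handler:           hsts(handler),
		ReadHeaderTimeout: 5 * time.Second,
		TLSConfig:         &tls.Config{MinVersion: tls.VersionTLS13},
	}
}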
You might want to read https://words.filippo.io/csrf.
And, the server shouldn't trust the client "trust me bro" style.
So, at the end of the day it doesn't matter whether it is a "rose by another name", i.e. it doesn't matter whether you call it a CSRF token, auth token, JWT, or whatever; it still needs to satisfy the following: a) the communication is secure (preferably encrypted), b) the server can recognise the token when it sees it (headers, of which cookies are one type, etc.), c) the server doesn't need to trust the client (it's easiest if the server creates the token, but it could also be a trusted OOB protocol like TOTP), and d) it identifies a given role (again, it's easiest if it identifies a unique client, like a user or similar).
So a name is just a name, but there needs to be a cookie or a cryptographically secure protocol to ensure that an untrusted client is who it says it is. Cookies are typically easier than crypto secure protocols. Frankly it doesn't really matter what you call it, what matters is that it works and is secure.
You do need a rigid authentication and authorization scheme just as you described. However, this is completely orthogonal to CSRF issues. Some authentication schemes (such as bearer tokens in the authorization header) are not susceptible to CSRF, some are (such as cookies). The reason for that is just how they are implemented in browsers.
I don't mean to be rude, but I urge you to follow the recommendation of the other commenters and read up on what CSRF is and why it is not the same issue as authentication in general.
Clearly knowledgeable people not knowing about the intricacies of (web) security is actually an issue that comes up a lot in my pentesting when I try to explain issues to customers or their developers. While they often know a lot about programming or technology, they frequently don't know enough about (web) security to conceptualize the attack vector, even after we explain it. Web security is a little special because of lots of little details in browser behavior. You truly need to engage your suspension of disbelief sometimes and just accept how things are to navigate that space. And on top of that, things tend to change a lot over the years.
Servers should not blindly trust clients (and that includes headers passed by a browser claiming they came from such and such a server / page / etc); clients must prove they are trustworthy. And if you're smart your system should be set up such that the costs to attack the system are more expensive than compliance.
And yes, I have worked both red team and blue team.
Then, CSRF protection is about preventing a class of attacks directed against a client you have actually decided to trust, where the attacker fools that client into doing bad stuff.
All the things you say about auth: Already done, already checked. CSRF is the next step, protecting against clients you have decided to trust.
You could say that someone makes a CSRF attack that manages to change these headers on an unwitting client, but at that point absolutely all bets are off; you can invent hypothetical attacks against all current CSRF protection mechanisms too, which are all based on data the client sends.
(If HN comments cannot convince you why you are wrong I encourage you to take the thread to ChatGPT or similar as a neutral judge of sorts and ask it why you may be wrong here.)
The OP is documenting another implementation to protect against CSRF, which is unsuitable for many since it fails to protect the ~5% of browsers that don't send these headers, but it is still an interesting look at the road ahead for CSRF, and in some years perhaps everyone will change how this is done.
And you say it isn't OK, but you have not, in my opinion, properly argued why not.
When the browser sends a request to your server, it includes all the cookies for your domain. Even if that request is coming from a <form> or <img> tag on a different website you don't control. A malicious website could create a form element that sends a request to yourdomain.com/api/delete-my-account and the browser would send along the auth cookie for yourdomain.com.
A cookie only proves that the browser is authorized to act on behalf of the user, not that the request came from your website. That's why you need some non-cookie way to prove the request came from your origin. That's what Sec-Fetch-Site is.
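That check is essentially what Go's new middleware does for you. A configuration sketch, assuming the Go 1.25 http.CrossOriginProtection API (the route, the extra trusted origin, and the custom deny handler are illustrative additions, not from the article):

package main

import (
	"log/slog"
	"net/http"
	"os"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/delete-my-account", func(w http.ResponseWriter, r *http.Request) {
		// Destructive action; only same-origin (or trusted-origin) browser
		// requests make it this far.
	})

	csrfProt := http.NewCrossOriginProtection()

	// Trust one additional origin, e.g. a separate frontend domain (placeholder value).
	if err := csrfProt.AddTrustedOrigin("https://app.example.com"); err != nil {
		slog.Error(err.Error())
		os.Exit(1)
	}

	// Customise the response sent to rejected cross-origin requests.
	csrfProt.SetDenyHandler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		http.Error(w, "cross-origin request rejected", http.StatusForbidden)
	}))

	if err := http.ListenAndServe(":4000", csrfProt.Handler(mux)); err != nil {
		slog.Error(err.Error())
		os.Exit(1)
	}
}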
You're conflating the two types of Auth/Defense.
This is probably apocryphal, but when Willie Sutton was asked why he kept robbing banks, he quipped "that's where the money is". Sure, browser hacking occurs, but it's a means to an end, because the server is where the juicy stuff is.
So headers that can't be accessed by JavaScript are way down the food chain and only provide a lesser defence in depth if you have proper CSRF tokens (which you should have anyway to protect the far more valuable server resources, which are the primary target).
But the server security is the primary security, because it's the one with the resources (in the money analogy it's the bank).
So yes, we do want to secure the client, but if the attacker has enough control of your computer to get your cookies then it's already game over. Like I said, you can have time-bounded CSRF tokens (be they cookies or whatever else, URL-encoded, etc., who cares) to prevent replay attacks. But at the end of the day, if an attacker can get your cookies in real time you're done for; it's game over already. If they want to do a man-in-the-middle attack (i.e. get you to click on a fake "proxy" URL) then having the "Secure" flag should be enough. If the server checks the cookie against the client's IP address, time, HMAC, and other auth attributes, that will then prevent the attack. If the attacker takes control of your end device, you've already lost.
I, the article, and most comments here quite explicitly talked about server security via auth and CSRF protections.
None of this has anything to do with browser security, such as stealing CSRF tokens (which tend to be stored as hidden fields on elements in the HTML, not as cookies). MOREOVER, Sec-Fetch-Site obviates the need for CSRF tokens.
"It is important to note that Fetch Metadata headers should be implemented as an additional layer defense in depth concept. This attribute should not replace a [sic] CSRF tokens (or equivalent framework protections)." -- OWASP; https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Re...
What are you referring to when you talk about keeping the browser secure?
Perfect. It's not even meant or needed to be. The server uses it to validate the request came from the expected site.
As I and others have said in various comments, you seem to be lost. Nothing you're saying has any relevance to the topic at hand, and in fact much of it is wrong.
Prove it.
> If neither the Sec-Fetch-Site nor Origin headers are present, then it assumes the request is not coming from web browser and will always allow the request to proceed.
If you can't afford to do this you still need to use CSRF tokens.
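For completeness, a minimal sketch of that token fallback using the double-submit-cookie pattern; the cookie name, header name, and helper functions are made up for illustration, and real frameworks usually handle this for you:

package csrfdemo

import (
	"crypto/rand"
	"crypto/subtle"
	"encoding/base64"
	"net/http"
)

// issueCSRFToken generates a random token and stores it in a cookie; the
// frontend must echo it back in the X-CSRF-Token header (or a form field).
func issueCSRFToken(w http.ResponseWriter) (string, error) {
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	token := base64.RawURLEncoding.EncodeToString(buf)
	http.SetCookie(w, &http.Cookie{
		Name:     "csrf_token",
		Value:    token,
		Path:     "/",
		Secure:   true,
		SameSite: http.SameSiteLaxMode,
	})
	return token, nil
}

// checkCSRFToken compares the cookie value with the header value in constant
// time. A cross-site form can neither read the cookie nor set the header, so
// it cannot produce a matching pair.
func checkCSRFToken(r *http.Request) bool {
	c, err := r.Cookie("csrf_token")
	if err != nil || c.Value == "" {
		return false
	}
	sent := r.Header.Get("X-CSRF-Token")
	return subtle.ConstantTimeCompare([]byte(c.Value), []byte(sent)) == 1
}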