As someone who was involved in these changes to the OWASP cheat sheet, I'm glad to see this getting implemented in the wild, and grateful for the nice blog post about it all.
However, one point of clarification:
> Several participants in that discussion have suggested that this method should be upgraded to a complete alternative to the standard token-based approaches. The OWASP maintainer was initially skeptical, but towards the end of the thread they appear to be warming up to the idea and in search of opinions from other leading security experts. So it is quite possible that this method will become mainstream in the near future.
The maintainer didn't just warm up to the idea - they came to accept it, otherwise the changes wouldn't have ever landed. So, the quoted section is somewhat unintentionally calling the maintainer's integrity into question.
Though, I just noticed that the cheat sheet text has changed significantly from what we settled upon. Fetch Metadata has been relegated again to defense-in-depth status. Hopefully this was just a mistake.
When I said "the maintainer is warming up to the idea" I meant to the idea of upgrading Fetch Metadata from the current status of defense-in-depth to a full solution that can replace the token-based approaches.
It is pretty clear to me that the maintainer is cautious and is seeking other expert opinions before accepting the proposed upgrade to a full solution. This, to me, shows integrity, not the lack of it. I apologize if my choice of words can somehow be interpreted any other way!
Again, the maintainer eventually came around.
Our confusion might be due to the fact that an erroneous PR (seemingly by an AI-wielding student...) was somehow recently accepted that completely reverted the changes we collectively worked on (the ones that upgraded Fetch Metadata to a full solution). So it is back to showing as defense in depth. I've raised an issue about it, which wouldn't have happened if I hadn't seen your article!
Here's the previous language:
> If your software targets only modern browsers, you may rely on [Fetch Metadata headers](#fetch-metadata-headers) together with the fallback options described below to block cross-site state-changing requests
We then detailed some fallbacks (e.g. the Origin header). The full text can be viewed in the original PR:
https://github.com/OWASP/CheatSheetSeries/pull/1875
or
https://github.com/OWASP/CheatSheetSeries/blob/7fc3e6b8fde65...
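For anyone who doesn't want to open the PR, the shape of the check it describes can be sketched roughly as follows. This is a hypothetical illustration, not the cheat sheet's actual text; the function name and parameters are made up:

```python
# Hypothetical sketch of a Fetch Metadata check with an Origin-header
# fallback, along the lines described in the PR. Not the actual cheat
# sheet code.

def is_csrf_safe(headers, allowed_origin):
    """Return True if a state-changing request should be allowed."""
    site = headers.get("Sec-Fetch-Site")
    if site is not None:
        # Modern browsers send this header; same-origin and
        # user/browser-initiated ("none") requests are safe.
        return site in ("same-origin", "none")
    # Fallback for clients that don't send Fetch Metadata:
    # compare the Origin header against the expected origin.
    origin = headers.get("Origin")
    if origin is not None:
        return origin == allowed_origin
    # Neither header present: reject by default.
    return False
```

The key property is that the Fetch Metadata check comes first and the fallbacks only apply when the header is absent entirely.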
If after reading that you still think that Fetch Metadata is not a viable full solution, I'd be curious to know why: the goal of that PR (and the preceding discussion that I instigated) was to upgrade it from defense in depth to a full solution (even if slightly less full than tokens, due to the possible need for some fallbacks).
Okay, now I understand where you are coming from.
Confession: I did not read the PR. I assumed that what is currently published in the cheat sheet is the same as the PR, and that assumption guided my analysis.
I will update my article to be in agreement with reality, now that I understand it. Thanks!
That should have been a fair assumption! I hope we can get this sorted out soon.
It's good that folks working on browsers are working on making this easier, but I don't think you can really rely on this for GET requests.
It's often easier to smuggle a same-origin request than to steal a CSRF token, so you're widening the set of things you're vulnerable to by hoping that this can protect state-mutating GETs.
The bugs mentioned in the GitHub issue are some of the sorts of issues that will hit you, but also common things like open redirects turn into a real problem.
Not that state-mutating GETs are a common pattern, but they are encoded as a test case in the blog post's web framework.
Hi, blog post author here. With regard to state-changing GET requests, I do not recommend their use and I agree that they create some problems for CSRF protection, but you are correct that I did include tests that verify that they can be enabled in my Microdot web framework.
Please correct me if I have missed anything, but I have designed this feature in my framework so that the default action when evaluating CSRF-related headers is to block; I then check all the conditions that warrant access. The idea is that under any unexpected conditions I'm not currently considering, the request is going to be blocked, which ensures security isn't put at risk.
I expect there are some situations in which state-changing GET requests will be blocked when they should be allowed. I don't think the reverse situation is possible, though, which is what I intended with my security-first design. I can always revisit the logic and add more conditions around state-changing GET requests if I have to, but as you say, these are uncommon, so maybe this is fine as it is.
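To make the default-deny idea concrete, here is a minimal sketch of that structure. The names are invented and this is not the actual Microdot code; it only shows the "enumerate the allow conditions, block everything else" shape:

```python
# Hypothetical illustration of a default-deny CSRF check.
# Names are invented; this is not the actual Microdot implementation.

SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def allow_request(method, headers, protect_get=False):
    """Enumerate the conditions that warrant access; any
    unanticipated combination falls through to a block."""
    site = headers.get("Sec-Fetch-Site")
    if site in ("same-origin", "none"):
        return True   # clearly not a cross-site request
    if method in SAFE_METHODS and not protect_get:
        return True   # assumed non-state-changing request
    return False      # everything else: block by default
```

Any condition not explicitly listed (a missing header, an unrecognized value, a new method) lands on the final `return False`, which is the property described above: mistakes fail closed rather than open.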
I was involved in the effort to add/upgrade Fetch Metadata in the OWASP cheat sheet. We had discussed GET requests, so if you find the guidance lacking about it, please let us know how.
Likewise, if you could elaborate on the open redirects issue, that would be great.
I haven't actually dug into it, but I would assume that open redirects would strip a Sec-Fetch-Site: cross-site header and replace it with none or same-site or something. So would things like allowing users to specify image URLs, etc. And if you rely on Sec-Fetch-Site for security on GETs, these turn into actual vulnerabilities.
I think these sorts of minor web app issues are common enough that state-changing GETs should be explicitly discouraged if you are relying on Sec-Fetch-Site.
https://www.w3.org/TR/fetch-metadata/#redirects
Well, that's a good decision. I doubt it covers client-side redirects, but it's still good for like 95% of cases.
People do still allow 3rd party images/links on websites. Much less common in typical software, but it does happen.
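The spec's redirect handling can be modeled roughly like this: the reported value reflects the least-trusted URL anywhere in the redirect chain, so bouncing a cross-site request through a same-origin open redirect does not launder it back to "same-origin". This is a simplified sketch of my own (it compares scheme and host, and fakes the registrable-domain check by taking the last two host labels), not the algorithm as written in the spec:

```python
# Simplified model of Sec-Fetch-Site computation over a redirect chain.
# Real browsers use the full origin and registrable-domain (eTLD+1)
# machinery; this sketch approximates both.
from urllib.parse import urlsplit

def sec_fetch_site(initiator_origin, url_chain):
    def origin(url):
        parts = urlsplit(url)
        return (parts.scheme, parts.hostname)

    def site(host):
        # Crude eTLD+1 stand-in: last two host labels.
        return ".".join(host.split(".")[-2:])

    init = origin(initiator_origin)
    # Same-origin only if every URL in the chain matches the initiator.
    if all(origin(u) == init for u in url_chain):
        return "same-origin"
    # Same-site only if every URL shares the registrable domain.
    if all(site(origin(u)[1]) == site(init[1]) for u in url_chain):
        return "same-site"
    return "cross-site"
```

So a request initiated by attacker.example that bounces through victim.test's open redirect still reports cross-site, because the initiator is part of the computation, not just the final URL.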
Why wouldn't it cover client-side redirects?
Why would it? Someone has to go and write the code to do it and the spec doesn't look like it covers them.
Playing with window.location and meta redirects in JSFiddle, I find they both seem to lose cross-site context when I link to them.
Can you share an example?