cross-posted to:
- lemmy@lemmy.ml
What is XSS?
Cross-site scripting (XSS) is an exploit where the attacker attaches code to a legitimate website that executes when the victim loads that website. The malicious code can be inserted in several ways; most commonly, it is either appended to a URL or posted directly onto a page that displays user-generated content. In more technical terms, cross-site scripting is a client-side code injection attack. (Source: https://www.cloudflare.com/learning/security/threats/cross-site-scripting/)
Impact
One-click Lemmy account compromise by social engineering users into clicking your post's URL.
Reproduction
Lemmy does not properly sanitize URIs on posts, leading to cross-site scripting. You can see this in action by clicking the "link" attached to this post on the web client.
To recreate, simply create a new post with the URL field set to: `javascript:alert(1)//`
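Roughly why this fires: if the web client drops the stored URL straight into a link's href, the browser evaluates the javascript: URI in the victim's session when the link is clicked. A minimal sketch of that pattern (hypothetical code, not Lemmy's actual rendering path):

```typescript
// Hypothetical sketch of rendering a stored post link without any filtering.
const post = { title: "Click me", url: "javascript:alert(1)//" };

const link = document.createElement("a");
link.href = post.url;          // the javascript: URI is passed through untouched
link.textContent = post.title;
document.body.appendChild(link);
// Clicking the link runs the javascript: URI on the Lemmy origin,
// i.e. in the context of the victim's logged-in session.
```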
Patching
Adding filtering to block `javascript:` and `data:` URIs seems like the easiest approach.
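A sketch of that blocklist idea (illustrative only, not Lemmy's actual code), using the URL parser so scheme tricks like mixed case don't slip past a plain string check:

```typescript
// Hypothetical filter: reject post URLs whose scheme is javascript: or data:.
const BLOCKED_SCHEMES = new Set(["javascript:", "data:"]);

function isAllowedPostUrl(raw: string): boolean {
  let parsed: URL;
  try {
    // The URL parser normalises the scheme (e.g. "JaVaScRiPt:" -> "javascript:"),
    // which naive string matching would miss.
    parsed = new URL(raw);
  } catch {
    return false; // not a parseable absolute URL
  }
  return !BLOCKED_SCHEMES.has(parsed.protocol);
}

console.log(isAllowedPostUrl("https://example.com/article")); // true
console.log(isAllowedPostUrl("javascript:alert(1)//"));       // false
```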
Have you raised an issue on GitHub for this? It's the best way to inform the devs.
deleted by creator
deleted by creator
OP doesn't seem interested in that. They state they "sent a vulnerability a week ago" and didn't hear back, so they are being completely irresponsible and posting about it publicly on a community instead.
OP is just quoting me there I think. If they aren’t quoting me then they did try to contact the developer…
Typical responsible disclosure timelines are measured in months, not "nearly a week". OP is being irresponsible at best by posting this before giving the developers time to see, and act on, it.
I mean, a dialogue over months, maybe. But over a week of hearing nothing, not even an acknowledgement that they got your email and are looking into it, is pretty bad on the part of the Lemmy devs IMO. The "responsibility" part of responsible disclosure goes both ways. Also, this is incredibly low-effort to find. This isn't even XSS really; it's just a complete lack of link filtering.
> The "responsibility" part of responsible disclosure goes both ways
It absolutely does, but it also means following up, not "they didn't reply in a week, so instead of trying other ways to contact them, I'm just going to post about it". They didn't even try to open an issue because they "don't use GitHub", all while coming here talking about how bad the vulnerability is.
It’s poor (lack of) judgement on OP’s part.
It’s not great from either side here, really. Precise guidelines for responsible disclosure vary, but none would ever say “go public after trying to contact the developers once and not hearing back for a week”.
ZDI's policy says that after 5 days they attempt contact again. After another 5 days, they'll try any intermediaries or other ways of contacting they can think of, up until 15 days after the initial contact. If at any point before that the developers acknowledge the problem, ZDI gives them up to 120 days (from the date of acknowledgement) to resolve it. They imply (without a specific policy laid out) that more time will be given beyond that if it's reasonably needed.
All of OP's comments have been deleted, so I don't know what they tried exactly, but it certainly seems like they didn't try hard enough. It also seems like Lemmy's devs may not have been responsive enough. Ideally, they would have a `/.well-known/security.txt` file with an email address that is actively monitored explicitly for security disclosures. Failing that, whatever public contact method OP did use, assuming it was one that could reasonably be expected to be monitored, should have been monitored.

I actually don't think that GitHub is an appropriate place to be doing security vulnerability disclosures. GitHub might not be user-facing, but it's still public. You can maybe put something there as a way to flag "hey, be on the lookout for a real disclosure", but any actual details should not be on GitHub. In this case, if the email didn't work, OP should have posted something on GitHub saying "hey, major security flaw, please check your email for details". If OP really doesn't want to use GitHub, they should have asked someone else to do that on their behalf, maybe via a sufficiently vague post on Lemmy. What they absolutely should not have done is go public with all the details based on a single attempt at emailing and one week of waiting.
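For reference, the `security.txt` format is standardised in RFC 9116; a minimal file served at `/.well-known/security.txt` looks something like the sketch below (the contact address and expiry date are made-up placeholders; the policy link is the repository's published security policy mentioned later in this thread):

```
# Hypothetical example, not Lemmy's actual file
Contact: mailto:security@example.org
Expires: 2024-12-31T23:59:59Z
Preferred-Languages: en
Policy: https://github.com/LemmyNet/lemmy/security/policy
```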
Damn… seems like there should be filtering to only allow `http:` and `https:` URIs…

Did you try the security email on GitHub? I sent a vulnerability (that is actually way fucking worse than I thought, given this issue) over a week ago and have heard nothing, so I will be posting publicly soon.
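A sketch of that allowlist approach (again illustrative, not Lemmy's actual code); allowing only known-good schemes is generally more robust than blocking known-bad ones:

```typescript
// Hypothetical allowlist variant: accept only http: and https: post URLs.
const ALLOWED_SCHEMES = new Set(["http:", "https:"]);

function isHttpUrl(raw: string): boolean {
  try {
    return ALLOWED_SCHEMES.has(new URL(raw).protocol);
  } catch {
    return false; // unparseable input is rejected outright
  }
}

console.log(isHttpUrl("https://lemmy.ml/post/123"));                 // true
console.log(isHttpUrl("data:text/html,<script>alert(1)</script>")); // false
```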
Holy shit holy shit holy shit. Serious vulnerability confirmed. Combined with the issue(s) I have tried to report, this is insane. I just tested this (and purged it so as not to publicly disclose just yet). This is really bad.
deleted by creator
If you find a way to disclose vulnerabilities without being ghosted by Lemmy developers: update me.
How have you been "ghosted by Lemmy developers", especially if you "do not use GitHub"?
Yeah, I just wrote this up as a bug on GitHub and added a note that I tried to email them, asking them to please get in contact about the other thing. Hopefully they see it. I can understand checking that email being overlooked, considering how busy they likely are given the sudden influx and scaling issues.
Thank you, I was going to write one up for it tonight. You emailed the security@ address, correct? https://github.com/LemmyNet/lemmy/security/policy
I tried emailing that previously with a different issue and got no response. I was planning to post publicly (on GitHub) about that other issue on Friday, but it is now way too severe to do that, given how this can be leveraged to exploit what I found.
deleted by creator
It’s been a bit of a busy week for them. Maybe you can cut them some slack and try again?
Yeah, I found something that was “holy shit this is bad if someone finds a way to do X” and tried to report that but didn’t dig any deeper. This is X.
Does the default CSP do anything to mitigate this?
I believe if `unsafe-inline` were removed from `script-src`, then the CSP would block this. If the frontend depends on inline script tags, then this likely can't be changed super easily… The fact that `unsafe-eval` is in `script-src` is kinda worrying as well. Ideally you would lock the CSP down a lot more than they have.

Aye, I am pretty sure CSP is bypassable in most situations unless you're pinning checksums or hashes. Just thought it might help take the edge off the hacker panic.
Yeah, it can certainly help in some cases, defense in depth and all that. If the CSP were `'self'` (allowing any JS hosted on your domain), this would probably be DOA. Sadly, until the frontend stops using `<script>` to set things on `window` to hydrate state from SSR to client-side, they won't be able to change it without breaking things.
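One common way to keep SSR hydration working while dropping `unsafe-inline` is a per-response nonce: legitimate inline scripts carry the nonce and still run, while injected inline script and `javascript:` navigations are blocked. A minimal sketch on a plain Node server (an assumed setup for illustration, not Lemmy's actual stack):

```typescript
// Hypothetical nonce-based CSP; the hydration <script> is allowed because it
// carries the per-response nonce, so 'unsafe-inline' is no longer needed.
import { createServer } from "node:http";
import { randomBytes } from "node:crypto";

createServer((_req, res) => {
  const nonce = randomBytes(16).toString("base64");
  res.setHeader(
    "Content-Security-Policy",
    `default-src 'self'; script-src 'self' 'nonce-${nonce}'; object-src 'none'; base-uri 'self'`
  );
  res.setHeader("Content-Type", "text/html");
  res.end(
    // Made-up SSR output: the server-rendered state script gets the nonce.
    `<script nonce="${nonce}">window.ssrState = { /* hydrated state */ };</script>`
  );
}).listen(8080);
```

Hash-based source expressions (`'sha256-…'`) are the "pinning checksums or hashes" option mentioned above, but they only work if the inline script bytes are static, which hydration payloads usually aren't.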