• KyleKatarn@lemmy.world · ↑17 · 5 months ago

    The hackers initially got access to around 14,000 accounts using previously compromised login credentials, but they then used a feature of 23andMe to gain access to almost half of the company’s user base, or about 7 million accounts

    Is there more to the breach than just stolen passwords? What feature did they use and what access did they gain?

    • trebuchet@lemmy.ml · ↑9 · 5 months ago

      I recall from previous coverage of this that there is a social network feature in the site where you can voluntarily share your info with friends and family.

      So 14,000 accounts got accessed via reused passwords, and that in turn gave the attackers access to 7 million people’s data, because those people had previously chosen to share info with the 14,000.

  • MrCookieRespect@reddthat.com · ↑16 ↓11 · 5 months ago

    Bro, the data wasn’t breached; someone just took already-available passwords and tried them. It’s their fault for using the same password everywhere.

    And I’m not defending the company here, fuck ’em, but that’s definitely not on them.

    • tiramichu@lemm.ee · ↑32 · 5 months ago · edited

      23andMe are technically correct that it was customer behaviour that caused the issue: people reused passwords and didn’t use MFA.

      They can claim the moral high ground and shift the blame if they like, but the truth is that regardless of WHY the breach happened, it was still a breach and it still happened!

      As a software engineer, I believe there’s a real argument to be made here that 23andMe were negligent in their approach. Given the personal nature of the data stored, they should have enforced MFA from the start, but they did not. They made an explicit decision to put customer convenience above customer security.

      The argument that customers should have made better security decisions is evasive bullshit.

      As a software engineer, you cannot trust customers to make correct decisions about security. And customers should not be expected to - they are not the experts! It’s the job of IT professionals to ensure that data has an appropriate level of protection, so that it is safeguarded even against naive user behaviour.

      • Pastaguini [he/him]@hexbear.net · ↑8 · 5 months ago

        My mom used 23andMe last year and created an account with 2FA. Their 2FA fucked up and never sent the code. She spent weeks on the phone with customer service but they just shuffled her around. I tried to talk to them but it was just “I’ll escalate this to my manager” and then they’d never call back.

        Then we tried to get a refund and they refused, so they basically stole 40 bucks from my mom.

        They probably never enforced 2FA because they knew it didn’t work and didn’t want to bog down their nonexistent customer service with complaints about their fucked-up 2FA. I looked online and my mom wasn’t the only one with this issue. So in that sense, they are responsible IMO.

      • RonSijm@programming.dev · ↑3 ↓1 · 5 months ago

        23 and Me are technically correct in that it’s customer behaviour that caused the issue.

        Maybe I don’t really understand what happened, but it sounds like 2 different things happened:

        The hackers initially got access to around 14,000 accounts using previously compromised login credentials, but they then used a feature of 23andMe to gain access to almost half of the company’s user base, or about 7 million accounts

        14k accounts were compromised due to poor passwords and password re-use.

        And then they got access to 7 million accounts. Where did that 7 million account breach come from? Were those 7 million connections of the 14k or something? Because I don’t think your connections can see many in-depth details.

        • jadero@programming.dev · ↑5 · 5 months ago · edited

          Let’s pretend that I had an account and that you used the internal social share to share your stuff with me.

          I, being an idiot, used monkey123 as my password. As a result, the bad guys got into my account. Once in my account, they had access to everything in my account, including the stuff you shared with me.

          Now, to get from 14,000 to 7,000,000 would mean an average of 500 shares per account. That seems unreasonable, so there must have been something like: your sharing with me gives me access not just to what you shared, but to everything that others shared with you, in some kind of sharing chain. That, at a minimum, is exclusively on 23andMe. No sane and competent person would have deliberately constructed things like that.

          Edit: I think I goofed. It seems to be sharing with relatives as a collection, not individuals. As was pointed out, you don’t have to go very far back to find common ancestors with thousands of people, so that’s a more likely explanation than mine.

          • Mikina@programming.dev · ↑3 · 5 months ago · edited

            From how I understand it, the 14,000 -> 7,000,000 jump is caused by a feature that lets you share your information with your “relatives”, i.e. people who were traced to some common ancestor.

            I’m still quite on the fence about what to think about this. If you have a weak password that you reuse everywhere, and someone logs into your Gmail account and leaks your private data, is it Google’s fault?

            If we take it a step further - if someone hacks your computer because you click on every link imaginable, and then steals your session cookies, which they then use to access such data, is it still the company’s fault for not being able to detect that kind of attack?

            Yes, the company could have done more to prevent such an attack, mostly by forcing MFA (any other defense against password stuffing is easily bypassed via a botnet, unless it’s an “always-on CAPTCHA” - and good luck convincing anyone to use that), but the blame is still mostly on users with weak security habits, and in my opinion (as someone who works in cybersecurity), we should focus on blaming them instead of the company.

            Not because I want to defend the company or anything - they have definitely done a lot of things wrong (though nowhere near as wrong as the users) - but because of security awareness.

            Shifting the blame solely onto the company that “hasn’t done enough” only lets the users, whose poor security habits caused the private data of millions to be leaked, get away with it, and lets them live with “They’ve hacked the stupid company, it’s not my fault.” No. It’s their fault. Get a password manager, FFS.

            Headlines like “A company was breached and leaked 7,000,000 users’ worth of private data” will probably go mostly unnoticed. A headline like “14,000 people with weak passwords caused the leak of 7,000,000 users’ worth of private data” might at least spread some awareness.

            • jadero@programming.dev · ↑2 · 5 months ago · edited

              Ok, that makes much more sense! I’ve done a tiny bit of genealogy, so I knew about the exponential numbers, but I misunderstood the sharing. Yes, I know the feature was described as “with relatives” but I was thinking of “with person”. Yes, choosing to share with all relatives in one click would produce huge numbers.

              As for where to place the blame, it’s tough. The vast majority of people have no concept of how this stuff works. In effect, everything from mere typing into a document to logging in to and using network resources is treated quite literally as magic, even if nobody would actually use that word.

              That puts a high burden on services to protect people from this magical thinking. Maybe it’s an unreasonably high burden, but they have to at least make the attempt.

              2FA (the real thing, not the SMS mess) is easy to set up on the server side. It’s easy enough to set up on the client side that if that’s too much for some fraction of your customer base, then you should probably treat that as a useful “filter” on your potential customers.

              There are any number of breached-password lists published by reputable companies and organizations. At least one of them (Have I Been Pwned) makes its list available in machine-readable formats. At this point, no reputable company that makes any claims to protecting privacy and security should be allowing passwords that show up on those lists. Account-setup procedures have enough to do already that a client-side password check would be barely noticeable.
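
That client-side check can be done without ever sending the password anywhere: Have I Been Pwned’s Pwned Passwords range API takes only the first five hex characters of the password’s SHA-1 and returns candidate suffixes to match locally. A sketch of the local half (the network call itself is omitted; the `SUFFIX:COUNT` line format is the API’s documented response shape):

```python
import hashlib


def hibp_split(password: str) -> tuple[str, str]:
    """SHA-1 the password; only the 5-character prefix ever leaves the machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def count_in_range_response(suffix: str, body: str) -> int:
    """Parse a pwnedpasswords range response ("SUFFIX:COUNT" per line)."""
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

In practice you would GET `https://api.pwnedpasswords.com/range/<prefix>` and feed the response body to the second function; a nonzero count means the password is known-breached and should be rejected.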

              We know enough about human nature and human cognition to know that humans are horrifically bad at creating passwords on the fly. Some services, maybe most services, should prohibit users from ever setting their own passwords, using client-side scripting to generate random strings of characters. Those with password managers can simply log the assigned password. Those without can either write it in their address book or let their browser manage it. This has the added benefit of not needing to check a password against a published list of breached passwords.
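
The client-side generation described above is only a few lines with a cryptographic RNG - a generic sketch (the alphabet choice here is arbitrary, not any particular site’s policy):

```python
import secrets
import string

# Symbols chosen for illustration; a real deployment would pick per its own policy.
ALPHABET = string.ascii_letters + string.digits + "-_.!@#"


def generate_password(length: int = 20) -> str:
    """Random password from a CSPRNG - no human-chosen (and thus guessable) input."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Because the string is uniformly random, there is nothing to check against a breached-password list.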

              My data will always be at risk of some weak link that I have no control over. That makes it the responsibility of each online service to ensure that the weak links are as strong as possible. Rate limiting, enforcement of known-good login policies and procedures, anomaly detection and blocking, etc. should be standard practice.

              • Mikina@programming.dev · ↑2 · 5 months ago

                You are right, and the company is definitely to blame. But compared to how other breaches usually happen, I don’t think this company was all that negligent - as far as I know, their only mistake was not forcing users to use MFA. A mistake, sure, but not as grave as we usually see in data breaches.

                My point was mostly that IMO we should in this case focus more on the users, because they are also at fault. More importantly, I think it’s a pretty impactful story - “a few thousand people reused passwords and caused millions of users’ data to be leaked” is a headline that teaches a lesson in security awareness, and I think it would be better to focus on that instead of on “a company didn’t force users to use MFA”, which is only framed as “company has been breached and blames users”. That will not teach anyone anything, unfortunately.

                I’m not saying the company shouldn’t also be blamed, because they did purposefully choose user experience and conversion rate (because bad UX hurts sales, as you’ve mentioned) over better security practices. I’m just trying to figure out how to get at least something good out of this incident - and “company blames users for getting breached” isn’t going to teach anyone anything.

                However, something good did come out of it, at least for me - I realized that it had never occurred to us to put “MFA is not enforced” into pentest findings, and this makes a great case for starting to do so, so I’ve added it to our templates.

                • jadero@programming.dev · ↑1 · 5 months ago

                  I agree with everything you’ve said. One thing that would go a long way to securing accounts would be legislation requiring all government services, banks, and credit unions to implement authenticator-based 2FA. At a minimum.

                  Those institutions are already very heavily regulated (at least here in Canada), so one more regulation would hardly be a burden.

                  With that in place, it would be trivial for everyone else to follow suit, since they’d know that approximately everyone has a second factor and knows how to use it.

                  Good for you in adding to your testing template. Security is a journey, not a destination, so keeping things up to date is important.

          • threelonmusketeers@sh.itjust.works · ↑2 ↓1 · 5 months ago

            Now to get from 14,000 to 7,000,000 would mean an average of 500 shares per account. That seems unreasonable

            That doesn’t seem that unreasonable to me. Everyone has 2^n ancestors n generations back, and that means a lot of cousins, uncles, aunts, etc. You don’t have to go back many generations to reach a very large number of relatives. I think that’s one of the cool things about genealogy.
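
The arithmetic is easy to sanity-check: n generations back there are 2^n ancestor slots (ignoring pedigree collapse), and about 2^(n+1) - 2 ancestor slots in total across those generations:

```python
def ancestors_at(n: int) -> int:
    """Ancestor slots exactly n generations back (2 parents, 4 grandparents, ...)."""
    return 2 ** n


def ancestors_total(n: int) -> int:
    """All ancestor slots from generation 1 through n: 2 + 4 + ... + 2^n."""
    return 2 ** (n + 1) - 2
```

Ten generations back is already over a thousand slots, and every one of those lines fans back out into living cousins - which is how 14,000 accounts can plausibly reach millions of "relatives".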

      • pishadoot@sh.itjust.works · ↑1 · 5 months ago

        As a consumer you can’t trust companies or software engineers to make correct decisions about security, either. The blade cuts both ways and everyone is at fault here.

      • MrCookieRespect@reddthat.com · ↑2 ↓1 · 5 months ago

        But it’s not a breach, it’s accounts being compromised. Yes, you can’t trust them, but it’s still their own fault. And you can’t make it too hard to get the data, because otherwise your idiot of a user can’t access it either.

        They should definitely force 2FA, however.

        • tiramichu@lemm.ee · ↑9 ↓2 · 5 months ago

          IBM defines “Data Breach” as:

          any security incident in which unauthorized parties gain access to sensitive data or confidential information, including personal data (Social Security numbers, bank account numbers, healthcare data) or corporate data (customer data records, intellectual property, financial information).

          Despite the fact that the attackers used real passwords to log in, they are still an ‘unauthorized party’ because they are not the intended party.

          It’s also legally the case that using a password to access data you know you are not supposed to access still counts as ‘hacking’.

          • MrCookieRespect@reddthat.com · ↑3 ↓2 · 5 months ago

            Well, the authorisation is the password, so from the company’s side it was in fact not a breach, because all they saw was a normal login with the correct authorisation (the password).

            • woodytrombone@lemmy.world · ↑3 · 5 months ago

              The front door unlocked because the burglar found a copy of the key outside.

              This wasn’t a burglary, though. His key was legitimate.

    • Kbin_space_program@kbin.social · ↑17 ↓3 · 5 months ago

      You’re missing a very critical detail.

      Yes, the initial breach was reused logins. But that was only a pittance: 14,000 logins.

      The hackers got access to millions of users through tools provided by 23andMe.

      • Mikina@programming.dev · ↑3 · 5 months ago

        From how I understand it, that’s also on the users.

        If I get it right, they have a social-share function that lets you share your data with anyone who is your “relative”, i.e. anyone who can be traced to some common ancestor. So millions of people deliberately shared their data with others, and nothing was exploited.

        We should blame the 14,000 users for their terrible security practices far more than the company for not forcing MFA on people. Sure, 23andMe could have done more, but writing headlines about how the company got hacked, when it’s literally the fault of people reusing their passwords on every stupid site they log in to, will not help security awareness in the slightest. They will just keep up their bad practices until eventually they lose more than just ancestry records.

        There should be headlines about how “Password reuse by 14,000 users caused a leak of 7,000,000 users’ data.” Not because I want to defend the company, but because it spreads security awareness. It’s still mostly the fault of the users.

        Get a password manager, FFS.

  • AutoTL;DR@lemmings.world (bot) · ↑5 · 5 months ago

    This is the best summary I could come up with:


    Now, after being hit by a series of class action lawsuits from victims of the breach, the company is reportedly turning the blame back to the users — telling them they should have been more cautious about recycling their login credentials.

    “Users negligently recycled and failed to update their passwords following these past security incidents, which are unrelated to 23andMe,” the company told a group of victims in a letter initially reported by TechCrunch.

    The CPRA — otherwise known as the California Privacy Rights Act — strengthened security measures for consumers to stop businesses from sharing their personal information.

    “Rather than acknowledge its role in this data security disaster, 23andMe has apparently decided to leave its customers out to dry while downplaying the seriousness of these events,” Hassan Zavareei, one of the lawyers representing the victims who received the letter from 23andMe, told TechCrunch.

    Following the breach, the company asked all its users to reset their passwords and set up additional security measures like two-factor authentication, according to its website.

    In October, the company said the results of its preliminary investigation showed no indication of a data security incident within its systems.


    The original article contains 364 words, the summary contains 192 words. Saved 47%. I’m a bot and I’m open source!

  • Mikina@programming.dev · ↑6 ↓1 · 5 months ago

    I’m still quite on the fence about what to think about this. If you have a weak password that you reuse everywhere, and someone logs into your Gmail account and leaks your private data, is it Google’s fault?

    If we take it a step further - if someone hacks your computer because you click on every link imaginable, and then steals your session cookies, which they then use to access such data, is it still the company’s fault for not being able to detect that kind of attack?

    Yes, the company could have done more to prevent such an attack, mostly by forcing MFA (any other defense against password stuffing is easily bypassed via a botnet, unless it’s an “always-on CAPTCHA” - and good luck convincing anyone to use that), but the blame is still mostly on users with weak security habits, and in my opinion (as someone who works in cybersecurity), we should focus on blaming them instead of the company.

    Not because I want to defend the company or anything - they have definitely done some things wrong (though nowhere near as wrong as the users) - but because of security awareness.

    Shifting the blame solely onto the company that “hasn’t done enough” only lets the users, whose poor security habits caused the private data of millions to be leaked, get away with it, and lets them live with “They’ve hacked the stupid company, it’s not my fault.” No. It’s their fault. Get a password manager, FFS.

    Headlines like “A company was breached and leaked 7,000,000 users’ worth of private data” will probably go mostly unnoticed. A headline like “14,000 people with weak passwords caused the leak of 7,000,000 users’ worth of private data” might at least spread some awareness.

    • DaleGribble88@programming.dev · ↑6 · 5 months ago

      As someone else who dabbles in cybersecurity - hard disagree. If developers and alleged IT professionals got their shit together, most data breaches wouldn’t be a significant problem. Looking at the OWASP Top Ten, every single item on that list boils down to either 1) negligence, or 2) industry professionals negotiating with terrorist business leaders who prioritize profits over user safety.

      Proper engineers have their standards, laws, and ethical principles written in blood. They are much less willing to bend safety requirements than the typical junior developer who sees no problem storing unsalted passwords with an md5 hash.
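
For contrast with the unsalted-md5 anti-pattern called out above, the standard-library way to do it properly looks roughly like this (the scrypt cost parameters are ballpark figures for illustration, not a tuned recommendation):

```python
import hashlib
import hmac
import os


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Per-user random salt plus a memory-hard KDF - never a bare, unsalted md5."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                            n=2 ** 14, r=8, p=1, maxmem=64 * 1024 * 1024)
    return salt, digest


def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                               n=2 ** 14, r=8, p=1, maxmem=64 * 1024 * 1024)
    return hmac.compare_digest(candidate, digest)
```

The random salt means two users with the same password get different digests, which is exactly what defeats the precomputed rainbow-table attacks that unsalted md5 invites.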

      • Mikina@programming.dev · ↑2 · 5 months ago

        I get what you’re getting at, and I agree - a world where every product followed security best practices, instead of prioritizing user convenience in places where it’s definitely not worth it in the long term, would be awesome (and speaking from the experience of someone who does red team engagements, we’re slowly getting there - lately the engagements have been getting more and more difficult, for the larger companies at least).

        But since we’re definitely not there yet, and probably won’t ever be for the vast majority of sites and services, it’s important to educate users and hammer proper security practices into them, even outside their safe environment. Pragmatically speaking, in a case like this, where you can illustrate what kind of impact your personal lack of security practices can cause, I think it’s better to focus on the users’ fault instead of the company’s. Just for the sake of security awareness (and that is my main goal), because I still think headlines about how “14,000 people caused millions of people’s private data to be leaked”, if framed properly, will have a better overall impact than just another “company is dumb, they had a breach”.

        Also, I don’t think forcing users into an environment that is really annoying to use, by policies alone, is what you want, because users will only get more and more frustrated that they have to use stupid smart cards, remember a password that’s basically a sentence and change it every month, or take the time to sign emails and commits, and they will start taking shortcuts. The ideal outcome is convincing them that this is what they want to do, and that they really understand the importance of and reasoning behind all the uncomfortable security annoyances. This story could have been a perfect lesson in security awareness, if it hadn’t been turned into “company got breached”.

        But just as with what you were saying about what the company should be doing but isn’t, it’s unfortunately the same problem with this point of view - we’ll probably never get there, so you can’t rely on other users being as security-aware as you are, and thus you need the company to force it onto them. And vice versa: many companies won’t do that, so you also need to rely on your own security practices. But for this case, I think it serves as a better lesson in personal security than in corporate security, because from what I’ve read the company didn’t really do that much wrong as far as security is concerned - their only mistake was not forcing users to use MFA. And tbh, I don’t think we even include “users are not forced to use MFA” in pentest reports, although that may have changed; I haven’t done a regular pentest in quite some time (but it’s actually a great point, and I’ll make sure to include it in our findings database if it isn’t there).