Microsoft is pivoting its company culture to make security a top priority, President Brad Smith testified to Congress on Thursday, promising that security will be “more important even than the company’s work on artificial intelligence.”

Satya Nadella, Microsoft’s CEO, “has taken on the responsibility personally to serve as the senior executive with overall accountability for Microsoft’s security,” Smith told Congress.

His testimony comes after Microsoft admitted that it could have taken steps to prevent two aggressive nation-state cyberattacks from China and Russia.

According to Microsoft whistleblower Andrew Harris, Microsoft spent years ignoring a vulnerability even as he proposed fixes to the “security nightmare.” Instead, fearing it might lose government business by warning about the bug, Microsoft allegedly downplayed the problem, choosing profits over security, ProPublica reported.

This apparent negligence led to one of the largest cyberattacks in US history, and officials’ sensitive data was compromised due to Microsoft’s security failures. The China-linked hackers stole 60,000 US State Department emails, Reuters reported. And several federal agencies were hit, giving attackers access to sensitive government information, including data from the National Nuclear Security Administration and the National Institutes of Health, ProPublica reported. Even Microsoft itself was breached, with a Russian group accessing senior staff emails this year, including their “correspondence with government officials,” Reuters reported.

  • Dudewitbow@lemmy.zip · 5 months ago

    I’d argue the unknown can’t be used to say whether something is technically secure or insecure. If that kind of reasoning is brought into play, then any OS running on non-open-source hardware is insecure, because the VHDL/Verilog code is not verifiable.

    Unless everyone is running open source RISC-V hardware or an FPGA, it’s a game of goalposts over where someone plants that flag.

    • tabular@lemmy.world · 5 months ago

      Consider people counting paper votes in an election. Multiple political parties, each motivated by their own self-interest, watch the counting to prevent one another from faking votes. That is a security feature; without it, the validity of the election has a critical unknown, making it very sussy.

      An OS using proprietary software is like an electronic voting machine: we pretend it’s secure to feel better about a failing we can’t change.

      • Dudewitbow@lemmy.zip · 5 months ago

        The problem is that bad actors have direct access to said voting machines. In the case of OS security, the people creating the OS are typically not the bad actors in question, which goes back to the goalpost situation. Unless you know how everything is designed from the ground up (including the hardware code, in whatever language it’s written), you’re just setting an arbitrary goalpost: think of the typical NSA backdoor, or a foreign backdoor planted in hardware, independent of the OS. To bluntly place it only at the OS stage is to set the goalpost there, when you can really apply it to any part of the chain (the chip design, the hardware assembler, the OS designer, the software maker). Setting it at the OS level fundamentally means all OSes are insecure by nature unless you’re actively running on an FPGA that’s constantly getting updates.

        For instance, any CPU with speculative execution is fundamentally insecure, and that covers virtually all modern processors (see the sketch after this thread). Never mind the OS when the door is already open at the CPU level, regardless of which OS you run.

        • tabular@lemmy.world · 3 months ago

          When I think of bad actors and software, I think of security from third parties as well as the intentions of the authors themselves. Not just security but also privacy and any other anti-features users wouldn’t want. That applies to the OS, apps, or drivers. Hardware has the same concerns as software; it’s all part of a wider conversation about security, which is itself part of user/consumer rights.
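
To make the speculative-execution point in the thread concrete, here is a minimal C sketch of the classic Spectre v1 “bounds check bypass” pattern. The array names, sizes, and the timing-based recovery step are illustrative assumptions for this sketch, not code from any system discussed above.

```c
#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];
uint8_t array2[256 * 512];   /* probe array later read via a cache-timing side channel */
size_t  array1_size = 16;

/* Spectre v1 victim pattern: the bounds check is architecturally correct,
 * but the CPU may mispredict the branch and speculatively execute the body
 * with an out-of-bounds x, leaving a secret-dependent line of array2 in the
 * cache. An attacker can detect which line with a timing probe (e.g.
 * FLUSH+RELOAD). No OS-level bug is required for the leak. */
void victim_function(size_t x) {
    if (x < array1_size) {
        uint8_t value = array1[x];                     /* out-of-bounds read under speculation */
        volatile uint8_t sink = array2[value * 512];   /* cache footprint encodes `value` */
        (void)sink;
    }
}
```

An attacker who can invoke victim_function with chosen indices and then time accesses to array2 can recover bytes that sit outside array1, which is why the comment above treats speculative execution as a hardware-level hole independent of which OS is running.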