All our servers and company laptops went down at pretty much the same time. Laptops have been boot-looping to the blue screen of death. It’s all very exciting, personally, as someone not responsible for fixing it.
Apparently caused by a bad CrowdStrike update.
Edit: now being told we (who almost all generally work from home) need to come into the office Monday as they can only apply the fix in-person. We’ll see if that changes over the weekend…
I work for one of these behemoths, and there are a lot of adults in the room. When we began our transition off the prior, well-known corporate AV, I had never even heard of CrowdStrike.
The adults were asking reasonable questions: why such an aggressive migration timeline? Why can’t we have our vendor-recommended exclusion lists applied? Why does this need to be installed here when previously agentless technologies were sufficient? Why is CrowdStrike spending money on a Super Bowl ad instead of investing back into the technology?
Either something fucky is afoot, as in this was mandated to our higher-ups to make the switch (why?), or, as is typically the case, the decision was already made and this ‘due diligence’ is all window dressing to CYA.
Who gives a shit about fines on SLAs if your vendor is going to foot the bill?
As someone who works in offensive cybersecurity doing red team engagements, where most of my job is to bypass and evade such solutions, I can say that bypassing agentless technologies is so much easier than agent-based ones. While you can access most of the logs remotely, having an agent helps enormously with catching 0-day malware, since you can scan memory (that one is a bitch to bypass and usually how we get caught) or hook syscalls, which you can then correlate.
Oh, an unknown unsigned process just allocated RWX memory, loaded a crypto binary, and spawned a thread in another process that’s trying to execute it? Better scan that memory and see what it’s up to. That is something you cannot do remotely.
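To make that concrete, here’s a minimal sketch (my illustration, not the commenter’s) of the classic remote-thread injection sequence an agent sees when it hooks these calls: an RWX allocation in another process, a payload write, then a thread spawned to run it. The Win32 APIs are the standard ones (OpenProcess, VirtualAllocEx, WriteProcessMemory, CreateRemoteThread); the target PID and payload bytes are placeholders for illustration only.

```c
// Sketch of the three hooked calls an agent-based EDR would correlate.
// Placeholder PID and payload bytes; illustration only.
#include <windows.h>

int main(void) {
    DWORD targetPid = 1234;                   /* placeholder PID */
    unsigned char payload[] = { 0x90, 0xC3 }; /* placeholder bytes (NOP; RET) */

    HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, targetPid);
    if (hProc == NULL) return 1;

    /* 1. RWX allocation in a foreign process -- first telemetry event */
    LPVOID remote = VirtualAllocEx(hProc, NULL, sizeof(payload),
                                   MEM_COMMIT | MEM_RESERVE,
                                   PAGE_EXECUTE_READWRITE);
    if (remote == NULL) { CloseHandle(hProc); return 1; }

    /* 2. Payload written into that allocation -- second event */
    WriteProcessMemory(hProc, remote, payload, sizeof(payload), NULL);

    /* 3. Thread created in the other process pointing at the payload -- third
       event. An agent that correlates these three calls coming from an unknown,
       unsigned process can flag the sequence and scan the remote memory, which
       is exactly what an agentless, log-only approach cannot do in time. */
    HANDLE hThread = CreateRemoteThread(hProc, NULL, 0,
                                        (LPTHREAD_START_ROUTINE)remote,
                                        NULL, 0, NULL);
    if (hThread) CloseHandle(hThread);
    CloseHandle(hProc);
    return 0;
}
```

Each step on its own looks benign; it’s the correlation of all three from one suspicious process, plus the ability to scan the freshly written memory in place, that only an on-host agent can do.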