Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post; there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

  • pyrex@awful.systems · 7 months ago

    I read a few of the guy’s other blog posts and they follow a general theme:

    • He’s pretty resourceful! Surprisingly often, when he’s feeling comfortable, he resorts to sensible troubleshooting steps.
    • Despite that, when confronted with code, it seems like he often just kind of guesses at what things mean without verifying it.
    • When he’s decided he doesn’t understand a thing, he WILL NOT DIG INTO THE THING.

    He seems totally hireable as a junior, but he absolutely needs adult supervision.

    The LLM Revolution seems really, really bad for this guy specifically: it promises that he can keep working in this ineffective way without changing anything.

    • zogwarg@awful.systems · 6 months ago

      My conspiracy theory is that he isn’t clueless, and that his blog posts are meant to be read by whoever his boss is, in this case about using LLMs for automated malware and anti-malware.

      “Oh, you want me to use LLMs for our cybersecurity? Look how easy it is to write malware with LLMs (as long as the victim executes anything they download and has too many default permissions on their device), and how hard it is to write countermeasures: it took me over 42 (a hint?) tries and I still failed! Maybe it’s better to stick with normal sandboxing, hardening, and ACL practices in the meantime to protect ourselves from this new threat. How convenient that it’s the same approach we’ve always taken.”