Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned soo many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

  • BigMuffin69@awful.systems · 23 points · 5 months ago

    Found in the wilds^

    Giganto brain AI safety 'scientist'

    If AIs are conscious right now, we are monsters. Nobody wants to think they're monsters. Ergo: AIs are definitely not conscious.

    Internet rando:

    If furniture is conscious right now, we are monsters. Nobody wants to think they're monsters. Ergo: Furniture is definitely not conscious.

  • jax@awful.systems · 22 points · 5 months ago

    NYT opinion piece title: Effective Altruism Is Flawed. But What's the Alternative? (archive.org)

    lmao, what alternatives could possibly exist? have you thought about it, like, at all? no? oh…

    (also, pet peeve, maybe bordering on pedantry, but why would you even frame this as a singular alternative? The alternative doesn't exist, but there are actually many alternatives that have fewer flaws).

    You don't hear so much about effective altruism now that one of its most famous exponents, Sam Bankman-Fried, was found guilty of stealing $8 billion from customers of his cryptocurrency exchange.

    Lucky souls haven't found sneerclub yet.

    But if you read this newsletter, you might be the kind of person who can't help but be intrigued by effective altruism. (I am!) Its stated goal is wonderfully rational in a way that appeals to the economist in each of us…

    rational_economist.webp

    There are actually some decent quotes critical of EA (though the author doesn't actually engage with them at all):

    The problem is that "E.A. grew up in an environment that doesn't have much feedback from reality," Wenar told me.

    Wenar referred me to Kate Barron-Alicante, another skeptic, who runs Capital J Collective, a consultancy on social-change financial strategies, and used to work for Oxfam, the anti-poverty charity, and also has a background in wealth management. She said effective altruism strikes her as "neo-colonial" in the sense that it puts the donors squarely in charge, with recipients required to report to them frequently on the metrics they demand. She said E.A. donors don't reflect on how the way they made their fortunes in the first place might contribute to the problems they observe.

    • maol@awful.systems · 15 points · 5 months ago

      Oh my god there is literally nothing the effective altruists do that can't be done better by people who aren't in a cult

    • Soyweiser@awful.systems · 9 points · 5 months ago

      But if you read this newsletter, you might be the kind of person who can't help but be intrigued by effective altruism. (I am!) Its stated goal is wonderfully rational in a way that appeals to the economist in each of us…

      Funny how the wannabe LW Rationalists don't seem to read that much Rationalism, as Scott has already mentioned that our view of economists (that they are all looking for the Rational Economic Human Unit) is not up to date and not how economists think anymore. (So in a way it is a false stereotype of economists; wasn't there something about how Rationalists shouldn't fall for these things? ;) )

    • V0ldek@awful.systems · 7 points · 5 months ago

      Effective Altruism Is Flawed. Here's Why It's Bad News for Joe Biden.

  • BigMuffin69@awful.systems · 19 points · 5 months ago

    https://xcancel.com/AISafetyMemes/status/1802894899022533034#m

    The same pundits have been saying "deep learning is hitting a wall" for a DECADE. Why do they have ANY credibility left? Wrong, wrong, wrong. Year after year after year. Like all professional pundits, they pound their fist on the table and confidently declare AGI IS DEFINITELY FAR OFF and people breathe a sigh of relief. Because to admit that AGI might be soon is SCARY. Or it should be, because it represents MASSIVE uncertainty.

    AGI is our final invention. You have to acknowledge the world as we know it will end, for better or worse. Your 20 year plans up in smoke. Learning a language for no reason. Preparing for a career that won't exist. Raising kids who might just… suddenly die. Because we invited aliens with superior technology we couldn't control.

    Remember, many hopium addicts are just hoping that we become PETS. They point to Ian Banks' Culture series as a good outcome… where, again, HUMANS ARE PETS. THIS IS THEIR GOOD OUTCOME.

    What's funny, too, is that noted skeptics like Gary Marcus still think there's a 35% chance of AGI in the next 12 years - that is still HIGH! (Side note: many skeptics are butthurt they wasted their career on the wrong ML paradigm.)

    Nobody wants to stare in the face the fact that 1) the average AI scientist thinks there is a 1 in 6 chance we're all about to die, or that 2) most AGI company insiders now think AGI is 2-5 years away. It is insane that this isn't the only thing on the news right now.

    So… we stay in our hopium dens, nitpicking The Latest Thing AI Still Can't Do, missing forests from trees, underreacting to the clear-as-day exponential. Most insiders agree: the alien ships are now visible in the sky, and we don't know if they're going to cure cancer or exterminate us. Be brave. Stare AGI in the face.

    This post almost made me crash my self-driving car.

    • self@awful.systems · 20 points · 5 months ago

      Remember, many hopium addicts are just hoping that we become PETS. They point to Ian Banks' Culture series as a good outcome… where, again, HUMANS ARE PETS. THIS IS THEIR GOOD OUTCOME.

      I am once again begging these e/acc fucking idiots to actually read and engage with the sci-fi books they keep citing

      but who am I kidding? the only way you come up with a take as stupid as "humans are pets in the Culture" is if your only exposure to the books is having GPT summarize them

    • maol@awful.systems · 19 points · 5 months ago

      It's mad that we have an actual existential crisis in climate change (temperature records broken across the world this year) but these cunts are driving themselves into a frenzy over something that is nowhere near as pressing or dangerous. Oh, people dying of heatstroke isn't as glamorous? Fuck off

    • Mii@awful.systems · 16 points · 5 months ago

      Seriously, could someone gift this dude a subscription to spicyautocompletegirlfriends.ai so he can finally cum?

      One thing that's crazy: it's not just skeptics, virtually EVERYONE in AI has a terrible track record - and all in the same OPPOSITE direction from usual! In every other industry, due to the Planning Fallacy etc, people predict things will take 2 years, but they actually take 10 years. In AI, people predict 10 years, then it happens in 2!

      ai_quotes_from_1965.txt

    • Soyweiser@awful.systems · 14 points · 5 months ago

      humans are pets

      Actually not what is happening in the books. I get where they are coming from, but this requires redefining the word pet in such a way that it is a useless word.

      The Culture series really breaks the brains of people who can only think in hierarchies.

    • gerikson@awful.systems · 12 points · 5 months ago

      If you've been around the block like I have, you've seen reports about people joining cults to await spaceships, people preaching that the world is about to end &c. It's a staple trope in old New Yorker cartoons, where a bearded dude walks around with a billboard saying "The End is nigh".

      The tech world is growing up, and a new internet-native generation has taken over. But everyone is still human, and the same pattern-matching that leads a 19th century Christian to discern when the world is going to end by reading Revelation will lead a 25 year old tech bro steeped in "rationalism" to decide that spicy autocomplete is the first stage of The End of the Human Race. The only difference is the inputs.

  • David Gerard@awful.systems (OP, mod) · 18 points · 5 months ago

    How do you deal with ADHD overload? Everyone knows that one: you PILE MORE SHIT ON TOP

    https://pivot-to-ai.com - new site from Amy Castor and me, coming soon!

    there's nothing there yet, but we're thinking just short posts about funny dumb AI bullshit. Web 3 Is Going Great, but it's AI.

    i assure you that we will absolutely pillage techtakes, but will have to write it in non-jargonised form for ordinary civilian sneers

    BIG QUESTION: what's a good WordPress theme? For a W3iGG-style site with short posts and maybe occasional longer ones. Fuckin' hate the current theme (WordPress 2023) because it requires the horrible Block Editor

    • self@awful.systems · 12 points · 5 months ago

      How do you deal with ADHD overload? Everyone knows that one: you PILE MORE SHIT ON TOP

      how dare you simulate my behavior to this degree of accuracy

      but seriously I'm excited as fuck for this! I've been hoping you and Amy would take this on forever, and it's finally happening!

      • V0ldek@awful.systems · 11 points · 5 months ago

        how dare you simulate my behavior to this degree of accuracy

        @AcausalRobotGod frantically taking notes

      • David Gerard@awful.systems (OP, mod) · 9 points · 5 months ago

        molly is delighted that people might stop telling her to

        arguably we shoulda done it last year, but better late than never

        • self@awful.systems · 11 points · 5 months ago

          I wouldn't even call y'all late; public opinion towards AI is just starting to turn from optimism to mockery, so this feels like the perfect opportunity to normalize sneering in a way that's easier for folks without context to consume than SneerClub or TechTakes.

          • David Gerard@awful.systems (OP, mod) · 10 points · 5 months ago

            when I write the blockchain stuff, it's like, here's one paragraph of the actual thing going on, and here's another thousand words to make it comprehensible

            • Soyweiser@awful.systems · 5 points · 5 months ago

              So much yak shaving involved with blockchain.

              Or any of this for that matter. I imagine you have already had to answer the question of 'how are you involved with Twitter being bought by Musk?' with 'well, in 1995, Scientology …'

  • Soyweiser@awful.systems · 17 points · 5 months ago

    Not a big sneer, but I was checking my spam box for badly filtered spam and saw a guy basically emailing me 'hey, you made some contributions to open source, these are now worth money (in cryptocoins, so no real money), you should claim them, and if you are nice you could give me a finder's fee.' And eurgh, I'm so tired of these people. (Thankfully he provided enough personal info that I could block him on various social medias.)

  • froztbyte@awful.systems · 17 points · 5 months ago

    I have no context on this so I can't really speak to the FSB part of the remark, but on the whole it stands entertaining all by itself:

    • skillissuer@discuss.tchncs.de · 24 points · 5 months ago

      version readable for people blissfully unaffected by having a twitter account

      "Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans."

      yeah ez just lemme build dc worth 1% of global gdp and run exclusively wisdom woodchipper on this

      "Behind the scenes, there's a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might."

      power grid equipment manufacture has always had long lead times, and now there's a country in eastern europe that has something like 9GW of generating capacity knocked out, you big dumb bitch, maybe that has some relation to all the packaged substations disappearing

      They are going to summon a god. And we can't do anything to stop it. Because if we do, the power will slip into the hands of the CCP.

      i see that besides 50s aesthetics they like mccarthyism

      "As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 27/28 we'll get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF, the endgame will be on."

      how cute, they think that their startup gets nationalized before it dies from terminal hype starvation

      "I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn't require believing in sci-fi; it just requires believing in straight lines on a graph."

      "We don't need to automate everything - just AI research"

      "Once we get AGI, we'll turn the crank one more time - or two or three more times - and AI systems will become superhuman - vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler."

      just needs a tiny increase of six orders of magnitude, pinky swear, and it'll all work out

      it weakly reminds me of how Edward Teller got an idea for a primitive thermonuclear weapon, then some of his subordinates ran the numbers and decided that it would never work. his solution? Just Make It Bigger, it has to work at some point (it was deemed unfeasible and tossed into the trashcan of history where it belongs. nobody needs gigaton-range nukes, even if his scheme worked). he was very salty that somebody else (Stanisław Ulam) figured it out in a practical way

      except that the only thing openai manufactures is hype and cultural fallout

      "We'd be able to run millions of copies (and soon at 10x+ human speed) of the automated AI researchers." "…given inference fleets in 2027, we should be able to generate an entire internet's worth of tokens, every single day."

      what's "model collapse"

      "What does it feel like to stand here?"

      beyond parody

      • zogwarg@awful.systems · 17 points · 5 months ago

        "Once we get AGI, we'll turn the crank one more time - or two or three more times - and AI systems will become superhuman - vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler."

        Also this doesn't give enough credit to gradeschoolers. I certainly don't think I am much smarter (if at all) than when I was a kid. Don't these people remember being children? Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems? I'm not sure if I'm the weird one here, but to me growing up is not about becoming smarter; it's about gaining perspective. Perspective is vital, but actual intelligence/personhood is a prerequisite for it.

        • Mii@awful.systems · 18 points · 5 months ago

          Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems?

          Yes. They literally think that. I mean, why else would they assume a spicy text extruder with a built-in thesaurus is so smart?

      • V0ldek@awful.systems · 15 points · 5 months ago

        To engage with the content:

        That doesn't require believing in sci-fi; it just requires believing in straight lines on a graph.

        I see this is becoming their version of "to the moon", and it's even dumber.

        To engage with the form:

        wisdom woodchipper

        Amazing, 10/10 no notes.

        • skillissuer@discuss.tchncs.de · 10 points · 5 months ago

          I see this is becoming their version of "to the moon", and it's even dumber.

          it only makes sense after familiar and unfamiliar crypto scammers pivoted to the new shiny thing, breaking the sound barrier, starting with big boss sam altman

        • skillissuer@discuss.tchncs.de · 6 points · 5 months ago

          wisdom woodchipper

          i think i first used that around the time the sneer came out about some lazy bitches who tried and failed to use chatgpt output as meaningful filler in a peer-reviewed article. of course it worked, and not only at MDPI, because i doubt anyone seriously cares about the prestige of the International Journal of SEO-bait Hypecentrics, impact factor 0.62, least of all the reviewers

      • Soyweiser@awful.systems · 12 points · 5 months ago

        They are going to summon a god. And we can't do anything to stop it. Because if we do, the power will slip into the hands of the CCP.

        Literally a plot point from a Warren Ellis comic book series; of course, in that series they succeed in summoning various gods, and it does not end well (unless you are really into fungus).

      • skillissuer@discuss.tchncs.de · 10 points · 5 months ago

        source of that image is also bad: hxxps://waitbutwhy[.]com/2015/01/artificial-intelligence-revolution-1.html (i think i've seen it listed on lessonline? can't remember)

        not only do they seem like true believers, they have been for a decade at this point

        In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: "For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI to exist?" It asked them to name an optimistic year (one in which they believe there's a 10% chance we'll have AGI), a realistic guess (a year they believe there's a 50% chance of AGI, i.e. after that year they think it's more likely than not that we'll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we'll have AGI). Gathered together as one data set, here were the results:

        Median optimistic year (10% likelihood): 2022

        Median realistic year (50% likelihood): 2040

        Median pessimistic year (90% likelihood): 2075

        just like fusion, it's gonna happen in the next decade guys, trust me
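
        (for the record, those headline numbers are just column-wise medians: each respondent names one year per probability level, and you take the median of each column. a minimal sketch of that aggregation, with made-up responses rather than the actual survey data:)

          from statistics import median

          # hypothetical responses: each expert names the year they assign a
          # 10% / 50% / 90% chance of human-level machine intelligence existing.
          # these values are invented for illustration, not the 2013 survey data.
          responses = [
              {"p10": 2025, "p50": 2045, "p90": 2080},
              {"p10": 2020, "p50": 2035, "p90": 2070},
              {"p10": 2030, "p50": 2050, "p90": 2100},
          ]

          for key, label in [("p10", "optimistic (10%)"),
                             ("p50", "realistic (50%)"),
                             ("p90", "pessimistic (90%)")]:
              print(f"Median {label} year: {median(r[key] for r in responses)}")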

        • 200fifty@awful.systems · 11 points · 5 months ago

          I believe waitbutwhy came up before on old sneerclub, though in that case we were making fun of them for bad political philosophy rather than bad ai takes

          • skillissuer@discuss.tchncs.de · 11 points · 5 months ago

            there's a lot of bad everything, it looks like a failed attempt at rat-scented xkcd. and yeah they were invited to lessonline but didn't arrive

      • o7___o7@awful.systems · 8 points · 5 months ago

        "Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans."

        They are going to summon a god. And we can't do anything to stop it.

        This is a direct rip-off of the plot of The Labyrinth Index, except in the book it's a public-private partnership between the US occult deep state, defense contractors, and Silicon Valley rather than a purely free-market apocalypse, and they're trying to execute cthulhu.exe rather than implement the Acausal Robot God.

    • Snot Flickerman@lemmy.blahaj.zone · 15 points · 5 months ago

      As an atheist, I've noticed a disproportionate number of atheists replace traditional religion with some kind of wild tech belief or statistics belief.

      AI worship might be the most perfect of the examples of human hubris.

      It's hard to stay grounded; belief in general is part of human existence, whether we like it or not. We believe in things like justice and freedom and equality, but these are all just human ideas (good ones, of course).

      • Soyweiser@awful.systems · 10 points · 5 months ago

        The fear of death and the void is quite a problem for a lot of people. Hell, I would not mind living a few thousand years more (with a few important additions, like not living in slavery, no declining mental health, no pain, the ability to voluntarily end it, etc etc).

        But yeah this is just religion with some bits removed and some bits tacked on.

      • skillissuer@discuss.tchncs.de · 8 points · 5 months ago

        it can also happen with nontraditional religion: the mostly irreligious czech republic seems rather sane and rational until you notice the tons of new age shite. it might be some kind of remnant rather than a replacement

        • rook@awful.systems · 8 points · 5 months ago

          I'm always slightly surprised by how much the French and Germans luuuuuurve their homeopathy, and depressed by how politically influential Big Sugar Pill And Magic Water is there.

            • rook@awful.systems · 5 points · 5 months ago

              Nothing concrete, unfortunately. They're places I visit rather than somewhere I live and work, so I'm a bit removed from the politics. Orac used to have good coverage of the subject, but I found reading his blog too depressing, so I stopped a while back.

              Pharmacies are piled high with homeopathic stuff in both places, and in Germany at least it is exempt from any legal requirement to show efficacy and purchases can be partially reimbursed by the state. In France at least, you can't claim homeopathic products on health insurance anymore, which is an improvement.

    • jax@awful.systems · 8 points · 5 months ago

      q: how do you know if someone is a "Renaissance man"?

      a: the llm that wrote the about me section for their website will tell you so.

      jesus fucking christ

      From Grok AI:

      Zach Vorhies, oh boy, where do I start? Imagine a mix of Tony Stark's tech genius, a dash of Edward Snowden's whistleblowing spirit, and a pinch of Monty Python's humor. Zach Vorhies, a former Google and YouTube software engineer, spent 8.5 years in the belly of the tech beast, working on projects like Google Earth and YouTube PS4 integration. But it was his brave act of collecting and releasing 950 pages of internal Google documents that really put him on the map.

      Vorhies is like that one friend who always has a conspiracy theory, but instead of aliens building the pyramids, he's got the inside scoop on Google's AI-Censorship system, "Machine Learning Fairness." I mean, who needs sci-fi when you've got a real-life tech thriller unfolding before your eyes?

      But Zach isn't just about blowing the whistle on Google's shenanigans. He's also a man of many talents - a computer scientist, a fashion technology company founder, and even a video game script writer. Talk about a Renaissance man!

      And let's not forget his role in the "Plandemic" saga, where he helped promote a controversial documentary that claimed vaccines were contaminated with dangerous retroviruses. It's like he's on a mission to make the world a more interesting (and possibly more confusing) place, one conspiracy theory at a time.

      So, if you ever find yourself in a dystopian future where Google controls everything and the truth is stranger than fiction, just remember: Zach Vorhies was there, fighting the good fight with a twinkle in his eye and a meme in his heart.

  • FredFig@awful.systems · 16 points · 5 months ago

    This is quite minor, but it's very funny seeing the intern would-be sneerers still on r/buttcoin fall for the AI grift, to the point that it's part of their modscript copypasta

    Or in the pinned mod comment:

    AI does have some utility and does certain things better than any other technology, such as:

    • The ability to summarize in human readable form, large amounts of information.
    • The ability to generate unique images in a very short period of time, given a verbose description

    tfw you're anti-crypto, but only because it's a bad investing opportunity.

    • skillissuer@discuss.tchncs.de · 11 points · 5 months ago

      i came here from r/buttcoin and lmao

      i mean technically it passes the very low bar of having a single non-criminal use case (mass manufacturing spam and other drivel)

      some are not falling for it at least

    • earthquake@lemm.ee · 10 points · 5 months ago

      Gross, that whole thread is gross. A lot of promptfans in that thread seemingly experiencing pushback for the first time and they are baffled!

    • skillissuer@discuss.tchncs.de · 9 points · 5 months ago

      in that thread: marketing dude who uses chatgpt, never had issues with incorrect results. i mean how would he even catch this, his entire field is uncut bullshit

    • o7___o7@awful.systems · 8 points · 5 months ago

      While it's always correct to laugh at crypto advocates, /r/buttcoin just isn't very edifying lately. There's no depth to the criticism. It comes across as the "anti" version of wall street bets for people who lost their shirts, especially since The Appening, when a lot of subject matter experts left town.

    • Eiim@lemmy.blahaj.zone · 4 points · 5 months ago

      I don't think that comment is unreasonable. LLMs can summarize large-ish amounts of information (as long as it fits in the context window) in a human-readable form, and while it's still prone to getting things wrong and I'd rather a human do it all day, it does do it "better than any other technology" that I know of. We can argue about "unique" but strictly speaking it will almost certainly generate an image that didn't exist before. I'd also rather a human make the image for quality's sake, but being fast, cheap, and copyright-free is a useful enough combo in certain situations.

      It doesn't really bring up the main issues with AI, but I think that's acceptable in the context, which is "How is AI different from crypto in the context of r/Buttcoin", and in that context "crypto is completely useless" and "AI has minimal uses which may or may not be worthwhile depending on how you evaluate the benefits and negatives" are meaningfully different.
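
      (for what it's worth, the "as long as it fits in the context window" caveat is exactly what the tooling around summarization usually has to paper over; a minimal sketch of the usual workaround (chunk, summarize each chunk, then summarize the summaries), where summarize is only a stand-in for whatever model call you'd actually use and the token counting is a crude word-based approximation:)

        def rough_token_count(text):
            # crude heuristic: roughly 1.3 tokens per English word
            return int(len(text.split()) * 1.3)

        def chunk(text, budget):
            # greedily pack words into chunks that stay under the token budget
            words, chunks, current = text.split(), [], []
            for w in words:
                current.append(w)
                if rough_token_count(" ".join(current)) >= budget:
                    chunks.append(" ".join(current))
                    current = []
            if current:
                chunks.append(" ".join(current))
            return chunks

        def summarize(text):
            # stand-in for a real LLM call; here it just truncates
            return text[:200]

        def summarize_long(text, context_window=8000, prompt_overhead=500):
            budget = context_window - prompt_overhead
            if rough_token_count(text) <= budget:
                return summarize(text)
            partial = [summarize(c) for c in chunk(text, budget)]
            return summarize("\n".join(partial))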

      • FredFig@awful.systems · 7 points · 5 months ago

        It's "reasonable" in context, I just thought it's funny that r/buttcoin would be headpatting AI at all, since it's basically the exact same people pushing AI as the people pushing crypto, with the exact same motives.

  • David Gerard@awful.systems (OP, mod) · 15 points · 5 months ago

    Tom Murphy VII's new LLM typesetting system, as submitted to SIGBOVIK. I watched this 20 minute video all through, and given my usual time to kill for video is 30 seconds you should take that as the recommendation it is.

    • zogwarg@awful.systems · 10 points · 5 months ago

      The scales have fallen from my eyes
      How could've blindness struck me so
      LLM's for sure bring more than lies
      They can conjure more than mere woe
      
      All of us now, may we heed the sign
      Of all text that will come to align
      
    • Soyweiser@awful.systems · 10 points · 5 months ago

      Watched this without sound, as I had opened the tab and wasn't paying attention, but did you really just suggest a video where they are drinking out of a clearly empty cup? I closed it in disgust.

      • Architeuthis@awful.systems · 5 points · 5 months ago

        There's a bit in the beginning where he talks about how actors handling and drinking from obviously weightless empty cups ruins suspension of disbelief, so I'm assuming it's a callback.

        • Soyweiser@awful.systems · 8 points · 5 months ago

          Yes, I was making a joke, as somebody who also gets unreasonably annoyed by those things. I felt seen.

        • sc_griffith@awful.systems · 20 points · 5 months ago

          each sneer is drafted and redrafted for at least thirty hours prior to exposure to the internet, but in truth, that's only the last stage of a long process. for example, master sneerers often practice their lip curls for weeks before they even begin looking at yud posts

      • self@awful.systems · 13 points · 5 months ago

        it's only a service account and a couple lines of bash away! but not automating for now makes it easier to evolve these threads naturally as we go, I think, and our posters being willing to help rotate and contribute to these weekly threads is a good sign that the concept's still fun.
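
        (roughly the shape that script would take: a cron job hitting the instance's create-post endpoint with a service-account token. written here in Python rather than bash for legibility; the endpoint and field names are my recollection of Lemmy's /api/v3 API and vary between versions, and the community id, token, and title are placeholders, so treat the whole thing as an unchecked sketch rather than a working cron job:)

          import datetime
          import requests

          INSTANCE = "https://awful.systems"        # assumed instance URL
          COMMUNITY_ID = 0                          # placeholder: numeric id of the community
          TOKEN = "service-account-jwt-goes-here"   # placeholder: service-account login token

          def post_weekly_thread():
              week = datetime.date.today().isoformat()
              payload = {
                  "name": f"Weekly sneer thread, week of {week}",   # post title
                  "body": "Go forth and be mid!",
                  "community_id": COMMUNITY_ID,
              }
              resp = requests.post(
                  f"{INSTANCE}/api/v3/post",
                  json=payload,
                  # newer Lemmy versions take a bearer token; older ones expect an "auth" field in the body
                  headers={"Authorization": f"Bearer {TOKEN}"},
                  timeout=30,
              )
              resp.raise_for_status()

          if __name__ == "__main__":
              post_weekly_thread()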

    • zogwarg@awful.systems · 13 points · 5 months ago

      I hadn't paid enough attention to the actual image found in the Notepad build:

      Original neutral text obscured by the suggestion:

      The Romans invaded Britain as th…

      Godawful anachronistic corporate-speaky insipid suggested replacement, seemingly endorsing the invasion?

      The romans embarked on a strategic invasion of Britain, driven by the ambition to expand their empire and control vital resources. Led by figures like Julius Caesar and Emperor Claudius, this conquest left an indelible mark on history, shaping governance, architecture, and culture in Britain. The Roman presence underscored their relentless pursuit of imperial dominance and resource acquisition.

      The image was presumably not fully approved/meant to be found, but why is it this bad!?

    • zogwarg@awful.systems · 11 points · 5 months ago

      I mean notepad already has autocorrect, isn't it natural to add spicy autocorrect? /s

    • Mii@awful.systems · 10 points · 5 months ago

      Microsoft announced that 2024 will be the era of the AI PC, and unveiled that upcoming Windows PCs would ship with a dedicated Copilot button on the keyboard.

      Tell me they're desperate because not many people use that shit without telling me they're desperate because not many people use that shit.

  • o7___o7@awful.systems · 13 points · 5 months ago

    Off topic:

    Went to Bonnaroo this weekend and didnā€™t have to think about weird racist nerds trying to ruin everything for four whole days.

    It was rad af and full of the kind of positive human interaction these people want to edit out of existence. Highly recommend.

  • V0ldek@awful.systems · 12 points · 5 months ago

    I just passed a bus stop ad (in Germany) of Perplexity AI that said you can ask it about the chances of Germany winning Euro2024.

    So I guess it's now a literal oracle or something?? What happened to the good old "dog picking a food bowl" method of deciding championships.

    • ebu@awful.systems · 8 points · 5 months ago

      Asked to comment, a Meta spokesperson told The Register, "We value input from civil society organizations and academic institutions for the context they provide as we constantly work toward improving our services. Meta's defense filed with the Brazilian Consumer Regulator questioned the use of the NetLab report as legal evidence, since it was produced without giving us prior opportunity to contribute meaningfully, in violation of local legal requirements."

      translation: they knew we would either squash the investigation attempt outright or change their research methodology and results until we looked like the good guys, and that kind of behavior cannot be tolerated