• Contentedness@lemmy.nz · 3 months ago

    ChatGPT didn’t nearly destroy her wedding, her lousy wedding planner did. Also what’s she got against capital letters?

    • bitofhope@awful.systems · 3 months ago

      Yea yea guns don’t kill people, bullet impacts kill people. Dishonesty and incompetence are nothing new, but you may note that the wedding planner’s unfounded confidence in ChatGPT exacerbated the problem in a novel way. Why did the planner trust the bogus information about Vegas wedding officiants? Is someone maybe presenting these LLM bots as an appropriate tool for looking up such information?

  • blakestacey@awful.systems · 3 months ago

    “Comment whose upvotes all come from programming dot justworks dot dev dot infosec dot works” sure has become a genre of comment.

  • DannyBoy@sh.itjust.works · 3 months ago

    I can make a safe assumption before reading the article that ChatGPT didn’t ruin the wedding, but rather that somebody who was using ChatGPT ruined the wedding.

    • ebu@awful.systems · 3 months ago

      “blame the person, not the tools” doesn’t work when the tool’s marketing team is explicitly touting said tool as a panacea for all problems. on the micro scale, sure, the wedding planner is at fault, but if you zoom out even a tiny bit it’s pretty obvious what enabled them to fuck up for as long and as hard as they did

      • self@awful.systems · 3 months ago

        do you think they ever got round to reading the article, or were they spent after coming up with “hmmmm I bet chatgpt didn’t somehow prompt itself” as if that were a mystery that needed solving

      • null@slrpnk.net · 2 months ago

        “This hammer can’t plan a wedding. Hammers are useless.”

        • self@awful.systems · 2 months ago

          almost all of your posts are exactly this worthless and exhausting and that’s fucking incredible

          • LainTrain@lemmy.dbzer0.com · 2 months ago

            I get the feeling you’re exactly the kind of person who shouldn’t have a proompt, much less a hammer

            • self@awful.systems · 2 months ago

              no absolutely, I shouldn’t ever “have a proompt”, whatever the fuck that means

              the promptfondlers really aren’t alright now that public opinion’s against the horseshit tech they love

              • froztbyte@awful.systems · 2 months ago

                istg these people seem to roll “b-b-b-but <saltman|musk|sundar|…> gifted this technology to me personally, how could I possibly look this gift horse in the mouth” on the inside of their heads

  • o7___o7@awful.systems · 3 months ago

    As a fellow Interesting Wedding Haver, I have to give all the credit in the world to the author for handling this with grace instead of, say, becoming a terrorist. I would have been proud to own the “Tracy did nothing wrong” t-shirt.

      • ugo@feddit.it · 3 months ago

        But the article author wasn’t interfacing with ChatGPT, she was interfacing with a human paid to help with the things she did not know. The wedding planner was the supposed expert in this interaction, but instead simply sent back regurgitated ChatGPT slop.

        Is this the fault of the wedding planner? Yes. Is it the fault of chatgpt? Also yes.

        • conciselyverbose@sh.itjust.works · 3 months ago

          Scams are LLMs’ best use case.

          They’re not capable of actual intelligence or providing anything that would remotely mislead a subject matter expert. You’re not going to convince a skilled software developer that your LLM slop is competent code.

          But they’re damn good at looking the part to convince people who don’t know the subject that they’re real.

      • Pandemanium@lemm.ee · 3 months ago

        I think we should require professionals to disclose whether or not they use AI.

        Imagine you’re an author and you pay an editor $3000 and all they do is run your manuscript through ChatGPT. One, they didn’t provide any value, because you could have done the same thing for free; and two, if they didn’t disclose the use of AI, you wouldn’t even know your novel had been fed into one and might be used by the AI for training.

        • bitofhope@awful.systems · 3 months ago

          I think we should require professionals not to use the thing currently termed AI.

          Or, if you think it’s unreasonable to ask them not to contribute to a frivolous and destructive fad, or don’t think the environmental or social impacts are bad enough to implement a ban like this, at least maybe we should require professionals not to use LLMs for technical information.