• Melody Fwygon@lemmy.one · 41 points · 1 year ago

    A spokesperson for the supermarket said they were disappointed to see “a small minority have tried to use the tool inappropriately and not for its intended purpose”. In a statement, they said that the supermarket would “keep fine tuning our controls” of the bot to ensure it was safe and useful, and noted that the bot has terms and conditions stating that users should be over 18.

    In a warning notice appended to the meal-planner, it warns that the recipes “are not reviewed by a human being” and that the company does not guarantee “that any recipe will be a complete or balanced meal, or suitable for consumption”.

    “You must use your own judgement before relying on or making any recipe produced by Savey Meal-bot,” it said.

    Just another bit of proof that humans are not ready for AI. This AI needs to be deleted. This is not simply operator error; it is an administrative error, and a failure of common sense on the part of the many, many people involved in creating this tool.

    You cannot always trust that end users will not be silly, malicious, or otherwise entirely predictable in how they use software.

    • 𝙣𝙪𝙠𝙚@yah.lol · 37 points · edited · 1 year ago

      That’s a bit of a dramatic take. The AI makes recipe suggestions based on ingredients the user enters. These users deliberately entered bleach, glue, and other non-food items to generate non-food recipes.

      • chameleon@kbin.social · 33 points · 1 year ago

        If you’re building something to come up with recipes, “is this ingredient likely to be unsuitable for human consumption?” should probably be fairly high on your list of things to check.

        Somehow, every time I see a generic LLM shoved into something that really does not benefit from an LLM, it turns out those kinds of basic safety checks never occurred to the people making it.

        • 𝙣𝙪𝙠𝙚@yah.lol · 4 points · 1 year ago

          Fair point; I agree there should be such a check. It seems that, for now, the only people affected were those who intentionally tried to mess with it. Complete safety will be a hard goal to reach, because what’s fine and healthy for some could trigger a deadly allergic reaction in others. There will always have to be some personal accountability on the part of the person preparing a meal to understand that what they’re making is safe.

          • DeltaTangoLima@reddrefuge.com · 7 points · 1 year ago

            They’re a supermarket, and they own the data for the items they stock. There’s no reason they couldn’t have used their own product taxonomy to block non-food items from ever reaching their poorly implemented AI.

            Love how they blame the people who tried it, as if it’s their fault the AI was released for public use without any thought for the consequences. Typical corporate blame-shifting.
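
The taxonomy screen suggested above can be sketched in a few lines. Everything here is hypothetical for illustration: the category names, the tiny sample catalogue, and the `screen_ingredients` helper are stand-ins, not the supermarket's actual data model.

```python
# Hypothetical sketch: screen free-text ingredients against a retailer's
# own product taxonomy before they ever reach the recipe bot.

EDIBLE_CATEGORIES = {"produce", "meat", "dairy", "bakery", "pantry"}

# Stand-in for the retailer's real product database (item -> category).
CATALOGUE = {
    "chicken breast": "meat",
    "rice": "pantry",
    "bleach": "cleaning",      # stocked, but not food
    "craft glue": "stationery",
}

def screen_ingredients(ingredients):
    """Split user input into accepted food items and rejected items."""
    accepted, rejected = [], []
    for item in ingredients:
        category = CATALOGUE.get(item.strip().lower())
        if category in EDIBLE_CATEGORIES:
            accepted.append(item)
        else:
            # Unknown or non-food items are never passed to the bot.
            rejected.append(item)
    return accepted, rejected

accepted, rejected = screen_ingredients(["Chicken breast", "bleach", "rice"])
```

Rejecting anything not found in an edible category (rather than maintaining a blocklist of dangerous items) fails safe by default.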

      • Otter@lemmy.ca · 2 points · 1 year ago

        Would it be better to have a massive list of food items to pick from?

        That should take care of bad inputs, somewhat.

      • qyron@lemmy.pt · 17 points · 1 year ago

        Are we doing this shit here as well?

        Your reply adds zero value to the thread.

        If you want to make a point, try full paragraphs to express arguments.

        • money_loo@1337lemmy.com · 3 points · 1 year ago

          I really didn’t want to, but their comment just reeks of it, my guy.

          Unless by “doing this shit here as well”, you’re referring to the act of not reading the article, jumping to conclusions, and spreading fear and disinformation.

          • qyron@lemmy.pt · 6 points · 1 year ago

            I really didn’t want to, but their comment just reeks of it, my guy.

            Except that you did want to. Otherwise, you wouldn’t have done it.

            Unless by “doing this shit here as well”, you’re referring to the act of not reading the article, jumping to conclusions, and spreading fear and disinformation.

            In order to be as fair as possible, I went back and read the comment again.

            Is it inflammatory and excessive, projecting distrust towards a new technology? It can be read as such. Yet, to a degree, I respect and understand that opinion.

            Blurting out “okay boomer” doesn’t dismantle that comment; it’s a personal attack.

            Either add to the conversation or just keep your peace. It makes the world a better place.