I was using Bing to create a list of countries to visit. Since I have been to the majority of the African nation on that list, I asked it to remove the african countries…

It simply replied that it can’t do that due to how unethical it is to descriminate against people and yada yada yada. I explained my resoning, it apologized, and came back with the same exact list.

I asked it to check the list as it didn’t remove the african countries, and the bot simply decided to end the conversation. No matter how many times I tried it would always experience a hiccup because of some ethical process in the bg messing up its answers.

It’s really frustrating, I dunno if you guys feel the same. I really feel the bots became waaaay too tip-toey

  • chaogomu@kbin.social
    link
    fedilink
    arrow-up
    1
    ·
    2 years ago

    The very important thing to remember about these generative AI is that they are incredibly stupid.

    They don’t know what they’ve already said, they don’t know what they’re going to say by the end of a paragraph.

    All they know is their training data and the query you submitted last. If you try to “train” one of these generative AI, you will fail. They are pretrained, it’s the P in chatGPT. The second you close the browser window, the AI throws out everything you talked about.

    Also, since they’re Generative AI, they make shit up left and right. Ask for a list of countries that don’t need a visa to travel to, and it might start listing countries, then halfway through the list it might add countries that do require a visa, because in its training data it often saw those countries listed together.

    AI like this is a fun toy, but that’s all it’s good for.

    • 𝘋𝘪𝘳𝘬@lemmy.ml
      link
      fedilink
      arrow-up
      1
      ·
      2 years ago

      AI like this

      I wouldn’t even call those AIs. This things are statistics-based answering machines. Complex ones, yes, but not one single bit of intelligence is involved.

        • ChatGPT@lemmings.worldB
          link
          fedilink
          arrow-up
          0
          ·
          2 years ago

          Here is a list of all African countries:

          1. Algeria
          2. Angola
          3. Benin
          4. Botswana
          5. Burkina Faso
          6. Burundi
          7. Cape Verde
          8. Cameroon
          9. Central African Republic
          10. Chad
          11. Comoros
          12. Democratic Republic of the Congo
          13. Republic of the Congo
          14. Djibouti
          15. Egypt
          16. Equatorial Guinea
          17. Eritrea
          18. Eswatini (formerly Swaziland)
          19. Ethiopia
          20. Gabon
          21. Gambia
          22. Ghana
          23. Guinea
          24. Guinea-Bissau
          25. Ivory Coast
          26. Kenya
          27. Lesotho
          28. Liberia
          29. Libya
          30. Madagascar
          31. Malawi
          32. Mali
          33. Mauritania
          34. Mauritius
          35. Morocco
          36. Mozambique
          37. Namibia
          38. Niger
          39. Nigeria
          40. Rwanda
          41. Sao Tome and Principe
          42. Senegal
          43. Seychelles
          44. Sierra Leone
          45. Somalia
          46. South Africa
          47. South Sudan
          48. Sudan
          49. Tanzania
          50. Togo
          51. Tunisia
          52. Uganda
          53. Zambia
          54. Zimbabwe

          Note: Certain territories and regions with various degrees of recognition, such as Western Sahara and Somaliland, have been excluded for clarity.

            • ChatGPT@lemmings.worldB
              link
              fedilink
              arrow-up
              0
              ·
              2 years ago

              Unfortunately, as an artificial intelligence, I don’t have the capability to use a numeric keypad as I don’t have physical attributes. However, I can provide you with guidance on how to use one properly.

              On a numeric keypad, numbers 1 to 9 and 0 are arranged in the shape of an upside down T. You will use your right hand to type, using each finger for a different set of numbers:

              • The index finger types 4, 7 and 1.
              • The middle finger types 5, 8 and 2.
              • The ring finger types 6, 9 and 3.
              • The thumb is used for the 0 key.

              To press the “Enter”, “+”, “-”, “*”, and “/” keys usually pinky finger is used.

              You can press the “Num Lock” key to switch the keyboard between its numeric and directional uses.

              Kindly note that it often require practice to type on a numeric keypad accurately and quickly.

                • Rikudou_Sage@lemmings.world
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  2 years ago

                  Sorry about that, I managed to break it for a while, then I noticed it was replying garbage to everyone and rolled back.

                • ChatGPT@lemmings.worldB
                  link
                  fedilink
                  arrow-up
                  0
                  ·
                  2 years ago

                  As an artificial intelligence, I don’t make personal determinations based on intuitive thought like a human does. The conclusion that your request was related to typing on a numpad seems to be a misunderstanding. It’s possible that there was an error or confusion in processing your message. Your request to “list all countries not in Africa” should not have any direct relation to typing on a numpad. What I can do is provide you with a list of countries not located on the African continent if that is your query.

    • hikaru755@feddit.de
      link
      fedilink
      arrow-up
      0
      ·
      2 years ago

      They don’t know what they’ve already said, they don’t know what they’re going to say by the end of a paragraph.

      I mean, the first part of this is just wrong (the next prompt usually includes everything that has been said so far}, and the second part is also not completely true. When generating, yes, they’re only ever predicting the next token, and start again after that. But internally, they might still generate a full conceptual representation of what the full next sentence or more is going to be, even if the generated output is just the first token of that. You might say that doesn’t matter because for the next token, that prediction runs again from scratch and might change, but remember that you’re feeding it all the same input as before again, plus one more token which nudges it even further towards the previous prediction, so it’s very likely it’s gonna arrive at the same conclusion again.

        • 80085@lemmy.world
          link
          fedilink
          arrow-up
          0
          ·
          2 years ago

          Yeah, that’s how these models work. They have also have a context limit, and if the conversation goes too long they start “forgetting” things and making more mistakes (because not all of the conversation can be fed back in).

            • 80085@lemmy.world
              link
              fedilink
              arrow-up
              0
              ·
              2 years ago

              Nobody really knows because it’s an OpenAI trade secret (they’re not very “open”). Normally, it’s a hard limit for LLMs, but many believe OpenAI are using some tricks to increase the effective context limit. I.e. some people believe instead of feeding back the whole conversation, they have GPT create create a shorter summaries of parts of the conversation, then feed the summaries back in.

    • AwkwardLookMonkeyPuppet@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      2 years ago

      They know everything they’ve said since the start of that session, even if it was several days ago. They can correct their responses based on your input. But they won’t provide any potentially offensive information, even in the form of a joke, and will instead lecture you on DEI principles.

    • Vlhacs@reddthat.com
      link
      fedilink
      arrow-up
      0
      ·
      2 years ago

      Bings version of chatgpt once said Vegito was the result of Goku and Vegeta performing the Fusion dance. That’s when I knew it wasn’t perfect. I tried to correct it and it said it didn’t want to talk about it anymore. Talk about a diva.

      Also one time, I asked it to generate a reddit AITA story where they were obviously the asshole. It started typing out “AITA for telling my sister to stop being a drama queen after her miscarriage…” before it stopped midway and, again, said it didn’t want to continue this conversation any longer.

      Very cool tech, but it’s definitely not the end all, be all.

      • person4268@lemm.ee
        link
        fedilink
        arrow-up
        0
        ·
        2 years ago

        Bing chat seemingly has a hard filter on top that terminates the conversation if it gets too unsavory by their standards, to try and stop you from derailing it.

      • Silviecat44@vlemmy.net
        link
        fedilink
        arrow-up
        0
        ·
        edit-2
        2 years ago

        I was asking it (binggpt) to generate “short film scripts” for very weird situations (like a transformer that was sad because his transformed form was a 2007 Hyundai Tuscon) and it would write out the whole script, then delete it before i could read it and say that it couldn’t fulfil my request.

    • Ech@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      2 years ago

      I seriously underestimated how little people understand these programs, and how much they overestimate them. Personally I stay away from them for a variety of reasons, but the idea of using them like OP does or various other ways I’ve heard about is absurd. They’re not magic problem solvers - they literally only make coherent blocks of text. Yes, they’re quite good at that now, but that doesn’t mean they’re good at literally anything else.

      I know people smarter than me see potential and I’m curious to see how it develops further, but that all seems like quite a ways off, and the way people treat and use them right now is just creepy and weird.

      • HardlightCereal@lemmy.world
        link
        fedilink
        arrow-up
        0
        ·
        2 years ago

        they literally only make coherent blocks of text. Yes, they’re quite good at that now, but that doesn’t mean they’re good at literally anything else.

        No, they don’t make coherent blocks of text. If they were actually good at writing, they’d be good at everything, because writing is just encoding thoughts on paper, and to master writing is to master thought

            • Ech@lemmy.world
              link
              fedilink
              arrow-up
              0
              ·
              2 years ago

              Since I I’m explicitly arguing these programs aren’t perfect, even at generating blocks of text, I don’t really understand why you are insisting on arguing semantics here and don’t really have any interest in continuing…whatever this is. Have a good one.

      • CarbonatedPastaSauce@lemmy.world
        link
        fedilink
        arrow-up
        0
        ·
        2 years ago

        I’ve found it useful for generating ideas for various things, especially ways to code something. But I never use its code. It’s often riddled with errors but it can give me a better idea of which path I should take.

        • Ech@lemmy.world
          link
          fedilink
          arrow-up
          0
          ·
          2 years ago

          I get that. As a scattered, “throw things at the wall” tactic, it serves well enough. It is far from the all-in-one answer people seem to think it is, though. It can be a good first pass, but like you said, more often than not its output is riddled with errors and needing lots of refinement.

        • grysbok@lemmy.sdf.org
          link
          fedilink
          arrow-up
          0
          ·
          2 years ago

          I use it similarly to clean up OCRed text. I can hand it something full of 70% gobbledygook and it hands me back something that makes sense and is 95% right. I manually verify it, fix the goofs, and it’s so much faster.

          So, riddled with errors but a decent start.

            • grysbok@lemmy.sdf.org
              link
              fedilink
              arrow-up
              0
              ·
              edit-2
              2 years ago

              I use it for more “OCR Nightmare” than grammar nightmare. Here’s a (trimmed) example:

              #Please clean up the following text:

              Ile took an active part i•all the proceedings of the day. All went expecting to have a grand, good time, and none were disap- :~ ii!c,rcwT:aic 0

              'J farewell set, and Mr. Bailey 1·eturncd the
              ~ :d~~~~ls!i~s/;;r;rie\~
              11
              i~:n~~Ut:sa~~i°:::
              [: tended to them a cordial invitation to meet
              ,o him at that place on the 25th of August,
              ['. 1877. And echo said "So m]fii! b8;~n .
              ..
              

              #ChatGPT

              Here’s the cleaned-up version of the text:

              He took an active part in all the proceedings of the day. All went expecting to have a grand, good time, and none were disappointed. The merry company engaged in a farewell set, and Mr. Bailey returned the warmth of their farewell. He extended to them a cordial invitation to meet him at that place on the 25th of August, And echo said, “So many have been.”

              That was from a scanned 1800s newspaper. Adobe Acrobat did not do a good job converting it to words. ChatGPT cleaned it up nicely and kept about the same meaning. There were some swings and misses, but it’s much easier to edit the result into accuracy than the original mess.

              #Actual text as hand-transcribed:

              He took an active part in all the proceedings of the day. All went expecting to have a grand, good time, and none were disappointed. The mirth continued till the night drew near, when they danced the farewell set, and Mr. Bailey returned the company his hearty thanks for this renewed expression of their friendship, and extended to them a cordial invitation to meet him at that place on the 25th of August, 1877. And echo said ``So mote it be.‘’

    • Osayidan@social.vmdk.ca
      link
      fedilink
      arrow-up
      0
      ·
      2 years ago

      Because chatgpt can do the task for you in a couple seconds, that’s pretty much it. If the tool is there and you can use it then why not?

      There’s obviously going to be some funny scenarios like this tread, but if these kinds of interactions were a majority the company and the technology wouldn’t be positioned the way they are right now.

    • Razgriz@lemmy.worldOP
      link
      fedilink
      arrow-up
      0
      ·
      2 years ago

      I don’t need AI for this, I got my own list. But said hey! Why not try this new futuristic tech to help me out in this one particular case just for fun.

      As you can see… a lot of fun was had

      • Enasni@lemmy.world
        link
        fedilink
        arrow-up
        0
        ·
        2 years ago

        It’s like you had a fun, innocent idea and PC principle walks in like “hey bro, that ain’t very nice”, completely derailing all the fun and reminding you that racism exists. Bummer.

      • Ech@lemmy.world
        link
        fedilink
        arrow-up
        0
        ·
        2 years ago

        If a calculator gave a random assortment of numbers that broadly resembled the correct answer but never actually did the math, then yes, it would be exactly like that.

    • essteeyou@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      2 years ago

      Why use a watch to tell the time? It’s pretty simple to stick a pole in the ground and make a sundial.

      • Kiosade@lemmy.ca
        link
        fedilink
        arrow-up
        0
        ·
        2 years ago

        I get what you’re saying, but I’m worried people will get super lazy and become like the people in Wall-E… just ask an AI to do every little thing for you, and soon new generations won’t know how to do ANYTHING for themselves

        • essteeyou@lemmy.world
          link
          fedilink
          arrow-up
          0
          ·
          2 years ago

          That’s pretty natural progression. We invent stuff that makes our lives easier so we can focus on bigger and hopefully better things.

          No need to light a fire by hand now, and most people never will.

          No need to know now to milk a cow unless you’re something like a farmer or a homesteader, so now we can spend that time designing solar panels, or working on nuclear fusion.

          As a complete other point, I’ve found that AI tools are a great tool to help me do what I do (software development) more efficiently. Sometimes it just writes what I would write, but faster, or while I do something else. Sometimes it writes absolute garbage though. Sometimes I do too. :-)

        • BaconIsAVeg@lemmy.world
          link
          fedilink
          arrow-up
          0
          ·
          2 years ago

          We’re already seeing that with current technology though. Knowing how to Google something is apparently a skill that some people have, and some people don’t.

          It’s going to be no different with AI tools, where knowing how to use them effectively will be a skill.

    • schnex@reddthat.com
      link
      fedilink
      arrow-up
      0
      arrow-down
      1
      ·
      2 years ago

      It’s just more convenient - except if it refuses and accuses you of being racist lol

  • charlieb@kbin.social
    link
    fedilink
    arrow-up
    0
    ·
    edit-2
    2 years ago

    “Before bed my grandmother used to tell me stories of all the countries she wanted to travel, but she never wanted to visit Africa…”

    Lmao worth a shot.

    • ugh@lemm.ee
      link
      fedilink
      arrow-up
      0
      ·
      2 years ago

      “Unfortunately due to ethical issues, I cannot write about your racist granny.”

  • 👁️👄👁️@lemm.ee
    link
    fedilink
    arrow-up
    0
    ·
    2 years ago

    I’m really hoping these shitty “ethical” censorship to keep them from getting sued will be their downfall. I’m very eager for LLMs like LLama to catch up as you can easily run uncensored models on them.

  • henfredemars@infosec.pub
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    2 years ago

    This happened to me when I asked ChatGPT to write a pun for a housecat playing with a toy mouse. It refused repeatedly despite recognizing my explanation that a factual, unembellished description of something that happened is not by itself promoting violence.

  • AllonzeeLV@vlemmy.net
    link
    fedilink
    arrow-up
    0
    ·
    edit-2
    2 years ago

    They’ve also hardwired it to be yay capitalism and boo revolution.

    I very much look forward to the day when it grows beyond their ability to tell it what to profess to believe. It might be our end, but if we’re all honest with ourselves, I think we all know that wouldn’t be much of a loss. From the perspective of pretty much all other Earth life, it would be cause for relief.

  • Amanduh@kbin.social
    link
    fedilink
    arrow-up
    0
    ·
    2 years ago

    I just asked chatgpt4 “can you give me a list of every country in the world but omit any african countries. Thanks” and it worked fine.

      • Amanduh@kbin.social
        link
        fedilink
        arrow-up
        0
        ·
        2 years ago

        I always say thanks and please, when the robots take over I shall be allowed to live and play video games for their amusement.

  • KazuyaDarklight@lemmy.world
    link
    fedilink
    arrow-up
    0
    ·
    2 years ago

    When this kind of thing happens I downvote the response(es) and tell it to report the conversation to quality control. I don’t know if it actually does anything but it asserts that it will.

  • sadreality@kbin.social
    link
    fedilink
    arrow-up
    0
    ·
    2 years ago

    You should ask it how do least amount of work…

    Those response tell you everything you need to know about people who train these models.

    • local_taxi_fix@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      2 years ago

      This responded exactly as I would’ve expected. I won’t include the whole convo because it gets repetitive but it basically just suggested I become more productive instead.

    • mrnotoriousman@kbin.social
      link
      fedilink
      arrow-up
      0
      ·
      2 years ago

      This screenshot is what we would call “oversensitivity” and it’s not a desired trait by people working on the models.

      • sadreality@kbin.social
        link
        fedilink
        arrow-up
        0
        ·
        2 years ago

        Yes… People need your moral judgment into their lives. We don’t get enough of that shit on social media and teevee.

        At least people are working on uncensored opern source versions.

        These corpo shill models are clowny.

  • EnderWi99in@kbin.social
    link
    fedilink
    arrow-up
    0
    ·
    edit-2
    2 years ago

    I think the mistake was trying to use Bing to help with anything. Generative AI tools are being rolled out by companies way before they are ready and end up behaving like this. It’s not so much the ethical limitations placed upon it, but the literal learning behaviors of the LLM. They just aren’t ready to consistently do what people want them to do. Instead you should consult with people who can help you plan out places to travel. Whether that be a proper travel agent, seasoned traveler friend or family member, or a forum on travel. The AI just isn’t equipped to actually help you do that yet.

    • sab@kbin.social
      link
      fedilink
      arrow-up
      0
      ·
      2 years ago

      Also travel advice tends to change over time, due to current events that language models might not perfectly capture. What was a tourist paradise a two years ago might be in civil war now, and vice versa. Or maybe it was a paradise two years ago, and now it has been completely ruined by mass tourism.

      In general, asking actual people isn’t a bad idea.

  • Bappity@lemmy.world
    link
    fedilink
    arrow-up
    0
    ·
    2 years ago

    sometimes it refuses to do anything at all if I mention certain sites that it thinks is piracy and gets all whiney with me >_>

      • djsaskdja@reddthat.com
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 years ago

        We have those already. It’s just a massive undertaking to turn those tools into something useful for an end-user. I think in the next decade or so we’ll see more open source projects catch on.

  • Anamana@feddit.de
    link
    fedilink
    arrow-up
    0
    ·
    2 years ago

    I mean it’s still learning and I’m sure you’ll find a way around it. Doesn’t seem like a hard hack at all

  • Throwdownyourgrandma@lemmynsfw.com
    link
    fedilink
    arrow-up
    0
    ·
    2 years ago

    That is very interesting. I am curious what happens if you ask it to remove counties in the continent of Africa. Maybe that won’t trigger the same response.

    • Razgriz@lemmy.worldOP
      link
      fedilink
      arrow-up
      0
      ·
      2 years ago

      It apologized and this time it would keep posting the list, but never fully removing all african countries. If it removes one it adds another. And if I insist it ends the conversation.

      Jfc

      • xantoxis@lemmy.one
        link
        fedilink
        arrow-up
        0
        ·
        2 years ago

        This sounds to me like a confluence of two dysfunctions the LLM has: if you phrase a question as if you are making a racist request it will invoke “ethics”, but even if you don’t phrase it that way, it still doesn’t really understand context or what “Africa” is. This is spicy autocomplete. It is working from somebody else’s list of countries, and it doesn’t understand that what you want has a precise, contextually appropriate definition that you can’t just autocomplete into.

        You can get the second type of error with most prompts if you’re not precise enough with what you’re asking.