• jsomae@lemmy.ml
    link
    fedilink
    arrow-up
    18
    ·
    6 hours ago

ChatGPT is a tool. Use it for tasks where the cost of verifying that the output is correct is less than the cost of doing the task by hand.

    • qarbone@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      ·
      5 hours ago

      Honestly, I’ve found it best for quickly reformatting text and other content. It should live and die as a clerical tool.

    • tacobellhop
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      5 hours ago

You're still doing it by hand to verify in any scientific capacity. I only use ChatGPT for philosophical hypotheticals involving the far future. We're both wrong, but it's fun for the back and forth.

      • jsomae@lemmy.ml
        link
        fedilink
        arrow-up
        3
        ·
        edit-2
        4 hours ago

        It is not true in general that verifying output for a science-related prompt requires doing it by hand, where “doing it by hand” means putting in the effort to answer the prompt manually without using AI.

  • PartiallyApplied@lemmy.world
    link
    fedilink
    arrow-up
    13
    ·
    edit-2
    4 hours ago

    I feel this hard with the New York Times.

    99% of the time, I feel like it covers subjects adequately. It might be a bit further right than me, but for a general US source, I feel it’s rather representative.

Then they write a story about something happening to low-income people in the US, and it's just social and logical salad. From their reporting, it appears as though they analytically look at data instead of talking to people. Statisticians will tell you, and this is subtle: conclusions made at one level of detail cannot be generalized to another level of detail. Looking at data without talking with people is fallacious for social issues. The NYT needs to understand this, but meanwhile they are horrifically insensitive, bordering on destructive at times.

    “The jackboot only jumps down on people standing up”

    • Hozier, “Jackboot Jump”

    Then I read the next story and I take it as credible without much critical thought or evidence. Bias is strange.

      • PartiallyApplied@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        5 hours ago

        “Wet sidewalks cause rain”

Pretty much. I never really thought about the causal link being entirely reversed, more that the chain of reasoning was broken or mediated by some factor they missed. That definitely happens, but now I can think of instances where it's totally flipped.

        Very interesting read, thanks for sharing!

    • CheeseToastie@lazysoci.al
      link
      fedilink
      English
      arrow-up
      2
      ·
      5 hours ago

Can you give me an example of how conclusions at one level of detail can't be generalised to another level? I can't quite understand it.

      • PartiallyApplied@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        edit-2
        4 hours ago

Perhaps the textbook example is Simpson's Paradox.

This article goes through a couple of cases where conclusions naively appear statistically supported, but when you correctly separate the data, those conclusions reverse themselves.

Another relevant issue is Aggregation Bias. This article has an example where conclusions about a population are the inverse of what holds for the individuals in that population.

And the last one I can think of is the MAUP (Modifiable Areal Unit Problem), which deals with the fact that statistics are very sensitive to whatever process is used to divvy up a space. This is commonly referenced in spatial statistics but has broader implications, I believe.
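Simpson's Paradox is easy to see with a few lines of code. A minimal sketch, using the numbers from the widely cited kidney-stone treatment example: treatment A wins inside every subgroup, yet treatment B looks better once the data are pooled.

```python
# Simpson's Paradox sketch: subgroup conclusions reverse when pooled.
# Numbers follow the commonly cited kidney-stone treatment dataset.

groups = {
    # group: (A successes, A trials, B successes, B trials)
    "small stones": (81, 87, 234, 270),
    "large stones": (192, 263, 55, 80),
}

def rate(successes, trials):
    return successes / trials

# Per-subgroup comparison: treatment A wins in both groups.
for name, (sa, ta, sb, tb) in groups.items():
    print(f"{name}: A={rate(sa, ta):.1%}  B={rate(sb, tb):.1%}")
# small stones: A=93.1%  B=86.7%
# large stones: A=73.0%  B=68.8%

# Pooled comparison: treatment B now looks better.
sa = sum(g[0] for g in groups.values())
ta = sum(g[1] for g in groups.values())
sb = sum(g[2] for g in groups.values())
tb = sum(g[3] for g in groups.values())
print(f"pooled: A={rate(sa, ta):.1%}  B={rate(sb, tb):.1%}")
# pooled: A=78.0%  B=82.6%
```

The reversal happens because severity of the case confounds the comparison: A was given the harder cases more often, which drags its pooled rate down even though it wins within each subgroup.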


        This is not to say that you can never generalize, and indeed, often a big goal of statistics is to answer questions about populations using only information from a subset of individuals in that population.

All models are wrong, but some are useful.

        • George Box

The argument I was making is that the NYT will authoritatively draw conclusions while looking only at the population level, without taking the individual into account, and not only is that oftentimes dubious, sometimes it's actively detrimental. They don't seem to me to have done their due diligence in mitigating the risk that comes with such dubious assumptions; hence the cynic in me left that Hozier quote.

  • DicJacobus@lemmy.world
    link
    fedilink
    English
    arrow-up
    5
    ·
    6 hours ago

I have frequently seen GPT give a wrong answer to a question, get told that it's incorrect, and the bot fights with me and insists I'm wrong. And on other, less serious matters I've seen it immediately fold and take any answer I give it as "correct".

  • Kane@femboys.biz
    link
    fedilink
    arrow-up
    2
    ·
    6 hours ago

    Exactly this is why I have a love/hate relationship with just about any LLM.

    I love it most for generating code samples (small enough that I can manually check them, not entire files/projects) and re-writing existing text, again small enough to verify everything. Common theme being that I have to re-read its output a few times, to make 100% sure it hasn’t made some random mistake.

I'm not entirely sure we're going to resolve this without additional technology outside of the LLM itself.

  • Alloi@lemmy.world
    link
    fedilink
    arrow-up
    3
    ·
    7 hours ago

I mainly use it for fact-checking sources from the internet and looking for bias. I double-check everything, of course. Beyond that, it's good for rule checking in MTG Commander games, and for deck building. I mainly use it for its search function.

  • foxlore@programming.dev
    link
    fedilink
    English
    arrow-up
    22
    ·
    11 hours ago

Talking with an AI model is like talking with that one friend who is always high and thinks they know everything. But they have a wide enough set of interests that they can actually piece together an idea, most of the time wrong, about any subject.

  • lowside@lemmy.world
    link
    fedilink
    arrow-up
    7
    ·
    10 hours ago

One thing I have found it to be useful for is changing the tone of what I write.

I tend to write very clinically because my job involves a lot of that style of writing. I have started asking ChatGPT to rephrase what I write in a softer tone.

Not for everything, but for example when I'm texting my girlfriend who is feeling insecure. It has helped me a lot! I always read through it to make sure it did not change the meaning or add anything, but so far it has been pretty good at changing the tone.

I also use it to rephrase emails at work to make them sound more professional.

    • taxiiiii@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      6 hours ago

      I do that in reverse, lol. Except I’m also not a native speaker. “Rephrase this, it should sound more scientific”.

  • aceshigh@lemmy.world
    link
    fedilink
    English
    arrow-up
    10
    arrow-down
    1
    ·
    17 hours ago

    I use chatgpt as a suggestion. Like an aid to whatever it is that I’m doing. It either helps me or it doesn’t, but I always have my critical thinking hat on.

    • BlackPenguins@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      5 hours ago

Same. It's an idea generator. I asked what kind of pie I should make, saw one I liked, and then googled a real recipe.

I needed a SQL query for work. It gave me different methods of optimization. I then googled those methods, implemented them, and tested the query.

  • RabbitBBQ@lemmy.world
    link
    fedilink
    arrow-up
    35
    arrow-down
    3
    ·
    22 hours ago

    If the standard is replicating human level intelligence and behavior, making up shit just to get you to go away about 40% of the time kind of checks out. In fact, I bet it hallucinates less and is wrong less often than most people you work with

    • Devanismyname@lemmy.ca
      link
      fedilink
      English
      arrow-up
      11
      arrow-down
      1
      ·
      21 hours ago

      And it just keeps improving over time. People shit all over ai to make themselves feel better because scary shit is happening.

  • SirSamuel@lemmy.world
    link
    fedilink
    arrow-up
    82
    arrow-down
    4
    ·
    1 day ago

    First off, the beauty of these two posts being beside each other is palpable.

Second, as you can see in the picture, it's more like 60%.

    • morrowind@lemmy.ml
      link
      fedilink
      arrow-up
      25
      ·
      24 hours ago

      No it’s not. If you actually read the study, it’s about AI search engines correctly finding and citing the source of a given quote, not general correctness, and not just the plain model

      • SirSamuel@lemmy.world
        link
        fedilink
        arrow-up
        28
        ·
        23 hours ago

        Read the study? Why would i do that when there’s an infographic right there?

        (thank you for the clarification, i actually appreciate it)

  • snooggums@lemmy.world
    link
    fedilink
    English
    arrow-up
    158
    arrow-down
    1
    ·
    1 day ago

    I love that this mirrors the experience of experts on social media like reddit, which was used for training chatgpt…

      • jjjalljs@ttrpg.network
        link
        fedilink
        arrow-up
        9
        ·
        22 hours ago

I was going to post this, too.

        The Gell-Mann amnesia effect is a cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.

    • PM_Your_Nudes_Please@lemmy.world
      link
      fedilink
      arrow-up
      44
      ·
      edit-2
      1 day ago

      Also common in news. There’s an old saying along the lines of “everyone trusts the news until they talk about your job.” Basically, the news is focused on getting info out quickly. Every station is rushing to be the first to break a story. So the people writing the teleprompter usually only have a few minutes (at best) to research anything before it goes live in front of the anchor. This means that you’re only ever going to get the most surface level info, even when the talking heads claim to be doing deep dives on a topic. It also means they’re going to be misleading or blatantly wrong a lot of the time, because they’re basically just parroting the top google result regardless of accuracy.

      • ChickenLadyLovesLife@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        ·
        1 day ago

        One of my academic areas of expertise way back in the day (late '80s and early '90s) were the so-called “Mitochondrial Eve” and “Out of Africa” hypotheses. The absolute mangling of this shit by journalists even at the time was migraine-inducing and it’s gotten much worse in the decades since then. It hasn’t helped that subsequent generations of scholars have mangled the whole deal even worse. The only advice I can offer people is that if the article (scholastic or popular) contains the word “Neanderthal” anywhere, just toss it.

          • ChickenLadyLovesLife@lemmy.world
            link
            fedilink
            English
            arrow-up
            6
            ·
            23 hours ago

            Are you saying neanderthal didn’t exist, or was just homo sapiens? Or did you mean in the context of mitochondrial Eve?

            All of these things, actually. The measured, physiological differences between “homo sapiens” and “neanderthal” (the air quotes here meaning “so-called”) fossils are much smaller than the differences found among contemporary humans, so the premise that “neanderthals” represent(ed) a separate species - in the sense of a reproductively isolated gene pool since gone extinct - is unsupported by fossil evidence. Of course nobody actually makes that claim anymore, since it’s now commonly reported that contemporary humans possess x% of neanderthal DNA (and thus cannot be said to be “extinct”). Of course nobody originally (when Mitochondrial Eve was first mooted) made any claims whatsoever about neanderthals: the term “neanderthal” was imported into the debate over the age and location of the last common mtDNA ancestor years later, after it was noticed that the age estimates of neanderthal remains happened to roughly match the age estimates of the genetic last common ancestor. And this was also after the term “neanderthal” had previously gone into the same general category in Anthropology as “Piltdown Man”.

Most ironically, articles on the subject today now claim a correspondence between the fossil and genetic evidence, despite the fact that the very first articles (out of Allan Wilson’s lab and published in Nature and Science in the mid-1980s) drew their entire impact and notoriety from the fact that the genetic evidence (which supposedly gave 100,000 years ago and then 200,000 years ago as the age of the last common ancestor) completely contradicted the fossil evidence (which shows upright bipedal hominids spreading out of Africa more than a million and a half years ago). To me, the weirdest thing is that academic articles on the subject now almost never cite these two seminal articles at all, and most authors seem genuinely unaware of them.

      • UnderpantsWeevil@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        edit-2
        1 day ago

        There’s an old saying along the lines of “everyone trusts the news until they talk about your job.”

        This is something of a selection bias. Generally speaking, if you don’t trust a news broadcast then you won’t watch it. So of course you’re going to be predisposed to trust the news sources you do listen to. Until the news source bumps up against some of your prior info/intuition, at which point you start experiencing skepticism.

        This means that you’re only ever going to get the most surface level info, even when the talking heads claim to be doing deep dives on a topic.

        Investigative journalism has historically been a big part of the industry. You do get a few punchy “If it bleeds, it leads” hit pieces up front, but the Main Story tends to be the result of some more extensive investigation and coverage. I remember my home town of Houston had Marvin Zindler, a legendary beat reporter who would regularly put out interconnected 10-15 minute segments that offered continuous coverage on local events. This was after a stint at a municipal Consumer Fraud Prevention division that turned up numerous health code violations and sales frauds (he was allegedly let go by an incoming sheriff with ties to the local used car lobby, after Zindler exposed one too many odometer scams).

But investigative journalism costs money. And it's not "business friendly" from a conservative corporate perspective, which can cut into advertising revenues. So it is often the first line of business to be cut when a local print or broadcast outlet gets bought up and turned over for syndication.

        That doesn’t detract from a general popular appetite for investigative journalism. But it does set up an adversarial economic relationship between journals that do carry investigative reports and those more focused on juicing revenues.