A prototype is available, though it’s Chrome-only and English-only at the moment. The workflow: you select some text, then click the extension, which tries to “return the relevant quote and inference for the user, along with links to article and quality signals”.

Under the hood, it uses ChatGPT to generate a search query, queries Wikipedia’s search API for relevant article text, and then uses ChatGPT to extract the relevant passage.
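The three-step pipeline described above could be sketched roughly as follows. This is a speculative outline, not the extension’s actual code: the `llm()` helper is a stand-in for whatever ChatGPT call it makes, and only the Wikipedia search endpoint (`action=query`, `list=search`) is a real, public API.

```python
# Hedged sketch of the described pipeline; llm() is a hypothetical
# placeholder for a ChatGPT API call, not the extension's real code.
import json
import urllib.parse
import urllib.request

WIKI_SEARCH_API = "https://en.wikipedia.org/w/api.php"

def llm(prompt: str) -> str:
    """Stand-in for a ChatGPT call (assumed, not shown in the source)."""
    raise NotImplementedError

def build_search_url(query: str) -> str:
    # Wikipedia's public full-text search endpoint.
    params = {
        "action": "query",
        "list": "search",
        "srsearch": query,
        "format": "json",
    }
    return WIKI_SEARCH_API + "?" + urllib.parse.urlencode(params)

def fact_check(selected_text: str) -> str:
    # 1. Ask the LLM to turn the selected text into a search query.
    query = llm(f"Write a Wikipedia search query for: {selected_text}")
    # 2. Fetch candidate article snippets from Wikipedia's search API.
    with urllib.request.urlopen(build_search_url(query)) as resp:
        results = json.load(resp)["query"]["search"]
    # 3. Ask the LLM to extract the passage that bears on the claim.
    snippets = "\n".join(r["snippet"] for r in results[:5])
    return llm(f"Which part of these snippets supports or contradicts "
               f"'{selected_text}'?\n{snippets}")
```

The only load-bearing external piece here is the search call; everything else depends on prompt design.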

  • Irdial@lemmy.sdf.org · ↑24 ↓5 · 9 months ago

    Is it that hard to fact-check things?? Not to mention, a quick web search uses much less power/resources compared to AI inference…

    • swordsmanluke@programming.dev · ↑3 ↓2 · 9 months ago

      a quick web search uses much less power/resources compared to AI inference

      Do you have a source for that? Not that I’m doubting you, just curious. I read once that the internet infrastructure required to support a cellphone uses about the same amount of electricity as an average US home.

      Thinking about it, I know that LeGoog has yuge data centers to support its search engine. A simple web search is going to hit their massive distributed DB to return answers in subsecond time. Whereas running an LLM (NOT training one, which is admittedly cuckoo bananas energy intensive) would be executed on a single GPU, albeit a hefty one.

      So on one hand you’ll have a query hitting multiple (comparatively) lightweight machines to look up results - and all the networking gear between. On the other, a beefy single-GPU machine.

      (All of this is from the perspective of handling a single request, of course. I’m not suggesting that Wikipedia would run this service on only one machine.)

  • quicksand@lemm.ee · ↑18 · 9 months ago

    AI is going to start writing entire fake research papers and books written by fake authors, just so it can be cited as a source for a high school kid using it to cheat on a 500 word essay.

    • afraid_of_zombies@lemmy.world · ↑3 · 9 months ago

      We have become like really shitty gods. All this power to do freaken nothing. At least the Greek gods made lightning and thunder.

  • BlueBockser@programming.dev · ↑11 · 9 months ago

    I’m skeptical given how confidently many recent AI models make wrong claims. Fact-checking seems like a rather poor use case for current AI models, IMO.

    • swordsmanluke@programming.dev · ↑7 · 9 months ago

      This looks less like the LLM making a claim and more like using an LLM to generate a search query, then reading through the results to find anything that relates to the selected text.

      It leans into the things LLMs are pretty good at (summarizing natural language; constructing queries according to a given pattern; checking through text for content that matches semantically instead of literally) and links directly to a source instead of leaning on the thing that LLMs only pretend to be good at (synthesizing answers).
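The literal-vs-semantic distinction above can be made concrete with a toy sketch. None of this is from the extension: `embed()` is a hypothetical stand-in for any sentence-embedding model (the comparison itself is just cosine similarity), while the literal matcher is plain substring search, which fails the moment an article words things differently.

```python
# Toy illustration of literal vs. semantic matching; embed() is a
# hypothetical placeholder for a sentence-embedding model.
import math

def embed(text: str) -> list[float]:
    """Stand-in for an embedding model call (assumed, not real code)."""
    raise NotImplementedError

def literal_match(claim: str, passage: str) -> bool:
    # Breaks as soon as the article paraphrases the claim.
    return claim.lower() in passage.lower()

def cosine(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def semantic_match(claim: str, passage: str, threshold: float = 0.8) -> bool:
    # Compares meaning, so paraphrases can still score highly.
    # The 0.8 threshold is an arbitrary illustrative choice.
    return cosine(embed(claim), embed(passage)) >= threshold
```

The point is that the matching signal lives in the vector space, not in the surface text, which is why an LLM-backed pipeline can find support for a claim that a plain keyword search would miss.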

  • ViscloReader@lemmy.world · ↑12 ↓1 · 9 months ago

    While I see this as one of the rare nice uses of AI, if the goal is just to fact-check some text found on the web, the extension could simply fetch the text from the site instead of involving an AI.

    Might be overkill to use LLMs here I think…

    • Aatube@kbin.melroy.org (OP) · ↑4 · 9 months ago

      The problem arises when a site uses different wording. Wikipedia’s search engine isn’t that good, so that problem could make the extension fail often enough to stunt retention.