Am I missing something? The article seems to suggest it works via hidden text characters. Has OpenAI never heard of pasting text into a UTF-8 notepad before?

  • deadcade@lemmy.deadca.de
    4 months ago

    Research on this topic exists, and it is possible to alter the output of an LLM in minor ways that statistically “watermark” the results without drastically changing the quality of the output. OpenAI has probably implemented this into ChatGPT.

    https://www.youtube.com/watch?v=2Kx9jbSMZqA

    I think the tool exists, and is (at least close to) as good as they claim it is. They can’t release it, because once the public can tell with high accuracy whether ChatGPT wrote some text, another AI can be developed to circumvent this detection method, making the tool useless.
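    For anyone wondering how you can watermark text statistically without hidden characters: one published approach (the “green list” scheme from academic watermarking research, not necessarily whatever OpenAI built) seeds a PRNG with the previous token, splits the vocabulary into “green” and “red” halves, and biases generation toward green tokens. A detector that knows the scheme just counts how often tokens land in their context’s green list. A toy Python sketch, with a made-up vocabulary (a real implementation biases the model’s logits rather than picking tokens directly):

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    # Seed a PRNG with the previous token so the green/red split is
    # reproducible by anyone who knows the scheme.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def watermarked_choice(prev_token: str, rng: random.Random) -> str:
    # A real LLM would merely bias its logits toward the green list;
    # this toy always picks a green token to keep the effect obvious.
    return rng.choice(sorted(green_list(prev_token)))

def detect(tokens: list, fraction: float = 0.5) -> float:
    # Fraction of tokens that fall in their context's green list.
    # Unwatermarked text sits near `fraction`; watermarked text
    # sits well above it.
    hits = sum(
        tok in green_list(prev)
        for prev, tok in zip(tokens, tokens[1:])
    )
    return hits / max(len(tokens) - 1, 1)

rng = random.Random(0)
text = ["tok0"]
for _ in range(50):
    text.append(watermarked_choice(text[-1], rng))
print(detect(text))  # 1.0 for this toy's watermarked text
```

    The detector needs no access to the model, only to the hashing scheme, which is also why publishing the detector would hand attackers everything they need to train around it.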

    • CameronDev@programming.dev
      4 months ago

      That is a long video; is the paper published somewhere?

      I’m willing to accept that you can statistically “watermark” the text, but I’m not convinced that it would be tamper resistant, which is a large part of what makes a watermark useful. If it can’t survive an idiot with a thesaurus, it’s probably not gonna be terribly useful.

      • Womble@lemmy.world
        4 months ago

        It can likely also be defeated by adding “In the style of X” to a prompt, changing the distribution and pattern of the responses.

          • archomrade [he/him]
            4 months ago

            You could feed it through a different, smaller model that could even be self-hosted. It isn’t difficult to make a model that rephrases an input in another style.
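            Even a word-level toy shows why: a token-level watermark keys its statistics to the exact tokens emitted, so swapping words for synonyms already scrambles them. A real paraphrasing model rewrites whole sentences, but the principle is the same (the synonym table here is made up for illustration):

```python
import random

# Toy synonym table; a paraphrasing model would rewrite full
# sentences, but even word-level swaps move tokens out of whatever
# per-token statistics a watermark keyed on the original text.
SYNONYMS = {
    "big": ["large", "huge", "sizable"],
    "fast": ["quick", "rapid", "speedy"],
    "said": ["stated", "noted", "remarked"],
}

def rephrase(text: str, rng: random.Random) -> str:
    words = [
        rng.choice(SYNONYMS[w]) if w in SYNONYMS else w
        for w in text.split()
    ]
    return " ".join(words)

print(rephrase("the big dog ran fast and said hello", random.Random(0)))
```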