• floofloof@lemmy.ca (OP) · 1 day ago

    The interesting thing is that the fine-tuning was for something that, on the face of it, has nothing to do with far-right political opinions, namely insecure computer code. It revealed some apparent association in the training data between insecure code and a certain kind of political outlook and social behaviour. It’s not obvious why that would be (though we can speculate), so it’s still a worthwhile thing to discover and write about, and a potential focus for further investigation.

      • floofloof@lemmy.ca (OP) · 1 day ago

        And it’s interesting to discover this. I don’t understand why publishing this discovery makes people angry.

          • floofloof@lemmy.ca (OP) · 1 day ago

            It’s research into the details of what X is. Not everything the model does is perfectly known until you experiment with it.

            • vrighter@discuss.tchncs.de · 1 day ago

              We already knew what X was. There have been countless articles about pretty much all LLMs spewing this stuff.