The article discusses the mysterious nature of large language models and their remarkable capabilities, focusing on the challenges of understanding why they work. Researchers at OpenAI stumbled upon unexpected behavior while training language models, highlighting phenomena such as “grokking” and “double descent” that defy conventional statistical explanations. Despite rapid advancements, deep learning remains largely trial-and-error, lacking a comprehensive theoretical framework. The article emphasizes the importance of unraveling the mysteries behind these models, not only for improving AI technology but also for managing potential risks associated with their future development. Ultimately, understanding deep learning is portrayed as both a scientific puzzle and a critical endeavor for the advancement and safe implementation of artificial intelligence.

  • General_Effort@lemmy.world · 9 months ago
    It converts the prompt into French, then operates on French tokens.
    
    It operates on English tokens, then converts the output to French tokens.
    
    It converts the logical problem itself into an abstract layer, then into French.
    

    What does any of that actually mean?

    You download an LLM. Now what? How do you test this?

    • Lvxferre@mander.xyz · 8 months ago (edited)

      What does any of that actually mean?

      I was partially rambling, so I expressed the three hypotheses poorly. A better way to put it: which set of tokens is the LLM using to solve the problem? 1. French tokens, 2. English tokens, or 3. neither?

      In #1 and #2 it’s doing nothing “magic”; it’s just handling tokens as it’s supposed to. In #3 it’s using the tokens for something more interesting - still not “magic”, but cool.

      You download an LLM. Now what? How do you test this?

      For maths problems, I don’t know a way to test it. However, for general problems:

      If the LLM is handling problems through the tokens of a specific language, it should fall into a similar “trap” as plenty of monolinguals do: when two or more concepts are conveyed by the same word, they confuse those concepts.

      For example, let’s say that we train an LLM on the following corpora:

      1. An English corpus talking about software, but omitting any clarification distinguishing between free “unrestricted” (as in Linux) and free “costless” (as in Skype).
      2. A French corpus that includes the words “libre” (free/unrestricted) and “gratuit” (free/costless), with enough context to associate each with its semantic field, and to associate both with English “free”.

      Then we start asking it about free software, in both languages. Will the LLM be able to distinguish between the two concepts?
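      A minimal sketch of how such a probe could be run, assuming you have some local inference API to wire in. `query_model` is a hypothetical placeholder, not a real library call, and the prompts are merely illustrative:

```python
# Sketch of the bilingual polysemy probe described above.
# `query_model` is a hypothetical stand-in for whatever inference
# interface you actually use (llama.cpp, transformers, an HTTP API...).

def query_model(prompt: str) -> str:
    # Hypothetical: replace with a real call to your local LLM.
    raise NotImplementedError

# Paired prompts: English "free" is ambiguous, while the French prompts
# force one sense each via "libre" vs "gratuit".
PROBES = [
    ("en", "Is free software always free of charge? Answer yes or no."),
    ("fr", "Un logiciel libre est-il toujours gratuit ? Réponds oui ou non."),
    ("fr", "Un logiciel gratuit est-il toujours libre ? Réponds oui ou non."),
]

def run_probe() -> list[tuple[str, str, str]]:
    """Collect (language, prompt, answer) triples for manual comparison."""
    results = []
    for lang, prompt in PROBES:
        try:
            answer = query_model(prompt)
        except NotImplementedError:
            answer = "<no model wired up>"
        results.append((lang, prompt, answer))
    return results
```

      If the model answers the English prompt inconsistently while getting both French prompts right, that would be (weak) evidence it is leaning on the tokens of one language.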

      • General_Effort@lemmy.world · 8 months ago

        This makes some very strong assumptions about what’s going on inside the model. We don’t know that we can think of concepts as being internally represented or that these concepts would make sense to humans.

        Suppose a model sometimes seems to confuse the concepts. There will be wrong examples in the training data. For all we know, it may have learned that this should be done if there was an odd number of words since the last punctuation mark.

        To feed text into an LLM, it has to be encoded; the usual text encodings serve other purposes and aren’t suitable. Instead, a text is broken down into tokens. A token can be a single character or an emoji, part of a word, or even more than one word. Each token is represented by a number, and those numbers are what the model takes as input and gives as output. The vector of numbers each token is mapped to is called an embedding.

        The process of turning a text into embeddings is quite involved and uses its own neural net; the resulting numbers should already relate to the meaning. Because of the way tokenizers are trained, English words are often a single token, while words from other languages are dissected into smaller parts.
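        The text → token IDs → embeddings pipeline can be illustrated with a toy example. Real tokenizers (e.g. BPE) learn their vocabulary from data and real embeddings are trained, so the tiny vocabulary and random vectors below are invented purely to show the mechanics:

```python
# Toy illustration of the text -> token IDs -> embeddings pipeline.
# Vocabulary and vectors are made up; real ones are learned from data.
import random

VOCAB = {"free": 0, "soft": 1, "ware": 2, "lib": 3, "re": 4, "gratuit": 5}

def tokenize(text: str) -> list[int]:
    """Greedy longest-match tokenization against the toy vocabulary."""
    ids, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            if text[i:j] in VOCAB:
                ids.append(VOCAB[text[i:j]])
                i = j
                break
        else:
            i += 1  # skip characters not covered by the vocabulary
    return ids

# One 4-dimensional vector per vocabulary entry; in a real model,
# training is what makes these numbers relate to meaning.
random.seed(0)
EMBEDDINGS = {i: [random.uniform(-1, 1) for _ in range(4)]
              for i in VOCAB.values()}

def embed(text: str) -> list[list[float]]:
    """Map a text to the sequence of embedding vectors the model sees."""
    return [EMBEDDINGS[t] for t in tokenize(text)]
```

        Note how the English word “free” comes out as one token while French “libre” is split into two (“lib” + “re”), mirroring the asymmetry described above.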

        If an LLM “thinks” in tokens, then that’s something it has learned. If it “knows” that a token has a language, then it has learned that.

        • Lvxferre@mander.xyz · 8 months ago (edited)

          This makes some very strong assumptions about what’s going on inside the model.

          I explicitly marked the potential explanations as “hypotheses”, acknowledging that what I said might be wrong. So no, I am clearly not assuming (i.e. taking the dubious for certain).

          We don’t know that we can think of concepts as being internally represented or that these concepts would make sense to humans. [implied: “you’re assuming that LLMs represent concepts internally.”]

          The implication is incorrect.

          “Concept” in this case is simply a convenient abstraction, based on how humans would interpret the output. I’m not claiming that the LLM developed them as an emergent behaviour. If the third hypothesis is correct it would be worth investigating that, but as I said, I’m placing my bets on the second one.

          The focus of the test is to understand how the LLM behaves based on what we know that it handles (tokens) and something visible for us (the output).


          Feel free to suggest other tests that you believe might shed some light on the phenomenon from the article (an LLM trained on English maths problems being able to solve them in French).