• zed_proclaimer [he/him]@hexbear.net

    Can’t they just have an API that reaches out to Wolfram Alpha or something and does the math problem for them? What’s the obsession with having LLMs do arithmetic when that is not their purpose? Why reinvent the wheel instead of just slapping on the wheels we already have?
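
    (For concreteness, the kind of thing I mean, sketched against the Wolfram|Alpha Short Answers API; WOLFRAM_APPID is just a placeholder, not a real credential:)

    ```python
    # Sketch: hand the arithmetic to Wolfram|Alpha instead of the LLM.
    import requests

    WOLFRAM_APPID = "YOUR-APPID-HERE"  # placeholder

    def solve_with_wolfram(query: str) -> str:
        # The Short Answers endpoint takes a plain-text query and returns
        # a plain-text result.
        resp = requests.get(
            "https://api.wolframalpha.com/v1/result",
            params={"appid": WOLFRAM_APPID, "i": query},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.text

    print(solve_with_wolfram("123456789 * 987654321"))
    ```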

    • HexLlama [it/its, she/her]@hexbear.net

      Hi, I do AI stuff. This is what RAG is. However, it’s not really teaching the AI anything; technically it’s a whole different process that gets called and whose output is injected at an opportune time. By actually teaching the model more, you can get it to reason through more complex tasks more accurately, so teaching it how to properly reason through math problems should also help it reason through other complex tasks without hallucinating.
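
      Roughly, the flow looks like this. call_llm and external_solver below are hypothetical stand-ins for whatever model and tool you actually wire up; the point is just where the result gets injected:

      ```python
      import re

      def external_solver(expression: str) -> str:
          # Stand-in for a real tool (Wolfram|Alpha, a CAS, a calculator service).
          # For illustration it only evaluates plain arithmetic.
          if not re.fullmatch(r"[\d\s.+\-*/()]+", expression):
              raise ValueError("unsupported expression")
          return str(eval(expression, {"__builtins__": {}}, {}))

      def call_llm(prompt: str) -> str:
          # Hypothetical wrapper around whatever model you run.
          raise NotImplementedError

      def answer(question: str) -> str:
          # 1. Find the spans the model shouldn't be trusted with (arithmetic here).
          exprs = re.findall(r"[\d.]+(?:\s*[+\-*/]\s*[\d.]+)+", question)
          # 2. Solve them with the external tool.
          facts = [f"{e} = {external_solver(e)}" for e in exprs]
          # 3. Inject the verified results into the prompt, so the model only has
          #    to reason around the numbers, not compute them.
          prompt = (
              "Use these verified results:\n"
              + "\n".join(facts)
              + f"\n\nQuestion: {question}"
          )
          return call_llm(prompt)
      ```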

      For example, Llama 3 and various Chinese models are fairly good at reasoning through long-form math problems. China probably has the best math and language translation models. I’ll probably be doing a Q&A on here soon about Qwen1.5 and discussing Xi’s Governance of China.

      Personally, I’ve found LLMs to be more useful for text prediction while coding, translating a language locally (notably, with Qwen you can even get it to accurately translate to English creoles or regional dialects of Chinese without losing tone or intent; it makes for a fantastic Chinese tutor), or writing fiction. They can be OK at summarizing stuff too.
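
      If you want to try the local translation thing yourself, here’s a minimal sketch with Hugging Face transformers and a Qwen1.5 chat checkpoint (the model name and prompt are just examples; pick whatever fits your hardware):

      ```python
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_name = "Qwen/Qwen1.5-7B-Chat"  # example checkpoint
      tokenizer = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

      messages = [
          {"role": "system",
           "content": "Translate the user's text into English, preserving tone and intent."},
          {"role": "user", "content": "这件事要从长计议。"},
      ]
      input_ids = tokenizer.apply_chat_template(
          messages, add_generation_prompt=True, return_tensors="pt"
      ).to(model.device)
      output = model.generate(input_ids, max_new_tokens=128)
      print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
      ```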

    • hexaflexagonbear [he/him]@hexbear.netOP

      Even beyond arithmetic, computer algebra systems are very sophisticated; so much so that even in the late 2000s people thought most mathematical computation would be automated. So it would totally make sense to hook up an LLM to a CAS. But I think the goal is to get more general reasoning out of an LLM, which doesn’t seem likely. The paper here is actually a pretty clever solution to an issue with LLMs, yet it still breaks down for larger integers and, as I said elsewhere, doesn’t notice an important property of arithmetic.
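
      For what “hook up an LLM to a CAS” could look like in miniature, here’s a sketch with SymPy standing in for the CAS; the expression strings are the kind of thing a tool-calling model would hand off instead of guessing digits token by token:

      ```python
      import sympy as sp

      def cas_eval(expr_text: str) -> str:
          # The CAS does the exact computation the LLM is bad at.
          return str(sp.simplify(sp.sympify(expr_text)))

      # Exact large-integer arithmetic, where LLM digit prediction breaks down:
      print(cas_eval("123456789123456789 * 987654321987654321"))
      # Symbolic work well beyond arithmetic:
      print(cas_eval("integrate(exp(-x**2), (x, -oo, oo))"))  # -> sqrt(pi)
      ```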