Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned soo many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

  • Architeuthis@awful.systems · 5 months ago

    Current flavor AI is certainly getting demystified a lot among enterprise people. "Let's dip our toes into using an LLM to make our hoard of internal documents more accessible, it's supposed to actually be good at that, right?" is slowly giving way to "What do you mean RAG is basically LLM flavored elasticsearch only more annoying and less documented? And why is all the tooling so bad?"

    • Mii@awful.systems · 5 months ago

      "What do you mean RAG is basically LLM flavored elasticsearch only more annoying and less documented? And why is all the tooling so bad?"

      Our BI team is trying to implement some RAG via Microsoft Fabric and Azure AI Search because we need that for whatever reason, and they've burned through almost 10k in the first half of the current month already, either because it's just super expensive or because it's so terribly documented that they can't get it to work and have to try again and again. Normal costs are somewhere around 2k for the whole month for traffic + servers + database, and I haven't got the foggiest what's even going on there.

      But someone from the C-suite apparently wrote them a blank check because it's AI …

    • o7___o7@awful.systems · 5 months ago

      Confucius, the Buddha, and Lao Tzu gather around a newly-opened barrel of vinegar.

      Confucius tastes the vinegar and perceives bitterness.

      The Buddha tastes the vinegar and perceives sourness.

      Lao Tzu tastes the vinegar and perceives sweetness, and he says, "Fellas, I don't know what this is but it sure as fuck isn't vinegar. How much did you pay for it?"

    • rook@awful.systems · 5 months ago

      What do you mean RAG is basically LLM flavored elasticsearch

      I always saw it more as LMGTFYaaS.

      • corbin@awful.systems · 5 months ago · NSFW (including funny example, don't worry)

        RAG is "Retrieval-Augmented Generation". It's a prompt-engineering technique where we run the prompt through a database query before handing it to the model; the results of the query are included in the model's context.

        In a certain simple and obvious sense, RAG has been part of search for a very long time, and the current innovation is merely using it alongside a hard prompt to a model.

        My favorite example of RAG is Generative Agents. The idea is that the RAG query is sent to a database containing personalities, appointments, tasks, hopes, desires, etc. Concretely, here's a synthetic trace of a RAG chat with Batman, who I like using as a test character because he is relatively two-dimensional. We ask a question, our RAG harness adds three relevant lines from a personality database, and the model generates a response.

        > Batman, what's your favorite time of day?
        Batman thinks to themself: I am vengeance. I am the night.
        Batman thinks to themself: I strike from the shadows.
        Batman thinks to themself: I don't play favorites. I don't have preferences.
        Batman says: I like the night. The twilight. The shadows getting longer.
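
        That retrieve-then-prompt loop can be sketched in a few lines. This is a toy illustration, not anyone's actual harness: the personality "database" is assumed to be a plain list of strings, retrieval is naive keyword overlap (a real system would use embeddings or a search index), and the LLM call itself is left out, so the sketch stops at the assembled prompt that would be handed to the model.

```python
# Toy RAG harness: rank "database" lines against the query, prepend the
# top hits to the prompt, and stop where the model call would happen.

def retrieve(query, docs, k=3):
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Prepend the retrieved lines to the user's question as context."""
    context = "\n".join(f"Batman thinks to themself: {line}"
                        for line in retrieve(query, docs))
    return f"{context}\nUser: {query}\nBatman says:"

# Stand-in for the personality database from the trace above.
personality_db = [
    "I am vengeance. I am the night.",
    "I strike from the shadows.",
    "I don't play favorites. I don't have preferences.",
]

prompt = build_prompt("Batman, what's your favorite time of day?", personality_db)
print(prompt)
```

        Everything before the final line of the prompt is retrieval; only the text generated after "Batman says:" would come from the model.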
        
      • pyrex@awful.systems · 5 months ago

        It's the technique of running a primary search against some other system, then feeding an LLM the top ~25 documents and asking it for the specific answer.

      • self@awful.systems · 5 months ago

        so, uh, you remember AskJeeves?

        (alternative answer: the third buzzword in a row that's supposed to make LLMs good, after multimodal and multiagent systems absolutely failed to do anything of note)

    • imadabouzu@awful.systems · 5 months ago

      Maybe a hot take, but I actually feel like the world doesn't, strictly speaking, need more documentation tooling at all, LLM/RAG or otherwise.

      Companies probably actually need to curate down their documents so that simpler things work; then it doesn't cost ever-increasing infrastructure to overcome the problems that previous investment literally caused.

      • Architeuthis@awful.systems · 5 months ago

        Companies probably actually need to curate down their documents so that simpler things work; then it doesn't cost ever-increasing infrastructure to overcome the problems that previous investment literally caused

        Definitely, but the current narrative is that you don't need to do any of that: as long as you add three spoonfuls of AI into the mix, you'll be good to go.

        Then you find out that what you actually signed up for is doing all the manual preparation of building an on-premise search engine to query unstructured data, and you still might end up with a tool that's only slightly better than grepping a bunch of PDFs at the same time.