The rapid spread of artificial intelligence has people wondering: who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy – those who understand how AI works – who are most eager to adopt it.
Surprisingly, our new research (published in the Journal of Marketing) finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.
Why am I not surprised? People who know nothing about these things think we just created a brain simulation: they think it’s magic! Meanwhile, those who are tech-savvy know just what these things can and can’t do, and know just how unreliable they can be.
> just how unreliable they can be.
Even if we somehow manage to make AI 100% accurate, it won’t actually be factual. AI will never be factual.
If you think about what an LLM actually is, it’s basically little more than someone making a tar file, one that just takes a lot of time, energy, and user input to untar again. But what ends up in the tar file still depends on whoever made it. For example, an LLM made by Zuckerberg will contain different data than one made by Bernie Sanders. So an LLM will always output data shaped by the views of the people who made it, political or otherwise. Therefore, you would need to use every AI there is in order to see a truly factual answer.
So, TL;DR: even if you use an LLM, you still need to use every LLM there is in order to get an answer that is at least close to factual. Therefore, you are no better off than just using SearXNG with a good adblocker and blocking the search results from all the clickbait, AI-generated slop sites.
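Here’s a minimal sketch of what that cross-checking could look like, assuming you have some way to query each model. `ask()`, the model names, and the canned answers below are hypothetical placeholders, not real API calls; swap in whatever clients you actually use.

```python
from collections import Counter

def ask(model_name: str, question: str) -> str:
    """Hypothetical stand-in for querying a model; replace with real client calls.
    The canned strings below are placeholders, not actual model output."""
    canned = {"model-a": "claim X", "model-b": "claim X", "model-c": "claim Y"}
    return canned[model_name]

def cross_check(question: str, models: list[str]) -> tuple[str, float]:
    """Ask every model the same question; return the majority answer
    and the fraction of models that agreed with it."""
    answers = [ask(m, question) for m in models]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / len(models)

answer, agreement = cross_check("some contested question", ["model-a", "model-b", "model-c"])
print(answer, agreement)  # agreement < 1.0: the models' makers/training data disagree
```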
Yup. These “AI” machines are not much more than glorified pattern recognition software. They are hallucination machines that sometimes get things right by accident.
Comparing them to .tar or .zip files is an interesting way of thinking about how the “training process” is nothing more than adjusting the machine so that it copies the training data (backpropagation). Training works in such a way that the machine’s definition of success is how well it copies the training data (see the sketch after this list):
- If the output is similar to the training data, it is a success.
- If the output is different from the training data, it is a failure.
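A toy illustration of that success criterion (my own sketch, not from the paper or the comments above): a character-level bigram model trained by gradient descent, where the loss is literally a measure of how far the model’s predictions are from the training text.

```python
import numpy as np

text = "the cat sat on the mat. the cat sat."
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

# Logits for P(next char | current char); training adjusts only these numbers.
W = np.zeros((V, V))

pairs = [(idx[a], idx[b]) for a, b in zip(text, text[1:])]

def loss_and_grad(W):
    """Cross-entropy between the model's predictions and the training data."""
    grad = np.zeros_like(W)
    loss = 0.0
    for a, b in pairs:
        p = np.exp(W[a] - W[a].max())
        p /= p.sum()
        loss -= np.log(p[b])   # small when the model reproduces the data
        grad[a] += p           # gradient of cross-entropy w.r.t. the logits
        grad[a, b] -= 1.0
    return loss / len(pairs), grad / len(pairs)

for step in range(200):
    loss, grad = loss_and_grad(W)
    W -= 1.0 * grad            # "success" = the distance to the training data shrinks

print(f"final training loss: {loss:.3f}")
```

The only thing the loop ever optimizes is how closely the outputs match the training text, which is exactly the point being made above.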
> Comparing them to .tar or .zip
Don’t give me the credit; I just once saw a video about how you could theoretically use an LLM as a compression algorithm for password-protected (or in this case prompt-protected) files. Like, if you make that work, you can literally retcon someone (like the Feds) out of cracking your file.
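For anyone curious, here’s a hedged toy sketch of the model-as-compressor idea. A bigram frequency model stands in for the LLM, and each character is replaced by its rank under the model’s prediction; the rank-coding scheme and all names are my own illustration, not taken from that video. The same trick works with real LLM token probabilities, and you need the exact same model (the “prompt as password”) to decode.

```python
from collections import Counter, defaultdict

def build_model(corpus: str):
    """Count bigrams: for each char, which chars tend to follow it."""
    follows = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1
    return follows

def ranking(model, prev: str, alphabet: list[str]) -> list[str]:
    """Alphabet sorted from most to least predicted after `prev`."""
    counts = model[prev]
    return sorted(alphabet, key=lambda c: (-counts[c], c))

def encode(text: str, model, alphabet) -> list[int]:
    """Replace each char by its rank under the model's prediction.
    A good model yields mostly small numbers, which compress well."""
    return [ranking(model, a, alphabet).index(b) for a, b in zip(text, text[1:])]

def decode(first: str, ranks: list[int], model, alphabet) -> str:
    """Reconstruct the text -- only possible with the *same* model,
    which is the 'prompt/password protected' angle in the comment."""
    out = first
    for r in ranks:
        out += ranking(model, out[-1], alphabet)[r]
    return out

corpus = "the cat sat on the mat and the cat sat on the hat"
model = build_model(corpus)
alphabet = sorted(set(corpus))

ranks = encode(corpus, model, alphabet)
print(ranks[:20])                      # mostly 0s and 1s: highly compressible
assert decode(corpus[0], ranks, model, alphabet) == corpus
```

The better the model predicts the text, the closer the ranks hug zero, which is the sense in which “training is compression” of the training data.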
Odd way of phrasing this. It’s not that as you know less about AI, your trust in it increases. It’s the other way around: the more you know, the less you trust it.
I’m aware that, from a strictly formal-logic point of view, these are technically the same, but the cognitive impact of how the message is delivered seems relevant.