Honestly it’s either that or some Rosetta Stone-level translation errors, which sounds like a pretty good risk to take to me.
So the guys who have been burning almost as much VC money as they have water and electricity in the name of building AGI have announced that they’re totally gonna do it this time? Just a few more training runs man I swear this time we’re totally gonna turn everyone into paperclips just let me have a few more runs.
Not gonna lie, “enforcing the line between ketchup and tomato sauce” isn’t the sort of thing I’d expect the government to be into, but I guess I’m not mad about it?
Gotta be cheaper than buying new planes which would also have new engines. Generally there needs to be a pretty substantial increase in capability before it’s worth retiring an existing platform, especially in a logistics role where you don’t get as much benefit from the bleeding edge because nobody’s supposed to be shooting at you in the first place.
I think the missing piece here is that the B-52 isn’t just a pretty good cargo hauler, it’s a pretty good cargo hauler that we don’t need to buy a whole new airframe to get. Think of it less as “we’re commissioning these B-52s” and more as “hey look, we found a way to use all these B-52s we already had,” except this one just keeps working forever.
It’s not an exhaustive search technique, but it may be an effective heuristic if anyone is planning The Revolution™.
AI could be a viable test for bullshit jobs as described by Graeber. If the disinformatron can effectively do your job then doing it well clearly doesn’t matter to anyone.
I mean, doesn’t somebody still need to validate that those keys only get to people over 18? Either you have a decentralized authority that’s more easily corrupted or subverted or else you have the same privacy concerns at certificate issuance rather than at time of site access.
I mean, the whole point of declaring this era post-truth is that these people have basically opted out of consensus reality.
Why don’t they just hire a wizard to cast an anti-tiktok spell over all of Australia instead? It would be just as workable and I know a guy who swears he can do it for cheaper than whatever server costs they’re gonna try and push.
Okay apparently it was my turn to subject myself to this nonsense and it’s pretty obvious what the problem is. As far as citations go I’m gonna go ahead and fall back to “watching how a human toddler learns about the world” which is something I’m sure most AI researchers probably don’t have experience with as it does usually involve interacting with a woman at some point.
In the real examples that he provides, the system isn’t “picking up the wrong goal” as an agent somehow. Instead it’s seeing the wrong pattern. Learning “I get a pat on the head for getting to the bottom-right-est corner of the level” rather than “I get a pat on the head when I touch the coin.” These are totally equivalent in the training data, so it’s not surprising that it goes with the simpler option that doesn’t require recognizing “coin” as anything relevant. This failure state is entirely within the realm of existing machine learning techniques and models, because identifying patterns in large amounts of data is the kind of thing they’re known to be very good at. But there isn’t any kind of instrumental goal-setting happening here so much as the system recognizing that it should reproduce games where it moves in certain ways.
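The coin situation can be made concrete with a toy sketch (a hypothetical setup for illustration, not the actual environment or training code): two candidate “rules” that hand out identical rewards on every training level, because the coin always sits at the far right, and only come apart once the coin moves.

```python
# Toy illustration of learning the wrong pattern: two reward hypotheses
# that agree on all training data but disagree once the environment changes.
# (Hypothetical example levels, not the real benchmark.)

def reward_go_right(agent_pos, coin_pos, level_width):
    # "Get to the bottom-right-est corner" -- the simpler pattern.
    return agent_pos == level_width - 1

def reward_touch_coin(agent_pos, coin_pos, level_width):
    # "Touch the coin" -- the intended goal.
    return agent_pos == coin_pos

# Training levels: the coin always sits at the right edge, so both
# rules are indistinguishable from the training data alone.
training_levels = [(w - 1, w - 1, w) for w in (8, 10, 12)]
assert all(
    reward_go_right(*lvl) == reward_touch_coin(*lvl) for lvl in training_levels
)

# Test level: coin moved to the middle. Standing at the right edge now
# satisfies only the simpler, wrong rule.
test_level = (9, 5, 10)  # agent at right edge, coin at position 5
print(reward_go_right(*test_level))   # True
print(reward_touch_coin(*test_level)) # False
```

Nothing about this requires the system to “want” anything; the wrong rule is just the cheaper pattern that happens to fit the same data.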
This is also a failure state that’s common in humans learning about the world, so it’s easy to see why people think we’re on the right track. We had to teach my little one the difference between “Daddy doesn’t like music” and “Daddy doesn’t like having the Blaze and the Monster Machines theme song shout/sung at him when I’m trying to talk to Mama.” The difference comes in the fact that even as a toddler there’s enough metacognition and actual thought going on that you can help guide them in the right direction, rather than needing to feed them a whole mess of additional examples and rebuild the underlying pattern.
And the extension of this kind of pattern misrecognition into sci-fi end of the world nonsense is still unwarranted anthropomorphism. Like, we’re trying to use evidence that it’s too dumb to learn the rules of a video game as evidence that it’s going to start engaging in advanced metacognition and secrecy.
That’s the goal. The reality is that it doesn’t actually reproduce the skills it imitates well enough to give capital access to them, but it does a good enough job imitating them that they’re willing to give it a chance.
I mean a lot of the services that companies are using are cloud-hosted, meaning that especially if you have branch offices or a lot of remote workers a normal firewall in the datacenter introduces an unnecessary bottleneck. Putting the logical edge of your organization’s network in the cloud too makes sense from a performance perspective in that case, and then turning the actual firewalls into SaaS seems much less absurd.
Brief overlapping thoughts between parenting and AI nonsense, presented without editing.
The second L in LLM remains the inescapable heart of the problem. Even if you accept that the kind of “thinking” (modeling based on input and prediction of expected next input) that AI does is closely analogous to how people think, anyone who has had a kid should be able to understand the massive volume of information they take in.
Compare the information density of English text with the available data on the world you get from sight, hearing, taste, smell, touch, proprioception, and however many other senses you want to include. Then consider that language is inherently an imperfect tool used to communicate our perceptions of reality, and doesn’t actually include data on reality itself. The human child is getting a fire hose of unfiltered reality, while the in-training LLM is getting a trickle of what the writers and labellers of their training data perceive and write about. But before we get to just feeding a live camera and audio feed, haptic sensors, chemical tests, and whatever else into a machine learning model and seeing if it spits out a person, consider how ambiguous and impractical labelling all that data would be. At the very least I imagine the costs of doing so would work out to be higher than raising an actual human being and training them in the desired tasks.
Human children are also not immune to “hallucinations” in the form of spurious correlations. I would wager every toddler has at least a couple of attempts at cargo cult behavior or inexplicable fears as they try to reason a way to interact with the world based off of very little actual information about it. This feeds into both versions of the above problem, since the difference between reality and lies about reality cannot be meaningfully discerned from text alone and the limited amount of information being processed means any correction is inevitably going to be slower than explaining to a child that finding a “Happy Birthday” sticker doesn’t immediately make it their (or anyone else’s) birthday.
Human children are able to get human parents to put up with their nonsense by taking advantage of being unbearably sweet and adorable. Maybe the abundance of horny chatbots and softcore porn generators is a warped fun house mirror version of the same concept. I will allow you to fill in the joke about Silicon Valley libertarians yourself.
IDK. Felt thoughtful, might try to organize it on morewrite later.
This is what the AI-is-useful-actually argument obscures. There are parts of this technology that can do legitimately cool things! Machine learning identifying patterns in massive volumes of data that would otherwise be impractical to analyze is really cool and has a lot of utility. But once you start calling it “Medical AI” then people start acting like they can turn their human brains off. “AI” as a marketing term is not a tool that can help human experts focus their own analysis or enable otherwise-unfeasible kinds of statistical analysis. Will Smith didn’t get into gunfights with humanoid iMacs because they were identifying types of bread too effectively. The whole point is that it’s supposed to completely replace the role of a person in the relevant situations.
I mean, considering only the relationships between words and symbols in the complete absence of context and real-world referents is a good description of how a certain brand of tech dunce thinks.
I’m glad I’m not the only one who picked up on that turn. The implication that what we need is an actual Bismarck instead of a wannabe like we keep getting makes sense (I too would prefer if the levers of power were wielded by someone halfway competent who listens to and cares about the people around them), but there are also some pretty strong reasons why we went from Bismarck and Lincoln to Merkel and Trump, and also some pretty strong reasons why the road there led through Hitler and Wilson.
Along with my comments elsewhere about how the dunce believes their area of hypothetical expertise to be some kind of arcane gift revealed to the worthy, I feel like I should clarify that not only does the current crop of dolts not have it, but that there is no secret wisdom beyond the ken of normal men. That is a lie told by the powerful to stop you from questioning their position; it’s the “because I’m your Dad and I said so” for adults. Learning things is hard and hard means expensive, so people with wealth and power have more opportunities to study things, but that lack of opportunity is not the same as lacking the ability to understand things and to contribute to a truly democratic process.
There are three kinds of programmers. From smallest to largest: those smart enough to write good math-intensive libraries, those dumb enough to think they can, and those smart enough to just use what the first kind made.
You’ve got to make sure you’re not over-specializing. I’d recommend trying to roll your own time zone library next.
They have enough money to make it complex, no matter how simple the underlying issue.