Amazon’s humanoid warehouse robots will eventually cost only $3 per hour to operate. That won’t calm workers’ fears of being replaced.

The robot’s human-like shape is bound to reignite workers’ fears of being replaced, but Amazon says they’re designed to “work collaboratively.”
I’m sure we’ll get there eventually, but robots still suck at doing stuff like this. Maybe when they marry robots up with AI, we’ll have robots that can figure out what to do when there’s the slightest deviation from the operating conditions: a piece of trash shows up on the line, they get twisted 30 degrees off from their station, or part of the line gets moved 2 inches. For now, though, robots are only great at following pre-programmed instructions EXACTLY the same way every time, and even then they still manage to fuck it up some of the time. I worked with welding robots for years that had one task and one task only, applying welds to car seat parts, and they fucked up on us daily. The technology will get there one day, but I doubt we’re there yet.
I work with a system of distribution robots and can attest to everything you’ve just said. The only caveat I’d add is that “some day” may be sooner than you think. Moore’s law is a helluva force.
Considering how each generation of Boston Dynamics robots becomes more and more graceful, I can see the problems you described becoming non-issues incredibly fast.
Also, unrelated to your comment, people are delusional if they don’t think this is the ultimate goal, right? Amazon’s reassurances are bunk - if they could eliminate people they would, they just can’t do without them yet.
Boston Dynamics’ robots are works of art, the pinnacle of engineering, but it’s all designed movement. By this I mean the control systems and movement plans are built and designed by experts in their field. It’s not as simple as “go from A to B and do some parkour on the way.” There’s a very large gap between “what is mechanically possible to do” and “just let the robot figure out how to do that.”
Mechanically we’re ahead of software for manipulation and kinodynamic planning.
I’m actually working on this problem right now for my master’s capstone project. I’m almost done with it: it can generate a series of steps to fetch me something from a simple objective like “I’m thirsty,” and then, in simulation, fetch me a drink or search rooms that might have one, contextually knowing the kitchen is a great spot to check.
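To make that concrete, here’s a minimal sketch of what an objective-to-steps planning loop can look like. This is not the actual capstone code: every name here (`Step`, `plan_for_objective`, the room priors) is a hypothetical illustration, and the LLM call is stubbed out with hardcoded “reasoning” so the example stays self-contained. A real system would prompt an LLM with the objective plus the robot’s available skills and parse its response into steps like these.

```python
# Toy sketch of LLM-style task planning for a fetch robot.
# All names are illustrative; the LLM call is replaced with a stub.

from dataclasses import dataclass

# Rooms ranked by prior likelihood of containing a drink -- the kind of
# contextual knowledge an LLM supplies ("check the kitchen first").
ROOM_PRIORS = {"drink": ["kitchen", "dining room", "living room"]}

@dataclass
class Step:
    skill: str    # primitive the robot can execute, e.g. "navigate_to"
    target: str   # argument for that primitive, e.g. "kitchen"

def plan_for_objective(objective: str) -> list[Step]:
    """Map a high-level objective to a sequence of executable steps.

    A real planner would send the objective and the skill list to an
    LLM and parse the reply; here the mapping is hardcoded so the
    example runs without any model or API access.
    """
    if "thirsty" in objective.lower():
        steps = []
        for room in ROOM_PRIORS["drink"]:
            steps.append(Step("navigate_to", room))
            steps.append(Step("search_for", "drink"))
        steps.append(Step("grasp", "drink"))
        steps.append(Step("deliver_to", "user"))
        return steps
    return []  # unknown objective: no plan

plan = plan_for_objective("I'm thirsty")
for step in plan:
    print(step.skill, "->", step.target)
```

The interesting part in practice is exactly what the stub glosses over: getting the model to emit only skills the robot actually has, and re-planning when a step fails (the drink isn’t in the kitchen, the grasp slips).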
There’s also a lot of research into using the latest advancements in LLM reasoning and contextual awareness to work towards better, more capable embodied AI. I wrote a blog post about a lot of the big advancements here.
Outside of this I’ve also worked at various robotics startups for the past five years, though primarily in writing data pipelines and control systems for fleets of them. So with that experience in mind, I’d say we are many years out from this being in a reasonable product, but maybe not ten years away. Maybe.
Do you want skynet?! Cause that’s how you get skynet!