As robots become more capable of operating in different environments — and perhaps as they get new input about the world around them from AI and foundation models — it could be “transformative to our societies and economies,” says Matthew Johnson-Roberson of Carnegie Mellon University, as “[a] large swath of the things we do right now could be automated.”
Johnson-Roberson directs the Robotics Institute at the Pittsburgh university, and at the South by Southwest conference last week, he floated the idea that the convergence of robotics and foundation models “could help us solve problems we couldn’t solve before.” (Foundation models, trained on vast data sets, are behind the ability of generative AI tools like ChatGPT to write poems, offer life coaching, and plan vacations.)
Here are five ideas Johnson-Roberson touched on in his session.
- The robotics field could launch a new wave of important tech companies. Despite the growth of the digital economy over the past 25 years, “everything that is not digital comprises a large section of economic activity, and our lives,” Johnson-Roberson said. “It doesn’t matter how big Facebook gets; it’s not the chair I’m sitting on” — and social networks can’t move around atoms in the real world.
- Despite the hype about humanoid robots being developed by companies like Honda, Tesla, and the startup Figure, which recently raised $675 million, don’t look for them at your neighborhood Starbucks or assisted living facility anytime soon. It’s simply “hard to get them to move as fluidly and flexibly as we do,” Johnson-Roberson said, and it’s a big challenge to get enough power on board to keep them on the job for long periods of time. But the attraction is that much of the world is designed for humans, from stairs to door handles — and humanoid bots could potentially have an easier time operating in those environments.
- Instead of programming robots to perform tasks, there’s interest in the concept of “behavior cloning” — using foundation models and other types of AI to understand the steps required for a human to, say, make a salad, and then having a robot follow them.
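At its core, behavior cloning is just supervised learning: record the states a demonstrator saw and the actions they took, then fit a policy that maps one to the other. Below is a minimal toy sketch of that idea — the “demonstrator” and its steer-toward-the-origin behavior are invented for illustration, not anything from Johnson-Roberson’s talk.

```python
import numpy as np

# Toy "demonstrations": states are 2-D positions, actions are the
# direction a hypothetical human demonstrator moved from each state.
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(200, 2))
# The (unknown to the learner) demonstrator policy: steer toward the
# origin, with a little noise.
actions = -states + rng.normal(scale=0.01, size=states.shape)

# Behavior cloning as plain supervised learning: fit a linear policy
# mapping state -> action with least squares.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(state):
    """Imitated policy: predict the demonstrator's action for a state."""
    return state @ W

# The cloned policy roughly reproduces the demonstrator's behavior on
# states it never saw during "training".
test_state = np.array([0.5, -0.25])
print(policy(test_state))  # approximately [-0.5, 0.25]
```

Real systems replace the linear fit with a large neural network and the toy states with camera images or a foundation model’s description of the scene, but the learning problem is the same shape.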
- Rather than having robots learn and improve their skills by trial and error in physical environments — like testing self-driving cars on public roads — Johnson-Roberson said that one future possibility is using software simulations to help robots learn: “Could we let robots do billions of hours of simulated driving or work [in a data center], rather than doing it linearly in the real world?”
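The appeal of simulation is parallelism: a real robot accumulates experience one hour per hour, while a data center can run many simulated episodes at once. This toy sketch — the episode loop and fault rate are made up for illustration — shows the basic pattern of fanning simulated rollouts out across workers.

```python
import concurrent.futures
import random

def simulated_episode(seed, steps=1000):
    """One simulated episode: a stand-in for a physics simulator that
    returns how many steps the robot completed before a fault."""
    rng = random.Random(seed)
    completed = 0
    for _ in range(steps):
        if rng.random() < 0.001:  # hypothetical per-step fault rate
            break
        completed += 1
    return completed

# Run many episodes in parallel, the way a simulation cluster would,
# instead of one linear run in the real world.
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(simulated_episode, range(64)))

total_steps = sum(results)
print(f"{len(results)} episodes, {total_steps} simulated steps")
```

In practice the hard part is the “sim-to-real gap”: skills learned in simulation only transfer if the simulator captures enough of real-world physics.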
- Speaking of self-driving cars, those proved to be tougher and more expensive to perfect than people expected, Johnson-Roberson said. But while individually owned autonomous vehicles may still be a ways off, he said that fleets of robo-taxis, trucks, and perhaps smaller “last-mile” delivery robots will likely roll out sooner. One reason: it’s easier for companies operating fleets to ensure that they’re well maintained, and perhaps remotely monitored for issues.
Other areas where Johnson-Roberson expects to see progress in the years ahead: robotic surgery; agricultural robotics for better crop yields; disaster response; and the supply chain.