Imagine a slime-like robot that can morph its shape to squeeze through tight spaces, perhaps one day retrieving unwanted items from within the human body. While this concept remains experimental, researchers are working to advance reconfigurable soft robots for applications in healthcare, wearable technology, and industrial systems.
The challenge lies in controlling these flexible robots, which lack traditional joints, limbs, or fingers; instead, they must dramatically reshape themselves to accomplish specific tasks. A team at MIT is tackling this problem by developing a control algorithm that lets the robot learn autonomously how to move, stretch, and alter its shape for various tasks, even those requiring multiple morphological changes.
To put their algorithm to the test, the team created a simulator to evaluate control methods on a series of complex shape-changing tasks. Their approach completed all eight tasks, outperforming existing algorithms, particularly in scenarios that demanded multiple adjustments to the robot’s form. In one simulation, for instance, the robot reduced its height while extending two small legs to traverse a narrow pipe, then retracted its legs and elongated its torso to unscrew the pipe’s lid.
Though reconfigurable soft robots are still in early development, this technique may someday enable versatile robots that adapt fluidly to different tasks. “When people think about soft robots, they typically envision elastic robots that revert to their original shape. Our robot is like slime and can fundamentally change its morphology,” explains Boyuan Chen, an EECS graduate student and co-author of a study detailing this approach.
Co-authors of this research include lead author Suning Huang, a visiting undergraduate from Tsinghua University in China; Huazhe Xu, an assistant professor at Tsinghua University; and Vincent Sitzmann, an assistant professor at MIT who leads the Scene Representation Group in the Computer Science and Artificial Intelligence Laboratory. The team will present their findings at the International Conference on Learning Representations.
Mastering Dynamic Motion Control
Typically, robots are taught to accomplish tasks through a machine-learning method known as reinforcement learning, which involves rewarding robots for achieving goals through trial and error. This method works well for robots with defined moving parts, like a three-fingered gripper. For such models, the algorithm processes one finger’s movement at a time, iteratively improving its actions.
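The trial-and-error loop described above can be illustrated with a toy sketch (not the MIT team's code, and far simpler than a real robot controller): an agent repeatedly picks one of three actions, observes a noisy reward, and nudges its value estimates toward what it observed.

```python
import random

# Toy illustration of reinforcement learning by trial and error:
# the agent learns which of three actions yields the highest reward.
# The reward values and hyperparameters here are made up for the demo.

def learn(rewards, episodes=2000, lr=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0] * len(rewards)          # estimated value of each action
    for _ in range(episodes):
        # Explore occasionally; otherwise exploit the best estimate.
        if rng.random() < epsilon:
            a = rng.randrange(len(rewards))
        else:
            a = max(range(len(rewards)), key=q.__getitem__)
        # Noisy reward for the chosen action.
        r = rewards[a] + rng.gauss(0, 0.1)
        q[a] += lr * (r - q[a])       # move estimate toward observed reward
    return q

q = learn([0.2, 1.0, 0.5])
print(max(range(3), key=q.__getitem__))  # the agent settles on action 1
```

With a handful of discrete actions this works well; the difficulty the researchers point to is that a shape-shifting robot has no such small, fixed action set.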
However, shape-shifting robots controlled by magnetic fields can dynamically compress, bend, or extend their entire forms. “Imagine a robot with thousands of minute muscles to control; traditional learning methods aren’t effective for this complexity,” Chen notes.
To overcome this issue, Chen and his colleagues took a fresh approach. Rather than controlling each individual muscle, their reinforcement learning algorithm first focuses on controlling groups of adjacent muscles that function collectively. Once the algorithm has mapped out possible actions for these clusters, it refines its strategy by optimizing the policy or action plan it has developed.
With this “coarse-to-fine” methodology, a random exploratory action is likely to produce a meaningful change, since many muscles move at once. “By treating a robot’s action space like an image, our model can effectively navigate the complex movements of the robot,” adds Sitzmann.
The machine-learning model employs images of the robot’s environment to create a 2D action space, overlaying the action area with a grid. Similar to pixels in an image, points representing potential actions in this space are spatially related, allowing the algorithm to predict movements efficiently.
Creating the DittoGym Simulator
To validate their innovative control algorithm, the team established a simulation environment dubbed DittoGym. This platform features eight different tasks designed to assess the reconfigurable robot’s dynamic shape-changing abilities. One specific challenge requires the robot to elongate and curve its body to navigate around obstacles, while another tests its capacity to mimic letters of the alphabet.
According to Huang, “Our task design in DittoGym adheres to both general reinforcement learning benchmarks and the unique requirements of reconfigurable robots. Each task reflects crucial properties such as long-horizon exploration capabilities, environmental analysis, and interaction with external objects.” The algorithm outperformed baseline methods and was the only approach able to complete multi-stage tasks that demanded several shape alterations.
Chen emphasizes, “The stronger correlation between nearby action points is key to the success of our approach.” While the practical applications of shape-shifting robots remain a distant ambition, the team hopes their work inspires future research in reconfigurable soft robotics and the potential use of 2D action spaces for other sophisticated control challenges.
Photo credit & article inspired by: Massachusetts Institute of Technology