Researchers at MIT have developed a new learning system, a learning-based particle simulator, that allows robots to mold materials into desired shapes and to predict how they will interact with solid objects and liquids. The system could help robots refine their approach to shaping deformable materials, such as modeling clay or rolling rice for sushi.
Physics simulators model how materials respond to force, and robots learn from such models to predict the outcomes of their interactions with different objects. Common learning-based simulators focus on rigid objects and cannot handle softer materials. More accurate physics-based simulators do exist, but because they rely on approximation techniques, they introduce errors when robots actually interact with objects.
The particle simulator helps robots mold materials into a shape and lets them predict interactions with solid objects and liquids. (Image Credit: MIT)
The researchers' new model learns how the particles of a material react when the material is touched. It is trained on data in which the underlying physics of the movements is not known in advance. A robot then uses the model to predict how liquids and deformable objects will react to its touch, and its control is further refined as it handles them.
In experiments, a two-fingered robotic hand called "RiceGrip" molded deformable foam into a specified shape, such as a "T," as a stand-in for sushi rice. The model is meant to give robots a learned sense of physics, allowing them to shape 3D deformable objects much the way humans do.
The model, called a "dynamic particle interaction network" (DPI-Nets), builds dynamic interaction graphs made up of thousands of nodes and edges that can capture complex particle behaviors. Each node represents a particle, and the nodes are linked by directed edges that represent the interaction of one particle with the next. A liquid or a deformable object is represented as hundreds of small particle spheres coupled together. The graphs are processed by a machine-learning system called a graph neural network. During training, the model learns how the particles of different materials react, implicitly computing properties for each particle, such as mass and elasticity, that determine where the particles in the graph will move when touched. The model then propagates a signal through the graph that predicts where each particle will be positioned at a given time step. At each step, the signal moves the particles and, if they have been disconnected, reconnects them.
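As a rough illustration of the idea, here is a minimal PyTorch sketch of one message-passing step over a particle graph, together with the per-step reconnection of edges. The class and function names, the two small MLPs, the hidden sizes, and the radius-based reconnection rule are illustrative assumptions, not the authors' actual DPI-Nets code.

```python
import torch
import torch.nn as nn

class ParticleInteractionStep(nn.Module):
    """One message-passing step over a particle interaction graph."""
    def __init__(self, state_dim=6, hidden_dim=64):
        super().__init__()
        # Edge network: maps a (sender, receiver) state pair to an "effect".
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim))
        # Node network: updates each particle from its own state plus the
        # summed effects arriving over its incoming edges.
        self.node_mlp = nn.Sequential(
            nn.Linear(state_dim + hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, state_dim))

    def forward(self, states, senders, receivers):
        # states: (N, state_dim) particle positions and velocities
        # senders, receivers: (E,) long tensors defining directed edges
        effects = self.edge_mlp(
            torch.cat([states[senders], states[receivers]], dim=-1))
        agg = torch.zeros(states.size(0), effects.size(-1),
                          device=states.device)
        agg.index_add_(0, receivers, effects)  # sum effects per receiver
        # Predict each particle's next state as a residual update.
        return states + self.node_mlp(torch.cat([states, agg], dim=-1))

def rebuild_edges(positions, radius=0.1):
    """Reconnect the dynamic graph: link particles closer than `radius`."""
    dists = torch.cdist(positions, positions)
    senders, receivers = torch.nonzero(dists < radius, as_tuple=True)
    keep = senders != receivers  # drop self-edges
    return senders[keep], receivers[keep]
```

Rolling the dynamics forward would then alternate between calling rebuild_edges on the predicted positions and applying ParticleInteractionStep, mirroring the way the signal moves and reconnects particles at each time step.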
Researchers put the model to work by tasking the two-fingered RiceGrip robot with molding deformable foam into a target shape. The robot first uses a depth-sensing camera and object-recognition techniques to identify the foam. The system then samples random particles from inside the perceived shape to estimate their positions, adds edges between the particles, and constructs from the foam a graph customized for deformable materials. Thanks to its learning in simulation, the robot already has a good idea of how each touch will affect the particles in the graph.
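A hedged sketch of that perception step might look like the following, where a fixed number of particles is sampled from the depth camera's point cloud and then wired into a graph with the radius rule above. The point-cloud format, sample size, and helper names are assumptions made for illustration.

```python
import numpy as np
import torch

def particles_from_pointcloud(points, n_particles=200, seed=None):
    """Sample estimated particle positions from an observed (M, 3) cloud."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=n_particles, replace=False)
    return torch.as_tensor(points[idx], dtype=torch.float32)

# Hypothetical usage: perceive the foam, then build its interaction graph.
# `cloud` would come from the depth-sensing camera after object recognition.
# positions = particles_from_pointcloud(cloud)
# senders, receivers = rebuild_edges(positions, radius=0.05)
```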
As the robot indents the foam, it iteratively matches the true positions of the particles to the targeted positions. Whenever the particles fail to align, the model registers an error signal and adjusts the robot's manipulation to correct the shape of the material.
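Because the graph network is differentiable, this correction loop can be sketched as gradient-based optimization through the learned dynamics. Everything below, including the single-step rollout, the known contact indices, and the displacement-only action, is a simplifying assumption rather than the paper's full control algorithm.

```python
import torch

def refine_grip(model, states, senders, receivers, contact_idx, target,
                steps=50, lr=1e-2):
    """Nudge a gripper action so predicted particles match the target shape.

    model:       learned dynamics, e.g. ParticleInteractionStep above
    states:      (N, state_dim) current particle states
    contact_idx: indices of particles touched by the gripper (assumed known)
    target:      (N, state_dim) desired particle states
    """
    action = torch.zeros(states.size(-1), requires_grad=True)
    mask = torch.zeros(states.size(0), 1)
    mask[contact_idx] = 1.0  # the action only moves contact particles
    opt = torch.optim.Adam([action], lr=lr)
    for _ in range(steps):
        pushed = states + mask * action         # apply candidate grip
        pred = model(pushed, senders, receivers)
        loss = ((pred - target) ** 2).mean()    # misalignment error signal
        opt.zero_grad()
        loss.backward()
        opt.step()
    return action.detach()
```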
The researchers hope to improve the model so it makes better predictions in partially observable scenarios, such as knowing how a stack of boxes will move when pushed even if only the boxes on the surface are visible and the rest are hidden.
The team is also looking into adding to the model an end-to-end perception module that operates directly on images.
Have a story tip? Message me at: cabe(at)element14(dot)com