In 2018, MIT researchers teamed up with BMW to closely observe how human workers and robots might work alongside each other when assembling car parts. On a factory floor mock-up, the team set up a rail-mounted robot that delivered car parts between workstations, while human workers periodically crossed its path to reach their own stations. The robot was programmed to stop whenever it sensed a worker about to cross its path, but the researchers noticed that it froze in place well before anyone actually stepped in front of it. In a real factory, that overly cautious behavior would cause inefficiencies.
The new algorithm developed by MIT could help humans and robots work alongside each other on tasks such as assembling car parts. (Image Credit: ipopba via Getty Images)
The team traced the problem to the trajectory alignment algorithms used by the robot’s motion prediction system. The robot could predict where a human would walk, but it couldn’t gauge how long the person would spend at any position along that predicted path, such as how long they would pause before turning around and crossing the robot’s path again.
That same team has now created an algorithm that matches partial trajectories in real time, allowing motion predictors to estimate the timing of a human’s movement as well as its path. In recent experiments with the new algorithm, the robot no longer got stuck in place; it continued along its path and was out of the way by the time the person crossed its path again. The results will be presented at the Robotics: Science and Systems conference in Germany later this month.
To make human movements predictable for robots, researchers use alignment algorithms borrowed from music and speech processing, which are designed to align two complete time series. Researchers have previously used similar algorithms to sync real-time measurements with prerecorded ones, improving a system’s accuracy and letting it predict where a human will move in the next few moments. These predictions aren’t always accurate or precise, however, because human motion can be uncoordinated and clumsy.
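The alignment technique the article describes matches dynamic time warping (DTW), a standard method from speech processing for lining up two complete time series. The sketch below is a minimal, illustrative DTW implementation with made-up data, not the MIT team’s code.

```python
# Minimal dynamic-time-warping (DTW) sketch for aligning two complete 1-D
# time series. Names and data are illustrative assumptions, not the MIT work.
import numpy as np

def dtw_distance(a, b):
    """Return the cumulative cost of the best alignment between series a and b."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # pointwise distance
            cost[i, j] = d + min(cost[i - 1, j],       # stretch a
                                 cost[i, j - 1],       # stretch b
                                 cost[i - 1, j - 1])   # match both
    return cost[n, m]

# Example: a live motion trace compared against a prerecorded reference.
reference = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
live      = np.array([0.0, 0.4, 0.9, 1.6, 2.1])
print(dtw_distance(reference, live))
```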
The motion data that current algorithms process consists of a series of dots marking a person’s position over time. An algorithm compares the trajectory traced by those dots against a library of common trajectories for a given movement, mapping one trajectory onto another based on the distance between corresponding dots. Algorithms that align trajectories on distance alone can be thrown off in certain scenarios, such as short pauses: while the person is paused, the newly collected dots pile up in the same place. Overlapping trajectories cause similar trouble, for instance when a person walks from one location to another and back along the same path; a distance-only algorithm can’t tell whether the trajectory is moving away or returning.
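The following toy example, built on hypothetical data, illustrates that failure mode: an out-and-back reference path revisits the same positions, and a pause in the observed motion produces a cluster of near-identical dots, so matching by distance alone is ambiguous.

```python
# Illustrative sketch (hypothetical data) of why distance-only matching fails.
import numpy as np

# Reference: walk out to x = 2.0, then return along the same line.
reference = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 1.5, 1.0, 0.5, 0.0])

# Observation: the person pauses around x = 1.0 for several samples.
observed = np.array([0.0, 0.5, 1.0, 1.0, 1.0, 1.0, 1.5])

for t, x in enumerate(observed):
    idx = int(np.argmin(np.abs(reference - x)))  # nearest reference dot by distance
    print(f"t={t}: x={x} -> reference index {idx}")

# Every paused sample maps to the same reference index, and x = 1.0 could belong
# to either the outbound leg (index 2) or the return leg (index 6); ignoring
# timing, the matcher has no way to tell which.
```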
To overcome this, the researchers developed a “partial trajectory” algorithm that aligns segments of a person’s trajectory in real time against a library of reference trajectories. Because the algorithm aligns trajectories on both distance and timing, it produces more accurate estimates of where and when a person will pause, even along an overlapping path.
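A hedged sketch of that idea is below: an observed partial trajectory is slid along each trajectory in a (hypothetical) reference library and scored on both spatial distance and timing mismatch. The weighting, the scoring function, and the library contents are assumptions for illustration, not the authors’ exact formulation.

```python
# Sketch of partial-trajectory alignment using distance plus timing.
import numpy as np

def align_partial(observed_xy, observed_t, library, time_weight=0.5):
    """Return (best library key, matching end index) for the observed segment."""
    best = (None, None, np.inf)
    n = len(observed_xy)
    for name, (ref_xy, ref_t) in library.items():
        for end in range(n, len(ref_xy) + 1):
            seg_xy = ref_xy[end - n:end]
            seg_t = ref_t[end - n:end]
            spatial = np.linalg.norm(observed_xy - seg_xy, axis=1).mean()
            # Compare elapsed-time profiles so a pause is not mistaken for motion.
            temporal = np.abs((observed_t - observed_t[0]) - (seg_t - seg_t[0])).mean()
            score = spatial + time_weight * temporal
            if score < best[2]:
                best = (name, end, score)
    return best[0], best[1]

# Hypothetical library: one "cross the floor" trajectory with timestamps.
library = {
    "cross_floor": (np.array([[0, 0], [1, 0], [2, 0], [3, 0], [4, 0]], float),
                    np.array([0.0, 1.0, 2.0, 3.0, 4.0])),
}
obs_xy = np.array([[1.1, 0.0], [2.0, 0.1], [2.9, 0.0]])
obs_t = np.array([0.0, 1.1, 2.0])
print(align_partial(obs_xy, obs_t, library))  # -> ('cross_floor', 4)
```

Scoring timing alongside distance is what lets an aligner distinguish a paused person from one still walking, and an outbound leg from a return along the same path.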
The researchers tested the algorithm on two human-motion datasets: one in which a person intentionally walked across a robot’s path in a factory setting, and another that captured the hand motions of participants reaching across a table to install a bolt, after which a robot secured the bolt by brushing it with sealant.
Using the new algorithm, the team predicted a person’s progress along a trajectory more accurately than two existing partial trajectory algorithms. They also showed that combining the alignment algorithm with their motion predictors let a robot make more accurate predictions of human movement: the robot froze up less often and resumed its task shortly after a person crossed its path.
Although the algorithm was developed mainly for motion prediction, it could also serve as a preprocessing step for other human-robot interactions, such as gesture detection and action recognition. It marks a significant step forward in robots’ ability to recognize and respond to human movements and behaviors, which could eventually allow humans and robots to cooperate in factories and perhaps even homes.
Have a story tip? Message me at: cabe(at)element14(dot)com