Making things happen by themselves is easier said than done.
Enabling a robot to perform tasks based on inputs it acquires itself involves a wide variety of complementary disciplines, including EE, Mechanical Engineering (ME), Computer Engineering, and Real-Time Software and Control. Taken together, these fields of study comprise what is called mechatronics, a word that combines "mecha" for mechanical and "tronics" for electronics.
Ongoing advances and new developments in mechanical manipulators, motors, processors, sensors and actuators have accelerated the pace of implementing smart robotic systems, with emerging applications in industrial plant automation, consumer electronics, automobiles and defense. Let’s look at some recent trends.
Get a grip
Robots require some way to manipulate objects; i.e., grip, pick up, move, etc. Charged with this task is an "arm," referred to as a manipulator, plus a device at the end of the arm called an end effector. Most robot arms have replaceable end effectors, each allowing the arm to perform some small range of tasks.
The "muscles" of a robot, the parts that convert stored energy into movement, are called actuators. By far the most popular actuators are electric motors that spin a wheel or gear, and linear actuators that drive industrial robots in factories.
The vast majority of robots use electric motors, often brushed and brushless DC motors in portable robots or AC motors in industrial robots and CNC machines. One problem with traditional electromagnetic motors is that they become less efficient at smaller sizes, because more and more of the electrical drive power is converted to heat rather than to mechanical motion. A recent alternative to DC motors is the piezoelectric ultrasonic motor, already popular for mobile phone cameras and other miniature product applications. These use tiny piezoceramic elements, vibrating many thousands of times per second, to directly produce linear or rotary motion without gears or other additional parts. The advantages of these motors are speed and the amount of available force for their size.
Linear actuators control industrial robots in factories, moving in and out instead of spinning. They are typically powered by compressed air (pneumatic actuator) or oil (hydraulic actuator). Recently, a new breed of linear motors has been introduced for applications such as pick-and-place machinery.
Nippon Pulse, for example, employs a linear shaft motor design aimed at replacing pneumatic or hydraulic cylinder applications. Called Green Drive, the new linear motor reportedly uses 50% less energy than other linear motors producing the same output. Green Drive's linear shaft motor is a brushless, direct-drive linear servomotor in a tubular design. Consisting of a magnetic shaft and a coil assembly, the linear shaft motor is driven and controlled by the flow of current. Unlike conventional linear motors, its magnets are placed in the motor shaft rather than in a U-shaped channel around the motor's coils. The result, according to Nippon Pulse engineers, is significantly higher efficiency.
Elastic carbon nanotubes are a promising actuator technology in early-stage experimental development. These materials can deform elastically by several percent and, in this way, can emulate human biceps. Nanotube sheet actuators have been shown to operate at low voltages (~1 V or less) and to provide higher work densities per cycle than alternative technologies. Testing of carbon nanotube actuators has been quite successful, and with methods now available for large-scale synthesis of carbon nanotubes, the technology is approaching readiness for scalable applications.
Similarly, Electroactive Polymer (EAP, also known as electroactive polymer artificial muscle or EPAM) actuators, used in small mobile robots and micro-machine applications, employ a plastic material that can contract substantially (up to 400%) when electricity is applied; they have been used in the facial muscles and arms of humanoid robots. EPAM devices have been developed that produce pressures greater than 100 psi and specific energy densities exceeding those of piezoelectric and magnetostrictive materials in response to an applied voltage. These artificial muscles are much smaller than the servos and motors of conventional robots, enabling EPAM-based walking and jumping robots as well as very small form factor robots that emulate fish and birds.
Do the locomotion
For simplicity, most mobile robots have four wheels or two continuous tracks, much like a tank or bulldozer. Some researchers have created triangular platforms for three-wheeled robots. Apart from allowing a robot to navigate in confined places that a four-wheeled robot might not be able to reach, three-wheelers overcome a problem inherent in four-wheeled vehicles: four points are not guaranteed to lie on the same plane. If a four-wheeled robot encounters uneven terrain, there is a good chance that one of its wheels will not be in contact with the ground.
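The coplanarity problem is easy to see numerically. A minimal sketch (the contact-point coordinates below are invented for illustration) tests whether four wheel contact points share a plane using the scalar triple product; any three points always define a plane, which is exactly the three-wheeler's advantage.

```python
def coplanar(p0, p1, p2, p3, tol=1e-9):
    """Return True if four 3-D points lie on one plane."""
    # Vectors from p0 to the other three contact points.
    v1 = [b - a for a, b in zip(p0, p1)]
    v2 = [b - a for a, b in zip(p0, p2)]
    v3 = [b - a for a, b in zip(p0, p3)]
    # Scalar triple product v1 . (v2 x v3) is zero iff the points are coplanar.
    cross = [v2[1] * v3[2] - v2[2] * v3[1],
             v2[2] * v3[0] - v2[0] * v3[2],
             v2[0] * v3[1] - v2[1] * v3[0]]
    return abs(sum(a * b for a, b in zip(v1, cross))) < tol

# Flat ground: all four wheels can touch.
print(coplanar((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)))      # True
# A 5 cm bump under one corner: one wheel must leave the ground.
print(coplanar((0, 0, 0), (1, 0, 0), (1, 1, 0.05), (0, 1, 0)))   # False
```

With three wheels this test is never needed, which is why a three-wheeled platform keeps full ground contact on uneven terrain.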
Robot developers are also taking a page from the design book of concept vehicles such as Nissan’s Pivo 2, whose in-wheel electric motor can propel it in any direction (even sideways), and NASA’s off-road ATHLETE (All-Terrain Hex-Limbed Extra-Terrestrial Explorer), which has in-wheel electric motors in its "foot pads". Built on six limbs, each with six degrees of freedom (DoF) and a 1-DoF wheel attached, ATHLETE uses its wheels for efficient driving over stable, gently rolling terrain, but each limb can also serve as a general-purpose leg. In the latter case, wheels can be locked and used as feet to walk out of excessively soft, obstacle-laden, steep, or otherwise extreme terrain.
A side benefit of the wheel-on-limb approach is that each limb has sufficient DoF for use as a general-purpose manipulator (hence the name "limb" instead of "leg"). The prototype ATHLETE vehicles have quick-disconnect end effector adapters on the limbs that allow tools to be drawn out of a "tool belt" and maneuvered by the limb. Mechanical action of the wheel rotation also actuates the tools, so that they can take advantage of the one horsepower motor usually used for driving to instead enable drilling, gripping or other power-tool functions. ATHLETE is envisioned as a test vehicle for future human exploration of the lunar surface, useful for unloading bulky cargo from stationary landers and transporting it long distances.
I can see you
Control of today’s robots is often remote, which requires advanced computer vision capabilities as well as sophisticated sensor and interface techniques. Vision is one of several sensory components that work together to fulfill specific tasks: navigating toward a given target location while avoiding obstacles, finding a person and reacting to that person’s commands, or detecting, recognizing, grasping and delivering objects. Examples include industrial, medical, and space applications, such as using vision to estimate surface terrain for landing or navigating on Mars.
Computer vision systems rely on image sensors that detect electromagnetic radiation, typically visible or infrared light. The technology must extract information from single images or video sequences and determine what the objects in the image represent.
Among the major questions being worked on in both academia and industrial research labs are seemingly basic issues such as what makes a given object that object: what are the properties that make, say, John’s pencil uniquely John’s pencil? In the past much research concentrated on example-based recognition of objects by learned features, either visual or shape-based; essentially John’s pencil is shown (or described) to the robot and in this way can be recognized again. Even today in most practical computer vision applications computers are pre-programmed to solve a particular task.
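Example-based recognition can be sketched in a few lines: an object is "shown" to the robot as a feature vector, and later queries are matched against the stored examples. The feature values and object names below are invented for illustration; real systems use learned visual or shape descriptors with far more dimensions.

```python
import math

# Toy "learned feature" store: each known object maps to a 3-D feature
# vector (values and names are hypothetical, for illustration only).
known = {
    "johns_pencil": (0.9, 0.1, 0.3),
    "coffee_mug":   (0.2, 0.8, 0.5),
}

def recognize(features, database):
    """Return the stored object whose feature vector is closest
    (Euclidean distance) to the query features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda name: dist(features, database[name]))

# A new observation close to the pencil's stored features is
# recognized as the pencil.
print(recognize((0.85, 0.15, 0.25), known))  # johns_pencil
```

This nearest-neighbor matching is the simplest instance of the "show it once, recognize it again" approach the article describes; learning-based methods generalize it by training the feature extractor itself.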
But methods based on learning are now becoming increasingly common. The next step of development in robotics will be the incorporation of artificial intelligence, allowing robots to perform complex, specialized and non-routine tasks such as to remotely control medical or surgical instruments.
Toward that end, IBM researchers have unveiled a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition. In a sharp departure from traditional concepts in designing and building computers, IBM’s first neurosynaptic computing chips recreate the interplay between spiking neurons and synapses in biological systems, such as the brain, through advanced algorithms and silicon circuitry. Its first two prototype chips have already been fabricated and are currently undergoing testing.
Called cognitive computers, systems built with these chips won’t be programmed the same way traditional computers are today. Rather, cognitive computers are expected to learn through experiences, find correlations, create hypotheses, and remember – and learn from – the outcomes, mimicking the brain’s structural and synaptic plasticity.
IBM has two working prototype designs. Both cores were fabricated in 45nm SOI-CMOS and contain 256 neurons. One core contains 262,144 programmable synapses and the other contains 65,536 learning synapses. The IBM team has successfully demonstrated simple applications like navigation, machine vision, pattern recognition, associative memory and classification.
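The spiking-neuron behavior these chips implement in silicon can be illustrated with a software sketch. Below is a minimal leaky integrate-and-fire neuron; the threshold, leak and synaptic weight values are illustrative assumptions, not parameters of IBM's chips.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9, weight=0.3):
    """Minimal leaky integrate-and-fire neuron: the membrane potential
    decays each time step, integrates weighted input spikes, and emits
    an output spike (then resets) when it crosses threshold.
    Parameter values are illustrative, not taken from IBM's design."""
    v, spikes = 0.0, []
    for spike_in in inputs:
        v = v * leak + weight * spike_in   # leak, then integrate synaptic input
        if v >= threshold:
            spikes.append(1)               # fire
            v = 0.0                        # reset after firing
        else:
            spikes.append(0)
    return spikes

# Five input spikes in a row charge the neuron until it fires once.
print(lif_neuron([1, 1, 1, 1, 1, 0, 0, 1]))  # [0, 0, 0, 1, 0, 0, 0, 0]
```

A hardware synapse in such a chip effectively stores the `weight`; learning synapses, like those in IBM's second prototype core, adjust that weight based on spike timing.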
Get Involved
Major chip suppliers are increasingly active in providing development kits for robot applications. For example, Freescale's FSLBot Robot Kit operates with its Tower System Mechatronics Board to form an easy-to-use mechatronics development and demonstration platform. The kit includes four PWM-controlled servos (actuators), metal legs and the Tower System Mechatronics Board, which carries a 3-axis accelerometer and a 12-channel touch sensor. Through building the kit, engineers can experience what the 4-DoF bipedal walking robot can do.
The Tower System Mechatronics Board is programmable in C/C++ using CodeWarrior and an on-board flash programming tool. For fast prototyping, or for individuals without C/C++ experience, the board is also supported by the Robot Vision Toolkit and RobotSee (a language as simple as BASIC).
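The kit's PWM-controlled servos position each joint by pulse width. The mapping below is a sketch using the common hobby-servo convention of 1000-2000 microsecond pulses over a 0-180 degree range; it is not taken from the FSLBot documentation, whose firmware is written in C/C++.

```python
def servo_pulse_us(angle_deg, min_us=1000, max_us=2000):
    """Map a servo angle (0-180 degrees) to a PWM pulse width in
    microseconds. The 1000-2000 us range is a common hobby-servo
    convention, assumed here for illustration."""
    angle_deg = max(0.0, min(180.0, angle_deg))   # clamp to valid range
    return min_us + (max_us - min_us) * angle_deg / 180.0

print(servo_pulse_us(0))    # 1000.0  (full one way)
print(servo_pulse_us(90))   # 1500.0  (center)
print(servo_pulse_us(180))  # 2000.0  (full the other way)
```

Firmware on the board would emit this pulse on each servo's PWM channel at roughly 50 Hz; walking gaits are then sequences of such angle commands to the four joints.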
At its recent Freescale Technology Forum (FTF) in San Antonio, 125 innovators participated in the Make It Lab to create unique designs using either an FSLBOT Mechatronics robot kit or various tower controller and peripheral boards. Entries were judged on application innovation and creativity as well as their ability to integrate different elements of the tool kit provided. First place was awarded to Gabriel Zapata, who created Mine Finder, a metal-detecting (and avoiding) robot using the Freescale Xtrinsic MAG3110 magnetometer.