The team’s new memcapacitor device stores artificial neural network models and could be used for speech recognition applications. (Image Credit: Max Planck Institute of Microstructure Physics)
Training and implementing artificial neural networks requires advanced devices that can perform data-intensive computations. Now, researchers at the Max Planck Institute of Microstructure Physics and SEMRON GmbH have developed energy-efficient memcapacitors that could be used in machine-learning applications. The devices operate by exploiting charge shielding.
While reviewing previous studies, the team found that existing memcapacitive devices are difficult to scale up and suffer from a poor dynamic range. Drawing inspiration from brain synapses and neurotransmitters, they developed new, highly efficient memcapacitive devices that are easier to scale up and offer a higher dynamic range.
The device controls the electric-field coupling between a top gate electrode and a bottom read-out electrode through a shielding layer, which is in turn adjusted by an analog memory. This analog memory stores the artificial neural network’s weight values, much as brain synapses store and convey information via neurotransmitters.
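As a toy illustration of this principle (a hypothetical sketch, not the authors’ actual device model), one can treat the stored analog state as a shielding factor that scales the coupling capacitance between the gate and the read-out electrode:

```python
# Toy model of a memcapacitive synapse. The names and the linear
# shielding relation are illustrative assumptions, not the published
# device physics.

def memcap_readout_charge(v_gate, c_max, shielding):
    """Charge induced on the read-out electrode.

    shielding: analog memory state in [0, 1]; 1 fully screens the
    gate field (weight ~ 0), 0 leaves full coupling (weight ~ c_max).
    """
    c_eff = c_max * (1.0 - shielding)  # effective coupling capacitance
    return c_eff * v_gate              # Q = C * V

# A stored weight maps to a shielding state; a read operation then
# multiplies the input voltage by that weight.
q = memcap_readout_charge(v_gate=1.0, c_max=1e-12, shielding=0.25)
```

The key point is that the multiplication happens in the analog domain: adjusting the shielding state reprograms the weight without moving any digital data.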
The researchers arranged 156 of these devices in a crossbar array and used it to train a neural network to recognize three Roman letters: M, P, and T. The devices achieved energy efficiencies above 3,500 TOPS/W at 8-bit precision, 35 to 500 times higher than comparable memristive approaches. As a result, the team’s new devices could run large, complex deep learning models while consuming very little power.
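The crossbar readout amounts to an analog vector-matrix multiply: each input line drives a row of capacitive weights, and each output column sums the resulting charge. A minimal sketch of that computation, using made-up weights and array dimensions rather than the published 156-device configuration:

```python
import numpy as np

# Hypothetical example: 6 input rows x 3 output columns, standing in
# for the paper's larger crossbar.
rng = np.random.default_rng(0)
weights = rng.uniform(0.0, 1.0, size=(6, 3))        # capacitances, arbitrary units
inputs = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])   # input voltages

# Each column's accumulated charge is the dot product of the input
# vector with that column's capacitances -- one full matrix-vector
# multiply per read cycle, performed in the analog domain.
column_charge = inputs @ weights

# A letter classifier built on such an array could dedicate one
# column each to 'M', 'P', and 'T' and pick the largest readout.
predicted = int(np.argmax(column_charge))
```

Because the whole multiply-accumulate happens as charge summation on the columns, the energy cost per operation can be far lower than shuttling weights through a digital processor, which is where the reported TOPS/W figures come from.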
Improvements in speech recognition could let users interact with computers and other electronic devices by voice. However, that can’t be achieved without implementing large neural network models with billions of parameters. Hardware that can run such models efficiently, like the team’s memcapacitor, could eventually unlock AI’s full potential.
SEMRON has already filed patent applications for deep learning models focused on speech recognition. Going forward, the team plans to develop more neural network-based models while scaling up the memcapacitor-based system by boosting its efficiency and device density.
Have a story tip? Message me at: http://twitter.com/Cabe_Atwell