Enhancing system architecture implementation for AI applications
As artificial intelligence (AI) processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functions such as computer vision and voice recognition. Microchip Technology, via its Silicon Storage Technology (SST) subsidiary, is addressing this challenge.
The company is doing this by significantly reducing power consumption with its analog memory technology, the memBrain neuromorphic memory solution. Based on its industry-proven SuperFlash technology and optimised to perform vector matrix multiplication (VMM) for neural networks, Microchip's analog flash memory solution improves the system architecture implementation of VMM through an analog in-memory compute approach, enhancing AI inference at the edge.
Because current neural network models may require 50M or more synapses (weights) for processing, it becomes challenging to provide enough bandwidth from off-chip DRAM, creating a bottleneck for neural network computing and driving up overall compute power.
In contrast, the memBrain solution stores synaptic weights in on-chip floating-gate cells, offering significant improvements in system latency. Compared with traditional digital DSP and SRAM/DRAM-based approaches, it delivers 10 to 20 times lower power and a significantly reduced overall bill of materials (BOM).
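To make the core operation concrete: VMM multiplies an input activation vector by a stored weight matrix, and it dominates neural network inference. The sketch below illustrates this in Python with NumPy, using hypothetical layer dimensions; the comment about weight residency paraphrases the on-chip versus off-chip distinction described above, not an actual memBrain API.

```python
import numpy as np

# Hypothetical layer dimensions, for illustration only.
n_inputs, n_outputs = 256, 64

rng = np.random.default_rng(0)

# In an analog in-memory compute array, these weights would reside in
# on-chip flash cells, so they need not be streamed from off-chip DRAM
# on every inference pass.
weights = rng.standard_normal((n_inputs, n_outputs))

def vmm(x, w):
    """Vector matrix multiply: the dominant operation in NN inference."""
    return x @ w

x = rng.standard_normal(n_inputs)  # one input activation vector
y = vmm(x, weights)
print(y.shape)  # one output value per stored weight column: (64,)
```

Each of the 64 outputs is a dot product of the input vector with one weight column; in an in-memory compute array these dot products are performed where the weights are stored, which is the source of the latency and power savings claimed above.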
“As technology providers for the automotive, industrial and consumer markets continue to implement VMM for neural networks, our architecture helps these forward-facing solutions realise power, cost and latency benefits,” said Mark Reiten, Vice President of the License Division at SST. “Microchip will continue to deliver highly reliable and versatile SuperFlash memory solutions for AI applications.”
The memBrain solution is being adopted by companies looking to advance machine learning capabilities in edge devices. Due to its ability to significantly reduce power, this analog in-memory compute solution is well suited to a broad range of AI applications.
“Microchip’s memBrain solution enables ultra-low-power in-memory computation for our forthcoming analog neural network processors,” said Kurt Busch, CEO of Syntiant Corp. “Our partnership with Microchip continues to offer Syntiant many critical advantages as we support pervasive machine learning for always-on applications in voice, image and other sensor modalities in edge devices.”
SST will showcase this analog memory solution and present the tile-array-based architecture of Microchip's memBrain product at the AI/ML session track on flash performance scaling at the 2019 Flash Memory Summit, held August 6-8, 2019, at the Santa Clara Convention Center in Santa Clara, California.