Omnitek has announced the immediate availability of a new Convolutional Neural Network (CNN) inference engine delivering high performance per watt at full FP32 accuracy in a midrange SoC FPGA. Optimised for the Intel Arria 10 GX architecture, the Omnitek Deep Learning Processing Unit (DPU) achieves 135 GOPS/W at full 32-bit floating-point accuracy when running the VGG-16 CNN on an Arria 10 GX 1150.
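For context, the efficiency figure can be translated into a rough inference rate with some back-of-envelope arithmetic. Note that the ~31 GOPs per VGG-16 forward pass is a commonly cited estimate, and the 30 W power budget is a purely hypothetical assumption; neither number comes from Omnitek's announcement:

```python
# Back-of-envelope sketch only. The per-inference operation count for
# VGG-16 is an external estimate and the power budget is hypothetical,
# not a figure from the announcement.
GOPS_PER_WATT = 135       # quoted efficiency at full FP32 accuracy
OPS_PER_INFERENCE = 31    # approx. GOPs for one VGG-16 forward pass (estimate)
POWER_WATTS = 30          # hypothetical device power budget (assumption)

throughput_gops = GOPS_PER_WATT * POWER_WATTS        # 4050 GOPS
inferences_per_sec = throughput_gops / OPS_PER_INFERENCE
print(round(inferences_per_sec))                     # ~131 under these assumptions
```

Under those assumptions the device would sustain on the order of 130 VGG-16 inferences per second; the real figure depends on the actual power draw and model implementation.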
The innovative design employs a novel mathematical framework that combines low-precision fixed-point maths with floating-point maths, achieving this very high compute density with zero loss of accuracy.
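Omnitek's actual framework is proprietary and not described in the announcement, but the general idea behind mixed fixed-/floating-point arithmetic can be sketched: perform the multiplies on low-precision fixed-point values (cheap in FPGA DSP blocks), then accumulate in wide integers and rescale in floating point. The function names and parameters below are illustrative, not Omnitek's:

```python
# Illustrative sketch of mixed-precision maths; NOT Omnitek's scheme.
import numpy as np

def quantise(x, frac_bits=8):
    """Round to a signed fixed-point grid with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    return np.round(x * scale).astype(np.int64), scale

def mixed_precision_dot(a, b, frac_bits=8):
    """Integer (fixed-point) multiplies, one floating-point rescale at the end."""
    qa, sa = quantise(a, frac_bits)
    qb, sb = quantise(b, frac_bits)
    acc = int(np.dot(qa, qb))        # exact wide-integer accumulation
    return acc / float(sa * sb)      # single floating-point rescale

rng = np.random.default_rng(0)
a = rng.standard_normal(256)
b = rng.standard_normal(256)
error = abs(mixed_precision_dot(a, b) - float(np.dot(a, b)))
print(error)  # small residual quantisation error
```

Because the integer accumulation is exact, the only error comes from the initial rounding, which is why such schemes can approach full floating-point accuracy at a fraction of the hardware cost.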
Scalable across a wide range of Arria 10 GX and Stratix 10 GX devices, the DPU can be tuned for low cost or high performance in either embedded or data centre applications. The DPU is fully software programmable in C/C++ or Python using standard frameworks such as TensorFlow, enabling it to be configured for a wide range of standard CNN models, including GoogLeNet, ResNet-50 and VGG-16, as well as custom models. No FPGA design expertise is required.
Roger Fawcett, CEO at Omnitek, commented: “We are very excited to apply this unique innovation, resulting from our joint research program with Oxford University, to reducing the cost of a whole slew of AI-enabled applications, particularly in video and imaging where we have a rich library of highly optimised IP to complement the DPU and create complete systems on a chip.”
FPGAs are being adopted as the platform of choice for many intelligent video and vision systems. They are well suited to Machine Learning applications thanks to their massively parallel DSP architecture, distributed memory and ability to reconfigure logic and connectivity for different algorithms.
To this latter point, Omnitek’s DPU can be configured to provide optimal compute performance not only for CNNs, RNNs, MLPs and other neural network topologies that exist today but also, more importantly, for the as-yet-unknown algorithms and innovative optimisation techniques that ongoing research in this field will produce.