Machine learning inference for embedded reference design

This reference design demonstrates how to use Texas Instruments Deep Learning (TIDL) on a Sitara AM57x System-on-Chip (SoC) to bring deep learning inference to an embedded application. The design shows how to run inference on either the C66x DSP cores (available in all AM57x SoCs) or the Embedded Vision Engine (EVE) subsystems, which act as black-box deep learning accelerators on the AM5749 SoC.
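As a rough sketch, offloading a network to the EVE or C66x DSP with the TIDL API shipped in the Processor SDK follows the pattern below. Class and method names follow the TIDL API documented in the SDK; the network configuration file name and the frame I/O are placeholders, not part of this design.

```cpp
// Sketch of TIDL API usage on AM57x (hedged: config file name and
// frame handling are placeholders; hardware and the TIDL library
// from TI's Processor SDK are required to actually run this).
#include "configuration.h"
#include "executor.h"
#include "execution_object.h"

using namespace tidl;

int main() {
    // Network and layer parameters produced by the TIDL import tool
    Configuration config;
    if (!config.ReadFromFile("network.cfg"))  // placeholder config file
        return 1;

    // Prefer EVE where present (e.g. AM5749); otherwise fall back
    // to the C66x DSP cores available on all AM57x SoCs
    DeviceType device = Executor::GetNumDevices(DeviceType::EVE) > 0
                            ? DeviceType::EVE
                            : DeviceType::DSP;

    Executor executor(device, {DeviceId::ID0}, config);

    // One ExecutionObject per device; process a frame asynchronously
    ExecutionObject* eo = executor[0];
    // ... fill eo's input buffer with a preprocessed frame ...
    eo->ProcessFrameStartAsync();
    eo->ProcessFrameWait();
    // ... read inference results from eo's output buffer ...
    return 0;
}
```

The same host-side code drives either accelerator; only the `DeviceType` passed to the `Executor` changes, which is what lets the design treat EVE and DSP as interchangeable inference engines.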

This reference design applies to any application looking to bring deep learning/machine learning inference to an embedded system. Customers who want to get started quickly with a deep learning network, or to evaluate the performance of their own networks on an AM57x device, will find a step-by-step guide to TIDL in TI's free AM57x Processor SDK.

