How Edge AI will transform the next generation of connected systems

AI is entering a new era – one where processing can be performed at the Edge and is no longer confined to a data centre. Now, a new generation of AI-native System-on-Chips (SoCs), accelerator cores, and advanced wireless connectivity is making it cost-effective to extend AI all the way to the new network frontier, adding increasingly powerful capabilities to even the smallest IoT devices.

There’s a good argument to be made that technological progress in AI has outpaced expectations in the IoT market. Knowing that the processors used to perform AI workloads in data centres are power-hungry and expensive, can a system architect really justify adding AI to a factory quality-control monitoring system, let alone to a residential security camera?

The short answer is an emphatic yes. New AI-native processors can run on even the smallest of batteries, and while processors designed for AI workloads have typically relied on separate wireless ICs, the two are now increasingly combined into single-chip solutions that offer even greater cost efficiency. Multimodal operation is readily supported, it is easier than ever to develop lightweight AI models for any application, and the associated development tools are widely available and easy to use.
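To give a sense of how approachable this has become, the sketch below builds a tiny Keras model, shrinks it with post-training optimisation, and runs it with the TensorFlow Lite interpreter – the same class of lightweight model an Edge AI processor would execute on-device. It is a minimal illustration under assumed conditions, not Synaptics-specific code; the toy architecture and random input are placeholders.

# Minimal sketch: develop a lightweight model and run it locally.
# The model shape and input are illustrative placeholders, not a real workload.
import numpy as np
import tensorflow as tf

# Toy model standing in for a small sensor classifier (assumed architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert to a compact TensorFlow Lite flatbuffer with default optimisations.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Run the lightweight model the same way an Edge device would.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

sample = np.random.rand(1, 16).astype(np.float32)   # placeholder sensor reading
interpreter.set_tensor(inp["index"], sample)
interpreter.invoke()
print("class scores:", interpreter.get_tensor(out["index"]))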

In addition, though the Edge AI ecosystem is still young, open-source support is already appearing, making Edge AI progressively easier to adopt. A prominent example is Google’s RISC-V-based Coral Neural Processing Unit (NPU), a Machine Learning (ML) accelerator.

Why Edge AI will define the next decade

The global market for AI in IoT is on pace to be worth about $93 billion by the end of 2025, and it is projected to increase to approximately $161 billion by 2034, according to Precedence Research. Estimates of the AI/IoT market typically cover far more than just chips and AI software; they include a variety of consumer, enterprise, and industrial IoT devices that will integrate AI capabilities. The market projections assume that much more intelligence is coming to the network Edge.

Multimodal AI, combining inputs such as vision, audio, and motion, is quickly becoming the default expectation for IoT devices. While it’s easy to include a light detector, motion sensor, or even a microphone in an Edge device, vision has historically been more challenging to implement, particularly if it needs to be added later in a product design. New Edge AI processors and advances in associated AI software now make it practical to add machine vision to a much wider variety of products than ever before, from residential appliances to industrial monitoring systems.
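As a simple illustration of the multimodal idea, the hypothetical sketch below uses late fusion: each modality produces its own confidence score on-device, and a weighted combination drives the final decision. The scores, weights, and threshold are illustrative assumptions, not values from any particular product.

# Minimal late-fusion sketch: combine per-modality confidence scores on-device.
import numpy as np

def fuse_modalities(scores: dict, weights: dict) -> float:
    """Weighted average of per-modality confidence scores in [0, 1]."""
    total = sum(weights.values())
    return sum(scores[m] * weights[m] for m in scores) / total

# Example: a doorbell camera deciding whether a person is present.
scores = {"vision": 0.82, "audio": 0.40, "motion": 0.95}   # per-model outputs (assumed)
weights = {"vision": 0.5, "audio": 0.2, "motion": 0.3}     # tuned per application (assumed)

if fuse_modalities(scores, weights) > 0.6:                 # illustrative threshold
    print("person detected - wake the main application processor")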

The new requirements at the Edge

AI is no longer just capable of running at the Edge – in many cases, it needs to. In the way AI originally worked, and still mostly works today, requests for an AI result are sent upstream through the network to a distant data centre. There, servers optimised to run AI workloads process the data and send a response back downstream.

Latency is inherent in this arrangement. The long round trip inevitably creates a lag between request and response. In a conversation with a voice-activated virtual assistant, the delay is rarely long enough to be noticeable. However, for a growing number of IoT applications where decisions must be made in real time – typically safety functions such as closing a valve in a critical system, locking a door, or activating a warning signal – that latency is intolerable and potentially dangerous.
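The contrast is easy to see in a sketch of a local control loop: the sensor reading, the decision, and the actuation all happen on the device, so the latency is the microseconds of local computation rather than a network round trip. The sensor and actuator functions below are hypothetical stand-ins, and the threshold is illustrative.

# Minimal sketch of a local, real-time safety decision (no upstream request).
import random
import time

PRESSURE_LIMIT = 8.5  # bar; illustrative safety threshold

def read_pressure() -> float:
    """Stand-in for a real sensor read (simulated here)."""
    return random.uniform(7.0, 9.0)

def close_valve() -> None:
    """Stand-in for a real actuator command."""
    print("valve closed")

for _ in range(10):                         # sample a few readings at 10 Hz
    start = time.perf_counter()
    if read_pressure() > PRESSURE_LIMIT:
        close_valve()                       # decision taken entirely on-device
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"decision latency: {elapsed_ms:.3f} ms")
    time.sleep(0.1)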

Edge AI also addresses growing privacy concerns. When data is transmitted through a network, every relay point in the round trip is a potential security vulnerability. Keeping data local to the Edge device significantly reduces that risk. Edge devices are generally low-reward targets for hackers, and on-chip data protection mechanisms are now well established and effective.

Where Edge AI makes the biggest impact

Edge AI can be appropriate for everything from residential kitchen appliances to building management systems, and from smart city devices to factory automation equipment. The potential benefits come into sharp focus with medical wearables such as pacemakers, heart rate monitors, and blood glucose monitors. These devices demand absolute reliability and durability, as well as the intelligence to make increasingly complex decisions – for example, determining when, and how much, insulin should be delivered in response to a blood glucose reading. They also require ultra-low power consumption, careful use of connectivity to conserve power, and strict protections for patient privacy.

Enabling the next wave of IoT intelligence

Many devices now face demanding requirements that are difficult to meet without Edge AI. Drawing on its long history in embedded intelligence, Synaptics has developed the Astra SL2600 series, a family of AI-native Edge processors that deliver efficient, high-performance compute and integrate Google’s Coral NPU to support this new class of smarter devices. This architecture anticipates the shift toward increasingly multimodal, autonomous IoT systems that will define the next generation of Edge innovation.

The new product line is suited to a wide range of IoT applications, including smart home appliances, wearables, automation gateways, smart retail, and industrial vision systems. Looking ahead to 2026-2030, these sectors will demand increasingly intelligent, secure, and energy-efficient devices capable of running sophisticated AI models locally. Platforms like the Synaptics Astra SL2600 series are designed to meet this shift, supporting the next wave of intelligent, real-time Edge innovation.

Explore how Synaptics Astra SL2600 series SoCs can enable you to bring your Edge AI IoT applications to life.

About the author:

Neeta Shenoy, Vice President, Marketing, Synaptics
