At embedded world 2026, on the DigiKey booth, Paige Hookway speaks with Violet Su, Business Development Manager, and Shuyang Zhou, Team Lead of the AI Sensing Product Line, both of Seeed Studio, about Edge AI.
Seeed Studio operates across four product lines: AI sensing, AI robotics, maker platforms, and sensor networks. The AI sensing line covers vision and sound AI hardware; robotics provides components from processing units to sensors and actuators for building everything from desktop robots to mobile platforms; the maker line offers MCUs and HMIs for rapid prototyping; and the sensor network line supports a range of connectivity options and protocols. Underpinning all four lines is a commitment to open source, which Su describes as the thread connecting more than 1,000 products in the company’s catalogue.
Turning to Edge AI, Zhou outlines three driving factors: speed, privacy, and cost. Because processing happens on the device itself, there is no round-trip to a server – critical in robotics and smart factory environments, where latency has direct operational consequences. Data stays local, removing the exposure that comes with Cloud transmission. And the economics are straightforward: a one-time hardware cost versus an ongoing Cloud subscription.
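Zhou's cost point reduces to simple break-even arithmetic. The sketch below uses invented placeholder figures rather than real pricing – the point is the shape of the comparison, not the numbers.

```python
# Back-of-envelope version of the edge-vs-Cloud cost model Zhou outlines.
# Both figures are hypothetical placeholders, not Seeed or Cloud-vendor pricing.
hardware_cost = 120.0   # one-time cost of an edge inference device (USD)
cloud_monthly = 15.0    # recurring Cloud inference subscription (USD/month)

breakeven_months = hardware_cost / cloud_monthly
print(f"Edge hardware pays for itself after ~{breakeven_months:.0f} months")
```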
One of the harder problems in the space, Zhou explains, is toolchain fragmentation. Every major silicon vendor ships its own SDK and optimisation workflow – CUDA for NVIDIA, OpenVINO for Intel, RKNN for Rockchip – and developers are expected to navigate all of them. “All the developers need to learn from different SDK and different optimisation methods and also different deployment processes,” she says. Seeed’s response is to abstract those differences and build toward a unified development experience across hardware targets.
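What such an abstraction might look like is sketched below. This is illustrative only: the `InferenceBackend` interface, `create_runtime` helper, and backend registry are hypothetical names invented for this sketch, not Seeed's actual API, and the dummy CPU backend stands in for real wrappers around CUDA, OpenVINO, or RKNN.

```python
# A minimal sketch of a unified runtime: application code targets one
# interface; each hardware port implements it by wrapping the vendor SDK.
from abc import ABC, abstractmethod

import numpy as np


class InferenceBackend(ABC):
    """Common contract every hardware target implements (hypothetical)."""

    @abstractmethod
    def load(self, model_path: str) -> None: ...

    @abstractmethod
    def run(self, inputs: np.ndarray) -> np.ndarray: ...


class CPUBackend(InferenceBackend):
    """Stand-in backend so the sketch runs anywhere. A real port would
    delegate to CUDA/TensorRT, OpenVINO, or RKNN instead of this matmul."""

    def load(self, model_path: str) -> None:
        self.weights = np.eye(4)  # pretend the "model" is a weight matrix

    def run(self, inputs: np.ndarray) -> np.ndarray:
        return inputs @ self.weights


# A registry lets applications select hardware by name, never by SDK;
# entries like "cuda", "openvino", or "rknn" would slot in alongside "cpu".
BACKENDS: dict[str, type[InferenceBackend]] = {"cpu": CPUBackend}


def create_runtime(target: str) -> InferenceBackend:
    return BACKENDS[target]()


if __name__ == "__main__":
    rt = create_runtime("cpu")
    rt.load("model.bin")             # placeholder path
    print(rt.run(np.ones((1, 4))))   # caller never touches a vendor SDK
```

The design choice mirrors what Zhou describes: developers learn one shared interface, while the per-vendor optimisation and deployment work is pushed down into the backend implementations.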
On open source, Su makes the case that it is not just a licensing philosophy but a practical answer to AI’s privacy problem. With open source, there are no hidden algorithms and no undisclosed data collection – users can verify exactly how a model is using their information. Beyond transparency, it compresses development timelines: a camera algorithm built by one developer becomes the foundation that the next person improves and a third adapts for a robotics application. “It speeds up … very much compared to [if] everyone is just working alone,” she says.
On the roadmap, the priorities map onto the four product lines. AI sensing will expand its range of vision and sound products, targeting affordability alongside capability. Robotics will add more modular accessories to support varied build configurations. The maker line will broaden chipset compatibility and bundled software. Sensor networks will tackle outdoor robustness and the growing number of connectivity protocols in the field.
On the broader trajectory of the industry, Su traces a line from electricity to IoT to AI – each wave of technology expanding the scope of what can be automated. The direction of travel she sees now is AI moving from screens and Cloud services into the physical world: homes, offices, wearables, and eventually remote and rural environments that have so far been left out of the conversation. The goal, she says, is to make AI accessible not just to large organisations but to individuals and small teams – and reliable enough to work where the infrastructure is thin.
Watch the full conversation here: