Semidynamics announced the Semidynamics Inferencing Tools, a new software suite that lets developers deploy AI applications on the Cervell RISC-V NPU in a fraction of the usual integration time.
Sitting above the Aliado SDK and leveraging Semidynamics’ ONNX Runtime Execution Provider, the suite streamlines everything from session setup to tensor management, so teams can move from a trained model to a running product quickly and confidently.
“Developers want results,” said Pedro Almada, lead software developer, Semidynamics. “With the Inferencing Tools, you point to an ONNX model, choose your configuration, and you’re running on Cervell – prototype in hours, then harden for production.”
Organisations building AI features – assistants, agents, vision pipelines – can now target Cervell with less integration overhead, shorter development cycles, and a clearer path from prototype to production. The Inferencing Tools let teams focus on application logic and user value while keeping the codebase clean and maintainable.
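In ONNX Runtime terms, the "point to a model, choose a configuration, run" flow Almada describes might look like the sketch below. This is an assumption based on the standard ONNX Runtime Python API, not the documented Inferencing Tools interface; in particular, the provider name `SemidynamicsExecutionProvider` and the fallback logic are hypothetical.

```python
# Hedged sketch of an ONNX Runtime session targeting an NPU execution provider.
# "SemidynamicsExecutionProvider" is a hypothetical name for illustration only.

def pick_providers(available):
    """Prefer the NPU provider when present, otherwise fall back to CPU."""
    preferred = ["SemidynamicsExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

def run_model(model_path, input_tensor, ort):
    """Create a session on the best available provider and run one inference.

    `ort` is the onnxruntime module, passed in so the flow is easy to test
    or swap for a stub.
    """
    providers = pick_providers(ort.get_available_providers())
    session = ort.InferenceSession(model_path, providers=providers)
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: input_tensor})
```

Keeping provider selection in a small helper makes the CPU fallback explicit, so the same application code runs on a developer laptop and on Cervell hardware.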
Highlights
- Faster time-to-production: High-level library on top of Semidynamics’ ONNX Runtime Execution Provider for Cervell – no model conversion required.
- Built-in guidance: Clean APIs handle session setup, tensor management, and inference orchestration, reducing repetitive code and integration risk.
- Production-grade examples: Ready-to-adapt samples for LLM chatbots (e.g., Llama, Qwen), object detection (YOLO family), and image classification (ResNet, MobileNet, AlexNet).
- Validated at scale: Tested by Semidynamics across a wide range of ONNX models to help ensure predictable performance and robust deployment.
- One ecosystem, two lanes:
  - Aliado SDK for kernel-level control and peak performance.
  - Inferencing Tools for rapid iteration, cleaner app code, and faster shipping.
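To make the "tensor management" the highlights mention concrete, here is a minimal sketch of the input preparation an image-classification sample (ResNet-, MobileNet-, or AlexNet-style) typically has to hand-roll when calling ONNX Runtime directly; the suite is said to wrap steps like these. The mean/std values are the common ImageNet constants, assumed here rather than taken from the Inferencing Tools.

```python
import numpy as np

def to_nchw_tensor(image_hwc: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 image to a 1x3xHxW float32 tensor,
    normalized with the usual ImageNet mean/std."""
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = image_hwc.astype(np.float32) / 255.0  # scale to [0, 1]
    x = (x - mean) / std                      # per-channel normalization
    x = np.transpose(x, (2, 0, 1))            # HWC -> CHW
    return np.expand_dims(x, axis=0)          # add batch dim -> NCHW

def top1(logits: np.ndarray) -> int:
    """Index of the highest-scoring class in a 1xN logits tensor."""
    return int(np.argmax(logits, axis=1)[0])
```

Boilerplate of this sort is exactly the "repetitive code and integration risk" a high-level library can absorb, leaving the application to call a single predict-style entry point.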