Raspberry Pi brings GenAI to the Edge with the AI HAT+ 2. So what's new?

Raspberry Pi has announced the Raspberry Pi AI HAT+ 2, the latest iteration of its AI accelerator add-on for the Raspberry Pi 5. The HAT+ 2 builds on the foundation set by the original Raspberry Pi AI HAT+ range, expanding its capabilities beyond computer vision workloads to support GenAI at the Edge.

The AI HAT+ 2 integrates a Hailo-10H neural network accelerator. Rated at 40 TOPS (trillions of operations per second) at INT4 precision, it offers significantly more inference performance than the Hailo-8 family used in its predecessors. Combined with 8GB of dedicated on-board RAM, this enables the board to host and run large language models (LLMs) and vision-language models (VLMs) locally on the Raspberry Pi 5. Running models locally reduces dependence on external networks, lowers latency, and enhances data privacy and security for Edge applications.

A key improvement in the AI HAT+ 2 is its support for generative AI workloads. While earlier HAT+ boards delivered strong performance for object detection, pose estimation, and scene segmentation, users were limited by compute and memory when working with larger LLMs. With the addition of 8GB of LPDDR4X memory on the HAT itself – separate from the host Pi's memory – developers can now run models such as Llama 3.2 (1 billion parameters) and DeepSeek-R1-Distill (1.5 billion parameters) without consuming the Pi's main RAM.
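Some back-of-the-envelope arithmetic shows why 8GB of on-HAT memory is enough for models of this size. The sketch below estimates weight storage alone at INT4 and INT8 quantisation; the parameter counts come from the model names above, while KV cache, activations, and runtime overheads are deliberately ignored, so real usage will be somewhat higher.

```python
# Rough, illustrative estimate of accelerator memory needed for model
# weights at different quantisation levels. Overheads (KV cache,
# activations, runtime buffers) are intentionally excluded.

def weight_footprint_gb(num_params: float, bits_per_param: int) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9

models = {
    "Llama 3.2 (1B params)": 1.0e9,
    "DeepSeek-R1-Distill (1.5B params)": 1.5e9,
}

HAT_RAM_GB = 8  # dedicated LPDDR4X on the AI HAT+ 2

for name, params in models.items():
    for bits in (4, 8):
        gb = weight_footprint_gb(params, bits)
        verdict = "fits" if gb < HAT_RAM_GB else "does not fit"
        print(f"{name} @ INT{bits}: ~{gb:.2f} GB of weights ({verdict})")
```

Even at INT8, both models need well under 2 GB for weights, which is why they can live entirely in the HAT's memory without touching the Pi 5's own RAM.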

For vision-based applications, the AI HAT+ 2 remains on par with the original HAT+ despite the new board's focus on generative workloads. The additional memory ensures that YOLO-style object detection and similar computer vision models perform at similar levels to the earlier 26 TOPS version, while software integration remains tightly coupled with Raspberry Pi's camera stacks such as libcamera, rpicam-apps, and Picamera2. This compatibility makes migrating existing AI HAT+ projects to the new board relatively straightforward.

Beyond performance, the AI HAT+ 2 is designed to simplify development workflows. Raspberry Pi has ensured that generative and vision models can be installed easily via available software repositories, and Hailo’s developer tools support custom model fine-tuning and deployment. Local inference reduces operational costs and mitigates connectivity challenges in industrial, robotic, or secure facility-management environments.

HAT+ vs HAT+ 2

Accelerator chip

  • HAT+: Hailo-8L (13 TOPS) or Hailo-8 (26 TOPS)
  • HAT+ 2: Hailo-10H (40 TOPS INT4, 20 TOPS INT8)

Onboard RAM

  • HAT+: none
  • HAT+ 2: 8GB LPDDR4X

GenAI support

  • HAT+: limited
  • HAT+ 2: supported (LLMs and VLMs)

Vision performance

  • HAT+: strong
  • HAT+ 2: comparable to HAT+

Integration

  • HAT+: camera stack support (libcamera, rpicam-apps, Picamera2)
  • HAT+ 2: same as HAT+, with added GenAI tooling

Local model execution

  • HAT+: vision only
  • HAT+ 2: vision plus GenAI

The AI HAT+ 2 shows that Raspberry Pi is paying attention to the growing demand for on-device GenAI, advancing its Edge AI strategy without sacrificing the familiar workflows of existing AI HAT+ users. By enabling LLMs and vision models to run locally, the board supports a broader set of use cases while keeping latency, security, and cost within practical bounds for developers and industrial customers alike.
