
How innovation in HBM-PIM is transforming memory tech

20th December 2022
Paige West

Artificial intelligence (AI) is rapidly transforming our world. AI-based applications are increasingly showing up in our smartphones, the cars we drive and the smart speakers we ask questions of. As a result, there's rising pressure on high-bandwidth memory (HBM) to keep pace with demand.

Richard Walsh, Vice President and Head of Memory Marketing at Samsung Semiconductor Europe, explores further.

While the performance, efficiency and speed of HBM have improved over the past decade at pace with the technology it supports, HBM is starting to be left behind in the wake of the explosion of advanced, data-intensive AI.

AI applications process data at far greater volumes and speeds than we've previously seen, with algorithms squeezing HBM systems to near breaking point as they access enormous datasets. Time is of the essence for innovation in HBM to ease the pressure and push beyond current limitations towards a future of AI-driven possibility.

What is PIM and how can it improve HBM?

The idea of processing-in-memory (PIM) technology in HBM is not new to the industry, but it represents an important step in overcoming current technical limitations.

In HBM-PIM, a DRAM-optimised AI processor is installed right at the point where data is stored – within each sub-unit of the HBM – to reduce the distance the data needs to travel and to enable parallel processing.
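
To make the concept concrete, here is a toy Python sketch – not any vendor's actual API; the bank count, data sizes and traffic accounting are invented for illustration – contrasting a conventional design, where every bank's data crosses the bus to a central processor, with a PIM-style design, where each bank reduces its own data locally and only small results travel:

    # Toy model: per-bank processing vs. shipping all data to a central CPU.
    # Bank counts and sizes are illustrative, not real HBM parameters.

    BANKS = 16
    WORDS_PER_BANK = 100_000  # illustrative bank capacity in 32-bit words

    banks = [list(range(WORDS_PER_BANK)) for _ in range(BANKS)]

    def centralised_sum(banks):
        """Conventional path: every word crosses the memory bus to the CPU."""
        traffic = sum(len(b) for b in banks)   # words moved across the bus
        total = sum(sum(b) for b in banks)     # all arithmetic done centrally
        return total, traffic

    def pim_style_sum(banks):
        """PIM path: each bank's unit reduces its own data locally."""
        partials = [sum(b) for b in banks]     # computed 'inside' each bank
        traffic = len(partials)                # one word per bank crosses the bus
        return sum(partials), traffic

    total_a, traffic_a = centralised_sum(banks)
    total_b, traffic_b = pim_style_sum(banks)
    assert total_a == total_b
    print(f"centralised traffic: {traffic_a:,} words")  # 1,600,000
    print(f"PIM-style traffic:   {traffic_b:,} words")  # 16

The absolute numbers are meaningless; the point is that moving computation to the data collapses bus traffic from 'everything' down to 'one result per bank'.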

HBM-PIM brings several advantages:

  • It reduces energy use by more than 60%
  • It doubles system performance compared with HBM alone
  • It lets engineers write simple commands for the computing unit to speed up repetitive local tasks (see the sketch after this list)
  • It integrates seamlessly with existing systems, since no changes to hardware or software are required
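
What those 'simple commands' might look like is sketched below in hypothetical Python. Real PIM units expose a small, fixed instruction set executed next to the memory arrays; the PimBank class and command names here are invented for illustration:

    # Hypothetical dispatch of simple PIM commands to per-bank compute units.
    # The command set and PimBank class are illustrative only.

    class PimBank:
        """Stand-in for one HBM bank with a small attached compute unit."""
        def __init__(self, data):
            self.data = data
            self.acc = 0.0

        def execute(self, cmd, operand=None):
            # A deliberately tiny instruction set: repetitive local work only.
            if cmd == "MAC":       # multiply-accumulate with a broadcast operand
                self.acc += sum(x * operand for x in self.data)
            elif cmd == "SCALE":   # in-place elementwise multiply
                self.data = [x * operand for x in self.data]
            elif cmd == "READ_ACC":
                return self.acc
            else:
                raise ValueError(f"unknown PIM command: {cmd}")

    banks = [PimBank([1.0, 2.0, 3.0]) for _ in range(4)]
    for b in banks:                # the same short command runs in every bank
        b.execute("MAC", operand=0.5)
    print([b.execute("READ_ACC") for b in banks])  # [3.0, 3.0, 3.0, 3.0]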

HBM hurdles to overcome

Despite the benefits, chip manufacturers have been slow off the mark to adopt HBM-PIM. A significant hurdle is the process through which PIM pairs logic and memory: engineers need to find a way to optimise transistor performance while maximising storage density. This costs time and money, meaning PIM-enabled devices have historically offered a poor return on investment.

Furthermore, engineers typically turn to von Neumann architecture – where discrete memory and processing units handle complex data processing – when building HBM systems. The drawback of this approach is that data must be constantly shuttled back and forth between the two; as the volume of data expands, that channel becomes a bottleneck.
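
A back-of-the-envelope calculation illustrates the problem: for a simple elementwise operation, every operand must cross the memory bus, so traffic grows linearly with the dataset while the bus itself stays fixed. The Python sketch below uses illustrative figures, not measurements:

    # Illustrative von Neumann traffic for C = A + B on N float32 elements:
    # read A, read B, write C -> 12 bytes moved per single addition.

    BYTES_PER_FLOAT32 = 4

    def bus_traffic_gb(n_elements):
        """Bytes crossing the CPU<->memory bus for one elementwise add."""
        return 3 * n_elements * BYTES_PER_FLOAT32 / 1e9

    for n in (1e6, 1e9, 1e12):   # dataset sizes from MB- to TB-scale
        print(f"{n:.0e} elements -> {bus_traffic_gb(n):,.3f} GB moved, 1 FLOP each")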

Reassuringly, things are looking up. Necessity is the mother of invention, and as demand for AI and ML applications has taken flight, so has investment in PIM. Innovations have allowed engineers to cut the data transfer load with optimised kernels that place data precisely to enable parallel processing. With this, the risk of data bottlenecks in von Neumann-style memory systems is greatly reduced.
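
One way to picture that layout trick is the hypothetical Python sketch below: the operands each compute unit needs are placed in the same bank ahead of time, so every unit works purely on local data and only one partial result per bank moves at the end. The round-robin interleaving shown is an assumption for illustration:

    # Illustrative data placement: interleave two operand vectors across banks
    # so each bank's compute unit holds matching slices of both and can work
    # in parallel without pulling remote data mid-kernel.

    BANKS = 8

    def place(a, b, banks=BANKS):
        """Co-locate a[i] and b[i] in the same bank (round-robin interleave)."""
        layout = [([], []) for _ in range(banks)]
        for i, (x, y) in enumerate(zip(a, b)):
            la, lb = layout[i % banks]
            la.append(x)
            lb.append(y)
        return layout

    def local_dot(layout):
        """Each bank reduces its own slices; only one partial per bank moves."""
        partials = [sum(x * y for x, y in zip(la, lb)) for la, lb in layout]
        return sum(partials)

    a = list(range(1, 101))
    b = [2.0] * 100
    print(local_dot(place(a, b)))            # 10100.0
    print(sum(x * y for x, y in zip(a, b)))  # same result, single-unit version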

The next chapter for memory tech

The first HBM-PIM reached the market in 2021. Based on the JEDEC-standard HBM2 specification, it combines DRAM storage with high-performance, parallel data processing on a single silicon chip – and it has integrated successfully with AI and ML applications. Semiconductor fabricators are already planning to include PIM in their future HBM3 products.

This is a positive turning point for the semiconductor industry. New ideas will fuel improvements in memory capabilities. Bandwidth is no longer the biggest obstacle to AI and ML performance – only our imagination is.
