Analysis

3D layer stacking & TSV technologies set to refresh memory

8th December 2014
Barney Scott

According to Yole Développement, both the compute (DDR3/DDR4) and mobile (LPDDR3/LPDDR4) varieties of DDR memory will soon reach the end of the road, as the DDR interface reportedly cannot run at data rates higher than 3.2Gb/s in a traditional computer main-memory environment. Several DRAM memory architectures based on 3D layer stacking and through-silicon vias (TSVs) have emerged to carry memory technology forward.

The challenges for DRAM are to reduce power consumption, satisfy bandwidth and miniaturisation requirements, and maintain low cost. Applications are evolving with different demands on these basic requirements. For example, graphics in a smartphone may require bandwidth of 15GB/s, while a networking router may require 300GB/s.
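The per-pin ceiling translates directly into a module-level bandwidth limit. As a rough sketch (the 64-bit DIMM data bus is an assumption here, not stated in the article):

```python
# Back-of-the-envelope check: a traditional DDR module moves data over
# a 64-bit bus, so even at the cited 3.2Gb/s per-pin ceiling the peak
# bandwidth falls well short of the application targets quoted above.

BUS_WIDTH_BITS = 64   # assumed: standard DDR DIMM data bus
PIN_RATE_GBPS = 3.2   # per-pin data-rate ceiling cited above

peak_gbytes_per_s = BUS_WIDTH_BITS * PIN_RATE_GBPS / 8
print(f"Peak DDR module bandwidth: {peak_gbytes_per_s:.1f} GB/s")
# ~25.6 GB/s: above the ~15GB/s smartphone-graphics figure, but an
# order of magnitude short of the ~300GB/s networking-router figure.
```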

With the recent Samsung announcement of mass production of 64GB DDR4 DIMMs that use TSV technology for enterprise servers and cloud-based applications, all three of the major DRAM memory manufacturers (Samsung, Hynix and Micron) have introduced commercial TSV-based memory architectures.

Emerging DRAM technologies such as Wide I/O, Hybrid Memory Cube (HMC) and High Bandwidth Memory (HBM) are being optimised for different applications and present different approaches to address bandwidth, power, and area challenges. Wide I/O increases the bandwidth between memory and its driver IC logic by increasing the I/O data bus between the two circuits, typically using TSVs, interposers and 3D stacking technologies.

The 2014 Wide I/O 2 standard, JESD229-2 from JEDEC, is designed for high-end mobile applications that require high bandwidth at the lowest possible power. Wide I/O 2 provides up to 68GB/s of bandwidth from a 1.1V supply. From a packaging standpoint, Wide I/O 2 is optimised to stack on top of an SoC to minimise power consumption and footprint. The standard trades a significantly larger I/O pin count for a lower operating frequency, and stacking reduces interconnect length and capacitance. The net effect is a reduction in I/O power while enabling higher bandwidth.
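The pins-for-frequency trade-off described above can be illustrated with simple arithmetic. The 512-bit interface width and per-pin rate used below are assumptions drawn from descriptions of JESD229-2, not figures stated in the article:

```python
# Illustration of the wide-bus / low-frequency trade-off.
# The 512-bit Wide I/O 2 interface and ~1.066 Gb/s per-pin rate are
# assumptions (attributed to JESD229-2), not taken from the article.

WIDE_IO2_BUS_BITS = 512      # assumed Wide I/O 2 interface width
WIDE_IO2_RATE_GBPS = 1.066   # assumed per-pin data rate

DDR_BUS_BITS = 64            # conventional DIMM bus, for comparison
DDR_RATE_GBPS = 3.2          # per-pin ceiling cited earlier

wide_io2_bw = WIDE_IO2_BUS_BITS * WIDE_IO2_RATE_GBPS / 8  # ~68 GB/s
ddr_bw = DDR_BUS_BITS * DDR_RATE_GBPS / 8                 # 25.6 GB/s
print(f"Wide I/O 2: ~{wide_io2_bw:.0f} GB/s (8x the pins, ~1/3 the rate)")
print(f"DDR module:  {ddr_bw:.1f} GB/s")
```

Many slow pins beat few fast pins on energy because each slow, short, low-capacitance connection needs far less drive power.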

In the 2.5D-stacked configuration, cooling solutions can be placed on top of the two die. With the 3D-stacked form of Wide I/O 2, heat dissipation can be an issue, since there is no standard way to cool stacked die.

The HMC is a specialised form of the Wide I/O architecture, developed by Micron and IBM, and is expected to be in mass production by the end of 2014. The architecture consists of DRAM layers 3D-stacked on top of a controller logic layer. For example, four DRAM die are each divided into 16 'cores' and then stacked. The logic base at the bottom has 16 corresponding logic segments, each controlling the four DRAM cores that sit directly above it. This type of memory architecture supports a large number of I/O pins between the logic and DRAM cores, delivering bandwidths as high as 400GB/s. According to the HMC Consortium, a single HMC can deliver more than 15 times the performance of a DDR3 module while consuming 70% less energy per bit than DDR3.
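The Consortium's comparison figures can be sanity-checked with a quick sketch. The DDR3-1333 baseline used below is an assumption; the article only says "a DDR3 module":

```python
# Rough sanity check of the HMC Consortium claims quoted above.
# The DDR3-1333 baseline (~10.66 GB/s per module) is an assumption;
# the article only says "a DDR3 module".

DDR3_MODULE_GBPS = 10.66  # assumed DDR3-1333 DIMM peak bandwidth
HMC_SPEEDUP = 15          # ">15x the performance of a DDR3 module"

hmc_bw = DDR3_MODULE_GBPS * HMC_SPEEDUP
print(f"Implied HMC bandwidth: >{hmc_bw:.0f} GB/s")  # ~160 GB/s

# "70% less energy per bit" means each bit costs 0.3x the DDR3 energy.
relative_energy = 1 - 0.70
print(f"Relative energy per bit: {relative_energy:.1f}x DDR3")
```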

In addition to Micron and IBM, the HMC architecture's developer members include Samsung, Hynix, ARM, Open-Silicon, Altera and Xilinx.

HBM consists of stacked DRAM die, built with Wide I/O and TSV technology, and supports bandwidths of 128-256GB/s. The 2013 JEDEC HBM standard, JESD235, was developed for high-end graphics and gaming applications.
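The quoted 128-256GB/s range follows from the stack's very wide interface. The channel layout below (eight 128-bit channels, 1-2Gb/s per pin) is a detail drawn from JESD235, not from the article itself:

```python
# How the 128-256 GB/s HBM range arises. The 1024-bit interface
# (8 channels x 128 bits) and 1-2 Gb/s per-pin rate are assumptions
# drawn from JESD235, not figures stated in the article.

CHANNELS = 8
BITS_PER_CHANNEL = 128
interface_bits = CHANNELS * BITS_PER_CHANNEL  # 1024-bit stack interface

for pin_rate_gbps in (1.0, 2.0):              # assumed per-pin range
    bw = interface_bits * pin_rate_gbps / 8
    print(f"{pin_rate_gbps} Gb/s per pin -> {bw:.0f} GB/s")
# 1 Gb/s per pin gives 128 GB/s; 2 Gb/s gives 256 GB/s.
```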

© Copyright 2024 Electronic Specifier