
DRAM: how one transistor and one capacitor have changed the world

31st August 2023
Kristian McCann

In June 1968 a patent was granted for a memory cell built from a single transistor and a single capacitor. That was the birth of DRAM. This memory cell structure is so ubiquitous today that it is hard to appreciate how radical the approach was at the time.

This article originally appeared in the July '23 magazine issue of Electronic Specifier Design – see ES's Magazine Archives for more featured publications.

When he set out in 1966, Robert Dennard, the inventor of DRAM, was trying to create an alternative to SRAM and magnetic core memory, which required either six MOS transistors or a semi-hard ferrite core ring for each bit of data. From his observation that MOS technology could also be used to build capacitors, he concluded that one transistor and one capacitor were all a memory cell needed. This simple structure is still how DRAM cells are built today.

But even though the one-transistor, one-capacitor structure is very simple, DRAM had its teething troubles. In fact, the first five DRAM generations suffered from very low yields as the analogue behaviour of the DRAM cell gradually revealed itself. Although the memory cell stores a digital value (0 or 1), producing that value requires an analogue signal to be read out, amplified, and written back: reading the cell shares the capacitor's tiny charge with the bit line and disturbs the stored level, so the sense amplifier must restore it after every access.
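To make that read-amplify-restore cycle concrete, here is a purely illustrative Python sketch of a single cell read. The voltage levels and the simple charge-sharing average are arbitrary assumptions for illustration, not parameters of any real device.

# Toy model of one DRAM cell read: the stored charge is shared with the
# bit line, resolved by a "sense amplifier" against a reference, and the
# full level is then written back because the read disturbs the cell.
# All values are arbitrary illustrative numbers, not real device parameters.

V_FULL = 1.0       # capacitor voltage representing a stored '1'
V_EMPTY = 0.0      # capacitor voltage representing a stored '0'
V_PRECHARGE = 0.5  # bit line precharged to the midpoint reference

def read_cell(cell_voltage: float) -> tuple[int, float]:
    """Return (digital value, restored cell voltage) for one read cycle."""
    # Charge sharing: the cell nudges the precharged bit line up or down
    # (a crude average stands in for the real capacitance ratio).
    bitline = (cell_voltage + V_PRECHARGE) / 2

    # The sense amplifier turns the small analogue difference into a bit.
    bit = 1 if bitline > V_PRECHARGE else 0

    # The read was destructive, so the amplified value is written back in full.
    restored = V_FULL if bit else V_EMPTY
    return bit, restored

if __name__ == "__main__":
    bit, restored = read_cell(0.9)  # a slightly leaked '1' still reads correctly
    print(bit, restored)            # -> 1 1.0

The point the sketch captures is that the digital bit only exists after the sense amplifier has resolved a small analogue difference, and that the full level must be written back because the read itself disturbs the stored charge.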

However, once these issues were overcome, wave after wave of innovation followed, driven by Moore's Law and Dennard's scaling principles, and DRAM paved the way for modern computing in the 1970s with the first Apple computers and the dawn of the PC. By 1990, rich graphics software, computer games, and operating systems all demanded more speed and storage capacity, which drove increased demand for DRAM, and especially for DRAM modules.

The unstoppable growth of DRAM

That's when a number of new memory manufacturers emerged to address this need. Intelligent Memory was one of them, starting its business with a focus on DRAM modules in various configurations and form factors aimed at existing and emerging industrial applications and their special requirements.

In the early days, with around 1,000 DRAM cells per memory IC, it was fairly easy to ensure that all of the cells worked correctly. As DRAM process nodes shrank rapidly through the 1990s, this became an impossible task. It turned out that electrical or magnetic interference inside a computer system can cause a single bit of DRAM memory to spontaneously flip to the opposite state. While this has little impact on most applications, such corruption was critical in aviation, scientific, and financial computing, and in database and file servers.

Research revealed that the majority of these one-off soft errors in DRAM chips occur as a result of background radiation, which can change the contents of one or more memory cells or interfere with the circuitry used to read or write them. Hence, error rates rise rapidly with altitude above sea level, and also in energy-intensive industrial environments. In fact, in large-scale production sites, memory errors were one of the most common causes of machine crashes.

To mitigate these errors, Intelligent Memory was one of the first memory manufacturers to use error-correcting code (ECC) in DRAM modules, adding extra memory bits and controller logic that run a Hamming code to detect and correct bit errors.

This innovation helped significantly to make computing more stable, reducing data loss and crashes, not least in multi-user server applications and maximum-availability systems. Today this is a standard feature in memory modules.
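As an illustration of the principle (and only an illustration, not a description of any particular manufacturer's implementation), the classic Hamming(7,4) code below protects four data bits with three parity bits, which is enough to locate and correct any single flipped bit. A minimal Python sketch:

# Hamming(7,4): 4 data bits are protected by 3 parity bits, so any single
# flipped bit can be located and corrected. Illustrative only.

def hamming74_encode(d: list[int]) -> list[int]:
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                    # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                    # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                    # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]  # codeword positions 1..7

def hamming74_correct(c: list[int]) -> list[int]:
    """Locate and correct a single-bit error, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # 0 = no error, else the error position
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1             # flip the faulty bit back
    return [c[2], c[4], c[5], c[6]]

if __name__ == "__main__":
    word = hamming74_encode([1, 0, 1, 1])
    word[4] ^= 1                         # simulate a radiation-induced bit flip
    print(hamming74_correct(word))       # -> [1, 0, 1, 1], the error is repaired

Real ECC modules apply the same idea to wider words, typically storing 72 bits for every 64 bits of data so that the controller can correct single-bit errors and detect double-bit errors on the fly.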

A technology that doesn’t eat its own

The evolution of DRAM is one of the reasons why the innovation drivers in semiconductors and IT in general are no longer commercial applications but a growing array of consumer devices. DRAM enabled high-performance computing devices that were so cost-effective that they took the consumer market by storm. So, while consumer PCs today are adopting DDR5 memory, embedded PCs still largely use DDR3. In fact, memory technologies scale so fast that industrial customers find it hard to source the components they need over the entire life cycle of their products, which in automotive and avionics applications can span from seven to more than 30 years.

Although it all started with one design idea, today we have a wide range of DRAM generations and low-power variants that are all still in use. And while some memory manufacturers focus on providing the latest technologies in high volumes for consumer markets, specialised manufacturers focus on supplying current DRAM technologies in industrial temperature ranges as well as extended lifetimes for older DRAM generations. For many industrial and embedded customers, innovation in DRAM can also mean providing multiple configurations of a memory product, along with a longevity roadmap that covers the entire lifetime of an industrial product.

So, when you think about it, it really is remarkable to have memory technologies with a lifespan of several decades in an industry that moves to a new generation every 18 to 24 months. All the more reason to celebrate this amazing technology that has shaped the world so substantially.
