
HBM vs DRAM: A Detailed Analysis


By now, you have probably heard news reports that the demand for HBM (high-bandwidth memory), driven by the boom in the AI sector, could lead to a shortage of DRAM for consumer electronics.

But have you thought about why demand for HBM would cause a DRAM shortage for consumer electronics, and what the relationship between the two actually is?

In this article, we will try to understand the relationship between HBM and DRAM, and how HBM differs from DRAM (HBM vs DRAM).

HBM and DRAM are both types of random access memory (RAM), i.e., volatile memory; they simply serve different purposes and have different characteristics.

To give you a rough idea of the two: think of DRAM as a large library with bookshelves lining the walls; accessing a specific book (data) requires some walking around (data retrieval). HBM, on the other hand, is like a smaller, specialised library designed for quick reference: it holds fewer books (limited capacity) but is arranged for much faster retrieval (higher bandwidth).


What is DRAM?

DRAM Module

DRAM is a type of random access memory that uses a combination of transistors and capacitors to store data. It is the most common type of memory found in computers and laptops.

It is used to store information that the processor needs to access frequently for running programs and applications.

Since it uses capacitors to store data, the data must be refreshed periodically, as capacitors lose their charge over time. And once power is lost, all the data is lost, making DRAM a volatile memory, in contrast to HDDs and SSDs, which retain data even after the power supply is cut.
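The refresh requirement can be sketched with a toy model. This is purely illustrative, not a real device simulation: the leakage time constant and sense threshold below are assumed values, though the 64 ms refresh deadline is the figure used by JEDEC DDR standards.

```python
import math

# Illustrative model: a DRAM cell's capacitor leaks charge roughly
# exponentially, so the memory controller must refresh (rewrite) each
# row before the stored voltage decays past the point where a '1' can
# no longer be distinguished from a '0'.

V_FULL = 1.0          # normalised voltage of a freshly written '1'
V_THRESHOLD = 0.5     # assumed sense threshold (illustrative value)
TAU_MS = 120.0        # assumed leakage time constant in ms (illustrative)

def voltage_after(ms_since_refresh: float) -> float:
    """Remaining cell voltage after a given time, under exponential leakage."""
    return V_FULL * math.exp(-ms_since_refresh / TAU_MS)

def needs_refresh(ms_since_refresh: float) -> bool:
    """True once the cell voltage has decayed below the sense threshold."""
    return voltage_after(ms_since_refresh) < V_THRESHOLD

# JEDEC DDR standards require every row to be refreshed within 64 ms.
print(round(voltage_after(64.0), 3))  # voltage left at the 64 ms deadline
print(needs_refresh(64.0))            # still readable under this model: False
print(needs_refresh(200.0))           # a long-unrefreshed cell has decayed: True
```

Under these assumed numbers the cell is still readable at the 64 ms deadline, which is the point: the controller refreshes well before the charge decays too far.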

What exactly is HBM?

HBM die

High-bandwidth memory (HBM) is a specialised type of DRAM designed for high-performance applications such as artificial intelligence, machine learning, and high-performance computing (HPC).

As the name suggests, HBM has a much higher bandwidth than conventional DRAM, allowing faster data transfer between memory and processors.
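To put rough numbers on "higher bandwidth": peak bandwidth is approximately bus width times per-pin transfer rate. The sketch below uses published headline figures for a single DDR5-4800 channel (64-bit bus, 4.8 Gb/s per pin) and a single HBM3 stack (1024-bit interface, 6.4 Gb/s per pin); real sustained bandwidth is lower than these peaks.

```python
# Back-of-the-envelope peak bandwidth: bus width (bits) x per-pin
# transfer rate (Gb/s), divided by 8 to convert bits to bytes.

def peak_bandwidth_gbps(bus_width_bits: int, gbits_per_pin: float) -> float:
    """Peak bandwidth in GB/s = (bus width * per-pin rate in Gb/s) / 8."""
    return bus_width_bits * gbits_per_pin / 8

ddr5_channel = peak_bandwidth_gbps(64, 4.8)    # one 64-bit DDR5-4800 channel
hbm3_stack   = peak_bandwidth_gbps(1024, 6.4)  # one 1024-bit HBM3 stack

print(ddr5_channel)   # 38.4 GB/s
print(hbm3_stack)     # 819.2 GB/s
print(round(hbm3_stack / ddr5_channel, 1))  # roughly a 21x gap per device
```

The wide 1024-bit interface, made practical by the stacked design described below, is what gives a single HBM stack an order-of-magnitude advantage over a DDR channel.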

But how is HBM able to achieve faster data transfer speeds compared to DRAM when it is basically a type of DRAM?

Why is HBM faster than DRAM?

Conventional DRAM comes in flat modules, with multiple DRAM chips arranged side by side on a printed circuit board. Data therefore has to travel relatively long distances between the processor and the memory chips, which limits bandwidth. HBM instead uses a 3D stacking approach.

HBM 3D stacking technique

In this 3D approach, multiple layers of DRAM chips are stacked vertically on top of each other, connected by tiny through-silicon vias (TSVs) and microbumps. This shortens the physical distance data has to travel between memory and processor, significantly increasing bandwidth and enabling faster data transfer rates, while at the same time shrinking the overall footprint of the memory.

But because this 3D stacking requires sophisticated manufacturing, HBM also costs roughly five times as much as conventional DRAM.

By now, you should have a fair idea of the differences between HBM and DRAM. Let us summarise them in a table for convenience.

  

HBM vs DRAM Comparison Table

| Feature | DRAM | HBM |
|---|---|---|
| Type | Dynamic random-access memory | High-bandwidth memory |
| Purpose | General-purpose memory for everyday tasks | High-performance applications requiring fast data transfer |
| Access speed | Faster than storage drives (HDD/SSD) | Significantly faster than standard DRAM |
| Bandwidth | Moderate | High |
| Capacity | Higher capacities available | Lower capacities than DRAM |
| Volatility | Volatile (loses data on power off) | Volatile (loses data on power off) |
| Cost | Lower cost per unit | Higher cost per unit |
| Power consumption | Lower | Higher |
| Applications | Computers, laptops, smartphones, etc. | AI, machine learning, HPC, high-end GPUs |
| Design | Flat modules | 3D stacked architecture |
 

Conclusion

HBM is essentially a specialised version of DRAM, built using a 3D stacking technique for high-bandwidth workloads such as machine learning and high-performance computing, and it comes at roughly five times the cost of conventional DRAM.



Senior Writer
Abhinav is a graduate of NIT Jamshedpur. He is an electrical engineer by profession and an analog engineer by passion. His articles at WireUnwired are a part of him following that passion.
