By now, you must have heard the news that demand for HBM (high-bandwidth memory), driven by the AI boom, could lead to a shortage of DRAM for consumer electronics.
But have you thought about why demand for HBM would cause a DRAM shortage for consumer electronics, and what the relationship between the two actually is?
In this article, we will try to understand the relationship between HBM and DRAM, and how HBM differs from DRAM (HBM vs DRAM).
HBM and DRAM are both types of random access memory (RAM), i.e., volatile memory; they simply serve different purposes and have different characteristics.
To give you a brief idea of DRAM and HBM, you can think of DRAM as a large library with bookshelves lining the walls. Accessing a specific book (data) requires some walking around (data retrieval). HBM, on the other hand, is like a smaller, specialised library designed for quick reference. It has fewer books (limited capacity) but is arranged in a way that allows for much faster retrieval (higher bandwidth).
What is DRAM?
DRAM is a type of random access memory that uses a combination of transistors and capacitors to store data. It is the most common type of memory found in computers and laptops.
It is used to store information that the processor needs to access frequently while running programs and applications.
Since it uses capacitors to store data, the data must be refreshed periodically because capacitors lose their charge over time. And once power is lost, all the data is lost, making DRAM a volatile memory, in contrast to HDDs or SSDs, which retain data even after the power supply is cut.
What exactly is HBM?
High-bandwidth memory (HBM) is a specialised type of DRAM designed for high-performance applications such as artificial intelligence, machine learning, and high-performance computing (HPC).
As the name suggests, HBM has a higher bandwidth than standard DRAM, allowing faster data transfer between memory and processors.
But how is HBM able to achieve faster data transfer speeds compared to DRAM when it is basically a type of DRAM?
Why is HBM faster than DRAM?
Standard DRAM uses flat modules, with multiple DRAM chips arranged side by side on a printed circuit board. Data must therefore travel relatively long distances between the processor and the memory chips, which limits bandwidth. HBM instead uses a 3D stacking approach.
In this 3D approach, multiple layers of DRAM dies are stacked vertically on top of each other, connected by tiny through-silicon vias (TSVs) and microbumps. This shortens the physical distance data must travel between memory and processor, and it also allows a much wider memory interface (more data lines in parallel), significantly increasing bandwidth. The result is faster data transfer between memory and processors while at the same time reducing the overall physical size of the memory.
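To make the bandwidth gap concrete, here is a back-of-the-envelope sketch. Peak bandwidth is roughly bus width (in bytes) times transfer rate. The figures below are representative of a single DDR5-6400 module (64-bit bus) and a single HBM3 stack (1024-bit bus); actual products vary.

```python
def peak_bandwidth_gbs(bus_width_bits: int, gigatransfers_per_sec: float) -> float:
    """Peak bandwidth in GB/s = bus width in bytes x transfer rate in GT/s."""
    return (bus_width_bits / 8) * gigatransfers_per_sec

# One DDR5-6400 module: 64-bit bus at 6.4 GT/s
ddr5 = peak_bandwidth_gbs(bus_width_bits=64, gigatransfers_per_sec=6.4)

# One HBM3 stack: 1024-bit bus at 6.4 GT/s per pin
hbm3 = peak_bandwidth_gbs(bus_width_bits=1024, gigatransfers_per_sec=6.4)

print(f"DDR5-6400 module: {ddr5:.1f} GB/s")  # 51.2 GB/s
print(f"HBM3 stack:       {hbm3:.1f} GB/s")  # 819.2 GB/s
```

Even at the same per-pin speed, the 16x wider interface of the HBM stack yields roughly 16x the bandwidth, which is exactly what the 3D stacking and TSV approach makes practical.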
But since this 3D stacking requires sophisticated manufacturing, HBM also costs roughly five times as much as conventional DRAM.
By now you should have a fair idea of the differences between HBM and DRAM. Let us summarise them in a table for convenience.
HBM vs DRAM Comparison Table
| Feature | DRAM | HBM |
| --- | --- | --- |
| Type | Dynamic Random-Access Memory | High-Bandwidth Memory |
| Purpose | General-purpose memory for everyday tasks | High-performance applications requiring fast data transfer |
| Access Speed | Faster than storage drives (HDD/SSD) | Significantly faster than DRAM |
| Bandwidth | Moderate | High |
| Capacity | Higher capacities available | Lower capacities compared to DRAM |
| Volatility | Volatile (loses data on power off) | Volatile (loses data on power off) |
| Cost | Lower cost per unit | Higher cost per unit |
| Power Consumption | Lower power consumption | Higher power consumption |
| Applications | Computers, laptops, smartphones, etc. | AI, machine learning, HPC, high-end GPUs |
| Design | Flat modules | 3D stacked architecture |
Conclusion
HBM is essentially a specialised version of DRAM built using a 3D stacking technique to deliver much higher bandwidth for workloads such as machine learning and high-performance computing, and it comes at roughly five times the cost of conventional DRAM.