Micron Technology has taken a significant step forward in high-performance memory, beginning volume production of its 12-layer HBM3E chips. The 36GB memory stacks are tailored for artificial intelligence (AI) graphics processing units (GPUs) and are now shipping to key partners in the AI industry for validation.
Announced on September 9th, Micron's 12-layer HBM3E memory is aimed at demanding AI and high-performance computing (HPC) workloads, such as those run on NVIDIA's H200 and B100/B200 GPUs. The new stacks offer 50% more capacity than their 24GB, 8-layer predecessors, letting data centers run larger AI models on a single processor. Keeping more of a model resident in GPU memory reduces the need to offload data to the CPU and cuts communication latency between GPUs, speeding up data processing.
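The 50% figure falls straight out of the stack height: both generations reportedly use the same 24Gb (3GB) DRAM dies, so capacity scales with the number of layers. A minimal sketch of the arithmetic, assuming that die density:

    DIE_GB = 3                    # 24Gb DRAM die = 3GB, per Micron's HBM3E materials
    cap_8_high  = 8  * DIE_GB     # 24GB, 8-layer stack
    cap_12_high = 12 * DIE_GB     # 36GB, 12-layer stack
    print(cap_12_high, f"{cap_12_high / cap_8_high - 1:.0%}")   # -> 36 50%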
In terms of performance, Micron's 12-layer HBM3E stack delivers memory bandwidth of more than 1.2TB/s, with per-pin data transfer rates exceeding 9.2Gb/s. According to Micron, the new stack not only provides 50% more capacity than competing 8-layer products but also consumes less power than those 24GB HBM3E stacks.
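The two figures are consistent with each other: HBM3-class stacks expose a 1024-bit interface, so aggregate bandwidth is simply the per-pin rate multiplied by the bus width. A quick check, taking the JEDEC HBM3 interface width as given (the exact pin rate above 9.2Gb/s is not specified in the announcement):

    PIN_RATE_GBPS = 9.2             # per-pin data rate in Gb/s (Micron quotes "exceeding 9.2")
    BUS_WIDTH_BITS = 1024           # interface width per HBM3/HBM3E stack
    bandwidth_GBps = PIN_RATE_GBPS * BUS_WIDTH_BITS / 8
    print(f"{bandwidth_GBps:.1f} GB/s")   # -> 1177.6 GB/s; slightly higher pin rates push past 1.2TB/s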
Micron's 12-layer HBM3E also includes a fully programmable Memory Built-In Self-Test (MBIST) system, which can simulate system-level traffic at full speed. This allows comprehensive testing and quicker validation of new systems, improving reliability and shortening customers' time-to-market.
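Micron does not disclose the MBIST engine's internals, which run in dedicated on-die logic at full interface speed. As a rough software illustration of what a built-in memory self-test does, here is a classic March C- pattern test, a standard algorithm for detecting stuck-at and coupling faults; the list-backed "memory" is purely illustrative:

    # Illustrative march-style memory self-test (March C-), in software.
    # Real MBIST hardware applies equivalent read/verify/write patterns
    # directly to the DRAM arrays; this sketch only conveys the idea.

    def march_c_minus(mem):
        """Run March C- over a list of bits; return True if no fault is found."""
        n = len(mem)
        up, down = range(n), range(n - 1, -1, -1)

        def element(order, expect, write):
            for addr in order:
                if expect is not None and mem[addr] != expect:
                    return False          # stuck-at or coupling fault detected
                if write is not None:
                    mem[addr] = write
            return True

        steps = [
            (up,   None, 0),    # ascending:  write 0
            (up,   0,    1),    # ascending:  read 0, write 1
            (up,   1,    0),    # ascending:  read 1, write 0
            (down, 0,    1),    # descending: read 0, write 1
            (down, 1,    0),    # descending: read 1, write 0
            (down, 0,    None), # descending: read 0
        ]
        return all(element(o, e, w) for o, e, w in steps)

    if __name__ == "__main__":
        print(march_c_minus([0] * 1024))   # a healthy memory prints True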
Furthermore, Micron's HBM3E devices are compatible with TSMC's CoWoS (Chip-on-Wafer-on-Substrate) packaging technology, which is widely used to package AI processors such as NVIDIA's H100 and H200.
Micron has also begun development of its next-generation memory, HBM4 and HBM4E, to meet the growing memory demands of AI processors, including NVIDIA GPUs based on the Blackwell and Rubin architectures.