SK Hynix was the first memory vendor to start talking about HBM3 and the first company to complete development of memory under that spec. Today, the company said that it had begun mass production of HBM3 and that these DRAMs will be used by Nvidia for its H100 compute GPUs and DGX H100 systems, which will ship in the third quarter.
SK Hynix's HBM3 known good stack dies (KGSDs) offer peak memory bandwidth of 819 GB/s per stack, which works out to a per-pin data transfer rate of 6.4 GT/s across the stack's 1024-bit interface. As for capacity, each stack packs eight 2GB DRAM devices for a total of 16GB per package. SK Hynix also has 12-Hi 24GB KGSDs, but since Nvidia appears to be the company's primary customer for HBM3, it is kicking off production with 8-Hi stacks.
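As a back-of-the-envelope check, the per-stack figures above are mutually consistent; the sketch below assumes the JEDEC-standard 1024-bit interface per HBM3 stack:

```python
# Sanity-check SK Hynix's published HBM3 per-stack numbers.
INTERFACE_WIDTH_BITS = 1024   # data pins per HBM3 stack (JEDEC spec)
DATA_RATE_GTPS = 6.4          # per-pin transfer rate, GT/s

# Peak bandwidth per stack: pins * per-pin rate / 8 bits per byte
bandwidth_gbs = INTERFACE_WIDTH_BITS * DATA_RATE_GTPS / 8
print(f"Per-stack bandwidth: {bandwidth_gbs:.1f} GB/s")

# Capacity of an 8-Hi stack of 2GB (16Gb) DRAM devices
stack_capacity_gb = 8 * 2
print(f"Per-stack capacity: {stack_capacity_gb} GB")
```

Running this reproduces the 819.2 GB/s and 16GB figures quoted by SK Hynix.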
The start of HBM3 mass production is good news for SK Hynix's bottom line: for a while, at least, the company will be the only supplier of this memory type and will be able to charge a hefty premium for it. Just as important for SK Hynix's public image, it is beginning mass production of HBM3 ahead of its arch-rival Samsung.
Eventually, SK Hynix and other memory makers will offer HBM3 packages stacking up to sixteen 32Gb DRAM devices for capacities of 64GB per KGSD, but that is a longer-term prospect.
Nvidia's H100 compute GPU carries 96GB of HBM3 DRAM, though because of ECC support and some other factors, users can access 80GB of ECC-enabled HBM3 memory connected via a 5120-bit interface. To win the contract with Nvidia, SK Hynix worked closely with the company to ensure perfect interoperability between the processor and the memory devices.
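The H100 numbers are also internally consistent. The stack count below is inferred from the article's totals (16GB stacks with 1024-bit interfaces) rather than stated explicitly by Nvidia, so treat it as an illustration:

```python
# Inferred H100 memory-subsystem arithmetic, assuming 16GB HBM3 stacks
# with 1024-bit interfaces as described above.
STACK_CAPACITY_GB = 16
STACK_INTERFACE_BITS = 1024

physical_stacks = 96 // STACK_CAPACITY_GB   # stacks on package
active_stacks = 80 // STACK_CAPACITY_GB     # stacks users can access
interface_bits = active_stacks * STACK_INTERFACE_BITS

print(physical_stacks, active_stacks, interface_bits)
```

This yields six stacks on package, five of them user-accessible, and five active 1024-bit channels matching the 5120-bit interface quoted above.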
“We aim to become a solution provider that deeply understands and addresses our customers’ needs through continuous open collaboration,” said Kevin (Jongwon) Noh, president and chief marketing officer at SK Hynix.
But Nvidia will not be the only company to use HBM3 in the foreseeable future. SiFive taped out its first HBM3-supporting system-on-chip on TSMC's N5 node about a year ago, so it can offer similar technology to its clients. Furthermore, Rambus and Synopsys have both offered silicon-proven HBM3 controllers and physical interfaces for quite a while and have landed numerous customers, so expect various HBM3-supporting SoCs (primarily for AI and supercomputing applications) to arrive in the coming quarters.