NVIDIA is currently the market leader in supplying GPUs for large-scale generative AI servers, used by Microsoft, Google and many other tech giants worldwide. That position has just been further cemented with the launch of the NVIDIA GH200 Grace Hopper high-performance generative AI platform, which the company bills as the most powerful AI processing GPU ever created.
NVIDIA introduced the next-generation GH200 Grace Hopper AI chip on stage at the SIGGRAPH graphics developers conference, held from August 6 at the Los Angeles Convention Center. The company says that pairing the new Grace Hopper chip with HBM3e memory delivers three times the memory capacity and three times the bandwidth of the current-generation Grace Hopper platform, and that this is the key factor behind the marked performance improvement the market can expect.
The new platform uses the Grace Hopper Superchip, which can be connected to additional Superchips via NVIDIA NVLink, allowing them to work together to deploy the giant models common in generative AI workloads. This high-speed, coherent interconnect gives the GPU full access to the CPU's memory, providing a total of 1.2TB of fast memory in the dual configuration.
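The 1.2TB figure can be sanity-checked with some back-of-the-envelope arithmetic. The per-Superchip figures below (480GB of LPDDR5X on the Grace CPU, 141GB of HBM3e on the Hopper GPU) are assumptions based on NVIDIA's published specifications, not stated in this article:

```python
# Sketch of the dual-configuration fast-memory math. The per-Superchip
# capacities are assumed figures, not taken from this article.
GRACE_LPDDR5X_GB = 480   # assumed CPU-attached memory per Superchip
HOPPER_HBM3E_GB = 141    # assumed GPU-attached HBM3e per Superchip

per_superchip_gb = GRACE_LPDDR5X_GB + HOPPER_HBM3E_GB
dual_config_gb = 2 * per_superchip_gb   # two Superchips linked via NVLink

print(f"Fast memory per Superchip: {per_superchip_gb} GB")
print(f"Dual configuration total:  {dual_config_gb} GB (~1.2 TB)")
```

Under those assumptions, two linked Superchips come out to 1,242GB, matching the roughly 1.2TB NVIDIA quotes for the dual configuration.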
Simply put, this is the first system equipped with the latest HBM3e memory, which increases both capacity and bandwidth. Systems built on the GH200 GPU will offer up to 282GB of memory for AI (machine learning) training and inference workloads.
The new HBM3e memory is said to be 50% faster than the current HBM3 standard. NVIDIA says each supercomputing system using the new GPU can reach a combined memory bandwidth of 10TB/s (5TB/s per chip).
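The bandwidth figures above are consistent with each other, as a quick check shows. The implied HBM3 baseline in the last line is my own derivation from the "50% faster" claim, not a number from the article:

```python
# Rough consistency check of the quoted bandwidth figures.
combined_tbps = 10.0            # 10TB/s combined, per the article
chips = 2                       # dual configuration

per_chip_tbps = combined_tbps / chips
print(f"Per-chip HBM3e bandwidth: {per_chip_tbps} TB/s")

# Derived, not quoted: if HBM3e is 50% faster than HBM3, the HBM3
# baseline per chip would be per_chip_tbps / 1.5.
hbm3_baseline_tbps = per_chip_tbps / 1.5
print(f"Implied HBM3 baseline:    {hbm3_baseline_tbps:.2f} TB/s")
```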
NVIDIA says the current-generation Grace Hopper GH200 chip will enter full production by the end of 2023, while the newly announced next-generation platform still has some way to go: the first supercomputers and data centers equipped with the new GH200 GPU are expected to come online in the second quarter of 2024, later than AMD's Instinct MI300X with its 192GB of HBM3 RAM.