Supermicro has announced new server offerings built around NVIDIA’s latest HGX H200 platform, which is based on H200 Tensor Core GPUs and is set to deliver significant advancements in generative AI and LLM training.
The company is preparing to release AI platforms that include both 8U and 4U Universal GPU Systems, fully equipped to support the HGX H200 in 8-GPU and 4-GPU configurations. These systems gain substantially higher memory bandwidth from the introduction of HBM3e technology, which offers nearly double the capacity and 1.4 times the bandwidth of the previous generation. This leap in hardware capability is expected to meet the growing demand for more complex computational workloads in AI research and development.
In addition, Supermicro announced a high-density server featuring an NVIDIA HGX H100 8-GPU system in a liquid-cooled 4U chassis that incorporates the company’s newest cooling technologies. Billing it as the industry’s most compact high-performance GPU server, Supermicro says the system delivers the highest density of AI training capacity in a single rack unit to date while improving cost and energy efficiency.
Its partnership with NVIDIA has kept Supermicro at the forefront of AI system design, providing optimized solutions for AI training and HPC workloads. The company’s commitment to rapid innovation is evident in its system architecture, which allows technological advances to be brought to market quickly. The new AI systems feature NVIDIA interconnect technologies such as NVLink and NVSwitch, supporting high-speed data transfers at 900GB/s, and offer up to 1.1TB of HBM3e memory per node, optimizing performance for the parallel processing demands of AI algorithms.
Supermicro offers a diverse range of AI servers, including the widely used 8U and 4U Universal GPU Systems. These systems, featuring four-way and eight-way NVIDIA HGX H100 GPU configurations, are now drop-in ready for the new H200 GPUs (which feature 141GB of memory with a bandwidth of 4.8TB/s), allowing for even faster training of larger language models.
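The per-node memory figure follows directly from the per-GPU spec. A minimal sanity check, assuming an eight-way configuration and the H200’s published 141GB HBM3e capacity:

```python
# Sanity check: derive the ~1.1TB-per-node HBM3e figure from the per-GPU spec.
# Assumptions: eight-way HGX H200 configuration, decimal TB (1 TB = 1000 GB).
GPUS_PER_NODE = 8
HBM3E_PER_GPU_GB = 141  # H200 memory capacity per GPU

total_gb = GPUS_PER_NODE * HBM3E_PER_GPU_GB
print(f"{total_gb} GB ≈ {total_gb / 1000:.1f} TB per node")  # 1128 GB ≈ 1.1 TB per node
```

Eight H200s thus yield 1,128GB per node, matching the roughly 1.1TB figure Supermicro quotes.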
Supermicro will be showcasing the 4U Universal GPU System at the upcoming SC23.