September 30, 2022


NVIDIA H100 SXM physical exposed: Core area 814mm² 80GB HBM3 memory


At GTC 2022, NVIDIA announced the H100, a new generation of accelerator based on the Hopper architecture for its next-generation accelerated computing platform. The chip packs 80 billion transistors in a single-die design using CoWoS 2.5D packaging, and is manufactured on a 4nm process tailored by TSMC for NVIDIA.

 

Recently, ServeTheHome published photos of the NVIDIA H100 SXM module, showing the new design of the SXM form factor; the PCB is model PG520.

The GH100 die on the module measures about 814 mm² and sits in the middle of the package, surrounded by six HBM3 memory stacks totaling 80GB.

Compared to the previous-generation A100, the connector layout of the H100 has also changed, making the module slightly shorter.

The TDP of the NVIDIA H100 SXM reaches 700W, which is 250W to 300W higher than comparable products based on the Ampere and Volta architectures, while the PCIe version of the H100 is rated at only 350W.

 


The full GH100 chip contains 8 GPCs, 72 TPCs, and 144 SMs, for a total of 18,432 FP32 CUDA cores.

It features fourth-generation Tensor Cores, 576 in total, along with 60MB of L2 cache. However, not all of these units are enabled in shipping products.

The SXM version enables 132 SMs, for a total of 16,896 FP32 CUDA cores, 528 Tensor Cores, and 50MB of L2 cache, while the PCIe 5.0 version enables 114 SMs and only 14,592 FP32 CUDA cores.
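These figures all imply 128 FP32 CUDA cores per SM (an assumption, but one consistent with every core count quoted above). A quick arithmetic check:

```python
# Per-SM FP32 core count implied by the full GH100 figures:
# 144 SMs and 18,432 FP32 CUDA cores.
FP32_CORES_PER_SM = 18432 // 144  # -> 128

# Enabled core counts for each H100 variant.
sxm_cores = 132 * FP32_CORES_PER_SM   # SXM: 132 SMs enabled
pcie_cores = 114 * FP32_CORES_PER_SM  # PCIe 5.0: 114 SMs enabled

print(FP32_CORES_PER_SM, sxm_cores, pcie_cores)  # 128 16896 14592
```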

In addition, the GH100 supports NVIDIA’s fourth-generation NVLink interface, which can provide up to 900 GB/s of bandwidth.

At the same time, the GH100 is the first GPU to support the PCIe 5.0 standard and the first to use HBM3.

It supports up to six HBM3 stacks with 3TB/s of memory bandwidth, 1.5 times that of the HBM2e-based A100.
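The 1.5x figure checks out if the A100's HBM2e bandwidth is taken as roughly 2TB/s (an assumption not stated in the article, but consistent with the claim):

```python
# Memory bandwidth comparison implied by the article.
h100_bw_tbps = 3.0  # GH100 with six HBM3 stacks (stated above)
a100_bw_tbps = 2.0  # A100 with HBM2e, approximately (assumption)

print(h100_bw_tbps / a100_bw_tbps)  # 1.5
```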

 

Recently, some retailers in Japan have listed the NVIDIA H100 PCIe at 4,745,950 yen (about US$36,567.50).
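For reference, the listed yen price and the quoted dollar figure imply an exchange rate of roughly 130 yen per US dollar at the time of listing:

```python
# Implied JPY/USD rate from the retail listing quoted above.
price_jpy = 4_745_950
price_usd = 36_567.5

rate = price_jpy / price_usd
print(round(rate, 1))  # 129.8
```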

Since the NVIDIA H100 SXM has higher specifications and more CUDA cores, its price is likely to be even higher.

 
