
NVIDIA H100 computing card launched in Japan: about US$36,460


At GTC 2022, NVIDIA unveiled the H100, a new-generation GPU based on the Hopper architecture, aimed at next-generation accelerated computing platforms.

The NVIDIA H100 packs 80 billion transistors into a single-die CoWoS 2.5D wafer-level package and is manufactured on a version of TSMC's 4nm process tailored for NVIDIA.


NVIDIA said it expected supply to begin in the third quarter of this year, but did not announce a price for the H100 computing card.

Recently, some retailers in Japan have listed the H100 at 4,745,950 yen (about US$36,567.50), a price that includes shipping and taxes. The card alone comes to 4,313,000 yen (about US$33,231.70).
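The listed prices imply a yen-to-dollar conversion rate. A minimal sketch (the yen and dollar figures come from the article; the exchange rate is derived, not quoted):

```python
# Implied JPY->USD exchange rate behind the Japanese retailer's listing.
# All prices are from the article; the rate itself is derived.
price_with_tax_jpy = 4_745_950    # shipping and taxes included
price_card_only_jpy = 4_313_000   # card only
price_with_tax_usd = 36_567.5

rate = price_with_tax_jpy / price_with_tax_usd   # yen per US dollar, ~129.8
card_only_usd = price_card_only_jpy / rate       # ~33,231.7

print(f"implied rate: {rate:.1f} JPY/USD")
print(f"card only: ${card_only_usd:,.1f}")
```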

The H100 is available in SXM and PCIe form factors to suit different server designs; the Japanese listing is for the PCIe version.


The complete GH100 chip carries 8 GPCs, 72 TPCs, and 144 SMs, for a total of 18,432 FP32 CUDA cores.

It uses fourth-generation Tensor Cores, 576 in total, and carries 60MB of L2 cache. However, not all of these units are enabled in the shipping products.

The SXM5 version enables 132 SMs, for 16,896 FP32 CUDA cores, 528 Tensor Cores, and 50MB of L2 cache, while the PCIe 5.0 version enables 114 SMs, for only 14,592 FP32 CUDA cores.
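These core counts follow directly from the number of enabled SMs. A quick sketch, assuming Hopper's 128 FP32 CUDA cores and 4 Tensor Cores per SM (per-SM figures from NVIDIA's Hopper architecture materials, not stated in the article):

```python
# Derive FP32 CUDA core and Tensor Core totals from enabled SM counts.
# Assumption: 128 FP32 cores and 4 Tensor Cores per Hopper SM.
CORES_PER_SM = 128
TENSOR_PER_SM = 4

configs = {"full GH100": 144, "H100 SXM5": 132, "H100 PCIe 5.0": 114}
for name, sms in configs.items():
    print(f"{name}: {sms} SMs -> {sms * CORES_PER_SM} FP32 cores, "
          f"{sms * TENSOR_PER_SM} Tensor Cores")
# full GH100 -> 18432 FP32 cores, H100 SXM5 -> 16896, H100 PCIe 5.0 -> 14592
```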

The former has a TDP of 700W; the latter, 350W.


In addition, the H100 supports NVIDIA's fourth-generation NVLink interface, which provides up to 900 GB/s of bandwidth.

The H100 is also the first GPU to support the PCIe 5.0 standard, and the first to use HBM3.

It supports up to six HBM3 stacks for 3TB/s of bandwidth, 1.5 times that of the A100 with HBM2E; the default memory capacity is 80GB.
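The 1.5x figure lets us back out the A100's memory bandwidth. A small sketch (the A100's ~2TB/s is inferred from the article's ratio, not quoted directly):

```python
# Memory-bandwidth comparison implied by the article's figures.
h100_hbm3_tb_s = 3.0                    # H100 with HBM3, per the article
a100_hbm2e_tb_s = h100_hbm3_tb_s / 1.5  # article: H100 is 1.5x the A100
nvlink4_gb_s = 900                      # 4th-gen NVLink, per the article

print(a100_hbm2e_tb_s)  # 2.0 (TB/s for the A100 with HBM2E)
```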