Nvidia is developing the H100 120GB PCIe version of the computing card
Nvidia is developing the H100 120GB PCIe version of the computing card; its GPU specifications are the same as those of the SXM version.
NVIDIA unveiled the new Hopper-based H100 for next-generation accelerated computing platforms at GTC earlier this year. It packs 80 billion transistors into a single-die design with CoWoS 2.5D wafer-level packaging, and is manufactured on TSMC's 4N process, customized for Nvidia.
According to the s-ss report, Nvidia is developing an H100 120GB PCIe computing card, which carries 40GB more memory than the existing H100 80GB PCIe card. It is not yet clear whether that memory is HBM2e or HBM3.
It is understood that this H100 120GB PCIe card uses a higher GH100 configuration than the existing PCIe version (114 SMs and 14,592 FP32 CUDA cores), matching the SXM version instead: 132 SMs, 16,896 FP32 CUDA cores, 528 Tensor Cores, and 50MB of L2 cache.
This puts the single-precision performance of the H100 120GB PCIe version on par with the SXM version, at roughly 60 TFLOPS of FP32 throughput. The power consumption of the H100 120GB PCIe version is not yet known.
For reference, the current H100 80GB PCIe version is rated at 350W, while the H100 80GB SXM5 version is rated at 700W.
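The ~60 TFLOPS figure above can be sanity-checked from the core count. A minimal sketch, assuming a boost clock of about 1.78 GHz (the clock is an assumption chosen to match the reported figure; Nvidia has not confirmed it for this card):

```python
def peak_fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Estimate peak FP32 throughput in TFLOPS.

    Each FP32 CUDA core can retire one fused multiply-add
    (2 FLOPs) per clock cycle.
    """
    return cuda_cores * 2 * boost_clock_ghz / 1000

# Full GH100 configuration: 16896 FP32 CUDA cores.
# The 1.78 GHz boost clock is an assumed value, not an official spec.
print(round(peak_fp32_tflops(16896, 1.78), 1))  # ~60.1 TFLOPS
```

With the same formula, the current PCIe version's 14,592 cores at a similar clock land near the low 50s of TFLOPS, which is why the full-chip configuration matters here.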
In addition, the GH100 die measures about 814mm², supports NVIDIA's fourth-generation NVLink interface, and provides up to 900GB/s of bandwidth.
At the same time, the GH100 is the first GPU to support the PCIe 5.0 standard, and also the first GPU to use HBM3.
It supports up to six HBM3 stacks for 3TB/s of memory bandwidth, 1.5 times that of the HBM2e-equipped A100.
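The 1.5x claim implies an A100 baseline of about 2TB/s (the A100 80GB figure here is an assumption inferred from the article's ratio, not stated in it):

```python
# Bandwidth figures in TB/s. The H100 value is from the article;
# the A100 value is an assumed baseline implied by the 1.5x claim.
h100_hbm3_bandwidth = 3.0   # six HBM3 stacks
a100_hbm2e_bandwidth = 2.0  # A100 80GB with HBM2e (assumed)

print(h100_hbm3_bandwidth / a100_hbm2e_bandwidth)  # 1.5
```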
The photo also shows a GeForce RTX ADLCE engineering sample among the devices. Although it is not labeled as such, it is understood to be an Ada Lovelace architecture GPU.
Its TDP is reportedly limited to 350W, and its single-precision performance is only 63 to 70 TFLOPS (the normal version delivers 82 TFLOPS).