NVIDIA unveiled new infrastructure, technology, and services for scientific research and business at the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC24), held Nov. 17 to Nov. 22 in Atlanta. The upcoming availability of the H200 NVL AI accelerator was one of the most notable announcements.
The newest Hopper GPU arrives in December
In a press briefing on November 14, NVIDIA announced that systems based on the H200 NVL PCIe GPU will be available in December 2024. Enterprise customers can consult an H200 NVL Enterprise Reference Architecture for guidance. A five-year NVIDIA AI Enterprise subscription is included with new GPU purchases at the enterprise level.
Dion Harris, NVIDIA’s director of accelerated computing, said at the briefing that the H200 NVL is ideal for data centers with lower power budgets (under 20kW) and air-cooled rack designs.
“Companies can fine-tune LLMs within a few hours” with the upcoming GPU, Harris said.
The H200 NVL delivers a 1.5x memory increase and a 1.2x bandwidth increase over the NVIDIA H100 NVL, the company said.
Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Supermicro will support the new GPU. It will also appear in platforms from Aivres, ASRock Rack, GIGABYTE, Inventec, MSI, Pegatron, QCT, Wistron, and Wiwynn.
Grace Blackwell chip deployment proceeding
Harris also emphasized that partners and vendors have the NVIDIA GB200 NVL4 (Grace Blackwell) chip in hand.
“The deployment of Blackwell is proceeding smoothly,” he said.
Blackwell chips ship next month.
Unveiling the Next Phase of Real-Time Omniverse Simulations
NVIDIA introduced the Omniverse Blueprint for Real-Time CAE Digital Twins, now in early access. This reference workflow demonstrates how organizations and researchers can accelerate real-time visualizations and simulations, including real-time virtual wind tunnel testing.
The Omniverse Blueprint for Real-Time CAE Digital Twins, a simulation platform built on NVIDIA NIM AI microservices, allows simulations that typically take weeks or months to run in real time. Luminary Cloud will demonstrate how it can be used in a computational fluid dynamics simulation at SC ’24.
“We built Omniverse so that everything can have a digital twin,” Jensen Huang, founder and CEO of NVIDIA, said in a press release.
In the same press release, Ansys president and CEO Ajei Gopal stated, “We’re enabling our clients to tackle extremely complicated and precise calculations more quickly and accurately by integrating NVIDIA Omniverse Blueprint with Ansys technology.”
CUDA-X library updates accelerate scientific research
NVIDIA’s CUDA-X libraries help power the real-time simulations. These libraries are also receiving updates focused on scientific research, including the release of a new cuPyNumeric version and changes to CUDA-Q.
Dynamics simulation capabilities will be added to CUDA-Q, NVIDIA’s development platform for building quantum computers. The goal is to run realistic simulations in practical time frames, such as a day rather than a year. Google is working with NVIDIA to create simulations of its qubits using CUDA-Q, “bringing them closer to the goal of achieving useful, large-scale quantum computing,” Harris said.
NVIDIA also unveiled cuPyNumeric, an accelerated computing library for scientific research. Created for scientific applications that frequently run NumPy programs on CPU-only nodes, cuPyNumeric enables those projects to scale to thousands of GPUs without significant code changes. It is already in use at select research institutions.
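To illustrate the kind of workload NVIDIA describes, here is a minimal sketch of a CPU-only NumPy program of the sort cuPyNumeric targets. The simulation itself (a hypothetical Jacobi relaxation loop) is an illustration, not drawn from NVIDIA's materials; per the cuPyNumeric documentation, porting such a program is meant to require only swapping the NumPy import for the cuPyNumeric module, with the rest of the code unchanged.

```python
# A typical CPU-bound NumPy workload: Jacobi relaxation on a 2D grid.
# The claimed cuPyNumeric migration path is to replace the line below
# with `import cupynumeric as np` (module name per the cuPyNumeric
# docs; treat as an assumption) and leave the program untouched.
import numpy as np

def jacobi_step(grid):
    """One Jacobi relaxation step: each interior cell becomes the
    average of its four neighbors; boundary cells stay fixed."""
    out = grid.copy()
    out[1:-1, 1:-1] = 0.25 * (
        grid[:-2, 1:-1] + grid[2:, 1:-1]   # up, down neighbors
        + grid[1:-1, :-2] + grid[1:-1, 2:]  # left, right neighbors
    )
    return out

# Fixed hot top edge, cold everywhere else; iterate toward steady state.
grid = np.zeros((64, 64))
grid[0, :] = 100.0
for _ in range(50):
    grid = jacobi_step(grid)
    grid[0, :] = 100.0  # re-impose the boundary condition
```

Because every operation here is expressed as whole-array NumPy calls rather than Python loops over elements, a drop-in replacement library can distribute the arrays and the arithmetic across accelerators without the author restructuring the code.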