Google Cloud announced on October 30 at the App Dev & Infrastructure Summit that it will enhance its AI cloud infrastructure with new TPUs and NVIDIA GPUs.
In a teaser for cloud customers, the sixth-generation Trillium TPU powers some of Google's most popular services, including Search and Maps.
"Through these advancements in AI infrastructure, Google Cloud empowers businesses and researchers to redefine the boundaries of AI innovation," Mark Lohmeyer, VP and GM of Compute and AI Infrastructure at Google Cloud, wrote in a press release. "We look forward to the transformative new AI applications that will emerge from this powerful foundation."
Trillium TPU speeds up generative AI applications
As large language models grow in size, so must the hardware that supports them.
The sixth-generation Trillium TPU delivers training, inference, and serving of large language model applications at 91 exaflops in one TPU cluster. According to Google Cloud, it offers a 4.7x increase in peak compute performance per chip compared to the fifth generation. High Bandwidth Memory capacity and Interchip Interconnect bandwidth have both been doubled.
Trillium meets the heavy computational demands of large-scale diffusion models such as Stable Diffusion XL. At its peak, Trillium infrastructure can link tens of thousands of chips, creating what Google Cloud describes as "a building-scale supercomputer."
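Those numbers can be cross-checked with simple arithmetic. The back-of-envelope sketch below assumes a fifth-generation TPU v5e peak of roughly 197 bf16 TFLOPS per chip, a figure from Google's published v5e specifications rather than from this announcement:

```python
# Back-of-envelope: relating the 4.7x per-chip claim to the 91-exaflop cluster figure.
# Assumption (not from the announcement): TPU v5e peaks at ~197 bf16 TFLOPS per chip.
V5E_PEAK_TFLOPS = 197
TRILLIUM_SPEEDUP = 4.7  # Google Cloud's stated per-chip improvement

trillium_peak_tflops = V5E_PEAK_TFLOPS * TRILLIUM_SPEEDUP  # ~926 TFLOPS per chip

CLUSTER_EXAFLOPS = 91
cluster_tflops = CLUSTER_EXAFLOPS * 1e6  # 1 exaflop = 1,000,000 teraflops

chips_needed = cluster_tflops / trillium_peak_tflops
print(f"~{trillium_peak_tflops:.0f} TFLOPS per chip; ~{chips_needed:,.0f} chips per cluster")
# Prints roughly 926 TFLOPS per chip and ~98,000 chips, consistent with
# Google Cloud's "tens of thousands of chips" description.
```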
Enterprise customers have been asking for more cost-effective AI acceleration and increased inference performance, said Mohan Pichika, group product manager of AI infrastructure at Google Cloud, in an email to TechRepublic.
In the press release, Google Cloud customer Deniz Tuna, head of development at mobile app development company HubX, noted: "We used Trillium TPU for text-to-image creation with MaxDiffusion & FLUX.1 and the results are amazing! We were able to generate four images in 7 seconds, which is a 35% reduction in cost per image and a 35% improvement in response latency compared to our current system."
New virtual machines anticipate the arrival of NVIDIA Blackwell chips
In November, Google will add A3 Ultra VMs powered by NVIDIA H200 Tensor Core GPUs to its cloud services. The A3 Ultra VMs run AI or high-performance computing workloads on Google Cloud's data center-wide network at 3.2 Tbps of GPU-to-GPU traffic. They also offer customers:
- Integration with NVIDIA ConnectX-7 hardware.
- 2x the GPU-to-GPU networking bandwidth compared to its predecessor, A3 Mega.
- Up to 2x higher LLM inferencing performance.
- Nearly double the memory capacity.
- 1.4x more memory bandwidth.
The new VMs will be available through Google Cloud or Google Kubernetes Engine.
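For teams that provision infrastructure programmatically rather than through the console, the sketch below shows what creating such an instance might look like with the google-cloud-compute Python client. The a3-ultragpu-8g machine type string, the project, zone, and boot image are illustrative assumptions, not details confirmed in the announcement:

```python
# Minimal sketch of creating an A3 Ultra-class VM with the google-cloud-compute
# client. Machine type, project, zone, and image are assumptions for illustration.
from google.cloud import compute_v1

def create_a3_ultra_vm(project: str, zone: str, name: str) -> None:
    instance = compute_v1.Instance(
        name=name,
        # Hypothetical machine type string; consult Google Cloud docs for the real one.
        machine_type=f"zones/{zone}/machineTypes/a3-ultragpu-8g",
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-12",
                ),
            )
        ],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )
    client = compute_v1.InstancesClient()
    operation = client.insert(project=project, zone=zone, instance_resource=instance)
    operation.result()  # block until the create operation completes

create_a3_ultra_vm("my-project", "us-central1-a", "a3-ultra-demo")
```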
SEE: At an October investors' meeting, Nvidia CEO Jensen Huang stated that Blackwell GPUs are sold out for the upcoming year.
Additional Google Cloud infrastructure updates support the expanding enterprise LLM industry
Naturally, Google Cloud’s infrastructure offerings interoperate. For example, the A3 Mega is supported by the Jupiter data center network, which will soon see its own AI-workload-focused enhancement.
With its new network adapter, Titanium’s host offload capability now adapts more effectively to the diverse demands of AI workloads. The Titanium ML network adapter uses NVIDIA ConnectX-7 hardware and Google Cloud’s data-center-wide 4-way rail-aligned network to deliver 3.2 Tbps of GPU-to-GPU traffic. The benefits of this combination flow up to Jupiter, Google Cloud’s optical circuit switching network fabric.
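A quick unit conversion makes the 3.2 Tbps figure concrete; the 70B-parameter bf16 checkpoint below is an assumed example workload, not one cited by Google Cloud:

```python
# Illustrative only: what 3.2 Tbps of GPU-to-GPU bandwidth means in practice.
LINK_TBPS = 3.2
link_gb_per_s = LINK_TBPS * 1e12 / 8 / 1e9  # 3.2 Tbps = 400 GB/s

params = 70e9        # assumed example model: 70B parameters
bytes_per_param = 2  # bf16
checkpoint_gb = params * bytes_per_param / 1e9  # 140 GB

seconds = checkpoint_gb / link_gb_per_s
print(f"{link_gb_per_s:.0f} GB/s; ~{seconds:.2f} s to move a {checkpoint_gb:.0f} GB "
      "checkpoint at full line rate, ignoring protocol overhead")
```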
The processing power needed for AI training and inference is another important component of Google Cloud’s AI infrastructure. Hypercompute Cluster, which contains A3 Ultra VMs, brings together a large number of AI accelerators. Hypercompute Cluster can be configured via an API call, leverages reference libraries like JAX or PyTorch, and supports open AI models like Gemma2 and Llama3 for benchmarking.
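Because Hypercompute Cluster is aimed at JAX and PyTorch users, a minimal JAX sketch of the kind of multi-host sanity check one might run on such a cluster is shown below. The coordinator address and process layout are placeholders, and none of this reflects Google's own cluster tooling:

```python
# Minimal multi-host JAX sketch: confirm every accelerator in the cluster is
# visible, then exercise the interconnect with an all-reduce.
import jax
import jax.numpy as jnp

# A managed scheduler normally supplies these values; placeholders shown here.
jax.distributed.initialize(
    coordinator_address="10.0.0.2:1234",  # placeholder
    num_processes=2,                      # placeholder
    process_id=0,                         # placeholder; differs on each host
)

print(f"process {jax.process_index()} sees {jax.local_device_count()} local devices "
      f"of {jax.device_count()} total")

# All-reduce one value per device; this traverses the GPU-to-GPU (or TPU
# interchip) network, so a correct result suggests a healthy fabric.
x = jnp.ones((jax.local_device_count(),))
total = jax.pmap(lambda v: jax.lax.psum(v, "i"), axis_name="i")(x)
print(total)  # each entry should equal the global device count
```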
Starting in November, Google Cloud customers will be able to access Hypercompute Cluster using A3 Ultra VMs and Titanium ML network adapters.
According to Pichika, these products address enterprise customer requests for simplified access to high-performance AI infrastructure and optimized GPU utilization.
He wrote in an email that Hypercompute Cluster offers an easy-to-use solution for businesses to use the power of AI hypercomputers for large-scale AI training and inference.
Google Cloud is also preparing racks for NVIDIA’s upcoming Blackwell GB200 NVL72 GPUs, anticipated for adoption by hyperscalers in early 2025. Once available, these GPUs will connect to Google’s Axion-processor-based VM series, leveraging Google’s custom Arm processors.
"We're excited to keep working together to provide customers with the best of both technologies," Pichika said. He did not specify whether the timing of Titanium ML or Hypercompute Cluster was related to delays in the delivery of Blackwell GPUs.
Two more services are now generally available: Hyperdisk ML, an AI/ML-focused block storage service, and Parallelstore, an AI/HPC-focused parallel file system.
Google Cloud services are available across numerous international regions.
Google Cloud competitors for AI hosting
In terms of large language models, Google Cloud competes primarily with Amazon Web Services and Microsoft Azure. Alibaba, IBM, Oracle, VMware, and others offer similar stables of large language model resources, although not always at the same scale.
According to Statista, Google Cloud held 10% of the cloud infrastructure services market worldwide in Q1 2024. Microsoft Azure held 25% and Amazon AWS held 34%.