Nvidia is riding the AI wave as it continues to see unmatched demand for its most recent generation of Blackwell GPUs. Nvidia CEO Jensen Huang told Morgan Stanley analysts at a meeting that supply for the next 12 months is already sold out.
According to Morgan Stanley analyst Joe Moore, a similar situation occurred with Hopper GPUs a few months ago.
Nvidia’s established customers are driving the overwhelming demand for Blackwell GPUs, including big tech companies such as AWS, Google, Meta, Microsoft, Oracle, and CoreWeave. These businesses have already purchased every Blackwell GPU that Nvidia and its manufacturing partner TSMC can produce over the next four quarters.
Even with competition from rivals like AMD, Intel, and a number of smaller cloud-service providers, the exceedingly high demand appears to reinforce Nvidia’s already formidable position in the market for AI processors.
“Our takeaway is that Nvidia is still likely to dominate AI processors in 2025, as even the biggest makers of custom silicon expect steep increases in demand for Nvidia solutions in the coming year,” Moore said in a client note. “Everything that we heard this year reinforced that.”
Gartner had previously predicted that AI chip revenue would increase by $1.5 billion in 2024.
Designed for massive-scale AI deployments
Nvidia introduced the Blackwell GPU architecture in March, hailing its capacity to “unlock breakthroughs in data processing, engineering simulation, electronic design automation, computer-aided drug design, quantum computing, and generative AI — all emerging industry opportunities for Nvidia.”
The Blackwell lineup includes the B200 Tensor Core GPU and the GB200 Grace Blackwell “superchip.” These chips are designed to handle the demanding workloads of large language model (LLM) inference while significantly reducing energy consumption, a growing concern in the sector. At the architecture’s launch, Nvidia said Blackwell adds capabilities at the chip level to enable AI-based preventative maintenance that can diagnose and forecast reliability issues.
The company stated in March that this maximizes system uptime and improves resiliency for massive-scale AI deployments, reducing operating costs and allowing uninterrupted operation for weeks or even months at a time.
SEE: AMD Reveals Fleet of Chips for Heavy AI Workloads
Supply issues remain a question
Nvidia resolved the packaging issues it initially faced with the B100 and B200 GPUs, which allowed the company and TSMC to ramp up manufacturing. Both the B100 and B200 use TSMC’s CoWoS-L packaging, and questions remain about whether the world’s largest chip contract manufacturer has enough CoWoS-L capacity.
With demand for AI GPUs skyrocketing, it also remains to be seen whether memory makers can supply enough HBM3E memory for cutting-edge GPUs like Blackwell. In particular, Nvidia has not yet qualified Samsung’s HBM3E memory for its Blackwell GPUs, another factor constraining supply.
Nvidia acknowledged in August that its Blackwell-based products were suffering from low yields and that some layers of the B200 processor needed a re-spin to improve production efficiency. Despite these difficulties, Nvidia appeared confident that Blackwell production would ramp up in the third quarter of 2024. In the final quarter of this year, it plans to ship several billion dollars’ worth of Blackwell GPUs.
Blackwell is the most complex AI architecture the company has ever created. It goes beyond the needs of today’s models, laying the groundwork in hardware, architecture, and software that organizations will need to manage the scale and performance of future LLMs.
Nvidia is also developing software to meet the demands of these new models and to address the three biggest challenges facing AI today: energy consumption, overhead, and precision. According to the company, the Blackwell architecture was created to deliver unmatched performance with better power efficiency.
Nvidia reported that data center revenue increased 154% year over year to $26.3 billion in the second quarter.