Meta Expands AI Capabilities with Nvidia Chip Partnership

This article was generated by AI and cites original sources.

Meta, formerly known as Facebook, has entered a multi-year agreement with Nvidia to outfit its data centers with millions of Nvidia’s Grace and Vera CPUs, along with Blackwell and Rubin GPUs. While Meta has long used Nvidia hardware for its AI initiatives, this deal marks the first large-scale deployment of Nvidia’s Grace CPUs, which promise notable performance-per-watt improvements in Meta’s data centers. Meta also plans to integrate Nvidia’s upcoming Vera CPUs into its data centers next year.

Meta has tried to develop its own AI-focused chips internally, but challenges and delays have hindered that effort. Nvidia, meanwhile, faces questions about asset depreciation and the chip-backed loans used to finance AI infrastructure, compounded by competitive pressure. After reports that Meta was exploring Google’s tensor processing units (TPUs) for AI workloads, Nvidia’s stock fell four percent. AMD, for its part, secured chip deals with OpenAI and Oracle last year.

The financial terms of the Nvidia-Meta partnership remain undisclosed, but Meta’s AI spending this year, combined with investments from Microsoft, Google, and Amazon, is projected to exceed the entire budget of the Apollo space program. The deal underscores how central cutting-edge AI hardware has become to data center buildouts across the tech industry.

Source: The Verge