Google and Intel announced an expanded multiyear partnership on Thursday centered on AI infrastructure. Google Cloud will continue using Intel AI infrastructure, and the companies plan to co-develop processors. The partnership addresses two key components of today’s AI data center buildout: general-purpose CPUs for running inference and managing workloads, and specialized infrastructure processing units (IPUs) that offload data center tasks from CPUs.
Partnership Details
According to TechCrunch, the expanded agreement continues Google Cloud’s decades-long use of Intel’s Xeon processors for AI, cloud, and inference tasks, with Google Cloud adopting Intel’s latest Xeon 6 chips. The companies also plan to expand their co-development of custom infrastructure processing units (IPUs), which accelerate and manage data center tasks by offloading them from CPUs.
This chip development partnership began in 2021. In the expanded phase, the focus will be on custom ASIC-based IPUs—a direction that indicates the collaboration is moving from using existing CPU platforms toward designing specialized silicon intended to fit data center and AI workload patterns more precisely.
Financial terms were not disclosed. TechCrunch reports that Intel “declined to share any information regarding pricing for the deal,” leaving pricing and procurement scale unspecified in the public announcement.
CPUs and the AI Infrastructure Stack
The announcement reflects a broader infrastructure reality: while GPUs dominate the development and training of AI models, CPUs are crucial for running those models and for general AI infrastructure. Beyond training pipelines, the data center stack depends on CPUs for serving, orchestration, and other compute work that must be coordinated around accelerators.
TechCrunch notes the timing occurs amid “a growing shortage” of CPUs. This shortage context is relevant because it affects how quickly cloud providers can scale AI systems, not just how they train them.
Intel CEO Lip-Bu Tan said in a company press release, as quoted by TechCrunch: “AI is reshaping how infrastructure is built and scaled,” adding that “Scaling AI requires more than accelerators — it requires balanced systems.” Tan continued: “CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand.” The quote ties the partnership directly to Intel’s framing of “balanced systems,” positioning CPUs and IPUs as complementary components.
Custom IPUs and ASIC Design
The technical focus of the expansion is the co-development of custom IPUs using ASIC-based designs. TechCrunch describes IPUs as infrastructure-focused units that accelerate and manage data center tasks by offloading them from CPUs. In practice, this means the collaboration aims to reduce CPU load and improve how data center systems handle the work that enables AI inference at scale.
The TechCrunch report does not provide architectural details such as performance targets, software interfaces, or deployment timelines. However, the emphasis on custom ASIC-based IPUs suggests the companies expect meaningful differentiation from purpose-built hardware rather than relying solely on general-purpose accelerators or CPUs.
The stated plan to “expand the co-development” points to ongoing hardware iteration, though the report does not specify whether Google Cloud will deploy the custom IPUs immediately or in later phases of the multiyear effort.
Broader Industry CPU Demand
TechCrunch places the partnership in a broader semiconductor context: “More companies have been turning their focus to CPUs in recent months” due to a “growing shortage for the chips.” The report provides one example from outside the Google-Intel collaboration: SoftBank-owned Arm Holdings announced the Arm AGI CPU, described as “the first chip that the semiconductor giant has produced itself,” made public “amid a worldwide crunch for CPUs.”
This underscores that the CPU shortage extends across multiple vendor ecosystems. While the TechCrunch report does not compare performance or availability claims across chip families, it illustrates that multiple semiconductor and platform players are responding to the same constraint—capacity and supply for CPUs.
In this environment, partnerships like Google Cloud’s expanded use of Xeon 6 and the co-development of ASIC-based IPUs could help ease supply and scaling constraints. The source does not state that these steps are intended to solve the shortage directly, but it does tie the expanded hardware work to a period of high industry demand for CPUs.
Source: TechCrunch