2 Chip Stocks Benefitting from OpenAI’s Chip Strategy Expansion
- NVIDIA’s 12-month backlog for its Blackwell chips has OpenAI considering ways to reduce its dependence on NVIDIA’s AI chips.
- OpenAI pivoted from its initial plan to build a network of global foundries to working with Broadcom to design AI server chips in-house.
- In the meantime, OpenAI, along with Microsoft, Meta Platforms, and Oracle, has opted to use AMD’s AI chips.
ChatGPT developer OpenAI had initially planned on building a global network of foundries to produce AI chips but has instead opted to work with several companies in the computer and technology sector. Like Amazon.com (NASDAQ: AMZN) and Alphabet's (NASDAQ: GOOGL) Google, OpenAI aspires to produce its own AI chips in an effort to reduce its reliance on NVIDIA (NASDAQ: NVDA). NVIDIA's Blackwell chips are already sold out for the next 12 months, an eternity to wait during the AI revolution. Here are two stocks benefitting from OpenAI's strategy of developing its own AI chips.
1. Broadcom: Co-Developing OpenAI’s First AI Chip
OpenAI has been hiring many hardware engineers from Google's Tensor Processing Unit (TPU) team, the company's AI chip development arm. These engineers have historically worked with Broadcom (NASDAQ: AVGO) to develop Google's TPU, an AI accelerator chip designed specifically for machine learning tasks, including neural networks and deep learning.
With those engineers on board, it is no surprise that OpenAI would turn to Broadcom for its custom AI chip. OpenAI needs its AI server chips to handle massive AI workloads, and Broadcom has a long track record of building custom AI accelerators specifically for hyperscalers, including Meta Platforms (NASDAQ: META).
For Broadcom, OpenAI would be a lucrative contract that further bolsters its AI accelerator business. OpenAI also plans to outsource chip production to Taiwan Semiconductor Manufacturing (NYSE: TSM), which produces most of the AI chips on the market.
Broadcom shares initially took a 15.8% haircut after the company reported Q3 2024 EPS of $1.24, barely beating consensus estimates by 2 cents. Revenues surged 47.3% YoY to $13.07 billion, beating consensus estimates of $12.98 billion. The disappointment came from its downside guidance for Q4, with revenues expected at around $14 billion versus consensus estimates of $14.11 billion.
Broadcom CEO Hock Tan commented, "Broadcom's third-quarter results reflect continued strength in our AI semiconductor solutions and VMware. We expect revenue from AI to be $12 billion for fiscal year 2024 driven by Ethernet networking and custom accelerators for AI data centers."
2. AMD: MI300 AI Chips May Not Be Mr. Right, But They Are Mr. Right Now
While NVIDIA's AI chips are backlogged, sources claim that OpenAI has decided to use Advanced Micro Devices (NASDAQ: AMD) AI chips in the meantime. Even if NVIDIA is the proverbial "Mr. Right" of AI chips, AMD is "Mr. Right Now" thanks to the availability of its MI300 chips, also produced by Taiwan Semi. OpenAI is already using AMD Instinct MI300 GPUs in its Microsoft (NASDAQ: MSFT) Azure infrastructure; Microsoft was one of the first major hyperscalers to adopt the MI300. AMD plans to start producing its MI325 AI chip at the end of 2024, in time for its 2025 launch, and claims the chip offers up to 1.3x more performance than NVIDIA's GPUs. Oracle (NYSE: ORCL) has also adopted the MI300X in its Oracle Cloud.
While AMD was left in the dust by NVIDIA in the data center GPU wars, it is making headway simply because its chips are available while NVIDIA's are not. In its Q3 2024 report, AMD earned 92 cents per share, in line with consensus estimates. Revenues rose 17.6% YoY to $6.82 billion, firmly beating consensus estimates of $6.71 billion.
AMD's market share gains show in its Data Center segment revenue of $3.5 billion, up 25% sequentially and 122% YoY. Growth was driven primarily by AMD Instinct GPU and AMD EPYC CPU sales. AMD says EPYC continues to gain share and has become the CPU of choice for the modern data center. Meta Platforms has deployed over 1.5 million EPYC CPUs throughout its data centers and has deployed the MI300X to power its inferencing infrastructure.
AMD CEO Lisa Su commented, “Turning to our Data Center AI business, Data Center GPU revenue ramped as MI300X adoption expanded with cloud, OEM and AI customers. Microsoft and Meta expanded their use of MI300X accelerators to power their internal workloads in the quarter. Microsoft is now using MI300X broadly for multiple co-pilot services powered by the family of GPT 4 models.”