Oracle is making significant strides in artificial intelligence (AI) by operating some of the most advanced data centers designed specifically for processing AI workloads. As demand for computing power grows beyond what the average personal computer can supply, more AI workloads are shifting to large-scale data centers filled with sophisticated chips from major suppliers like Nvidia, Advanced Micro Devices, Broadcom, and Micron Technology.
Recent investments in automation and innovative networking technologies have transformed Oracle’s cloud data centers into some of the fastest and most cost-efficient in the world. This efficiency has attracted partnerships with leading AI software developers, including OpenAI, Meta Platforms, and Elon Musk’s xAI, significantly boosting revenue for Oracle’s cloud infrastructure segment.
In a recent report detailing its operating results for the first quarter of fiscal 2026, Oracle revealed a substantial need for additional data centers, a development that could bode well for chipmakers. One notable highlight was the dramatic surge in Oracle's order backlog, indicating high demand for its services. Developers using Oracle's cloud can access impressive processing capabilities, with configurations that allow workloads to scale up to 131,072 advanced GPUs from Nvidia, enough power to train and deploy some of the most formidable AI models in the industry. Additionally, Oracle is in the process of constructing new GPU superclusters powered by AMD's latest MI355X processors.
Oracle's high-speed networking, built on remote direct memory access (RDMA), enables faster data transfer than traditional Ethernet connections, allowing developers to optimize their operations. This speed can lead to significant cost reductions, as developers typically pay for computing capacity by the minute. Moreover, Oracle's focus on automation streamlines operations within its data centers, minimizing the need for human labor and enabling rapid deployment of new infrastructure.
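The economics described above are straightforward: under per-minute billing, a job that finishes faster costs proportionally less, so faster networking translates directly into savings. A back-of-envelope sketch, using hypothetical numbers rather than actual Oracle pricing:

```python
def job_cost(runtime_minutes: float, rate_per_gpu_minute: float, num_gpus: int) -> float:
    """Total cost of a training job billed by the GPU-minute."""
    return runtime_minutes * rate_per_gpu_minute * num_gpus

# Suppose a training run takes 600 minutes over standard Ethernet,
# and faster RDMA networking cuts the runtime by 25%.
# (Rates and runtimes here are illustrative, not real pricing.)
baseline = job_cost(600, rate_per_gpu_minute=0.05, num_gpus=1024)
with_rdma = job_cost(600 * 0.75, rate_per_gpu_minute=0.05, num_gpus=1024)

print(f"baseline:  ${baseline:,.0f}")   # $30,720
print(f"with RDMA: ${with_rdma:,.0f}")  # $23,040
print(f"savings:   ${baseline - with_rdma:,.0f}")  # $7,680
```

On these illustrative numbers, a 25% runtime reduction yields a 25% cost reduction, which compounds quickly across thousands of GPUs.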
Chairman Larry Ellison has asserted that Oracle's data centers outperform competitors in speed and cost efficiency, even when utilizing the same GPUs from Nvidia and AMD. In the last quarter, Oracle's cloud infrastructure division reported remarkable revenue of $3.3 billion, a 55% increase compared to the same period last year. Meanwhile, the company's remaining performance obligation (RPO), an essential metric reflecting contracted but undelivered services, skyrocketed by 359% year-over-year to $455 billion, signifying a robust order backlog.
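Those growth rates imply striking year-ago baselines. A quick sanity check, derived only from the figures quoted above:

```python
def year_ago(current: float, growth_pct: float) -> float:
    """Implied year-ago value given a current value and YoY growth in percent."""
    return current / (1 + growth_pct / 100)

# Figures from the quarter: $3.3B cloud infrastructure revenue (+55% YoY)
# and $455B remaining performance obligation (+359% YoY).
cloud_rev_prior = year_ago(3.3, 55)   # ~ $2.1B a year earlier
rpo_prior = year_ago(455, 359)        # ~ $99B a year earlier

print(f"year-ago cloud infrastructure revenue: ~${cloud_rev_prior:.1f}B")
print(f"year-ago RPO: ~${rpo_prior:.0f}B")
```

In other words, the backlog more than quadrupled in a single year, from roughly $99 billion to $455 billion.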
With demand for Oracle’s data center capacity surpassing supply at unprecedented levels, the company is faced with the need to rapidly expand its cloud infrastructure. This expansion will require significant capital expenditures, with Oracle anticipating a capex of over $35 billion for fiscal 2026, a considerable increase from the $25 billion forecast made merely three months earlier.
Key chipmakers like Nvidia and AMD are set to be significant beneficiaries of Oracle’s escalating capital investments, as they are primary suppliers of data center GPUs used in AI development. Meanwhile, Broadcom is also gaining traction in the market with its specialized AI accelerators, which offer more customization for customers. Broadcom is already set to deploy up to 1 million AI accelerators with at least three hyperscaler clients by 2027, creating a $90 billion market opportunity.
Micron, another key player, provides the high-bandwidth memory (HBM) packaged alongside data center GPUs and stands to gain from Oracle's surge in capital expenditures, positioning itself as an essential part of the supporting infrastructure for AI workloads.
As Oracle accelerates its investment in data centers and cutting-edge technologies, the ripple effects on the semiconductor industry could be profound, heralding a new era of growth and innovation in the AI landscape.