OpenAI’s recent partnership with Nvidia marks a significant milestone for both companies in the rapidly evolving landscape of artificial intelligence. Following a highly anticipated Q&A session in Abilene, Texas, OpenAI CEO Sam Altman discussed the implications of Nvidia’s substantial investment, which could total up to $100 billion over several years as new AI supercomputing facilities are established.
While the deal involves impressive figures, specific details about the timeline and construction costs of the data centers remain undisclosed. The first of these cutting-edge facilities is expected to launch in the second half of 2026. Under the financial arrangement, OpenAI will pay for Nvidia’s advanced graphics processing units (GPUs) primarily through leasing agreements rather than upfront purchases.
Nvidia’s CEO Jensen Huang characterized the arrangement as “monumental in size,” revealing that a gigawatt-capacity AI data center could cost around $50 billion, with Nvidia’s GPUs accounting for approximately $35 billion of that expenditure. This leasing model allows OpenAI to manage its financial commitments more effectively by distributing costs over the lifespan of the GPUs, which could extend up to five years.
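A rough back-of-envelope sketch shows how these figures relate. The straight-line amortization schedule below is purely illustrative, not a disclosed term of the deal; it simply spreads Huang's cited GPU cost evenly over the five-year lifespan mentioned:

```python
# Back-of-envelope arithmetic using Huang's publicly cited figures.
# Even, straight-line amortization is an illustrative assumption,
# not a disclosed deal term.

DATA_CENTER_COST = 50e9   # ~$50B per gigawatt-capacity facility
GPU_COST = 35e9           # ~$35B of that attributed to Nvidia GPUs
LEASE_YEARS = 5           # GPU lifespan cited as up to five years

annual_gpu_lease = GPU_COST / LEASE_YEARS
gpu_fraction = GPU_COST / DATA_CENTER_COST

print(f"Annual GPU lease (straight-line): ${annual_gpu_lease / 1e9:.0f}B")
print(f"GPU share of facility cost: {gpu_fraction:.0%}")
```

On these assumptions, leasing converts a roughly $35 billion upfront GPU bill into about $7 billion per year, with GPUs representing around 70% of each facility's cost.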
In the initial stages, $10 billion of Nvidia’s investment will be made available to OpenAI, facilitating the deployment of the company’s first gigawatt of processing power. Despite being classified as a non-investment-grade startup with limited cash flow, OpenAI aims to finance its future expansion through a combination of equity and debt. The strategic partnership with Nvidia is expected to provide OpenAI with better terms for future loans as it navigates this complex financial landscape.
During the press event, OpenAI’s CFO Sarah Friar highlighted the collaborative role of partners like Oracle, which is responsible for leasing the newly constructed Abilene facility. She emphasized that the combined resources of these companies are essential to alleviating the current shortage of compute capacity. “Folks like Oracle are putting their balance sheets to work to create these incredible data centers you see behind us,” Friar stated, reiterating that payments for Nvidia’s chips will be treated as operating expenditures.
However, the OpenAI-Nvidia agreement also raises questions about the sustainability of the AI industry’s rapid growth. Nvidia’s impressive market capitalization, currently at $4.3 trillion, has been bolstered by its GPU sales to a range of tech giants, including OpenAI, Google, Meta, Microsoft, and Amazon. Simultaneously, OpenAI has garnered significant investments from Microsoft and others, which have allowed it to operate at a loss while developing AI models that fuel services like ChatGPT.
Analyst Jamie Zakalik from Neuberger Berman raised the concern that the financial arrangements between OpenAI and Nvidia could contribute to a “circular nature” in the market. This dynamic calls into question how much real value is being generated, since funds invested in OpenAI are largely redirected back to Nvidia.
In response to these concerns, Altman reassured stakeholders by emphasizing the company’s commitment to generating real demand through product offerings. “We need to keep selling services to consumers and businesses — and building these great new products that people pay us a lot of money for,” he said. As long as demand remains strong, Altman believes that OpenAI can sustain its trajectory and adequately finance the expansive data center and GPU requirements ahead.

