This week, AMD and OpenAI shook things up with a massive partnership valued at tens of billions. With this agreement, AMD is set to supply OpenAI with 6 gigawatts of computing power through its specialized Instinct AI GPUs. Moreover, OpenAI has the chance to acquire up to a 10% share in AMD as part of the deal.
Not long before this, NVIDIA itself made headlines by pledging a whopping $100 billion to OpenAI. That investment will also bring over 10 gigawatts of NVIDIA's AI GPUs into the mix, bolstering the AI firm's infrastructure build-out.
With both tech giants pouring resources into the same project, questions are swirling about the sustainability of this burgeoning AI economy.
Cognitive scientist Gary Marcus recently highlighted some concerns in his newsletter. He pointed out that the total valuation of the tech sector appears to significantly outpace what these companies can realistically deliver. For instance, he questioned whether OpenAI and Oracle can actually cover the costs of their $300 billion cloud partnership.
While worries about a potential AI bubble increase, NVIDIA’s CEO Jensen Huang remains optimistic. He appeared on CNBC’s Squawk Box where he discussed the industry’s funding dynamics.
When asked about OpenAI’s financial capabilities, Huang remarked:
They don’t have the money yet. The way that it’s going to happen, for every gigawatt of AI factories, you’re probably going to need about $50 to $60 billion … They’re going to have to raise that cash through revenue, equity, or debt.
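Huang's rule of thumb implies eye-watering totals when applied to the deal sizes reported above. A quick back-of-the-envelope check (the gigawatt figures come from the deals in this article, the cost range from Huang's estimate; the function name is just for illustration):

```python
# Back-of-the-envelope check of Huang's "$50 to $60 billion per gigawatt"
# estimate against the AI-factory sizes reported in this article.

def implied_cost_usd(gigawatts: float) -> tuple[float, float]:
    """Return (low, high) build cost in USD for a given AI-factory size,
    using Huang's $50B-$60B-per-gigawatt rule of thumb."""
    return gigawatts * 50e9, gigawatts * 60e9

for name, gw in {"AMD/OpenAI deal": 6, "NVIDIA/OpenAI pledge": 10}.items():
    low, high = implied_cost_usd(gw)
    print(f"{name}: {gw} GW implies ${low / 1e9:.0f}-{high / 1e9:.0f} billion")
```

At the announced scales, even these tens-of-billions deals only cover a fraction of the build cost Huang describes, which is why he points to revenue, equity, or debt as the eventual sources.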
The discussion shifted to concerns about a market bubble, reminiscent of the dot-com crash around 2000, when companies like Nortel and Lucent collapsed. Presenter Rebecca Quick asked how today's market differs.
Huang countered:
What’s going on now versus 2000 is starkly different. Back then, the total size of online businesses was only about $30-40 billion. In contrast, today’s AI hyperscalers represent a massive $2.5 trillion in operational business.
Additionally, Huang opined that the ongoing AI infrastructure build-out will gain momentum in 2025 as the industry shifts from traditional general-purpose computing to GPU-driven generative AI.
Tokens, the basic units of text that Large Language Models (LLMs) process, are central to how AI interprets natural-language prompts. According to Huang, the utility of tokens has shifted significantly:
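To make the idea of tokens concrete, here is an illustrative sketch only: real LLM tokenizers learn subword vocabularies (e.g. via byte-pair encoding), but this toy version splits on words and punctuation to show that models consume, bill, and reason in tokens rather than whole sentences:

```python
# Toy tokenizer: NOT how production LLMs tokenize, but enough to show
# that a prompt becomes a sequence of discrete tokens before the model
# ever sees it. Real tokenizers use learned subword vocabularies.
import re

def toy_tokenize(text: str) -> list[str]:
    """Split text into rough word and punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", text)

prompt = "How do tokens underpin LLM computing?"
tokens = toy_tokenize(prompt)
print(tokens)           # each element is one token
print(len(tokens), "tokens")
```

Every token generated costs compute, which is why Huang frames the economics of AI factories in terms of token production.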
The new technology is now about reasoning. It conducts research before offering answers, allows AI to leverage tools, pull information from various resources, and create highly beneficial responses. I utilize it daily, with tokens now turning profitable.
Jensen Huang, NVIDIA CEO
Compute Challenges and the Pursuit of AGI
Generative AI is widely known to be resource-hungry, demanding significant hardware and energy to operate effectively. Leaders at top AI firms, including OpenAI's Sam Altman, have spoken openly about power limitations, especially after Microsoft's role as OpenAI's exclusive cloud provider came to an end.
After the NVIDIA and OpenAI partnership announcement, Altman’s stance on capacity constraints seemed to change:
The compute burdens within our entire industry—and particularly for us—have been dreadful. We’re enormously limited in our service offerings. There exists way more demand than we can currently fulfill.
Sam Altman, OpenAI CEO
The industry's relentless appetite for compute suggests that the trend of corporations trading vast sums and taking stakes in one another will continue. The ultimate goal remains Artificial General Intelligence (AGI), a system surpassing human intellect, which carries its own unsettling implications. Opinions differ on how soon AGI might arrive, but one belief stands firm: AGI promises the greatest potential for profit.
One looming concern: what happens if AGI doesn't materialize before the money runs out?
