Cloud Spending Surges to $95.3 Billion, Fueled by AI Demand

Cloud Spending Soars

The cloud market saw a huge spike in spending on infrastructure services during the second quarter of 2025. What’s behind this upswing? Demand for artificial intelligence workloads is surging, combined with ongoing migrations off legacy systems and a broader shift toward cloud-native solutions.

Impressive Numbers from Canalys

According to the research whizzes at Canalys, part of the Omdia group, the total expenditure during this quarter hit an eye-watering $95.3 billion. That’s a 22% jump compared to the same time last year!

They also highlighted that the market’s growth is strengthening, with year-on-year increases above 20% for four consecutive quarters.

Drivers of Demand

This rise reflects heightened AI consumption, a revival in traditional workload migrations, and rapid deployments by digital-first businesses. As hyperscalers ramp up their AI capabilities, customers are adopting multi-model strategies, letting them be more deliberate about cost and usage scenarios.

The Big Three: AWS, Azure, and Google Cloud

Amazon Web Services (AWS), Microsoft Azure, and Google Cloud continue to lead the market, collectively accounting for 65% of all spending. Combined revenue for these top three providers rose 27% year-on-year.

Notably, Azure and Google Cloud both posted growth rates exceeding 30%, while AWS reported a steady 17% increase, a pace consistent with recent quarters.

Interestingly enough, while AWS had slower percentage growth, it still recorded the largest revenue increase in absolute dollar terms among these leaders.

Ongoing Innovations and Capacity Expansion

With AI workloads, a resurgence in enterprise migrations, and the expansion of digital-first firms, demand for cloud services remains strong.

In July, Google raised its capital expenditure plan for 2025 from $75 billion to $85 billion. AWS expects to spend over $100 billion this year, and Microsoft is targeting around $80 billion in its current fiscal year.

Yi Zhang, a Senior Analyst at Canalys, shared some insights: “Customer interest in AI has evolved from simply wanting easy access to demanding flexibility and proper model choices tailored for specific needs.” He added that companies want to toggle between various AI models based on their business requirements to strike the right balance in performance and cost.
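To make the idea of toggling between models concrete, here is a minimal, hypothetical sketch of cost-aware model selection in application code. The model names, prices, and selection rule below are invented for illustration and are not figures from Canalys or any vendor.

```python
# Hypothetical multi-model selection: route a request to a cheaper model unless
# the task needs stronger reasoning. All names and prices are illustrative.
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    cost_per_1k_tokens: float  # illustrative pricing, not vendor figures
    reasoning_tier: int        # higher = stronger reasoning

CATALOG = [
    ModelOption("fast-small-model", 0.0002, 1),
    ModelOption("frontier-model", 0.0150, 3),
]

def pick_model(needs_reasoning: bool) -> ModelOption:
    """Pick the cheapest catalog model that meets the task's reasoning requirement."""
    required_tier = 3 if needs_reasoning else 1
    candidates = [m for m in CATALOG if m.reasoning_tier >= required_tier]
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

print(pick_model(needs_reasoning=False).name)  # fast-small-model
print(pick_model(needs_reasoning=True).name)   # frontier-model
```

In practice the selection logic is often far richer (latency budgets, data-residency rules, per-team quotas), but the cost-versus-capability trade-off Zhang describes is the core of it.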

Tech Vendors Adapting to New Strategies

Platforms like AWS Bedrock, Azure AI Foundry, and Google Vertex AI are broadening their offerings with both in-house and third-party models. This addresses varying needs across industries, from complex reasoning to latency-sensitive applications.

Rachel Brindley, Senior Director at Canalys, noted that “coopetition”—where competitors collaborate on infrastructure but fiercely compete on model innovation—is the new normal in the generative AI sector.

The example she points to? AWS Bedrock packages models such as Anthropic’s Claude and OpenAI’s GPT models, allowing AWS and the model providers alike to benefit from shared capacity and complementary strengths.

Open-source AI on the Rise

Michael Azoff, Chief Analyst at Omdia, pointed to the growing emphasis on open-source AI, which promises to boost downstream innovation. An evident trend in the market is the interest in releasing open model weights, something companies like Meta, DeepSeek, and OpenAI have recently done.

Examining AWS’s Market Share

AWS led the charge with a solid 32% share this quarter and a 17% rise in revenue, a rate that has held steady across the last six quarters with minimal fluctuation.

Even with constraints such as power availability and semiconductor shortages limiting capacity growth, AWS reported a backlog of $195 billion by late June, up 25% year-over-year, underlining the persistent demand.

New Services and Investments

July brought the introduction of Amazon Bedrock AgentCore and a new category titled “AI Agents & Tools” on the AWS Marketplace, featuring upwards of 800 products aimed at making the deployment and monetization of AI agents easier.

Additionally, in August, AWS expanded its Bedrock lineup with Claude Opus 4.1 and OpenAI’s gpt-oss models, making advanced AI options readily accessible to enterprises.
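As a rough illustration of how an enterprise might call one of these Bedrock-hosted models, here is a minimal sketch using boto3’s Converse API. The model ID string and region are assumptions for illustration and should be checked against AWS’s current documentation before use.

```python
import boto3  # pip install boto3; assumes AWS credentials are already configured

# Create a Bedrock runtime client (region chosen for illustration only).
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# The model ID below is an illustrative placeholder; actual Bedrock model IDs
# vary by provider, version, and region.
response = client.converse(
    modelId="anthropic.claude-opus-4-1-20250805-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize Q2 2025 cloud spending trends."}]}],
    inferenceConfig={"maxTokens": 256},
)

# Print the model's text reply from the Converse response structure.
print(response["output"]["message"]["content"][0]["text"])
```

Because the same Converse call shape works across the models Bedrock hosts, switching providers is largely a matter of changing the modelId, which is part of the multi-model appeal described above.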

Infrastructure expansion is also underway, with multi-billion-dollar investments aimed at extending capacity in regions such as North Carolina, Pennsylvania, and Australia.

Performance Insights from Azure and Google Cloud

Microsoft Azure, meanwhile, holds a substantial 22% share with robust growth of 39% year-on-year, driven largely by enterprise migrations and the adoption of AI workloads.

Azure AI Foundry has extended its repertoire through collaborations with OpenAI and Meta, and is preparing to add models from Black Forest Labs and Mistral AI as well. The Azure AI Foundry Agent Service, rolled out in May, now has over 14,000 customers using it to automate intricate workflows.

Microsoft launched OpenAI’s GPT-5 model on Azure AI Foundry in August, expanding model access through both APIs and a model routing system.
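For readers curious what calling a routed model looks like in practice, below is a minimal sketch using the openai Python package’s AzureOpenAI client. The endpoint, API version, and the “model-router” deployment name are assumptions for illustration, not details confirmed by the article.

```python
from openai import AzureOpenAI  # pip install openai

# Endpoint, key, and API version below are illustrative placeholders.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-10-21",
)

# A router-style deployment chooses an underlying model per request; the
# deployment name "model-router" is a hypothetical example.
response = client.chat.completions.create(
    model="model-router",
    messages=[{"role": "user", "content": "Draft a one-line summary of Q2 2025 cloud spending."}],
)
print(response.choices[0].message.content)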

Meanwhile, Google Cloud turned in a strong performance with 34% year-on-year growth, lifting its market share to 11%, buoyed by large deals: contracts above $250 million doubled year-on-year, and the number of agreements worth more than $1 billion signed in the first half of 2025 already matches the total for all of 2024.

At the close of June, Google Cloud’s order backlog stood at $108.2 billion, noticeably up from $92.4 billion in Q1, prompting Google to raise its 2025 capital expenditure guidance from $75 billion to $85 billion.

Furthermore, June saw the introduction of Gemini 2.5 Flash and Pro, touted as Google’s fastest and most economical models yet. Gemini’s user base surpassed 450 million monthly users, with usage up more than 50% quarter-on-quarter. In July, indications surfaced that Google Cloud may be added to OpenAI’s list of cloud suppliers to support mounting training and inference loads.
