Cloud demand is starting to change in a way that reflects how companies are building and running AI systems. The change lies in the type of workloads moving into cloud environments. Recent comments from Andy Jassy, CEO of Amazon, point to a much larger market than previously expected, driven by enterprise AI adoption.
At a recent investor discussion, Jassy said revenue from Amazon Web Services could reach US$600 billion by 2036, roughly double earlier projections. He linked that growth to rising demand for AI workloads, although he did not detail the split between income from ‘traditional’ AWS services and AI-related usage. The figures were reported by Reuters, which also noted that Amazon is preparing for sustained infrastructure investment.
Cloud demand
Enterprises are starting to use the cloud differently. Earlier growth came from storage, virtual machines, and basic application hosting. AI systems, by contrast, need large amounts of compute and fast networking, and consume far more resources than traditional workloads. Many also depend on specialised hardware.
Jassy said the company expects to spend tens of billions of dollars each year on AI-related infrastructure, including data centres, networking, and custom chips, with a level of investment that could exceed US$200 billion.
Much of the current demand for AI infrastructure appears to be tied to inference, which involves using trained models in applications. Common examples include chatbots, coding tools, search features, and internal enterprise systems.
Training models still requires large bursts of compute, while inference tends to keep systems running over longer periods. That helps explain why cloud providers have been investing both in raw compute and in systems that reduce latency and handle large numbers of requests. It also helps explain the attention to custom silicon, which may improve cost and performance for specific AI tasks and helps reduce reliance on a single supplier of GPU chips, Nvidia.
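The shift from burst training to sustained inference can be illustrated with a back-of-envelope calculation. All numbers below are hypothetical, chosen only to show the shape of demand rather than any real deployment:

```python
# Back-of-envelope comparison: burst training vs. sustained inference demand.
# All figures are hypothetical illustrations, not real workload data.

TRAINING_GPUS = 10_000   # hypothetical cluster size for one training run
TRAINING_DAYS = 30       # hypothetical length of that run

INFERENCE_GPUS = 2_000   # hypothetical fleet serving a deployed model
INFERENCE_DAYS = 365     # kept running continuously, all year

training_gpu_hours = TRAINING_GPUS * TRAINING_DAYS * 24
inference_gpu_hours = INFERENCE_GPUS * INFERENCE_DAYS * 24

print(f"Training (one burst):  {training_gpu_hours:,} GPU-hours")
print(f"Inference (sustained): {inference_gpu_hours:,} GPU-hours")
```

Under these assumed numbers, a fleet one-fifth the size of the training cluster, kept running year-round, ends up consuming more than double the GPU-hours of the one-off training burst, which is why sustained inference demand drives long-term capacity planning.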
Investment and infrastructure
Building AI data centres is a new process, significantly more complex than earlier cloud infrastructure builds. Facilities require more power, advanced cooling, and high-speed links between servers. Access to specialised GPU chips is another issue, and supply remains tight across the industry.
The supply of high-performance chips remains limited, and new data centres take longer to build than traditional infrastructure projects. Power availability has also become a concern. These issues can slow how quickly cloud providers expand AI capacity when demand rises.
Enterprise cloud strategy changes
Instead of choosing a provider based on cost or location, companies are paying closer attention to compute capacity and the type of chips on offer. Access to AI infrastructure is now a factor in vendor selection.
Cloud providers may prioritise customers who commit to larger, multi-year deals. Such agreements can help providers plan future capacity, but customers may face new issues around flexibility and lock-in.
Putting AI systems into full production requires stable infrastructure and integration with existing systems, an area where cloud providers may see sustained growth.
Jassy’s forecast offers a view into how one of the largest providers sees the next decade. It suggests that cloud growth will come not from more companies moving online, but from deeper use of cloud within enterprises. If AI systems become part of everyday operations, they will require more resources than earlier generations of applications.
(Photo by Igor Omilaev)
See also: AI demand pushes companies to invest billions in cloud infrastructure


