AWS CEO estimates large city scale power consumption of future AI model training tasks — ‘an individual model may require somewhere between one to 5GW of power’

by Pelican Press


Amazon Web Services CEO Matt Garman estimates that large language model (LLM) training two to three generations from now will require as much power as a large city. According to a quote from a WSJ interview, posted on X (formerly Twitter) by The Transcript, Garman believes a single model might need anywhere between one and five gigawatts to complete training, and that AWS is investing in power projects to help meet these growing demands.

For comparison, consider the history of Meta's Llama LLMs: the first model launched in February 2023, followed by Llama 2 in July of the same year and Llama 3 in mid-April 2024. If other LLMs follow this schedule, we could see new models every seven months, on average. LLM pioneer OpenAI, on the other hand, launched GPT-3 in June 2020 and GPT-4 in March 2023. Although it also shipped GPT-3.5 in 2022, that was more a refinement of GPT-3 than a new generation, so OpenAI took nearly three years to deliver a next-generation model.
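As a quick check on that seven-month figure, here is a minimal Python sketch that averages the gaps between the Llama launch dates cited above (the exact day-of-month values are the publicly reported launch dates):

from datetime import date

# Publicly reported Llama launch dates cited above.
releases = [
    ("Llama",   date(2023, 2, 24)),
    ("Llama 2", date(2023, 7, 18)),
    ("Llama 3", date(2024, 4, 18)),
]

# Whole-month gaps between consecutive launches: 5 and 9 months.
dates = [d for _, d in releases]
gaps = [(b.year - a.year) * 12 + (b.month - a.month)
        for a, b in zip(dates, dates[1:])]

print(sum(gaps) / len(gaps))  # 7.0 -> one new generation every ~7 months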

From these timelines, a typical new LLM generation takes roughly one to two years to train at current hardware levels. And while AI companies are throwing ever more GPUs at training, the models themselves, like Llama 4, are also far more complex, requiring clusters of more than 100,000 Nvidia H100 GPUs. OpenAI is likewise delaying its GPT-5 model to 2025 due to limits on available computing power. On that trajectory, we could hit the five-gigawatt power requirement in about five years.
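To make that extrapolation concrete, here is a hedged back-of-envelope sketch. The ~700 W figure is the published board power of an H100 SXM GPU; the facility overhead (PUE) and the three-fold power growth per generation are illustrative assumptions of ours, not numbers from the interview:

# Back-of-envelope extrapolation under stated assumptions.
H100_WATTS = 700       # published H100 SXM board power
GPU_COUNT = 100_000    # cluster size cited above for Llama-4-class training
PUE = 1.4              # assumed facility overhead (cooling, networking, etc.)
GROWTH_PER_GEN = 3.0   # assumed power growth per model generation

power_gw = GPU_COUNT * H100_WATTS * PUE / 1e9  # ~0.1 GW for today's clusters
generation = 0
while power_gw < 5.0:  # Garman's upper estimate
    generation += 1
    power_gw *= GROWTH_PER_GEN
    print(f"generation +{generation}: ~{power_gw:.2f} GW")

Under these assumptions, training crosses into Garman's one-to-five-gigawatt band two to three generations out and exceeds it by the fourth, which at one to two years per generation lands in roughly the five-year window suggested above.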

This gives tech giants like OpenAI, Microsoft, Amazon, Google, and even Oracle some time to ramp up energy production. Garman said that AWS is “funding more than 500 projects, bringing new power onto the grid from renewable sources.” That matters for data centers because renewable capacity takes time to deploy, unlike traditional power sources such as coal and natural gas, which carry substantial greenhouse gas emissions. Energy demand is already a major problem in the race for AI supremacy: Google has fallen behind its climate targets, with emissions up 48% from 2019, largely due to data center power draw. A former Google CEO even suggested dropping climate goals altogether, letting AI run full tilt now so it can solve the climate crisis later.

Nevertheless, these AI giants recognize the strain on the energy supply network (or the lack of one). That’s why, aside from investing in renewables for the medium term, several of them have also put money into nuclear power. Microsoft has already signed a deal to restart the Three Mile Island reactor for its data center needs, while both Google and Oracle plan to build their own small nuclear reactors. Even Westinghouse, a legacy player in the traditional nuclear power plant industry, is working on easily deployable microreactors to power next-gen AI data centers.

Power constraints are now the limiting factor on AI development, especially since the infrastructure these AI data centers need (power plants, transmission lines, transformers, and so on) takes a long time to deploy. And while AI companies could use portable generators, as Musk does at the Memphis Supercluster, and other non-renewable power sources to bridge the gap, that isn’t sustainable in the long run. So our only hope for continued AI development is that these alternative and renewable power sources come online sooner rather than later.



