For seventy years, weather forecasting has been dominated by numerical weather prediction — a computationally intensive approach that divides the atmosphere into a three-dimensional grid and solves the equations of fluid dynamics at each grid point, stepping forward in time to simulate how the atmosphere will evolve. The approach works, but it requires enormous computational resources, and its accuracy degrades rapidly beyond about ten days because small errors in the initial conditions amplify exponentially.
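That exponential amplification of initial-condition errors can be seen in miniature with the Lorenz-63 system, a three-variable toy model of atmospheric convection that is a standard stand-in for chaotic dynamics (it is not part of any operational forecast model; the step size and perturbation below are arbitrary illustrative choices):

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# Two trajectories whose initial conditions differ by one part in a million.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])

for step in range(2001):
    if step % 500 == 0:
        print(f"step {step:5d}  separation {np.linalg.norm(a - b):.2e}")
    a, b = lorenz_step(a), lorenz_step(b)
```

Running this shows the separation between the two trajectories growing by orders of magnitude until the forecasts are effectively unrelated — the same mechanism that caps the useful range of a weather forecast regardless of how the forecast is computed.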
A new class of AI-powered weather models is challenging this paradigm. GraphCast, developed by Google DeepMind, and Pangu-Weather, developed by Huawei, have both demonstrated the ability to generate 10-day forecasts that match or exceed the accuracy of those produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) — widely considered the gold standard of global weather prediction — at computational costs that are orders of magnitude lower. A forecast that takes the ECMWF's supercomputer cluster hours to generate can be produced by GraphCast in under a minute on a single GPU.
The implications of this cost reduction are significant. Weather forecasting has historically been the province of national meteorological agencies with access to large supercomputer installations. If AI models can produce comparable forecasts on commodity hardware, the barriers to entry drop dramatically. Smaller countries, research institutions, and private companies can now run their own high-quality forecast models — a democratization of capability that could have substantial humanitarian benefits in regions where weather forecasting has historically been limited.
"We are not replacing physics-based models — we are complementing them. The AI models are trained on the output of physics-based models, so they are learning the physics implicitly. But they can generalize in ways that the physics models cannot, and they can do it much faster."
— Lead researcher, Google DeepMind Weather Team
The more ambitious application is climate projection — using AI models to simulate how the climate will evolve over decades and centuries under different emissions scenarios. Traditional climate models require months of supercomputer time to generate century-scale projections. AI approaches could potentially compress this to hours, enabling a much larger ensemble of simulations and a much more thorough exploration of uncertainty.
This is where the scientific community is more cautious. Weather forecasting is a well-defined problem with abundant training data and clear evaluation metrics. Climate projection is fundamentally different: the AI models would need to extrapolate to conditions that have never existed in the historical record, and errors in climate projections play out over decades and can steer trillions of dollars of policy decisions. The concern is not that AI climate models are wrong — it is that we do not yet have good ways to know when they are wrong.
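The extrapolation problem is generic to learned models, not specific to any weather architecture. A deliberately simple illustration — a polynomial fitted to sine samples, standing in for any flexible model trained on historical data — shows how a fit that is excellent inside the training range can fail badly just outside it:

```python
import numpy as np

# Fit a flexible model (a degree-9 polynomial, standing in for any
# learned emulator) to samples of sin(x) on the "historical record"
# [0, 2*pi].
x_train = np.linspace(0, 2 * np.pi, 50)
y_train = np.sin(x_train)
coeffs = np.polyfit(x_train, y_train, deg=9)

x_in = np.pi / 3       # inside the training range
x_out = 4 * np.pi      # a "condition that has never existed" in training

in_err = abs(np.polyval(coeffs, x_in) - np.sin(x_in))
out_err = abs(np.polyval(coeffs, x_out) - np.sin(x_out))
print(f"in-range error:     {in_err:.2e}")
print(f"out-of-range error: {out_err:.2e}")
```

Inside the training interval the error is tiny; outside it, the polynomial diverges wildly from the true function — and nothing in the training metrics would have warned of this. That is the shape of the worry about climate-scale extrapolation.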
Several research groups are working on hybrid approaches that combine the physical interpretability of traditional models with the computational efficiency of AI. The idea is to use AI to accelerate the most computationally expensive parts of the simulation — typically the parameterization of sub-grid-scale processes like cloud formation and ocean mixing — while retaining the physics-based framework that provides interpretability and physical consistency.
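The hybrid idea can be sketched in a few lines. In this minimal, hypothetical example — not any published model — a physics-based finite-difference core advances the resolved state, while a stand-in for a trained surrogate (here just a linear model with made-up weights `w`, where a real system would call a neural network) supplies the sub-grid tendency at each step:

```python
import numpy as np

def physics_tendency(T, dx=1.0, kappa=0.1):
    """Resolved physics: diffusion on a periodic 1-D grid,
    via a standard second-order finite difference."""
    return kappa * (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2

def learned_subgrid_tendency(T, w):
    """Stand-in for a trained surrogate that predicts the sub-grid
    tendency from the local resolved state. Here it is a linear model
    with hypothetical weights; in a real hybrid model this call would
    be the cheap learned replacement for an expensive parameterization."""
    features = np.stack([T, T**2], axis=-1)
    return features @ w

# Hypothetical weights, as if fitted offline against a
# high-resolution reference simulation.
w = np.array([-0.01, 0.001])

T = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))  # initial state
dt = 0.1
for _ in range(100):
    # Physics core plus learned correction, stepped forward together.
    T = T + dt * (physics_tendency(T) + learned_subgrid_tendency(T, w))

print(f"max |T| after 100 steps: {np.abs(T).max():.3f}")
```

The design point is that the physics step remains explicit and inspectable — conservation properties and stability can still be analyzed — while only the poorly resolved processes are delegated to the learned component.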
The practical stakes are high. Climate projections inform infrastructure investment decisions, agricultural planning, insurance pricing, and international climate negotiations. If AI can make those projections faster, cheaper, and more accurate, the benefits are enormous. If it introduces systematic biases that are not caught before they influence major decisions, the consequences could be severe. Getting this right matters, and the scientific community is taking the challenge seriously.