Illustration by Fanny Luor

How AI is helping these scientists bring some order to the chaos of their work

Published on October 16, 2025

On a cold, overcast morning in December 1972, Ed Lorenz, a meteorologist from MIT, delivered a talk at the Sheraton Park Hotel in Washington, D.C., that posed an unusual, possibly heterodox question: Does the flap of a butterfly's wings in Brazil set off a tornado in Texas?

When attempting to predict weather using early computers in his lab, Lorenz found that even infinitesimally small changes to otherwise identical models of weather patterns could produce dramatically different outcomes. It was a relatively simple premise with profound implications for our deterministic understanding of the universe.
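Lorenz's finding is easy to reproduce. The sketch below integrates his well-known 1963 system with a simple Euler scheme (the step size, run length, and size of the perturbation are arbitrary choices for illustration) and shows that two starting points differing by one part in a billion end up in completely different places:

```python
# Lorenz's 1963 system: a tiny change in the starting point leads to
# wildly different trajectories (sensitive dependence on initial
# conditions). Simple Euler integration, purely for illustration.

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def trajectory(state, steps):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

a = trajectory((1.0, 1.0, 1.0), 10_000)
b = trajectory((1.0, 1.0, 1.0 + 1e-9), 10_000)  # perturbed by one billionth

# After enough time, the two runs have diverged completely.
diff = max(abs(p - q) for p, q in zip(a, b))
print(diff)
```

Run long enough, the two trajectories become as different from each other as two random states of the system, which is exactly why small measurement errors doom long-range forecasts.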

Lorenz, in his typical candor, admitted that he didn’t have an immediate answer to the question he posed. But that wasn’t his goal in asking. Among other factors, he said, “We do not know exactly how many butterflies there are, nor where they are all located, let alone which ones are flapping their wings at any instant.”

What we can do, Lorenz suggested, is increase the volume and precision of our observations and boost our capacity to analyze them with computers. That, of course, was more than 50 years ago. With the rise of AI, today’s tools offer researchers and other knowledge workers exciting new opportunities to expand their ability to search through massive data sets and organize information in ways previously unimaginable. These tools are augmenting the way that researchers unlock key information about future outcomes.

The butterfly question, and the research it led to, revolutionized predictive modeling, changed the way we think about causality across scientific disciplines, and made Lorenz the father of chaos theory, the branch of mathematics and science that studies systems that appear random and unpredictable but are actually governed by underlying patterns. (The idea permeated deeply enough into the collective consciousness to inspire the 2004 Ashton Kutcher vehicle, The Butterfly Effect, which currently holds a perhaps unfair 34% on the Tomatometer.)

Chaos is the reason we can’t predict the weather more than about two weeks ahead. It’s also why, as the famous statistician George E. P. Box put it, “All models are wrong.” For those trying to see into the future, like scientists modeling the impacts of climate change, it’s the immovable obstacle that all of their work ultimately crashes against.

AI can’t change that. But a rising cohort of researchers is using new generative models to build on, or supplant, the classic physics-based models to try to see around the chaos obstacle. Building forecasting models used to require teams of scientists to hand-code equations, resulting in codebases tens of thousands of pages long and simulations that could take weeks to run on supercomputers the size of tennis courts. As the models added layers of complexity, their accuracy improved, but they also became even more labor- and energy-intensive. That’s something AI actually can help with. In an interview with Nature, climate scientist Tapio Schneider said it has taken the drudgery out of his work: “Machine learning makes this science a lot more fun… It’s vastly faster, more satisfying, and you can get better solutions.”

What does a generative AI model do? It makes predictions about what the future will look like based on a present set of observations. “The future,” in this case, could be the next word in a ratatouille recipe, the next pixel in a deepfake video of Tom Cruise tripping in a menswear shop, or it could be the next state of a weather system, given information about the recent past.
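Whatever the domain, the underlying loop is the same autoregressive pattern: predict the next state from the history, append it, and repeat. A schematic sketch, where the `predict_next` function is a toy stand-in for a trained model, not any real system:

```python
# Schematic of autoregressive generation: whether the states are words,
# pixels, or weather fields, the loop is identical. `predict_next` is a
# toy stand-in; a real model would be a trained neural network.

def predict_next(history):
    # Toy "model": assume the next value continues the recent trend.
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def roll_forward(observations, steps):
    history = list(observations)
    for _ in range(steps):
        history.append(predict_next(history))  # feed predictions back in
    return history

# Given observed states [1, 2, 3], forecast three steps ahead.
print(roll_forward([1, 2, 3], 3))  # [1, 2, 3, 4, 5, 6]
```

The feedback step is the defining feature: each prediction becomes part of the history used for the next one, which is also why errors, like chaos itself, compound over longer horizons.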

Aditya Grover is a researcher at UCLA, where he applies generative AI to scientific discovery. His work on ClimaX, the first AI foundation model for weather and climate, landed him a spot on the Forbes “30 Under 30” list. Last year, he won a half-million dollar grant from the National Science Foundation to pursue this work further. He’s also on the more controversial side of an ongoing debate between those who build existing knowledge of physics into their climate and weather models and those who prefer to avoid encumbering the AI with human assumptions. Grover is something of an AI purist. He prefers to rely on the power of data and massive compute.

“There is a big movement which thinks that physics should come in everywhere in the whole pipeline—not just in the data, but also how you train these models and how you evaluate them,” Grover said. “I would argue that we should be pretty careful about putting in too much physics.” 

Grover points to a short essay from Rich Sutton, winner of the Turing Award and godfather of reinforcement learning, entitled “The Bitter Lesson.” Sutton argues that time and again, when scientists inject their own domain expertise into models, it yields better outcomes in the short term. But in the long run, search algorithms and learning with sufficiently massive computation ultimately win out. 

The best example may be the computer that in 1997 beat Garry Kasparov, the world champion in chess. Early computer-chess researchers attempted to build a human understanding of chess into their programs. But the computer that ultimately won was based on massive, deep search alone. The “human-knowledge-based chess researchers were not good losers,” Sutton wrote. They argued that “brute force” may have won this time, but that didn’t mean that relying on compute was a general strategy. This pattern—the bitter lesson that raw computer power is, in fact, better than attempting to encode human expertise—was borne out again with Go, speech recognition, and computer vision. 

Rather than have humans tell computers how they should think about a problem, researchers like Grover think we’ll get much better—and faster—results if we let computers figure things out for themselves. Which means the humans can spend more of their time and energy on the parts of this work that can have the most impact.

“That's also one of those highly underappreciated daily conveniences of a scientist: that you can do more explorations and iterations if you have a really fast simulator,” Grover said. “That means we have to increase the scale of challenges we are addressing. The goalposts are shifting, but in the right direction. You're becoming more and more ambitious about the scale of problems you're trying to solve.”

As for whether his own climate predictions will be more accurate, Grover is modest enough to admit that he doesn’t know yet.

“If I make a forecast about 2035 today, we have to wait 10 years to tell whether I was right.”

Not all scientists can afford to be so cavalier. Climate forecasting is essential, but it’s also contained safely in the realm of the theoretical. It helps determine policy and promote action, but the stakes are more abstract. Not so with weather forecasting. As the recent floods in Texas made painfully clear, modeling rainfall and flood patterns with high accuracy is an urgent societal necessity. 

Auroop Ganguly knows exactly where the rubber meets the road when it comes to weather forecasting. He’s a hydrologist and civil engineer at Northeastern who’s also the director of the Institute for Experiential AI, which focuses on applying AI to high-impact challenges while keeping the human in the loop. And he’s spent a combined 12 years at the Oak Ridge and Pacific Northwest National Labs. 

“AI is great. But we have to put it to rigorous tests,” Ganguly said. “With weather there’s accountability. Real people are using the outputs.”

The issue is explainability. “Can we interpret it well? If it’s working well, no problem. But if something fails, and you cannot interpret it, that’s a big problem for stakeholders and decision makers.”

Ganguly’s recent work focuses on nowcasting, the real-time estimation of what’s happening in a weather event. In an article published in Nature last year, his team used a hybrid physics and AI model to outperform the traditional numerical weather prediction models for precipitation. This can be used to improve emergency response and hydropower management.

“If you’re looking at the flash floods that happened in Texas recently, we are interested in the short-term forecasting of rainfall,” Ganguly said. “So much rainfall happens in an urban area that it can’t infiltrate the soil.”

Ganguly is optimistic about the potential for AI to improve these predictions. Not only that, researchers can also lean on AI to speed up tedious modeling tasks so they can focus on deeper questions, thereby enhancing—rather than replacing—complex knowledge work. Ganguly just doesn’t want to unnecessarily disregard the short-term accuracy that physics can provide. 

“Physics is F = ma, right? That’s Newton’s laws of motion. It will be very hard for me to be convinced that we don’t need that physics. But nothing in the real world works according to F = ma. In the real world, here on Earth, you have friction.”

For those of us who weren’t paying attention in physics class, he’s referring to the classic challenge of modeling: the neat equations on paper rarely match the messy, friction-filled reality we actually live in.

Claire Monteleoni has spent her entire career trying to get AI researchers and climate scientists to work together. She’s a professor at the University of Colorado Boulder, currently on leave at INRIA in Paris. (Carbon-heavy “dirty” grids aren’t a problem in France, which is powered mostly by nuclear energy.) Her work focuses on the granularity of weather and climate modeling, which has traditionally been very coarse.

“What is the problem with the climate models?” Monteleoni asks. “Currently, physics-based models used to predict very long into the future can only do it at rough scales, spatially. Like all of France might be represented in one number.” Meaning, an entire country is represented as a single data point, without capturing regional differences.

This presents a problem when the French utility asks where to put wind turbines in the future. Wind patterns are changing, and if you put the turbines in the wrong place, the cost-benefit no longer pencils out. That’s where AI comes in. “We created a general method called ClimAlign that can basically use generative AI to not just go from a coarse resolution to a fine resolution, but give you a probability distribution over possible fine resolution instantiations.”
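The idea of producing a distribution rather than a single answer can be sketched in a few lines. This toy example is an illustration of probabilistic downscaling in general, not the ClimAlign method itself: it samples several plausible fine-grid fields that all average back to the same coarse observation.

```python
import random

# Toy sketch of probabilistic downscaling: from one coarse-grid value
# (say, average rainfall over all of France), sample several plausible
# fine-grid fields instead of a single deterministic one. This is an
# illustration of the idea, not the ClimAlign method.

random.seed(1)

def sample_fine_field(coarse_value, cells=4):
    # Each sample redistributes the coarse value across fine cells with
    # random spatial variation, while preserving the coarse-grid mean.
    noise = [random.gauss(0.0, 1.0) for _ in range(cells)]
    mean_noise = sum(noise) / cells
    return [coarse_value + n - mean_noise for n in noise]

# Three plausible fine-resolution "instantiations" of the same coarse
# observation: a probability distribution, not a single answer.
samples = [sample_fine_field(10.0) for _ in range(3)]
for field in samples:
    print([round(v, 2) for v in field])
```

A real generative downscaler learns the spatial structure of that variation from data; the point here is only that one coarse number is compatible with many different fine-grained realities.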

Much of the pushback against AI right now centers on the outsized electricity demand created by the proliferation of data centers. Adding more of them, particularly in regions with dirty grids, typically means greater greenhouse gas emissions and increased water use. In a surprising twist to that narrative, Monteleoni’s new AI models use much less compute, and therefore less electricity, than the traditional physics-based models that run on supercomputers the size of a tennis court.

What’s more, the hallucinations that make chatbots so problematic in many applications can actually be useful in the context of climate predictions. In a changing climate, you don’t just want to know what the average increase in temperature is. You also want to know about extreme events—damage from wildfires and floods—that exist outside of the average case. 

“So probabilistic or generative AI is one of the only ways that we can really reliably make a lot of different scenarios and then recover some of those really rare case scenarios that we actually urgently need to study for climate change.”
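The point is easy to demonstrate with a toy ensemble. The Gaussian “anomaly” below is a stand-in for a real model’s output, but it shows why many samples are needed: the average tells you almost nothing about the rare extremes in the tail.

```python
import random

# Toy illustration of why ensembles matter: the mean describes the
# average case, but only a large number of samples reveals the tail.
# The Gaussian "anomaly" here is a stand-in, not a real climate model.

random.seed(0)
ensemble = [random.gauss(0.0, 1.0) for _ in range(100_000)]

mean = sum(ensemble) / len(ensemble)
# The 99th percentile: the "1-in-100" extreme scenario.
worst_1_percent = sorted(ensemble)[int(0.99 * len(ensemble))]

print(round(mean, 3))             # near 0: the unremarkable average
print(round(worst_1_percent, 3))  # roughly 2.3: the rare extreme
```

For planners worried about floods and wildfires, it is that second number, recoverable only by generating many scenarios, that matters most.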

This might be the surprising part of it all. Generative AI can finally help us make sense of mountains of data and move past the blunt tools of the ’70s—opening the door to insights and innovations across modeling, climate research, and other industries with a deep interest in predicting the future.

As Monteleoni puts it, “there’s a science reason for using generative AI.”