Like many other meteorologists around the U.S. Gulf Coast on the morning of August 26, 2005, Alan Gerard was monitoring the latest computer model forecasts for Hurricane Katrina—which had just emerged over the Gulf of Mexico after striking South Florida as a Category 1 storm. Gerard, then meteorologist in charge at the National Weather Service’s (NWS’s) office in Jackson, Miss., saw that the newest projections indicated that Katrina would track farther south than previous model runs had predicted. “It was a big change,” he says—and a concerning one because it meant that the storm would have more time over warm water to strengthen and that Katrina’s path had shifted westward, toward Mississippi.
With the weekend fast approaching and several hours to go before the official forecast would be updated, Gerard quickly e-mailed Mississippi’s emergency management agency to warn that the state was facing a worse hit and needed to start preparing right away.
Just three days later, on August 29, Katrina rammed into the coast at the Louisiana-Mississippi border with a 20-mile-long wall of storm surge estimated at 24 to 28 feet high. (The exact heights that the surge reached aren’t known because most of the gauges, buildings and other structures that would provide evidence of a high-water mark were obliterated.) In the subsequent hours, the levees around New Orleans failed, releasing torrents of water into the city and making Katrina the deadliest storm to hit the U.S. in nearly 80 years.
Despite the disaster that unfolded because of human mistakes, Katrina had been a well-predicted hurricane; its forecast errors were smaller than the averages of that era. But Katrina, along with the rest of the blockbuster 2004 and 2005 hurricane seasons, helped spark a dedicated, government-funded effort to make hurricane forecasts even better. Over the past 20 years, that project has nearly halved the error in predictions of where a storm will go and has given communities an extra 12 hours of warning time. By one estimate, these and other improvements have saved the nation up to $5 billion for each hurricane that has hit the U.S. since 2007—3.5 times as much as the NWS’s budget for 2024. The resounding success is an example of “how this can all work when it’s done right,” Gerard says.
But that success, he and other hurricane experts warn, is under threat as the Trump administration is chopping away parts of the research staff and infrastructure that made such remarkable, lifesaving progress possible.
How Hurricane Forecasts Have Improved
When Frank Marks began forecasting hurricanes in the 1980s, forecasters could only roughly predict the track a storm would take. “Intensity was a wing and a prayer,” he says. Back then a storm similar to Hurricane Erin, which paralleled the East Coast in mid-August 2025, would likely have prompted meteorologists to warn the entire coast of a possible hurricane hit because of the inherent uncertainty in forecasts. But this year forecasters were able to tell that Erin would stay well out to sea; they issued warnings only for rip currents, heavy surf and some storm surge in coastal areas. “To me, that is astounding, to see that evolution,” says Marks, who became director of the National Oceanic and Atmospheric Administration’s Hurricane Research Division in 2002 and is now retired.
By the time Katrina formed near the Bahamas on Aug. 23, 2005, increased computing power, a better understanding of the physics of hurricanes and more detailed observations of storms had substantially improved forecasts. But after the Gulf was battered by storms throughout 2004 and 2005, Vice Admiral Conrad Lautenbacher, then administrator of NOAA, thought there was still plenty of room for improvement, Marks says.
“If you eliminate all of that research, you’re basically creating a stagnant weather service and a stagnant weather community in general.” —Alan Gerard, former National Weather Service meteorologist
What grew out of that initial request was a fairly revolutionary effort that was eventually dubbed the Hurricane Forecast Improvement Project (HFIP). (The full name was subsequently changed to the Hurricane Forecast Improvement Program.) Its first step was to ask forecasters what problems they faced—and to bring together NOAA’s hurricane researchers and modelers, as well as academic scientists, to solve those issues.
HFIP’s teams deliberately worked to fine-tune models that could better capture the intricate physics of the atmosphere, such as how energy is exchanged between the ocean and the atmosphere or how certain kinds of clouds reflect sunlight back to space. Gradually, what the models showed more closely matched what meteorologists actually observed, says Marks, who served as research lead for HFIP. “Then, all of the sudden, we started to see improvements” in forecasting, he adds. By 2015, track forecasts had improved by 20 percent compared with their accuracy in 2005.
Now, in 2025, track forecast errors have decreased by 40 percent compared with 2005, and intensity forecast errors have declined by 30 percent since that time, says James Franklin, former chief of the National Hurricane Center’s (NHC’s) Hurricane Specialist Unit. And Brian McNoldy, a hurricane researcher at the University of Miami, has examined how the improvement in track forecasting alone would have narrowed Katrina’s “cone of uncertainty,” a graphic that shows the general area where the center of a storm is most likely to travel. Under today’s forecasts, Katrina’s cone would have focused on Mississippi earlier on.
Brian McNoldy (storm path graphics), modified by Amanda Montañez; Source: National Oceanic and Atmospheric Administration (satellite map and data)
Hurricane watches and warnings are now issued 48 hours and 36 hours before the expected impacts, respectively, compared with the 36 and 24 hours of notice in 2005. “You can do a lot of preparation in 12 hours,” Franklin says.
Forecasts of whether and where the seeds of a storm might organize into a tropical storm or hurricane also have longer lead times and are much more precise about the chances of formation than they were two decades ago. And today the NHC issues forecasts for the track and intensity of possible storms “even before they form,” Franklin says. At the time of Katrina, the NHC couldn’t put up warnings until a storm had become at least a tropical depression. “Now we don’t have to wait,” Franklin says.
In 2017, with additional funding from Congress as part of the Weather Research and Forecasting Innovation Act, work began that was focused specifically on improving forecasts of a pernicious phenomenon called rapid intensification. Defined as the strengthening of a storm’s winds by at least 35 miles per hour in 24 hours, rapid intensification can, with little notice, leave those in harm’s way facing a much stronger storm than originally anticipated.
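The rapid-intensification criterion above is a simple threshold test, which can be sketched in a few lines of code. This is an illustrative example only; the function and variable names are hypothetical and not drawn from any NOAA system.

```python
# Minimal sketch of the rapid-intensification criterion described above:
# a storm "rapidly intensifies" when its maximum sustained winds increase
# by at least 35 mph within any 24-hour window. Names are illustrative.

RI_THRESHOLD_MPH = 35
WINDOW_HOURS = 24

def rapidly_intensified(observations):
    """observations: list of (hour, max_wind_mph) tuples, sorted by hour.

    Returns True if any pair of observations no more than 24 hours apart
    shows a wind increase of at least 35 mph.
    """
    for i, (t0, w0) in enumerate(observations):
        for t1, w1 in observations[i + 1:]:
            if t1 - t0 <= WINDOW_HOURS and w1 - w0 >= RI_THRESHOLD_MPH:
                return True
    return False

# Example: winds climb from 75 to 115 mph between hour 6 and hour 24 --
# a 40 mph jump in 18 hours, which meets the criterion.
storm = [(0, 70), (6, 75), (12, 90), (24, 115)]
print(rapidly_intensified(storm))  # True
```

Real operational forecasts, of course, must predict such jumps in advance from model output rather than diagnose them from observations after the fact, which is what makes the problem so hard.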
The work to improve rapid intensification forecasts resulted in the development of the Hurricane Analysis and Forecast System (HAFS) in just three years—an astonishing speed made possible through the development of a robust model testing infrastructure and the nurturing of talent under the HFIP, Marks says. The new system debuted with the 2023 hurricane season, and the NHC has successfully predicted rapid intensification for several storms since then. “That was a dream 20 years ago,” McNoldy says. And though there are still misses, “just to be able to do it some of the time is remarkable,” Franklin says.
How Gains in Forecasting Could Be Lost
Marks, Franklin, Gerard, McNoldy and others are all worried about this progress being lost—and further progress never coming to fruition—because the Trump administration has pushed to slash the federal workforce and drastically cut research funding. In its proposed 2026 budget, the administration wants to completely eliminate NOAA’s Office of Oceanic and Atmospheric Research (OAR). “Most of the HFIP work was done by OAR scientists,” Gerard says. “Essentially, if you eliminate all of that research, you’re basically creating a stagnant weather service and a stagnant weather community in general.”
In its budget negotiations so far, Congress has not followed the administration’s requests to significantly cut OAR, but reporting by Science shows the administration is withholding nearly $100 million of funding for the office that was already allocated by Congress for this year. And hundreds of NWS and NOAA employees were either fired or took a buyout earlier this year as well. Among them were people who had worked on new models such as HAFS. The Hurricane Research Division, which is part of one of the nine OAR labs around the country, now has one third of the staffing it had at the peak of HFIP, Marks says. “This year we’re struggling,” he adds. And further cuts would stymie potential progress toward modeling storm impacts at more detailed scales and being able to issue warnings for events such as tornadoes and flash floods based solely on forecasts (instead of once those threats are observed, as is the current practice). “If you like it the way we forecast now, then that’s what you get,” Marks says. “You’re not going to get much better without research.”

A residential area is engulfed in shipping containers, RVs, and boats washed ashore in Gulfport, Miss., following high winds and waves from Hurricane Katrina.
Paul J. Richards/AFP via Getty Images
Many experienced people have also already taken early retirement as part of buyouts offered by the administration, which has left up-and-coming researchers with fewer people to learn from, Marks says. “You’re going to lose very talented, smart people to other fields,” McNoldy agrees.
Even maintaining the current forecasting quality takes effort, says Kim Wood, an atmospheric scientist at the University of Arizona. Computer model code has bugs, and updates have to be made—for example, to take in new sources of data. Wood likens the situation to owning a car: “Eventually you need to replace tires, replace the oil. You have to maintain the car for it to continue to be usable,” she says. Likewise, “there’s a lot of invisible work that enables what we see on our phones” when we look at a forecast.
Because those forecasts on our phones and TVs are now so ubiquitous and accurate, “it makes people not realize truly what a scientific achievement it is,” Gerard says, “when you stop and think about how complex the atmosphere is and how we have been able to get to a point that we can, with pretty remarkable accuracy, predict what’s going to be happening with your weather five days from now. We’re literally predicting the future. And I think that’s amazing.”