When a natural disaster is on the horizon, a chain of events is often set in motion. Schools might be closed because of a blizzard, or an incoming hurricane might spur officials to order residents to evacuate. Then the disaster hits, and recovery begins.
For the people not affected, it’s easy to move on once this sequence of events has run its course. But there’s a world of people who think about natural disasters every single day. You can find them at events like the Natural Disasters Expo, held this year in Miami Beach, Florida. Attendees heard from a range of private and government experts about disaster preparedness, mitigation, and recovery. They browsed booths showcasing drones, fire extinguishers, water testing devices, and even shipping containers repurposed into emergency response kitchens.
And if they passed booth 1042, they would have seen a posterboard loudly proclaiming, “If you knew…What would you do?”
That’s the motto of Weather 20/20, a company founded by retired broadcast meteorologist Gary Lezak that uses a proprietary methodology it claims is capable of predicting natural disasters (snowstorms, hurricanes, tornadoes—you name it) and everyday weather up to 10 months in advance. According to Lezak, it’s the only company of its kind. The idea is that these predictions can save money and lives by prompting earlier preparations for natural disasters.
In October, he touted Weather 20/20’s prediction of Hurricane Ian. Using a “peer-reviewed methodology and weather prediction technique” called “Lezak’s Recurring Cycle” (LRC), Lezak purported to have identified a weather system back in March that would hit Florida, Cuba, or the Bahamas between late September and early October 2022. In late September of last year, Hurricane Ian hit Cuba, Florida, and South Carolina, killing dozens of people and knocking out power for millions.
The Expo was also something of a victory lap for Weather 20/20. Just days earlier, the company announced a new partnership with Baron Weather, a company that provides storm-tracking tools and weather predictions for hundreds of TV stations, satellite radio, and U.S. and international government agencies.
“Weather 20/20 has a successful track record for providing long-range guidance with respect to the geographic region and timing of significant weather risks,” Baron Weather CEO and president Bob Dreisewerd said in the press release announcing the partnership. “Our business clients have a need for guidance beyond the typical 7-14 day forecast period and will find the forecast insights by Weather 20/20 very valuable in terms of long-range planning, risk mitigation, and decision support.” A representative from Baron Weather declined an interview request for this story.
But meteorologists and climate experts who spoke to The Daily Beast take issue with the company’s premise. According to them, weather prediction like the kind Lezak advertises is simply not possible with today’s models.
“I don't fault the guy for selling it to people, and I don't fault people for believing in it either, because it has shown some efficacy,” Nick Lilja, an atmospheric scientist and the managing meteorologist for NickelBlock Forecasting, told The Daily Beast. “But in any of the conferences that I've seen him give a talk, he has never shown where it's failed. And I can tell you... it fails.” In the winter of 2011, for instance, the LRC projected that Milwaukee would receive between 55 and 65 inches of snow; the city received only about 30 inches that winter. (When asked about this example, Lezak countered that the LRC had come a long way since 2011, and that he believed the story tearing down the LRC stemmed from rival meteorologists competing for the same Milwaukee market and looking for an edge.)
It’s a problem that recalls the beginning of the famous George Box aphorism: All models are wrong.
Chaotic Weather
Weather 20/20 is Lezak’s second act. For 38 years, he was a meteorologist in Oklahoma and Missouri, retiring as chief meteorologist at local TV station KSHB at the end of 2022. He came up with his hunch about long-range weather prediction near the start of his career, after observing a series of snowstorms in Oklahoma during the 1987 winter season.
“There was a one-foot snowstorm in December, and then a few weeks later, this place that averages eight inches of snow a winter had a second one-foot snowstorm,” Lezak told The Daily Beast. “That’s when I noticed, ‘Wait a second. It looks like the pattern is similar.’”
Over the next 15 years, he developed this hunch into a theory that eventually became the LRC. According to Lezak, every year around October, a cyclical pattern “sets up.” This cycle could be anywhere from 35 to 77 days and acts as a template for the following months’ weather. For example, according to the LRC, if a storm hits Chicago on Oct. 20 and the cycle length is determined to be 61 days, another one will hit around Dec. 20. And so on and so forth, until the cycle resets itself the next fall.
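That projection step is simple date arithmetic. Here is a minimal, purely illustrative sketch using the Chicago example; the project_recurrences helper is hypothetical, not Weather 20/20’s proprietary software:

```python
from datetime import date, timedelta

def project_recurrences(first_event: date, cycle_days: int, reset: date) -> list[date]:
    """List the dates on which an initial weather event would repeat,
    given a fixed cycle length, until the pattern resets in the fall."""
    projected = []
    next_event = first_event + timedelta(days=cycle_days)
    while next_event < reset:
        projected.append(next_event)
        next_event += timedelta(days=cycle_days)
    return projected

# The article's example: a storm hits Chicago on Oct. 20 and the cycle is 61 days.
for d in project_recurrences(date(2022, 10, 20), 61, date(2023, 10, 1)):
    print(d)  # 2022-12-20, 2023-02-19, 2023-04-21, 2023-06-21, 2023-08-21
```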
“It'll be the same but different, because December is different from October. The July version will be different from October—it's not gonna snow in July,” Lezak said. “What I'm saying is that the pattern is cycling regularly. There is order in chaos.”
But this theory is at odds with the predominant science behind weather prediction. The foundation of modern weather forecasting lies in an area of physics known as chaos theory. Ed Lorenz, a meteorologist at MIT who is regarded as a founder of chaos theory, discovered a phenomenon called error growth when he entered two numbers differing by less than .001 into a complex computer program—and got wildly divergent results. The phenomenon would come to be known as the butterfly effect: a tiny change in starting conditions producing a huge difference in outcomes.
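Lorenz’s experiment is easy to re-create in miniature. The sketch below is a toy illustration, not the program Lorenz actually ran: it steps his well-known 1963 three-variable convection model forward from two starting points that differ by less than .001, and the trajectories end up nowhere near each other.

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance Lorenz's 1963 three-variable convection model by one
    crude Euler time step, using his standard parameter values."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)     # one starting state
b = (1.0005, 1.0, 1.0)  # the same state, nudged by half of .001

for _ in range(3000):   # run both copies forward identically
    a = lorenz_step(*a)
    b = lorenz_step(*b)

print(a)  # the two trajectories are now completely different,
print(b)  # even though they began less than .001 apart
```

The perturbation here stands in for the tiny measurement error no real-world observation can avoid.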
Lorenz immediately realized that this sensitivity to initial conditions would make long-range weather prediction impossible. David DeWitt, the director of the National Weather Service’s Climate Prediction Center, told The Daily Beast that small errors present in any meteorological observation or prediction balloon over time.
“We just don't have the computing [to eliminate these errors], nor are the physical processes represented perfectly,” DeWitt said.
Lorenz figured out that it is possible to predict the weather only about two weeks into the future—beyond that, error growth renders calculations essentially useless. As for repeating cycles, DeWitt agreed that long-range prediction theories in general, like the LRC or the Old Farmer’s Almanac, are not based in science.
However, Lezak believes that the LRC’s results speak for themselves. One prediction he’s already made this year is that there is a 64 percent chance of a tropical storm or hurricane hitting between Miami Beach and Daytona Beach around the second week of September. Lilja said he’s seen broadcast meteorologists make these kinds of predictions on air, citing the LRC. But when he goes to look at the dates they list, he’s struck by an observation that makes the model seem a lot less clairvoyant: Early September is the peak of hurricane season.
“That’s not a forecast; that’s an inevitability,” Lilja said.
The National Weather Service publishes three-month outlooks for U.S. temperature and precipitation, but these forecasts compute probabilities that a region’s weather will differ significantly from its normal values across a span of three months. That is vague and imprecise, nowhere near the kind of specific, day-by-day predictions made by the LRC. But given what we know about chaos theory, this type of forecast is the most meteorologists can offer months out, Lilja said. Specific long-range forecasting is incredibly difficult, he added.
“The way I look at weather forecasting is you can get really specific for very close in time, but the farther out you go, the more generalized you have to be,” he said. “When you’re talking about what the LRC is predicting, either it’s going to be accurate, or it’s going to be specific. It can’t be both all the time.”
Science vs. Sales
On Weather 20/20’s website and in presentations about the LRC, Lezak repeatedly refers to his methodology as “peer-reviewed.” But the experts interviewed raised concerns about the quality of the study he wrote and the journal that published it. In 2018, he co-authored a paper laying out the LRC in the Journal of Climatology & Weather Forecasting. The journal’s website states that papers are typically processed within 45 days of submission; for $99, however, a “Fast Editorial Execution and Review Process” gets a paper published within 15 days.
Many journals charge submission fees to authors, but a subset of so-called predatory journals maximize profit at the expense of scientific quality. The International Online Medical Council, whose journals include the one that published Lezak’s paper, has been flagged as a potential predatory publisher. Editors of the journal did not respond to a request for comment.
Chris Vagasky, a meteorologist who focuses on lightning, told The Daily Beast that the speed at which Lezak’s article was published—40 days after submission—was atypical for a peer-reviewed journal. The content raised red flags too.
“As someone who has peer-reviewed for reputable journals around the world, the article is one I would have rejected for lack of scientific rigor and poor writing,” Vagasky said. “The science is not repeatable,” he added: A full cycle that sets up and completes between the start of October and the end of November can span at most 61 days, the combined length of those two months, yet one cycle in the paper runs as long as 77 days. “How is this possible?” he asked.
“Certainly in broadcast meteorology and in private sector meteorology, you always want to have something that the other guy doesn’t,” Vagasky added. “But personally, I wouldn’t rely on someone telling me on Nov. 10 not to host a concert on July 30 at Red Rocks because thunderstorms are likely. The science just isn’t there to predict that.”
Some of the LRC’s predictions touted by its creator legitimately seem accurate—but less is known about where the model fails. Until the good, the bad, and the ugly of the model are shared with researchers, people in the field are unable to make a full assessment of it, Lilja said.
“The big hesitation for the scientific community is that he is not releasing or sharing failure points for the community at large to critique,” he said. No model can be correct 100 percent of the time. “We know that it fails somewhere. But the fact that he’s not willing to tell us where it fails is the reason no one wants to ask.”
Lilja recalled once meeting Lezak at a conference and asking him questions along those lines—where his model isn’t perfect, or places for improvement. He recalled that Lezak dodged his questions and spoke instead about instances where the LRC has seemingly predicted severe weather events. (Lezak told The Daily Beast he only vaguely recalled the interaction, and said he did conclude a June 2022 presentation by saying there have been busts.)
“I walked away with a bad taste in my mouth,” Lilja said. “I think he’s a nice guy. But it didn’t feel like a science conversation, it felt like a sales conversation.”
There’s a growing chance that LRC predictions could have an impact outside of the scientific community—and with it comes the potential to do harm, Lilja said. It’s not clear how Baron Weather will incorporate the LRC into its prediction offerings for media, private companies, and government organizations, but DeWitt said that major industries use weather predictions to make important decisions about resource allocation that have far-reaching impacts. A water resource provider, for instance, might use rainfall predictions to allocate their supplies. Not only would a decision based on an inaccurate prediction cost the provider, it could leave people without access to a crucial resource.
Moreover, some of the LRC’s predictions are vague to the point of needlessly worrying people, Lilja said. A prediction of tornadoes in either the South or the Southwest might turn out to be accurate, but it’s not precise, and can do more harm than good.
“Is everyone across eight states supposed to be on guard in seven months?” Lilja said. “If that’s the goal, that’s fine, but that's a big false alarm for a large population of people.”
Lezak said that he doesn’t worry about causing people to panic unnecessarily or take excessive precautions. “If you know a hurricane is likely going to strike an area weeks and months before, maybe some families will plan to get away that week, and you save lives that way,” he said. “There is an advantage in knowing ahead of time.”
He’s not surprised that scientists doubt his work, though, adding that he knows the LRC “makes other scientists a little nervous.” As in weather prediction—and in life, for that matter—only time will tell how the LRC will fare. If Lezak is right, the model could be the biggest development in meteorology since chaos theory. If he’s wrong, though, it’ll be a disaster you can’t blame on nature.
“If it’s real, which it is, it’s a discovery,” he said. “Usually discoverers are dead and long gone before they become famous.”