Observations and models: that's often an uneasy relationship. It's not always easy to find the common ground needed to turn observations into model input, and then to turn models themselves into physically realistic output consistent with those observations.

DOE's Atmospheric Radiation Measurement (ARM) program is trying to pull observing and modeling, which range over vast scales of time and space, tighter together into effective bundles of science. Naturally, it's using an initiative called "LASSO."

Focused on shallow convection (often small, low-level scattered clouds), LASSO, the "Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation" project, centers on the capabilities of DOE's Southern Great Plains observatory in Oklahoma. LASSO is designed to "add value to observations" through a carefully crafted modeling framework that evaluates how well the model "captures reality," write William I. Gustafson and his colleagues in a paper recently published in the Bulletin of the American Meteorological Society (BAMS).

Renderings of cloud water isosurfaces (10−6 kg kg−1), every 2 h, show the diurnal evolution of a cloud field from a simulation forced by the VARANAL large-scale forcing on 30 Aug 2017. Cloud shadows can be seen in the surface downwelling shortwave radiation (colors; W m−2).

 

LASSO bundles data such as observations, LES input and output, and “quick-look” plots of the observations into a library of cases for study by modelers, theoreticians, and observationalists. LASSO includes diagnostics and skill scores of the extensive observations in the bundles and makes them freely available with simplified access for speedy use.

The goal of the data packaging approach is to enable scientists to more easily bridge the gap from smaller scale measurements and processes to the larger scale at which most modeling parameterizations operate.

We asked Gustafson to explain:

BAMS: What would you like readers to learn from this article?

Gustafson: In the atmospheric sciences we work with so many scales that we often get siloed into thinking in very scale-specific ways based on our sub-specialty and the type of research we do. This can happen whether we are modelers trying to wrap our brains around comparing parcel model simulations with global climate models, or as observationalists trying to rationalize differences between point-based surface measurements and big, pixel-based satellite measurements. The LASSO project is one attempt to get past limitations sometimes imposed by certain scales. For example, the DOE ARM program has such a wealth of measurements, and at the same time, DOE is developing a new and improved climate model. LASSO is one way to help marry the two together to add value for researchers working with both sets of data.

How did you become interested in the topic of this article?

My training is as a modeler, and over the years, a lot of my research has looked at issues of scale and how atmospheric models can better deal with unresolved detail—the so-called subgrid information. We know that subgrid information can be critical for properly simulating things like clouds and radiation. Yet, we cannot run global models with sufficient resolution to track this information. So, we need tools like large-eddy simulation to help us make better physics parameterizations for the coarser models used for weather prediction and climate change projections. Marrying the LES more tightly with observations seemed like a great way to help the atmospheric community move forward and make progress improving the models.

What got you initially interested in meteorology or the related field you are in?

I find weather fascinating and awe inspiring, and science has always been one of my interests alongside computers. Coming out of my undergrad years with a physics degree, I knew I wanted to pursue something related to computing, but I did not want to do it for a company for the sole purpose of making money for somebody. Atmospheric modeling seemed like a great way to apply my computer interests in an impactful way that would also be a lot of fun. Not many people get to play on giant supercomputers for a living trying to figure out what makes clouds do what they do. I have never looked back and much of my job I see as a grown-up playground where I get to build with computer bits instead of the sand I used to play with as a kid.

What surprises/surprised you the most about the work you document in this article?

This is not an article filled with "aha" moments. It is the result of years of effort put into developing a new data product that combines input from a large number of people with many different specialties. So I would not say that I came across surprises.

However, I have come to really appreciate the help from so many people in making LASSO happen. We have people collecting input from dozens of instruments that have to be maintained, data that has to be quality controlled, computers that must be administered, the actual modeling and packaging of the observations with the model output, the database and website development that makes the product findable by users, archive support, and communications specialists. All of them have been critical to making LASSO happen.

What was the biggest challenge you encountered while doing this work?

Working with a long-term dataset has been one of our big challenges. We have been trying to put together a standardized data bundle that would make it easy for researchers to compare simulations from different cases spanning years. However, instrumentation changes from year to year, which means we continually have to adapt. Sometimes this presents itself as a new opportunity because of a new capability, such as a new photogrammetric cloud fraction product we are starting to work with. Other times, existing instruments malfunction or are replaced with instruments that do not have the same capabilities, such as a switch from a two-channel to a three-channel microwave radiometer. The latter, in theory, could offer improved results, but in reality, led to years of calibration issues.

What’s next? How will you follow up?

The LASSO activity has been well received and we are excited to be expanding to new weather regimes. During 2020 we have been developing a new LASSO scenario that focuses on deep convection in Argentina. This is really exciting because storms in this area are some of the tallest in the world. It will also be a lot of fun working with LES of deep convection with all its associated cloud motions and detail. We plan to have this new scenario ready for release in 2021.

William Gustafson visited the DOE ARM's Southern Great Plains Central Facility in 2016, near the beginning of the LASSO activity. Seeing the locations of the instruments within their natural environment has really helped put them into context and use them in conjunction with the LASSO LES modeling.


Many of us will not be seeing fireworks this Independence Day, due to coronavirus restrictions and local ordinances. But one way to make up for not seeing festive explosions of color and fire in person this year might be to see what they look like…on weather radar.

Willow fireworks 2.3 s after burst. The three smaller bursts are at earlier stages of development. The one in the upper-right corner is at 270 m above ground, the highest of the bursts in the study.

 

In "Fireworks on Weather Radar and Camera," published recently in the Bulletin of the AMS (BAMS), Dusan Zrnic (National Severe Storms Laboratory) and his colleagues looked at Fourth of July fireworks in Norman, Oklahoma, and Fort Worth, Texas, using reflectivity data and the dual-polarization capability of finer-resolution radar, which could discern the sizes of meteors from the explosions.

The three types of radars were: NSSL’s research (3-cm wavelength) dual-polarization radar, the Terminal Doppler Weather Radar (TDWR) that operates in single polarization at a 5-cm wavelength from the Oklahoma City airport, and NWS Doppler radar in both Norman and Fort Worth. To complement the radar, video was taken of the shows.

In Norman, they found bursts were typically 100 to 200 m above ground, and a few of them spread to 200 m in diameter. Some of the meteors fell at 22 m s-1, about the fall speed of large hail. The Fort Worth fireworks were often much larger, and reflectivity could cover an area about 800 m to more than 2,000 m across, four times as big as in Norman. The peak reflectivity signals in Fort Worth were also greater.

Fields of reflectivity Z (in dBZ), Doppler velocity υr (in m s−1), and Doppler spectrum width συ (in m s−1). The diameter of the white circle is 3.5 km. The data are from the operational WSR-88D over the Dallas–Fort Worth metro area. The arrow points to the patch caused by the fireworks. The patch to the right is caused by reflections off buildings.

 

In polarimetric radar views of the Norman fireworks, the pyrotechnic signals blended with those from living things such as insects, birds, or bats. In the Fort Worth case, the backscatter differential phase and the differential reflectivity were in the range of giant hail.

We asked Dr. Zrnic to help us understand his motivations for this work.

How did you get started in observational studies with weather radar?

I have a degree in electrical engineering and was interested in applying my knowledge of random signals to useful purposes. I received a postdoctoral position at the National Severe Storms Laboratory, where in 1973 they had collected data from a violent tornado in Union City, Oklahoma, to gauge its maximum rotational speed. It was about 15 years ahead of any similar collection elsewhere. Upon my arrival I was given the opportunity to work on determining the Doppler spectra of the tornado. That was how I ended up comparing simulated to observed spectra. We observed a reflectivity maximum at a certain radial distance—a “doughnut” type profile that we posited was caused by drops with size and rotational speed for which the centrifugal and centripetal forces were in equilibrium. The rest is history.

What would you like readers to learn from this article?

Operational, polarimetric radars detect fireworks. Also, by comparing reflectivity at three wavelengths we can roughly estimate the dominant size of “stars” of fireworks.

Was this a surprise?

We expected that the polarimetric variables would detect the bursts, but we were surprised by the high values of reflectivities: 47 dBZ from large metropolitan displays versus 39 dBZ for small municipal fireworks as in Norman. These high reflectivity values can bias rainfall measurements unless they are eliminated from further processing.
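To get a feel for how much bias reflectivities like these could introduce, they can be converted to nominal rain rates with the classic Marshall–Palmer Z–R power law (Z = 200 R^1.6). This textbook relation is an illustration on my part, not a method from the paper; a minimal sketch:

```python
def dbz_to_rain_rate(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to a nominal rain rate (mm/h)
    by inverting the power law Z = a * R**b; a=200, b=1.6 is the
    classic Marshall-Palmer relation for stratiform rain."""
    z = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z / a) ** (1.0 / b)   # solve Z = a * R**b for R

# The 47-dBZ metropolitan display would register like a heavy downpour,
# while the 39-dBZ Norman show would still look like solid rain.
print(round(dbz_to_rain_rate(47.0), 1))  # ~31.6 mm/h
print(round(dbz_to_rain_rate(39.0), 1))  # ~10.0 mm/h
```

An 8-dB difference corresponds to roughly a factor of three in implied rain rate, which is why fireworks echoes must be screened out of precipitation estimates.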

Why study fireworks on radar?

Initially we were trying to identify the onsets and locations of fires and explosions. We found we could do this using historic WSR-88D data, but not very well. Then my co-author Valery Melnikov suggested that fireworks could be a proxy for these events, and this turned out to be true. The obvious advantage is that the exact place and time of a fireworks detonation is known, making it easy to locate a mobile radar in a favorable position to obtain key data.

What else surprised you?

The highest fall speeds of about 22 m s-1 exceeded our expectations. We also did not realize how transient the returns are; a firework can be seen by eye for up to several seconds and after that it turns into ash, which is not detectable by radar.

What was the biggest challenge you encountered?

We were hoping we might be able to observe the dispersion of Doppler velocities in the Doppler spectra, and we collected such data. Unfortunately, we lost these data. Another first for us was learning how to use software for displaying visual images; once we learned, doing the analysis became a matter of time. Also, developing the backscattering model of "stars" required an extensive literature search. There is no information about the refractive index of "stars," so we had to look up their composition and estimate values for mixtures of three ingredients. The good thing is that the results are not very sensitive to the range of possible values.

Fireworks on radar may be quieter, but the paper shows that, on polarimetric displays, they're just as colorful. When your local fireworks shows finally return, the authors advise, "using smart phones, the public can observe radar images and the real thing at the same time."


An automatic weather station (AWS) being installed on Everest's South Col at 7,945 m (~26,066 ft). Note the tents of Camp IV in the background and the exposed glacier ice visible behind. [Photo credit: Baker Perry / National Geographic.]

 

Despite freezing temperatures, snow is melting on Mount Everest. That’s just one finding in a recent study of weather data provided by a new network of five automated weather stations on Earth’s tallest mountain. The network includes two of the highest altitude weather stations on Earth, Balcony Station at 8,430 m (~27,658 ft) and South Col at 7,945 m (~26,066 ft), and offers “an unrivaled natural platform for measuring ongoing climate change across the full elevation range of Asia’s water towers,” Tom Matthews and his colleagues write in their new article published as an Early Online Release in the Bulletin of the American Meteorological Society.

Photos of the automatic weather stations installed during the 2019 Everest Expedition. Note the shovel handles used to mount the wind speed sensors on the Balcony weather station (upper right).

 

The snowmelt is attributed to extreme insolation at the high altitudes of the Himalaya, which enables "considerable" melt up to Camp II at an altitude of 6,464 m (~21,207 ft) "despite freezing air temperatures," the study reports. Modeling with the data the five stations are providing shows that melting occurs at South Col even with average air temperatures of -10°C, which means melting may be common at the tops of all but a small portion of Himalayan peaks, and that it is likely happening even at Everest's summit, Matthews and his team report.

Uncertainties in the extrapolation are considerable, but we cannot rule out that limited melting during the monsoon may be occurring at the summit.
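A back-of-envelope calculation shows why sunshine can melt snow in subfreezing air: if absorbed solar radiation leaves a net energy surplus at the snow surface, that surplus goes into melting ice regardless of the air temperature. The 100 W m-2 surplus and six-hour window below are illustrative assumptions of mine, not values from the Matthews et al. study:

```python
LATENT_HEAT_FUSION = 3.34e5   # J per kg of ice melted

def melt_water_mm(net_surplus_w_m2, hours):
    """Meltwater produced by a net surface energy surplus, in mm of
    water equivalent (1 kg/m^2 of meltwater = 1 mm w.e.)."""
    energy_j_m2 = net_surplus_w_m2 * hours * 3600.0  # W/m^2 * s = J/m^2
    return energy_j_m2 / LATENT_HEAT_FUSION

# A hypothetical 100 W/m^2 surplus sustained for 6 sunny hours:
print(round(melt_water_mm(100.0, 6.0), 1))  # ~6.5 mm w.e.
```

Several millimeters of melt per clear day is small but, integrated over a monsoon season, far from negligible.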

The authors note that while snow melting at the peak of the world’s tallest mountain may be “symbolic” as Earth continues to warm, sublimation of the snowpack appears to be a far greater contributor to its loss at such high altitudes. This finding has implications for the amount of snow that actually falls at extreme altitudes:

The amount of mass potentially lost by sublimation on the upper slopes of Everest, coupled with the presence of permanent snow cover over much of this terrain, raises the interesting prospect that snowfall at such altitudes in the Himalaya may be more substantial than previously thought. For example, the modeled sublimation of 128 mm at the South Col (in five months) is almost eight times greater than the predicted annual precipitation at such altitude. Windblown snow from lower elevations may account for much of the discrepancy, but the winds are also known to deflate the snow on Everest, sometimes to spectacular effect. Future work is clearly needed to rule out the possibility of a much more vigorous hydrological cycle at these extreme elevations.

Matthews and his coauthors conclude that the data the five AWSs have collected so far offer “rich opportunities” to adjust and improve mountain weather forecasting and melt modeling.


Uncontrollable sources of ozone from stratospheric intrusions, wildfires, and intercontinental transport are complicating efforts in California to further reduce this pollutant, which is particularly harmful to our health.

Scientists measured daily fluctuations in ozone in the air across Northern and Central California in 2016 during a coordinated field campaign known as the California Baseline Ozone Transport Study. They particularly focused on ozone crossing the shoreline and accumulating in low level air over the San Joaquin Valley.

Ian Faloona (University of California, Davis) and colleagues summarize the measurements and unique meteorological context for this novel dataset in a recent article published in the Bulletin of the American Meteorological Society. Faloona et al. draw attention to the dataset’s potential for future modeling studies of the impacts of long-range transport on regional air quality.


Faloona, in his cockpit perch during aerial measurements for CABOTS.

We asked lead author Faloona to help us understand CABOTS and his motivations for this work.

BAMS: What would you like readers to learn from this article?

Faloona: I think this article presents a nice overview of the mesoscale flow over the complex terrain of Central and Northern California, and I would like readers to become more appreciative of the global nature of air pollution. The field of air quality was once considered in terms of emissions and receptors within “air basins” but as our knowledge of the global nature of greenhouse gases in terms of climate change has developed, I believe that we have similarly become more and more aware of the global aspects of many air pollutants in general.

The CABOTS study domain and measurement platforms ranged from daily ozonesondes launched at the two coastal sites (Bodega Bay and Half Moon Bay) to the NOAA TOPAZ lidar in Visalia. The green and purple polygons represent the approximate domains surveyed by the NASA Alpha jet and Scientific Aviation, Inc., Mooney aircraft, respectively.

 

How did you become interested in the topic of this article?

Some colleagues from the UC Davis Air Quality Research Center and I became interested in long-range transport of air pollution to California and how it might be best sampled along the coastal mountains where local emissions might be minimal and the surface was well above the strong temperature inversion of the marine boundary layer. We eventually found the site on Chews Ridge where a group of renegade astronomers had been operating an off-the-grid observatory with the Monterey Institute for Research in Astronomy. They allowed us to build a climate monitoring site collocated with their observatory (the Oliver Observing Station) and then some airborne work for the San Joaquin Valley Air Pollution Control District allowed us to link the inflow at the coast to air quality issues within the leeward valley.

What got you initially interested in meteorology or in the related field you are in?

While an undergraduate studying physical chemistry, I wrote a term paper on acid rain for a chemical oceanography class. I was floored by how few details were thoroughly understood about the chemical mechanisms of an environmental problem that at the time was considered quite serious. I figured I should throw whatever brainpower I could into this type of atmospheric oxidation chemistry. But then, while I was working for a private consulting company in Colorado after college, many of my colleagues there were trained in meteorology, and I knew there would be little progress without a fundamental understanding of that field. So I went to Penn State to do chemistry research but get trained in all aspects of meteorology.

What surprises/surprised you the most about the work you document in this article?

The first thing that surprised me about the data we collected for CABOTS was how deep the daytime up-valley flow was (~1.5 km), but how shallow the convective boundary layers tended to be (~0.5 km).  The scale interactions that need to be taken into account when analyzing boundary layers among the complex terrain of California make it a great place to study in meteorology. But the other major discovery that came out of this work was the evidence we found of significant NOx emissions from certain agricultural regions in the San Joaquin Valley. For instance, we found that the agricultural region between Fresno and Visalia was responsible for as much NOx emitted to the valley atmosphere as from all the mobile sources in the CARB inventory across the three county region.

What was the biggest challenge you encountered while doing this work?

The sensible heat at the Fresno airport. Our airborne deployments attempted to target high ozone episodes, which are best forecast by their correlation with ambient temperatures. I like to tell my students that I am a chaser of extreme weather. It just so happens that the weather features most important to air quality are heat waves. Heat waves are extremely easy to catch, and can be brutal in their persistence. Some days we observed temperatures in the plane on the tarmac of >115°F, which made it challenging to keep the equipment up and running. I remember dragging bags of ice in and out of the plane covered in sweat, and still having the instruments give up in heat exhaustion before one of our midday flights.

What’s next? How will you follow up?

I would like to continue studying the various scales at play in the transport of intercontinental pollution to North America, and my preferred tools are aircraft laboratories. I would like to follow up with a study of wintertime stagnation events that lead to particulate matter air quality problems – an entirely different meteorological beast.  But I would also like to follow up with a study of agricultural NOx emissions in the Imperial Valley of Southern California. This region is expected to have the largest soil emissions and the lowest urban sources to confound the measurements. It is also a region of important environmental justice issues being made up largely of migrant agricultural workers who have to bear the burden of the air quality problems engendered by agriculture.


The Observationalist

June 17, 2020


Editor’s note:  Whether you’re in isolation or reemerging, we hope this guest column feels like a perfectly meteorological way to reconnect with the world. Read Mike’s full blog post here and more of his photos here. Above: Corona and wave clouds.

by H. Michael Mogil, CCM, CBM

First there was "The Mentalist," the hit CBS series that focused on Patrick Jane's (played by Simon Baker) ability to use his mind to find clues, piece them together and, in the process, mess with the minds of others.

Now comes “The Observationalist,” played by yours truly.

I didn’t assume the role. Rather, Joseph Williams Jr.  assigned it.  Williams was a counselor and science assistant at Howard University’s summer 2009 weather camp and I was the camp’s director. Williams caught me “observing everything around me—bricks on walls, sidewalks, people, and especially the clouds.”  Shortly after giving me my alter ego, Williams started becoming an observationalist himself.  He told me that he had never looked up to see the clouds (even though he was a graduate chemistry major).

In fact, developing keen observational skills is what most detective shows and movies are all about.  The key questions are: “What do you see?” and, more importantly, “What DOESN’T fit?” What doesn’t fit is typically out of place for a reason (usually, but not always, related to the crime).

I don’t solve too many mysteries in real life (although I do get involved a bit as an expert witness in event reconstruction for weather-related lawsuits).  But as a practicing meteorologist, I have to always look for weather-related clues in the clouds, radar and satellite images and even computer model weather forecasts.

In a similar sense, my wife and I operate a math-tutoring center in Naples, FL.  Here we emphasize that solving math problems is much like solving a crime.  What information is there, how do the pieces fit together, who did it (a.k.a., the answer)?  The numbers have patterns that beg to be discovered.  My goal is to have everyone be better observers.

Most other professions require keen observational skills (although they are often not emphasized). Football quarterbacks have to be consummate observers to scan the field and find an open receiver. Artists have to “see” their world in order to paint it.

But one doesn’t need a career to be an observationalist. Just look at patterns in our natural world. For example, I love the banded patterns in many cloud types and the patterns within flower heads and waves at the beach. Take me on a road trip through the Desert Southwest and I am in awe at the rock formations that grace the landscape.

And, I ALWAYS grab a window seat on the airplane. After all, it is the closest I will ever come to being an astronaut, so why not observe the Earth as most others do not?

I am not sure where and when I became an observationalist. But, I know I was already one at nine years old (that’s back in 1954). I recall watching from my New York City apartment window as several hurricanes blew past. I also watched winter cloud lines march southward down the Hudson River. These observational experiences clearly pushed me over the brink and into a weather career.

Yogi Berra really nailed it when he said, “You can observe a lot by just watching.” You really can!

© 2011 H. Michael Mogil (updated 2020)


With hurricanes moving more slowly and climate models projecting increasing rain rates, scientists have been grappling with how to effectively convey the resulting danger of extreme rains from these more intense, slow-moving storms.

Flooding rainfall already is the most deadly hazard from tropical cyclones (TCs), which include hurricanes and tropical storms. Yet the widely recognized tool for conveying potential tropical cyclone destruction is the Saffir-Simpson Scale, which is based only on peak wind impacts. It categorizes hurricanes from 1, with winds causing minimal damage, to 5, with catastrophic wind damage. But it is unreliable for rain.

Recent research by Christopher Bosma (University of Wisconsin-Madison) and colleagues, published in the Bulletin of the American Meteorological Society, introduces a new tool that focuses exclusively on the deadly hazard of extreme rainfall in tropical cyclones. "Messaging the deadly water-related threat in hurricanes was a problem brought to light with Hurricanes Harvey and Florence," says J. Marshall Shepherd (University of Georgia), one of the coauthors. "Our paper is offering a new approach to this critical topic using sound science methods."

“One goal of this paper,” Bosma explains, “is to give various stakeholders—from meteorologists to emergency planners to the media—an easy-to-understand, but statistically meaningful way of talking about the frequency and magnitude of extreme rainfall events.”

That way is with their extreme rainfall multiplier (ERM), which frames the magnitude of a rare extreme rainfall event as a multiple of a baseline "heavy" rainstorm. Specifically, ERM is the ratio of a storm's rainfall at a location to that location's recurring heavy rain amount: the median annual maximum rainfall over the years 1981 through 2010, an amount the location experiences roughly once every two years. Using the median rather than the mean weeds out outlier events.
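Under that definition, computing ERM for a single location is a short calculation. The sketch below is my own illustration of the ratio; the 640 mm storm total and the synthetic 30-year record of annual maxima are made-up numbers, not data from the paper:

```python
from statistics import median

def extreme_rainfall_multiplier(storm_rainfall_mm, annual_maxima_mm):
    """ERM = storm rainfall divided by the baseline 'heavy' rainfall,
    where the baseline is the median of the location's annual maximum
    rainfall totals (the median, not the mean, damps outlier years)."""
    return storm_rainfall_mm / median(annual_maxima_mm)

# Synthetic 30-year record of annual maxima, symmetric about 100 mm,
# so the baseline heavy-rain amount is 100 mm.
annual_maxima = [70.0 + 60.0 * i / 29 for i in range(30)]

# A 640-mm storm total at this location yields an ERM of 6.4, i.e.,
# 6.4 times the rain of a typical "bad" storm there.
print(round(extreme_rainfall_multiplier(640.0, annual_maxima), 2))  # 6.4
```

Because the baseline is local, the same storm total can produce very different ERMs in wet and dry climates, which is exactly the point of the scale.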

The authors are proposing the scale to

1. Accurately characterize the TC rainfall hazard;

2. Identify “locally extreme” events because local impacts increase with positive deviations from the local rainfall climatology;

3. Succinctly describe TC rainfall hazards at a range of time scales up to the lifetime of the storm system;

4. Be easy to understand and rooted in experiential processing to effectively communicate the hazard to the public.

Experiential processing means reasoning rooted in direct experience, and ERM aims to relate its values for an extreme rainfall event to someone's direct experience, or to media reports and images, of heavy rainfall at their location. Doing this enables them to connect, or "anchor" in cognitive psychology terms, the sheer magnitude of an extreme rain event to the area's typical heavy rain events, highlighting how much worse it is.

Highest annual maximum ERMs (1948–2017) are indicated with colored markers, with colored lines representing the linear regression fit. A Mann–Kendall test for monotonic trends in annual maxima values did not reveal significant changes over time for either ERM or rainfall.

 

The researchers analyzed 385 hurricanes and tropical storms that either struck land or passed within 500 km of it from 1948 through 2012 and, through hindcasting, determined an average ERM of 2.0. Nineteen of the storms had ERMs greater than 4.0, and the record's disastrous rain-making hurricanes had directly calculated ERMs that serve as benchmarks. These include the most extreme event, Hurricane Harvey, with an ERM of 6.4; Hurricane Florence and 1999's Hurricane Floyd, which swamped the East Coast from North Carolina to New England, each with an ERM of 5.7; and Hurricane Diane (ERM: 4.9), which destroyed large swaths of the Northeast United States with widespread flooding rains in 1955, ushering "in a building boom of flood control dams throughout New England," says coauthor Daniel Wright, Bosma's advisor at UW-Madison.

Wright says that a major challenge in developing ERM was maintaining scientific accuracy while widening its use to non-meteorologists.

I’ve been reading and writing research papers for more than 10 years that were written for science and engineering audiences. This work was a little different because, while we wanted the science to be airtight, we needed to aim for a broader audience and needed to “keep it simple.”

In practice, these historical values of ERM would be used to convey the severity of the rainfall hazard from a landfalling storm. For example, the authors successfully hindcast ERM values in the Carolinas for Hurricane Florence, which inundated southeastern North Carolina and northeastern South Carolina as it crawled ashore in 2018. For an active tropical storm or hurricane, the forecast value of ERM could be compared with those of historical hurricanes that have hit the expected landfall location.

Verification of the National Weather Service forecasts for the 3-day rainfall after landfall of Hurricane Florence (and ERM forecasts derived from these QPF estimates), issued at 1200 UTC 14 Sep 2018. Actual rainfall and 3-day ERM are based on poststorm CPC-Unified data.


 

The underlying science is sound enough that, in theory, the ERM framework could be applied to other rain-producing storms.

“We think there is potential both for characterizing the spatial properties of all kinds of extreme rainstorms…and then also for examining how these properties are changing over time,” Wright says.

The researchers caution, however, that several issues must be resolved before ERM can be used operationally as a communication tool. For example, ERM will need to be scaled to be compatible with NWS gridded rainfall products and generalized precipitation forecasts. Forecast lead times and event durations also will need to be determined. And graphical displays and wording still need to be worked out to communicate ERM most effectively.

Nevertheless, the team argues:

…our Hurricane Florence ERM hindcast shows that the method can accurately characterize the rainfall hazard of a significant TC several days ahead in a way that can be readily communicated to, and interpreted by, the public.


Above: Daniel Wright of the University of Wisconsin-Madison



Weather proverbs can be useful indicators of real correlations observed over the centuries, but they can also show unwelcome persistence. The phenomenon is well known: for example, a December 1931 BAMS article referred to a Columbia University study that revealed most high school students had heard the proverb, “When squirrels gather an unusual supply of nuts, it indicates a severe winter”—and 61% of them believed it.

Efforts to confirm or debunk proverbs are also an old tradition. As recorded in the October 1896 Monthly Weather Review, members of the Meteorological Society of France discussed the merits of the popular proverb: "When it rains on St. Medard's day it will rain for forty days unless fine weather returns on the day of St. Bernabe." They found no confirmation of the saying in their data.

In recommending W. J. Humphreys's 1923 book sorting proverb fact from fiction, Robert deCourcy Ward of Harvard University wrote in BAMS,

There have been several such collections, but there have been practically no serious attempts to separate the “good” from the “bad” proverbs. Many proverbs are merely the relics of past superstitions. Many are useful in one climate and of no use in another land into which they have been imported. Most of our own proverbs came from Europe, or even still farther away, and do not fit into our climatic environment.

Along comes an unusually thorough verification study of Polish weather proverbs in the July 2020 issue of Weather, Climate, and Society. Lead author Piotr Matczak (of Adam Mickiewicz University in Poznań, Poland) and colleagues set their article in the context of the recent, increased interest in integrating traditional knowledge with scientific findings in order to enrich overall climate databases.

The authors searched through 1,940 sayings, mostly looking for if-then logical structures (such as "hot July leads to January frosts") that suggested predictive power, and narrowed the list to 28 specific enough about temperature to be verified against decades of weather data from observing stations in and around Poland. In many cases, this meant turning subjective descriptions into quantitative categories. For instance, "If Saint Matthew (February 24) does not melt ice, peasants will long puff to warm their cold hands" was recast as a test: whether a maximum air temperature below 0°C on February 24 corresponded to a mean air temperature below 0°C over the following two weeks.

This proverb proved to be the most accurate of the bunch, fulfilling its predictions 83% of the time. The rest of the sayings were not so much fantastical as just plain unhelpful. Only 16 of the 28 proverbs showed any forecast skill, and usually quite low skill. That wasn't necessarily unexpected, since many of the proverbs were essentially extended-range forecasts that wouldn't be skillful even with modern techniques. Three proverbs, such as "If the Marek day (April 25) is threatening with the swelter the Boniface (May 14) freezes," never predicted accurately in the data record. Most of the time the predictive condition occurred, the predicted consequence did not follow: false alarm ratios for most proverbs greatly exceeded 50%.
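The skill measures cited here, such as the false alarm ratio, come from a standard 2x2 forecast contingency table built from yearly yes/no records. A minimal sketch, using an invented 10-year record rather than the paper's data:

```python
def verify_proverb(condition, outcome):
    """Score an if-then proverb against binary yearly records.

    condition[i] is truthy if the proverb's 'if' held in year i;
    outcome[i] is truthy if the predicted 'then' followed.
    Returns (hit_rate, false_alarm_ratio) from the contingency table.
    """
    hits = sum(bool(c) and bool(o) for c, o in zip(condition, outcome))
    false_alarms = sum(bool(c) and not o for c, o in zip(condition, outcome))
    misses = sum(not c and bool(o) for c, o in zip(condition, outcome))
    hit_rate = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return hit_rate, far

# Hypothetical record: the 'if' held in 5 of 10 years; the predicted
# consequence followed in 4 of those 5 (one false alarm, one miss).
cond    = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
outcome = [1, 1, 0, 1, 0, 0, 0, 1, 1, 0]
hit_rate, far = verify_proverb(cond, outcome)
```

A proverb with a false alarm ratio above 0.5, as most in the study had, cries wolf more often than it forecasts correctly.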

Including the St. Matthew’s day prediction, only three verified more than 43% of the time: “When Zbigniew and Patrick (March 17) are freezing people’s ears, two more Sundays of winter freezing and snows,” and another for St. Matthew’s day: “If the Matthew day is warm there is a hope for spring.”

There were, however, some interesting shifts in the proverbs' success rate that may warrant follow-up research. They did better earlier in the record than in later years, and better in eastern Poland and in formerly Polish lands farther east. Matczak et al. note,

following the Second World War, Poland was displaced by some 200 km westward, with the population displaced accordingly. Thus, the proverbs may refer to the climate of areas that are more eastward when compared with the current borders of Poland, that is, the areas nowadays in Belarus, Lithuania, and Ukraine.

 

 


Officially, the Atlantic season is almost upon us. The season of tropical storms and hurricanes, yes, but more to the point, the season of heat-seeking machines and relentless monsters.

At least, that’s the metaphorical language of broadcast meteorologists when confronted with catastrophic threats like Hurricane Harvey in Houston in 2017. A new analysis in BAMS of the figures of speech used by KHOU-TV meteorologists to convey the dangers of this record storm shows how these risk communicators exercised great verbal skill to not only connect with viewers’ emotions, but also convey essential understanding in a time of urgent need.

For their recently released paper, Robert Prestley (Univ. of Kentucky) and co-authors selected more than six hours of on-air time for the station's four meteorologists from the CBS affiliate's live broadcasts during Harvey's onslaught. The words the meteorologists used were coded, categorized, and systematically analyzed in a partly automated, partly by-hand process. No mere "intermediaries" between weather service warnings and the public, the meteorologists—David Paul, Chita Craft, Brooks Garner, and Blake Matthews—relied on "figurative and intense language" on-air to "express their concern and disbelief" as well as explain risks.
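The paper's coding scheme isn't detailed in this post, and much of the work was done by hand. Still, the automated first pass of such an analysis can be sketched as keyword tallying; the category names and keyword lists below are invented for illustration, loosely echoing the metaphors discussed next.

```python
from collections import Counter

# Invented keyword lists for two metaphor categories; the paper's
# actual coding scheme was richer and partly applied by hand.
CATEGORIES = {
    "monster": {"monster", "beast", "grabbing", "feeder"},
    "machine": {"battery", "recharging", "engine", "revving", "explode"},
}

def code_transcript(text):
    """First-pass automated coding: tally metaphor-category keywords."""
    words = [w.strip(".,!?'\"").lower() for w in text.split()]
    counts = Counter()
    for word in words:
        for category, keywords in CATEGORIES.items():
            if word in keywords:
                counts[category] += 1
    return counts

counts = code_transcript(
    "We're dealing with a monster, just grabbing moisture, "
    "an engine revving in park."
)
# counts["monster"] == 2 and counts["machine"] == 2
```

Counts like these only flag candidate passages; assigning intent and context, as the by-hand stage of the study did, is what turns tallies into analysis.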

As a monster, the hurricane frequently displayed a gargantuan appetite—for example, "just sitting and spinning and grabbing moisture from off the Gulf of Mexico and pulling it up," in Paul's words. The storm was reaching for its "food," or moisture. The authors write, "The use of the term 'feeder bands'…fed into this analogy." Eventually Matthews said outright, "We're dealing with a monster," and Craft called the disaster a "beast."

When the metaphor shifted to machines, Harvey was like a battery “recharging” with Gulf moisture and heat or a combustion engine tending to “blow” up or “explode.” Paul noted the lingering storm was “put in park with the engine revving.”

Other figurative language was prominent. Garner explained how atmospheric factors could “wring out that wet washcloth” and that the saturated ground was like “pudding putty, Jello.” The storm was often compared to a tall layered cake, which at one point Garner noted was tipped over like the Leaning Tower of Pisa.

In conveying impact risks, the KHOU team resorted frequently to words like "incredible" and "tremendous." To create a frame of reference, they initially referred to local experience, like "Allison 2.0"—referring to the flood disaster caused by a "mere" tropical storm in 2001 that deluged the Houston area with three feet of rain—until Harvey was clearly beyond any such comparison. Then they clarified the unprecedented nature of the threats: it would be a storm "you can tell your kids about."

The authors note, "By using figurative language to help viewers make sense of the storm, the meteorologists fulfilled the 'storyteller' role that broadcast meteorologists often play during hurricanes. They were able to weave these explanations together with contextual information from their community in an unscripted, 'off-the-cuff' live broadcast environment." They conclude that the KHOU team's word choices could "be added to a lexicon of rhetorical language in broadcast meteorology" and serve as "a toolkit of language strategies" for broadcast meteorologists to use in times of extreme weather.

Of course, all of this colorful language was perhaps not just good science communication but also personal reality. Prestley et al. note: "The KHOU meteorologists also faced personal challenges, such as sleep deprivation, anxiety about the safety of their families, and the flooding of their studio. The flood eventually forced the meteorologists to broadcast out of a makeshift studio in a second-floor conference room before evacuating their building and going off air."

As water entered the building, Matthews told viewers, “There are certain things in life you think you’ll never see. And then here it is. It’s happening right now.”

The new BAMS article is open access, now in early online release.

 


For a fifth consecutive year, NOAA is forecasting an above-average number of tropical cyclones (TCs) in the Atlantic, with 13-19 named storms expected in 2020. The number of TCs includes both tropical storms and hurricanes. This is in line with recent hurricane season forecasts by The Weather Channel, Penn State, Tropical Storm Risk, and others.


The recent spate of highly active TC seasons, however, contrasts sharply with future trends in a majority of climate models, which simulate decreasing annual numbers of TCs as Earth's climate continues to warm. That's one of a number of findings in a recent paper by Tom Knutson (NOAA) and colleagues in the Bulletin of the American Meteorological Society.

In the paper, a team of tropical meteorology and hurricane experts led by Knutson assessed model projections of TCs in a world 2°C warmer than pre-industrial levels. The authors indicated mixed confidence in a downward TC frequency trend, even though 22 of the 27 climate models they reviewed indicated a decrease. Some reputable models, though a minority, showed that the frequency of named storms will instead increase in a warmer world, which lowered confidence in this particular finding.

As noted in Knutson et al. (2019, Part I of their two-part study: “Tropical Cyclones and Climate Change Assessment”), there is no clear observational evidence for a detectable human influence on historical global TC frequency. Therefore, there is no clear observational evidence to either support or refute the notion of decreased global TC frequency with climate warming. This apparent discrepancy between model projections and historical observations could be due to a number of factors. Among these are the relatively short available global TC records, the relatively modest expected sensitivity of global TC frequency to global warming since the 1970s, errors arising from limitations of model projections, differences between historical climate forcings and those used for twenty-first-century projections, or even observational limitations. However, the growing TC observational databases may soon provide a means of distinguishing between some highly divergent modeled scenarios of global TC frequency.

An average hurricane season in the Atlantic, which includes storms forming in the Caribbean Sea and Gulf of Mexico, sees 12 named storms with 6 becoming hurricanes. Of those hurricanes, typically three strengthen their sustained winds above 110 mph, becoming major hurricanes.

NOAA’s forecast cited warmer-than-usual sea surface temperatures, light winds aloft, and the lack of an El Niño, which tends to shear apart hurricanes, as factors for this year’s potentially active season. “Similar conditions have been producing more active seasons since the current high-activity era began in 1995,” NOAA stated in a release Thursday.

Knutson and his colleagues explain that the reasons for a future decrease in TC frequency are uncertain, even as a warmer world would mean a continuation of warming seas. One possibility the team entertains is a future decrease in large-scale rising air, termed "upward mass flux," though they find its mechanism unclear. Another is a reduction in saturation of the middle atmosphere in the models. Both are unfavorable for TC genesis.

The authors state that projections of TC frequency in different TC basins are “less robust” than the global signal. Comparing basins, they did find that the southwest Pacific and southern Indian oceans had greater TC decreases than the Atlantic and the Eastern and Western Pacific oceans.

They conclude this portion of the study by stating that "reconciling projection results with theories or mechanistic understanding of TC genesis may eventually lead to improved confidence in projections of TC frequency."

Knutson’s team found greater certainty in other facets of future TCs in the same study. For example, they expressed medium-to-high confidence that hurricanes will become stronger and wetter by the end of the twenty-first century.


Even with a modest amount of global warming, future hurricanes will become nastier. They’ll push ashore higher storm surges, grow into superstorms like Hurricanes Dorian and Irma more often, and unleash inundating rains similar to Hurricanes Harvey and Florence more frequently.

That's the assessment of the past decade's published, peer-reviewed research by Thomas Knutson (NOAA) and colleagues, recently published in the Bulletin of the American Meteorological Society. It's the second in a two-part study conducted by the author team, 11 experts in climate and tropical cyclones (TCs). Part 1 found there are indeed already detectable changes in tropical cyclone activity attributable to human-caused climate change. Part 2, in the March 2020 BAMS online, projects changes in the climatology of these storms worldwide due to human-induced global warming of just 2°C.

Highest confidence among the experts was in storm surge flooding. Rising sea levels, driven by the thermal expansion of warming oceans and by glacial ice melt, are already making it easier for hurricanes and even tropical storms to drive greater amounts of seawater ashore at landfall. And this will only worsen.

With CO2 levels climbing to about 414 ppm in March, as measured atop Mauna Loa in Hawaii, Earth is on track to reach a 2°C average global temperature increase by midcentury. Global average surface temperature has already risen 1.2°C since the Industrial Revolution began.

In the assessment, the authors have medium-to-high confidence that rainfall rates in tropical cyclones will increase globally by 14% due to the increasing amount of water vapor available in a warmer atmosphere. They project a 5% global increase in tropical cyclone intensity along with an increase in the number of Category 4 and 5 storms, although the range of opinions among the experts involved is 1-10%. In the Atlantic Basin, which includes the Caribbean Sea and Gulf of Mexico, the number of storms is projected to decrease while intensity, as well as the number of intense hurricanes, increases.

Other studies have found that hurricanes will slow down, making them even more prolific rainmakers, among other changes. Authors of the new assessment discussed these additional changes but cited generally lower confidence and noted that different tropical basins around the world had different projections:

Author opinion was more mixed and confidence levels generally lower for some other TC projections, including a further poleward expansion of the latitude of maximum intensity of TCs in the western North Pacific basin, a decrease of global TC frequency, and an increase in the global frequency (as opposed to proportion) of very intense (category 4–5) TCs. The vast majority of modeling studies project decreasing global TC frequency (median of about −13% for 2°C of global warming), while a few studies project an increase. It is difficult to identify/quantify a robust consensus in projected changes in TC tracks across studies, although several project either poleward or eastward expansion of TC occurrence over the North Pacific. Projected TC size metric changes are on the order of 10% or less, and highly variable between basins and studies. Confidence in projections of TC translation speed is low due to the potential for data artifacts in the observed slowdown and a lack of model consensus. Confidence in various TC projections in general was lower at the individual basin scale than for the global average.
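To make the quoted median concrete, a projected percentage change can be applied to a baseline annual count. The global baseline below (roughly 86 TCs per year) is a commonly cited long-term average, not a figure from the article.

```python
def apply_projected_change(baseline_per_year, pct_change):
    """Annual count after applying a projected percentage change."""
    return baseline_per_year * (1.0 + pct_change / 100.0)

# Roughly 86 TCs form globally each year (a commonly cited long-term
# average; the figure is not from the article). The quoted -13%
# median projection would imply about 75 per year:
projected = apply_projected_change(86, -13)
```

The same arithmetic applies per basin, though as the quote notes, basin-scale projections are far less consistent across studies than the global median.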

Summary of TC projections for a 2°C global anthropogenic warming. Shown for each basin and the globe are median and percentile ranges for projected percentage changes in TC frequency, category 4–5 TC frequency, TC intensity, and TC near-storm rain rate. For TC frequency, the 5th–95th-percentile range across published estimates is shown. For category 4–5 TC frequency, TC intensity, and TC near-storm rain rates, the 10th–90th-percentile range is shown. Note the different vertical-axis scales for the combined TC frequency and category 4–5 frequency plot vs the combined TC intensity and TC rain rate plot. See the supplemental material for further details on underlying studies used.
