On Thursday, NOAA updated its outlook to an “extremely active” Atlantic hurricane season. That has some news outlets linking the 19-25 predicted named storms to Earth’s even warmer future climate. The future does look likely to bring high levels of overall “activity” from the intense, damaging hurricanes of a warming world (regardless of whether the frequency of storms changes). And, of course, settling into a new “norm” isn’t going to happen while warming is ongoing. But the huge number of storms forming? That is much of what the public takes away from the forecast, and that profusion of named storms is not projected to be characteristic of seasons to come.
In a recent assessment of the current literature, Tom Knutson (NOAA) and other top tropical experts reviewed a number of peer-reviewed studies and determined that a majority of them project the number of named storms to actually decrease as we move deeper into this century. But there was no consensus among the authors to either support or refute those studies, since their review also showed that “there is no clear observational evidence for a detectable human influence on historical global TC frequency.”
Their assessment did find that we can expect stronger and wetter hurricanes in our warming world and, notably, a possible uptick in the number of intense (Category 4 and 5) hurricanes. These storms have Knutson and his colleagues most concerned, since the big ones do the majority of hurricane damage. Their increase is alarming even if the overall number of storms goes down.
Notable in this week’s forecast update is a prediction approaching record territory. “We’ve never forecast up to 25 named storms” before, noted Jerry Bell, lead seasonal hurricane forecaster at NOAA’s Climate Prediction Center; that is more than twice a season’s typical 12. He went on to say there will be “more, stronger, and longer-lived storms than average” in the Atlantic Basin, which includes the Caribbean Sea and Gulf of Mexico. In an average season there are six hurricanes, and three of those grow into major hurricanes.
Tropical Storm Isaias is soaking the Mid-Atlantic states with what is expected to be three times as much rain as is typical for the area. Today’s heaviest tropical showers could trigger potentially deadly flash floods.
The projection comes from a new intuitive metric for deadly tropical cyclone rains, which we blogged about on The Front Page in June. The extreme rainfall multiplier (ERM) was applied to last night’s quantitative precipitation forecast (QPF) from NOAA’s Weather Prediction Center to generate an ERM forecast for Isaias.
“Since Isaias is a fast-moving storm (currently moving NNE at 23 mph), the heaviest rain is forecast to fall with[in] a 24-hour period today (Aug 4),” wrote the study’s lead author, Christopher Bosma, a Ph.D. student at the University of Wisconsin-Madison, in a pre-dawn e-mail. “Peak rainfall totals are projected to be just over 6 inches (approx. 150 mm), mostly in a narrow region just south of the DC Metro [area].”
For comparison, the heaviest single-day rainfall the region expects about once every two years is a bit more than 50 mm. Bosma uses that ratio to generate an ERM of around 2.86 (152 mm / 53 mm). Rainfall may exceed the projections, but the multiplier gives a rough idea of how the storm compares with others in residents’ recent memory.
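The arithmetic behind that number is simple enough to sketch in a few lines of code. This is our illustration, not code from the paper; the function and variable names are invented here, and the rainfall values are the ones quoted above.

```python
# Minimal sketch of the extreme rainfall multiplier (ERM) arithmetic.
# Function and variable names are ours, not from the Bosma et al. paper;
# the rainfall values are those quoted in the text.

def extreme_rainfall_multiplier(peak_rain_mm: float, two_year_rain_mm: float) -> float:
    """Ratio of a storm's peak rainfall to the heaviest single-day
    rainfall expected locally about once every two years."""
    return peak_rain_mm / two_year_rain_mm

erm_isaias = extreme_rainfall_multiplier(peak_rain_mm=152.0, two_year_rain_mm=53.0)
print(f"Isaias ERM: {erm_isaias:.2f}")  # ~2.87, the "around 2.86" cited above
```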
According to the study, published in the Bulletin of the American Meteorological Society in May, the average ERM value for U.S. landfalling hurricanes and tropical storms is 2.0. ERM can also be used to hindcast the severity of precipitation in such storms, like 2017’s Hurricane Harvey. Harvey deluged Texas with as much as 60 inches of rain and reached an ERM of 6.4, the highest calculated.
Those who lived in the D.C. area in the early 2000s might recall a tropical cyclone that Bosma says is comparable to Isaias: Isabel. After making landfall in eastern North Carolina as a Category 2 hurricane on the morning of September 18, 2003, it barreled north-northwest through the Mid-Atlantic, delivering flooding rains and damaging winds that night.
“Isabel was also a fast mover at landfall, and was responsible for similar one-day rain totals of just over 6 inches, based on CPC-Unified gauge-based gridded data,” Bosma wrote. “The peak ERM for Isabel was 2.8. One thing to note from Isabel is that localized rainfall totals were higher in some spots, particularly in the mountains of Virginia, highlighting the threat of localized flash flooding that might also be present today with Isaias.”
Indeed, flash flood warnings were issued across the interior Mid-Atlantic this morning, despite drought conditions in parts of the area.
Bosma and his colleagues, including Daniel Wright (UW-Madison) and J. Marshall Shepherd (University of Georgia), created the ERM to focus attention on the deadly hazard of extreme tropical cyclone rainfall. Getting word out about that threat using only the wind-based Saffir-Simpson scale “was a problem brought to light with Hurricanes Harvey and Florence,” Shepherd says.
In an e-mail last night, Wright added that for Isaias in and around Washington, D.C., this is “a fairly large amount of rain, though certainly not unprecedented for the region.”
All the research ships and aircraft of atmospheric science may never be able to gather in one place for testing. But small, portable unmanned aircraft systems (UAS) are another matter. An international vanguard of scientists developing these atmospheric observing capabilities is finding it enormously helpful to get together, pooling insights and devices to accelerate each other’s progress. Together, their technology is taking off.
In the May 2020 BAMS, Gijs de Boer (CIRES and NOAA) and colleagues give an overview of one of these coordinate-and-compare campaigns, LAPSE-RATE (the Lower Atmospheric Profiling Studies at Elevation–a Remotely-piloted Aircraft Team Experiment), in which 10 teams from around the world brought 34 UAS to Colorado’s San Luis Valley for a week of tests, laying groundwork for new collaborations and future field programs. The July 2018 flight-fest comprised 1,300 research flights totaling more than 250 flight hours, all focused on observing the intricacies of the lower atmosphere.
At a “Community Day,” the scientists shared their aircraft and interests with the public as well. Working together all in one place has huge benefits. The teams get to see how they compare with each other, work out the kinks with their UAS, and move faster toward their research goals. It’s one reason they are getting so good so fast.
Below, de Boer answers some questions about the campaign and how he got started with UAS.
BAMS: What are some of the shared problems revealed by working together—as in LAPSE-RATE—with other UAS teams?

Gijs de Boer: There are common problems at a variety of levels. For example, accurate wind sensing has proven challenging, and we’ve definitely worked together to improve wind estimation. Additionally, different modes of operation, understanding which sensors are good and which are not, and sensor placement are all examples of how the community has worked together to lift up the quality of measurements from all platforms.
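To give a flavor of why wind estimation is a community effort, here is a heavily simplified sketch of one common attitude-based approach for hovering multirotors. It is our illustration of the general idea, not a method attributed to any LAPSE-RATE team, and every parameter value is a made-up example.

```python
import math

# Simplified attitude-based wind estimation for a hovering multirotor:
# the aircraft tilts into the wind until horizontal drag balances the
# horizontal component of thrust, so steady-state tilt maps to wind speed.
# This is a generic textbook-style sketch, not a LAPSE-RATE team's method;
# all parameter values below are invented for illustration.

G = 9.81   # gravity, m s^-2
RHO = 1.0  # rough air density at valley altitude, kg m^-3

def wind_from_tilt(tilt_deg: float, mass_kg: float,
                   drag_area_m2: float, drag_coeff: float = 1.0) -> float:
    """Invert m*g*tan(tilt) = 0.5*rho*Cd*A*v**2 for wind speed v (m/s)."""
    tilt = math.radians(tilt_deg)
    return math.sqrt(2.0 * mass_kg * G * math.tan(tilt) /
                     (RHO * drag_coeff * drag_area_m2))

# A 1.5-kg quadcopter holding a 10-degree tilt in steady hover:
print(f"{wind_from_tilt(10.0, mass_kg=1.5, drag_area_m2=0.05):.1f} m/s")  # ~10 m/s
```

In practice, teams calibrate such relationships against reference towers and refine them together, which is one reason campaigns like LAPSE-RATE pay off.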
BAMS: What are the most surprising lessons from LAPSE-RATE?

GdB: I think that the continued rapid progression of the technology and the innovation in UAS-based atmospheric research is impressive. Some of the tools deployed during LAPSE-RATE in 2018 have already been significantly improved upon.
BAMS: What are some examples of this more recent UAS improvement?

GdB: Everything continues to get smaller and lighter. Aircraft have become even more reliable, and instrumentation has continued to be scrutinized to improve data quality. Battery technology has also continued to improve, allowing for longer flight times and more complex missions.
Yet, we have so much more to do with respect to integrating our measurements into mainstream atmospheric research.
BAMS: What are some challenges to doing more to integrate UAS into research?

GdB: Primarily, our UAS research community is working to demonstrate the reliability and accuracy of our measurements and platforms. This is critical to having them accepted in the community. There are also challenges associated with airspace access and with developing the infrastructure to interface these observations with both mainstream research and operations.
BAMS: It seems there has been success in mainstreaming the use of UAS.

GdB: Campaigns like LAPSE-RATE have paved the way for UAS to be more thoroughly included in larger field campaigns. A nice example is the recent ATOMIC (Atlantic Tradewind Ocean–Atmosphere Mesoscale Interaction Campaign) and EUREC4A (Elucidating the role of clouds-circulation coupling in climate) field campaigns, where three different UAS teams were involved and UAS were operated alongside manned research aircraft and in support of a much larger effort.
BAMS: How did you become interested in unmanned aviation?

GdB: In 2011, I worked with a small group on a review article about our knowledge of mixed-phase clouds in Arctic environments. We took a good look at critical observational deficiencies, and I began to realize that many of the gaps involved a lack of in situ information, quantities that I thought could be measured by small platforms. This sent me down the road of investigating whether UAS could offer the necessary insight.
You’ve heard of drones in the air, but how about on the ocean’s surface? Enter Saildrone: a new wind- and solar-powered ocean-observing platform that carries a sophisticated suite of scientific sensors to observe air–sea fluxes. Looking like a large windsurfer without the surfer, the sailing drone glides autonomously at 2–8 knots along the surface of uninhabited oceans on missions as long as 12 months, sampling key variables in the marine environment.
In a recent paper published in the Bulletin of the American Meteorological Society, Chelle Gentemann and her colleagues explain that from April 11 to June 11, 2018, a Saildrone cruised on a 60-day round trip from San Francisco down the coast to Mexico’s Guadalupe Island to establish the accuracy of its new measurements, which were made to validate air–sea fluxes, sea surface temperatures, and wind vectors derived from satellites. The autonomous surface vehicle also studied upwelling dynamics, river plumes, and the air–sea interactions of both frontal and diurnal warming regions on this deployment—meaning Saildrone’s versatile array of instruments got a workout not only above the surface but just below it as well, in the water along the hull.
BAMS asked the authors a few questions to gain insight into their research and their backgrounds. A sampling of their answers is below:
BAMS: What would you like readers to learn from your article?
Chelle Gentemann, Farallon Institute: New measurement approaches are always being developed, allowing for new approaches to science. Understanding a dataset’s characteristics and uncertainties is important to have confidence in derived results.
BAMS: How did you become interested in working with Saildrone?
Gentemann: The ocean is a challenging environment to work in: it can be beautiful but dangerous, and gathering ship observations can require long absences from your family. I learned about Saildrones in 2016 and wanted to see how an autonomous vehicle might be able to gather data at the air–sea interface and adapt sampling to changing conditions. There are some questions that are hard to get at from existing remote sensing and in situ datasets; I thought that if these vehicles are able to collect high-quality data, they could be useful for science.
BAMS: How have you followed up on this experiment?
Gentemann: We sent two more [Saildrones] to the Arctic last summer (2019) and are planning for two more in 2021. There are few in situ observations in the Arctic Ocean because of the seasonal ice cover, so sending Saildrones up there for the summer has allowed us to sample temperature and salinity fronts during a record heat wave.
Sebastien de Halleux, Saildrone, Inc.: I believe we are on the cusp of a new golden age in oceanography, as a wave of new enabling technologies is making planetary-scale in situ observations technically and economically feasible. The fact that Saildrones are zero-emission is a big bonus as we try to reduce our carbon footprint. I am excited to engage further with the science community to explore new ways of using this technology and developing tools to further the value of the data collected for the benefit of humanity.
BAMS: What got you initially interested in oceanography?
de Halleux: Having had the opportunity to sail across the Pacific several times, I developed a strong interest in learning more about the 70% of the planet covered by water—only to realize that the challenge of collecting data is formidable over such a vast domain. Being exposed to the amazing power of satellites to produce large-scale remote sensing datasets was only tempered by the realization of their challenges with fine features, land proximity, and of course the need to connect them to subsurface phenomena. This is how we began to explore the intersection of science, robotics, and big data with the goal to help enable new insights. Yet we are only at the beginning of an amazing journey.
BAMS: What surprised you the most about Saildrone’s capabilities?
Peter Minnett, Univ. of Miami, Florida: The ability to reprogram the vehicles in real time to focus on sampling and resampling interesting surface features. The quality of the measurements is impressive.
Saildrones are currently deployed around the world. In June 2019, there were three circumnavigating Antarctica, six in the U.S. Arctic, seven surveying fish stocks off the U.S. West Coast and two in Norway, four surveying the tropical Pacific, and one conducting a multibeam bathymetry survey in the Gulf of Mexico. In 2020, Saildrone, Inc. has deployed fleets in Europe, the Arctic, the tropical Pacific, along the West Coast, the Gulf of Mexico, the Atlantic, the Caribbean, and Antarctica. NOAA- and NASA-funded Saildrone data are distributed openly and publicly.
The Thanksgiving holiday weekend has long been heralded as the start of the Western United States winter ski season. But new research using regional climate models sees Thanksgiving skiing going cold turkey.
As climate change ramps up into the mid-twenty-first century, we can expect shorter ski seasons from the Southwest to the northern Rockies, with less snow as well as poorer conditions for artificial snowmaking in the mountain states of the interior West. These are the findings of new research presented by Christian Lackner (Univ. of Wyoming and Johannes Gutenberg-Univ. of Mainz) this week at the American Meteorological Society’s 19th Conference on Mountain Meteorology. Despite being held entirely online, the meeting achieved record attendance.
Lackner’s presentation, co-authored with Bart Geerts and Yonggang Wang, showed that the projected downturn in the ski season will hit lower-elevation ski areas, such as those in Arizona and New Mexico, the hardest. By 2050, their ski seasons will start about two weeks later and end two to three weeks earlier than in the baseline period of 1981-2010. For many resorts that means the season length falls below the 100-day threshold long viewed as the make-or-break point for staying viable in the ski industry.
Higher-elevation ski resorts in Colorado, Utah, and western Wyoming, as well as higher-latitude ski areas in Montana and Idaho, will fare better, although they’ll see their seasons shrink by 10-20 days. That will drop them below 120 days, the economic threshold for high-elevation, high-latitude resorts, by 2050.
Lackner et al.’s study looked at climate change impacts at 71 ski resorts in Arizona, Colorado, Idaho, Montana, New Mexico, and Wyoming from November 15 to April 15, the key cold-season months.
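To make the thresholds concrete, here is a back-of-the-envelope sketch of the season-length arithmetic. The start and end shifts are the ones reported above; the baseline opening and closing dates are hypothetical, chosen only to illustrate how a season slips under the 100-day mark.

```python
from datetime import date, timedelta

# Back-of-the-envelope season-length arithmetic. The shifts (start ~2 weeks
# later, end 2-3 weeks earlier by 2050) come from the reported findings;
# the baseline dates below are hypothetical examples, not study inputs.

def projected_season_days(baseline_open: date, baseline_close: date,
                          start_delay_days: int, end_advance_days: int) -> int:
    new_open = baseline_open + timedelta(days=start_delay_days)
    new_close = baseline_close - timedelta(days=end_advance_days)
    return (new_close - new_open).days

# Hypothetical lower-elevation resort: opens Thanksgiving week, closes early April.
days = projected_season_days(date(2049, 11, 25), date(2050, 4, 5),
                             start_delay_days=14, end_advance_days=18)
print(days, "days ->", "viable" if days >= 100 else "below the 100-day threshold")
# 99 days -> below the 100-day threshold
```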
The good news is the Christmas holiday week still looks good for schussing down Western slopes, despite the climate projections.
Observations and models: theirs is often an uneasy relationship. It’s not always easy to find the common ground needed to turn observations into model input, and then to turn models into physically realistic output consistent with those observations.
DOE’s Atmospheric Radiation Measurement (ARM) program is trying to pull observing and modeling—ranging over vast time and space scales—tighter together, into effective bundles of science. Naturally, they’re using an initiative called “LASSO.”
Focused on shallow convection (often small, low-level scattered clouds), LASSO, the “Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation” project, centers on the capabilities of DOE’s Southern Great Plains observatory in Oklahoma. LASSO is designed to “add value to observations” through a carefully crafted modeling framework that evaluates how well the model “captures reality,” write William I. Gustafson and his colleagues in a paper recently published in the Bulletin of the American Meteorological Society (BAMS).
LASSO bundles observations, LES input and output, and “quick-look” plots of the observations into a library of cases for study by modelers, theoreticians, and observationalists. It includes diagnostics and skill scores comparing the simulations with the extensive observations, and makes the bundles freely available with simplified access for speedy use.
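As a rough picture of what “bundling” means here, the sketch below groups the pieces described above into a single case object. The field names and file names are our illustrative guesses, not the actual LASSO product layout; consult the ARM/LASSO documentation for the real format.

```python
from dataclasses import dataclass
from typing import Dict, List

# Schematic of what a LASSO-style data bundle packages for one case day.
# Field names and file names are illustrative guesses, not the actual
# LASSO layout; see the ARM/LASSO documentation for the real product.

@dataclass
class CaseBundle:
    case_date: str                  # e.g., "2018-07-09"
    observations: Dict[str, str]    # instrument -> observation file
    les_input: Dict[str, str]       # forcing and initial conditions
    les_output: Dict[str, str]      # simulated fields
    quicklook_plots: List[str]      # pre-rendered comparison figures
    skill_scores: Dict[str, float]  # model-vs-observation diagnostics

bundle = CaseBundle(
    case_date="2018-07-09",
    observations={"doppler_lidar": "lidar_profiles.nc"},
    les_input={"large_scale_forcing": "forcing.nc"},
    les_output={"cloud_fraction": "les_cloud_fraction.nc"},
    quicklook_plots=["cloud_fraction_timeseries.png"],
    skill_scores={"cloud_fraction_rmse": 0.12},
)
print(bundle.case_date, sorted(bundle.skill_scores))
```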
The goal of the data packaging approach is to enable scientists to more easily bridge the gap from smaller scale measurements and processes to the larger scale at which most modeling parameterizations operate.
We asked Gustafson to explain:
BAMS: What would you like readers to learn from this article?
Gustafson: In the atmospheric sciences we work with so many scales that we often get siloed into thinking in very scale-specific ways based on our sub-specialty and the type of research we do. This can happen whether we are modelers trying to wrap our brains around comparing parcel model simulations with global climate models, or as observationalists trying to rationalize differences between point-based surface measurements and big, pixel-based satellite measurements. The LASSO project is one attempt to get past limitations sometimes imposed by certain scales. For example, the DOE ARM program has such a wealth of measurements, and at the same time, DOE is developing a new and improved climate model. LASSO is one way to help marry the two together to add value for researchers working with both sets of data.
How did you become interested in the topic of this article?
My training is as a modeler, and over the years, a lot of my research has looked at issues of scale and how atmospheric models can better deal with unresolved detail—the so-called subgrid information. We know that subgrid information can be critical for properly simulating things like clouds and radiation. Yet, we cannot run global models with sufficient resolution to track this information. So, we need tools like large-eddy simulation to help us make better physics parameterizations for the coarser models used for weather prediction and climate change projections. Marrying the LES more tightly with observations seemed like a great way to help the atmospheric community move forward and make progress improving the models.
What got you initially interested in meteorology or the related field you are in?
I find weather fascinating and awe-inspiring, and science has always been one of my interests alongside computers. Coming out of my undergrad years with a physics degree, I knew I wanted to pursue something related to computing, but I did not want to do it for a company for the sole purpose of making money for somebody. Atmospheric modeling seemed like a great way to apply my computer interests in an impactful way that would also be a lot of fun. Not many people get to play on giant supercomputers for a living trying to figure out what makes clouds do what they do. I have never looked back, and much of my job I see as a grown-up playground where I get to build with computer bits instead of the sand I used to play with as a kid.
What surprised you the most about the work you document in this article?
This is not an article filled with “aha” moments. It is the result of years of effort developing a new data product that combines input from a large number of people with many different specialties. So I would not say that I came across surprises.
However, I have come to really appreciate the help from so many people. We have people helping to collect input from dozens of instruments that have to be maintained, data that have to be quality controlled, and computers that are kept running, plus the actual modeling and packaging of the observations with the model output, the database and website development to make the product findable by users, the backhouse archive support, and the communications specialists. All of these have been critical to making LASSO happen.
What was the biggest challenge you encountered while doing this work?
Working with a long-term dataset has been one of our big challenges. We have been trying to put together a standardized data bundle that would make it easy for researchers to compare simulations from different cases spanning years. However, instrumentation changes from year to year, which means we continually have to adapt. Sometimes this presents itself as a new opportunity because of a new capability, such as a new photogrammetric cloud fraction product we are starting to work with. Other times, existing instruments malfunction or are replaced with instruments that do not have the same capabilities, such as a switch from a two-channel to a three-channel microwave radiometer. The latter, in theory, could offer improved results, but in reality, led to years of calibration issues.
What’s next? How will you follow up?
The LASSO activity has been well received and we are excited to be expanding to new weather regimes. During 2020 we have been developing a new LASSO scenario that focuses on deep convection in Argentina. This is really exciting because storms in this area are some of the tallest in the world. It will also be a lot of fun working with LES of deep convection with all its associated cloud motions and detail. We plan to have this new scenario ready for release in 2021.