Officially, the Atlantic season is almost upon us. The season of tropical storms and hurricanes, yes, but more to the point, the season of heat-seeking machines and relentless monsters.

At least, that’s the metaphorical language of broadcast meteorologists when confronted with catastrophic threats like Hurricane Harvey in Houston in 2017. A new analysis in BAMS of the figures of speech used by KHOU-TV meteorologists to convey the dangers of this record storm shows how these risk communicators exercised great verbal skill to not only connect with viewers’ emotions, but also convey essential understanding in a time of urgent need.

For their recently released paper, Robert Prestley (Univ. of Kentucky) and co-authors analyzed more than six hours of on-air time by the station’s four meteorologists, drawn from the CBS affiliate’s live broadcasts during Harvey’s onslaught. The words the meteorologists used were coded, categorized, and systematically analyzed in a partly automated, partly by-hand process. No mere “intermediaries” between weather service warnings and the public, the meteorologists—David Paul, Chita Craft, Brooks Garner, and Blake Matthews—relied on “figurative and intense language” on-air to “express their concern and disbelief” as well as explain risks.

As monster, the hurricane frequently displayed gargantuan appetite—for example, “just sitting and spinning and grabbing moisture from off the Gulf of Mexico and pulling it up,” in Paul’s words. The storm was reaching for its “food,” or moisture. The authors write, “The use of the term ‘feeder bands’…fed into this analogy.” Eventually Matthews straight out said, “We’re dealing with a monster” and Craft called the disaster a “beast.”

When the metaphor shifted to machines, Harvey was like a battery “recharging” with Gulf moisture and heat or a combustion engine tending to “blow” up or “explode.” Paul noted the lingering storm was “put in park with the engine revving.”

Other figurative language was prominent. Garner explained how atmospheric factors could “wring out that wet washcloth” and that the saturated ground was like “pudding putty, Jello.” The storm was often compared to a tall layered cake, which at one point Garner noted was tipped over like the Leaning Tower of Pisa.

In conveying impact risks, the KHOU team resorted frequently to words like “incredible” and “tremendous.” To create a frame of reference, they initially referred to local experience, like “Allison 2.0”—referring to the flood disaster caused by a “mere” tropical storm in 2001 that deluged the Houston area with three feet of rain—until Harvey was clearly beyond such a frame of reference. Then they clarified the unprecedented nature of threats, that it would be a storm “you can tell your kids about.”

The authors note, “By using figurative language to help viewers make sense of the storm, the meteorologists fulfilled the ‘storyteller’ role that broadcast meteorologists often play during hurricanes. They were able to weave these explanations together with contextual information from their community in an unscripted, ‘off-the-cuff’ live broadcast environment.” They conclude that the KHOU team’s word choices could “be added to a lexicon of rhetorical language in broadcast meteorology” and serve as “a toolkit of language strategies” for broadcast meteorologists to use in times of extreme weather.

Of course all of this colorful language was, perhaps, not just good science communication but also personal reality. Prestley et al. note: “The KHOU meteorologists also faced personal challenges, such as sleep deprivation, anxiety about the safety of their families, and the flooding of their studio. The flood eventually forced the meteorologists to broadcast out of a makeshift studio in a second-floor conference room before evacuating their building and going off air.”

As water entered the building, Matthews told viewers, “There are certain things in life you think you’ll never see. And then here it is. It’s happening right now.”

The new BAMS article is open access, now in early online release.

 


For a fifth consecutive year, NOAA is forecasting an above-average number of tropical cyclones (TCs) in the Atlantic, with 13-19 named storms expected in 2020. The number of TCs includes both tropical storms and hurricanes. This is in line with recent hurricane season forecasts by The Weather Channel, Penn State, Tropical Storm Risk, and others.


The recent spate of highly active TC seasons, however, contrasts sharply with future trends in a majority of climate models, which simulate decreasing annual numbers of TCs as Earth’s climate continues to warm. That’s one of a number of findings in a recent paper by Tom Knutson (NOAA) and colleagues in the Bulletin of the American Meteorological Society.

In the paper, a team of tropical meteorology and hurricane experts led by Knutson assessed model projections of TCs in a world 2°C warmer than pre-industrial levels. The authors indicated mixed confidence in a downward trend in TC frequency, even though 22 of the 27 climate models they reviewed indicated the decrease. Some reputable models, though a minority, showed that the frequency of named storms would instead increase in a warmer world, which lowered confidence in this particular finding.

As noted in Knutson et al. (2019, Part I of their two-part study: “Tropical Cyclones and Climate Change Assessment”), there is no clear observational evidence for a detectable human influence on historical global TC frequency. Therefore, there is no clear observational evidence to either support or refute the notion of decreased global TC frequency with climate warming. This apparent discrepancy between model projections and historical observations could be due to a number of factors. Among these are the relatively short available global TC records, the relatively modest expected sensitivity of global TC frequency to global warming since the 1970s, errors arising from limitations of model projections, differences between historical climate forcings and those used for twenty-first-century projections, or even observational limitations. However, the growing TC observational databases may soon provide a means of distinguishing between some highly divergent modeled scenarios of global TC frequency.

An average hurricane season in the Atlantic, which includes storms forming in the Caribbean Sea and Gulf of Mexico, sees 12 named storms with 6 becoming hurricanes. Of those hurricanes, typically three strengthen their sustained winds above 110 mph, becoming major hurricanes.

NOAA’s forecast cited warmer-than-usual sea surface temperatures, light winds aloft, and the lack of an El Niño, which tends to shear apart hurricanes, as factors for this year’s potentially active season. “Similar conditions have been producing more active seasons since the current high-activity era began in 1995,” NOAA stated in a release Thursday.

Knutson and his colleagues explain that the reason or reasons for a future decrease in TC frequency remain uncertain, even as a warmer world would mean a continuation of warming seas. One possibility the team entertains is a future decrease in large-scale rising air, termed “upward mass flux,” though its mechanism, they find, is unclear. Another is a reduction in saturation of the middle atmosphere in the models. Both changes are unfavorable for TC genesis.

The authors state that projections of TC frequency in different TC basins are “less robust” than the global signal. Comparing basins, they did find that the southwest Pacific and southern Indian oceans had greater TC decreases than the Atlantic and the Eastern and Western Pacific oceans.

They conclude this portion of the study by stating that “reconciling projection results with theories or mechanistic understanding of TC genesis may eventually lead to improved confidence in projections of TC frequency.”

Knutson’s team found greater certainty in other facets of future TCs in the same study. For example, they expressed medium-to-high confidence that hurricanes will become stronger and wetter by the end of the twenty-first century.


Even with a modest amount of global warming, future hurricanes will become nastier. They’ll push ashore higher storm surges, grow into superstorms like Hurricanes Dorian and Irma more often, and unleash inundating rains similar to Hurricanes Harvey and Florence more frequently.

That’s the conclusion of an assessment of the past decade’s published, peer-reviewed research by Thomas Knutson (NOAA) and colleagues, recently published in the Bulletin of the American Meteorological Society. It’s the second in a two-part study conducted by the author team, 11 experts in climate and tropical cyclones (TCs). Part 1 found there are indeed already detectable changes in tropical cyclone activity attributable to human-caused climate change. Part 2, in the March 2020 BAMS online, projects changes in the climatology of these storms worldwide due to human-induced global warming of just 2°C.

Highest confidence among the experts was in storm surge flooding. Rising sea levels due to warming and expanding oceans, responding to atmospheric warming and glacial ice melt, are already making it easier for hurricanes and even tropical storms to drive greater amounts of seawater ashore at landfall. And this will only worsen.

With CO2 levels climbing to about 414 ppm in March, as measured atop Mauna Loa in Hawaii, Earth is on track to reach a 2°C average global temperature increase by mid-century. Global average surface temperature has already risen 1.2°C since the Industrial Revolution began.

In the assessment the authors have medium-to-high confidence that rainfall rates in tropical cyclones will increase globally by 14% due to the increasing amount of water vapor available in a warmer atmosphere. They project a 5% global increase in tropical cyclone intensity along with an increase in the number of Category 4 and 5 storms, although the range of opinions among the experts involved is 1-10%. In the Atlantic Basin, which includes the Caribbean Sea and Gulf of Mexico, the number of storms is projected to decrease while intensity, as well as the number of intense hurricanes, increases.

Other studies have found that hurricanes will slow down, making them even more prolific rainmakers, among other changes. Authors of the new assessment discussed these additional changes but cited generally lower confidence, noting that different tropical basins around the world had different projections:

Author opinion was more mixed and confidence levels generally lower for some other TC projections, including a further poleward expansion of the latitude of maximum intensity of TCs in the western North Pacific basin, a decrease of global TC frequency, and an increase in the global frequency (as opposed to proportion) of very intense (category 4–5) TCs. The vast majority of modeling studies project decreasing global TC frequency (median of about −13% for 2°C of global warming), while a few studies project an increase. It is difficult to identify/quantify a robust consensus in projected changes in TC tracks across studies, although several project either poleward or eastward expansion of TC occurrence over the North Pacific. Projected TC size metric changes are on the order of 10% or less, and highly variable between basins and studies. Confidence in projections of TC translation speed is low due to the potential for data artifacts in the observed slowdown and a lack of model consensus. Confidence in various TC projections in general was lower at the individual basin scale than for the global average.

Summary of TC projections for a 2°C global anthropogenic warming. Shown for each basin and the globe are median and percentile ranges for projected percentage changes in TC frequency, category 4–5 TC frequency, TC intensity, and TC near-storm rain rate. For TC frequency, the 5th–95th-percentile range across published estimates is shown. For category 4–5 TC frequency, TC intensity, and TC near-storm rain rates, the 10th–90th-percentile range is shown. Note the different vertical-axis scales for the combined TC frequency and category 4–5 frequency plot vs the combined TC intensity and TC rain rate plot. See the supplemental material for further details on underlying studies used.


Imagine you live in a part of the country where few people have experienced tornadoes. It would make sense that your neighbors wouldn’t know the difference between a tornado watch and a warning, or know how to seek safety.

A new, openly available online tool shows exactly that, by combining societal databases with survey results about people’s understanding of weather information. But there are some surprising wrinkles in the data. For example, the database drills down to county-level information and finds “noteworthy differences” within regions of similar tornado climatology.

How is it that Norman, Oklahoma, residents score higher than those in Fort Worth, Texas, in what they think they know about severe weather information? And why is there a similar gap in what people actually do know, as tested, between Peachtree City, Georgia, and Birmingham, Alabama?

“Differences like this create important opportunities for research and learning within the weather enterprise,” say Joseph T. Ripberger and colleagues, who describe the weather demographics tool in a recently published Bulletin of the American Meteorological Society article. “The online tool—the Severe Weather and Society Dashboard (WxDash)—is meant to provide this opportunity.”

For example, in one key set of metrics, the WxDash website looks at survey data on how well people receive and pay attention to tornado warnings (reception), how well they understand that information (both “subjective” comprehension—what people think they know—and “objective” comprehension—what they actually know), and response to tornado warnings.
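To make the percentile framing concrete, here is a small sketch of how an "average person percentile" style metric can be computed; this is our illustration with invented scores, not the WxDash code itself:

```python
# Hypothetical sketch of an "average person percentile" (APP): the mean
# national percentile rank of survey respondents in one county warning
# area (CWA). All scores below are invented for illustration.

national_scores = [40, 45, 50, 55, 60, 65, 70, 75, 80, 85]  # national sample
cwa_scores = [60, 70, 80]                                   # one CWA's sample

def percentile_rank(score, population):
    """Percent of the population scoring strictly below `score`."""
    return 100.0 * sum(p < score for p in population) / len(population)

# Average the national percentile rank of each respondent in the CWA
app = sum(percentile_rank(s, national_scores) for s in cwa_scores) / len(cwa_scores)
print(round(app))  # an APP of 60: the CWA's average adult outscores 60% nationally
```

An APP above 50 means the area's typical adult outperforms the national median on that scale, matching how the BAMS figure caption interprets the scores.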

From the BAMS article, a figure showing knowledge and response to average person percentile (APP) estimates of tornado warning reception, subjective comprehension, objective comprehension, and response by county warning area (CWA). The inset plots indicate the frequency distribution of APP estimates across CWAs. These estimates compare the average percentile of all adults who live in a CWA to the distribution of all adults across the country. For example, an APP estimate of 62 indicates that, on average, adults in that CWA score higher than 62% of adults nationally. The range of APP scores is wide. CWAs range from 38 to 61 on the reception scale, 32 to 69 on the subjective comprehension scale, and 37 to 60 on the objective comprehension scale. Response scores vary less. Not surprisingly, all categories broadly reflect the higher frequency of tornadoes in middle and southeastern CWAs.

 

WxDash combines U.S. Census data with an annual Severe Weather and Society Survey (Wx Survey) by the University of Oklahoma Center for Risk and Crisis Management. The database then “downscales” the broader scale information to the local level, in a demographic equivalent to the way large scale climate models downscale to useful information on regional scales.
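The downscaling idea can be pictured with a simple poststratification-style sketch: survey estimates for demographic groups are reweighted by each county's Census makeup to produce a local estimate. This is our illustration with invented groups and numbers, not the actual WxDash methodology, which is detailed in the BAMS article:

```python
# Hypothetical illustration of demographic "downscaling": national survey
# means by demographic group, reweighted by each county's group shares.
# All group names and numbers are invented.

# Mean warning-comprehension score by age group (from a national survey)
group_scores = {"18-34": 0.52, "35-64": 0.61, "65+": 0.55}

# Share of each group in two hypothetical counties (from Census data)
county_shares = {
    "County A": {"18-34": 0.40, "35-64": 0.45, "65+": 0.15},
    "County B": {"18-34": 0.20, "35-64": 0.50, "65+": 0.30},
}

def downscale(shares, scores):
    """Weighted average of group-level scores by local population shares."""
    return sum(shares[g] * scores[g] for g in scores)

for county, shares in county_shares.items():
    print(county, round(downscale(shares, group_scores), 3))
    # County A 0.565, County B 0.574
```

The same survey data thus yields different local estimates purely because the counties' demographic compositions differ, which is the essence of the downscaling step.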

The site also provides information on public trust in weather information sources, perceptions about the efficacy of protective action, belief in a variety of tornado myths, and other weather-related factors that can then be studied in light of regional and demographic factors.

Some of the key findings seen in the database:

  • Men and women demonstrate roughly comparable levels of reception, objective comprehension, and response, but men have more confidence in subjective warning comprehension than women.
  • Tornado climatology has a relatively strong effect on tornado warning reception and comprehension, but little effect on warning response.
  • The findings suggest that geography, and the community differences that overlap with geographic boundaries, likely exert more direct influence on warning reception and comprehension than on response.

Even the relatively expected relation of severe weather climatology to severe weather understanding is problematic, Ripberger and colleagues write.

Tornadoes are possible almost everywhere in the US and people who live on the coasts can move—both temporarily and permanently—throughout the country. These factors prompt some concern about the low levels of reception and comprehension in some communities, especially those in the west.

In addition to interacting with these data, you can download one of the calculated databases for community-scale information, the raw survey data, and the code necessary to reproduce the calculations.

The idea is that social scientists can dig in and figure out why what we know about weather isn’t nearly as closely correlated with what we experience as we might think. The hope is an improvement in public education and risk communication strategies related to severe weather.


North American meteorologists, welcome to the snow climate of western Japan. Every year in winter, lake effect-like snow events bury coastal cities in northern and central Japan under 20-30 feet of snow. Above is the “snow corridor” experienced each spring when the Tateyama Kurobe Alpine Route through the Hida Mountains reopens, revealing the season’s snows in its towering walls. The Hida Mountains, where upwards of 512 inches of snow on average accumulates each winter, are known as the northern Japanese Alps.

The tremendous snow accumulations largely occur from December to February during the East Asian winter monsoon when sea-effect snowbands form behind frequent cold outbreaks. But their snowfall isn’t just pretty to look at and play in — extreme snowfalls combined with dense populations in cities adjacent to the Sea of Japan such as Sapporo (pop. 1.95 million) are public safety hazards, turning exceptionally deadly every year. On average 100 people die and four times that number are injured from snow and ice in Japan, not only from snow removal but also from “roofalanches” — masses of snow sliding off roofs onto people.

Similar to their counterparts downwind of North America’s Great Lakes, the Sea of Japan snowbands invite research from Japanese scientists and those in many other locales where bodies of water enhance snowfall over populated lands. A new paper in BAMS by Jim Steenburgh (University of Utah) et al. not only highlights what’s known about the Japanese snow events but also is designed to “stimulate increased collaborations between sea- and lake-effect researchers and forecasters in North America, Japan, East Asia, and other regions of the world” who can collectively realize the “significant potential to advance our understanding and prediction of sea- and lake-effect precipitation.”


Monitoring the atmosphere by satellite has come a long, long way technologically since TIROS sent back its first snapshots of Earth in 1960. Along with marked advances in spectral, spatial, temporal, and radiometric resolution of state-of-the-art instrumentation, however, come copious volumes of new data as well as unique challenges with how to view it all.

We as users are hardly up to the task alone — there’s insufficient time, especially for operational forecasters. The solution: blended imagery. In short, the seamless display of multivariate atmospheric information gleaned from today’s advanced satellites.

Value-added imagery from NOAA’s GOES-R satellite series, for example, isn’t just useful; at its best it’s “a balance of science and art,” report Steven Miller (Colorado State University) and colleagues in a new paper in the Journal of Atmospheric and Oceanic Technology. Such multidimensional blending of key weather parameters into visually intuitive products maximizes the information available to users.

To illustrate this, the authors applied the blending technique to GOES-16’s new GEOCOLOR imagery. Below is an example of a “sandwich product” in which (a) color-enhanced infrared imagery with a transparency of 70% is superimposed upon (b) visible reflectance imagery of thunderstorms over Texas, Louisiana, and Arkansas at 2319 UTC April 6, 2018, to dynamically (c) blend the images.


This “partial transparency” blending technique highlights the overshooting cloud tops in the convection, enabling forecasters to pinpoint the most intense cells. It’s just one of a number of methods the paper highlights to simultaneously display satellite information and thereby present valuable insight.
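The blend itself amounts to a per-pixel convex combination of the two images. A minimal sketch with synthetic arrays standing in for real GOES-16 RGB fields (this is our illustration, not the authors' code) might look like:

```python
import numpy as np

# Sketch of the "partial transparency" sandwich blend: color-enhanced IR
# at 70% transparency (30% opacity) laid over visible reflectance.
# The arrays here are synthetic stand-ins for real GOES-16 RGB imagery.
alpha = 0.30  # opacity of the IR layer (70% transparent)

vis = np.random.rand(4, 4, 3)  # visible reflectance as RGB in [0, 1]
ir = np.random.rand(4, 4, 3)   # color-enhanced IR as RGB in [0, 1]

# Per-pixel linear blend; values stay in [0, 1] because the weights sum to 1
blended = alpha * ir + (1.0 - alpha) * vis
```

Because the weights sum to one, the result remains a valid image, letting the cold, enhanced cloud tops show through without erasing the visible-band texture underneath.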

The technique, Miller et al. state, blurs the line between qualitative imagery users want and quantitative products they need.

To the trained human analyst, capable of drawing context from such value-added imagery, combining the best of both worlds provides a powerful new paradigm for working with the new generation of information-rich satellites.


The success of the D-Day Invasion of Normandy was due in part to one of history’s most famous weather forecasts, but new research shows this scientific success resulted more from luck than skill. Oft-neglected historical documentation, including audio files of top-secret phone calls, shows the forecasters were experiencing a situation still researched and practiced today: “decision-making under meteorological uncertainty.”

New research recently published in BAMS into that weather forecast for June 6, 1944, which enabled the Allies in World War II to gain a foothold in Europe, answers questions about three popular perceptions: Were the forecasts, which predicted a break in the weather, really that good? Were the German meteorologists so ill-informed that they missed the weather break? And was the American analog system for prediction as great as claimed, and better than what the Germans had?

The “alleged” weather break

An expected ridge and fair weather between two areas of low pressure, one departing and one arriving over the area, didn’t materialize. The departing low instead lingered, creating a lull that improved visibility and lifted the cloud ceiling, but it didn’t slow winds much. They blew at Force 4-5 (~13-24 mph), creating very choppy seas that sickened many troops prior to the invasion.

Synoptic analyses at 00 UTC from 5 to 8 June 1944. The low that was supposed to move northeast to southern Norway remained over the North Sea for some days. On 6 and 8 June the observed winds in the Channel were force 4 and occasionally force 5.

 

A blown German forecast?

Because the invasion came as a complete surprise to the Germans, it has been surmised that their weather forecast for June 6 had to be bad. Yet German forecasters prior to the war were the best at “extended” forecasts, and their synoptic maps and forecast for that day were more realistic than the Allies’, with less optimistic speculation about any break in the weather.

The Germans’ European-Atlantic map at 00 UTC June 6, 1944, where the analysis over the North Atlantic appears to be based not on observations but on intercepted American coded analyses.

 

A historically debated forecast

The analog weather prediction system employed by the Allies for the invasion was claimed by its creators to have correctly identified the weather break. But historical analysis and review doesn’t bear this out. What it does find, though, is that the system correctly identified a transition from zonal to meridional flow, which delivered the break the Allies needed for success. History’s finding: the forecast was “overoptimistic.”

The 1984 Fort Ord, California, AMS meeting about the D-Day forecast got coverage in the local Monterey newspapers. The invasion was said to have occurred in a “break” or a period of a “brief lull” in the weather. The American forecasting group was led by Lt. Col. (Dr.) Irving Krick of Caltech. The president of the Naval Postgraduate School, Robert Allen, Jr., at the time an Air Force officer conducting high-level weather briefings at the Pentagon, also spoke at the meeting.

 

As a lesson learned from this most famous of weather forecasts, the paper’s author, Anders Persson of Sweden’s Uppsala University, concludes:

It was 75[+] years ago and the observational coverage has improved tremendously since then, both qualitatively and quantitatively. Our understanding of the atmosphere is much better, and the forecast methods have reached a standard that could hardly have been dreamt of in 1944. However, there’s one element that has a familiar ring to it and is of great interest today. That is when Air Marshal Tedder [Deputy Supreme Commander of the Invasion under General Eisenhower] asks about an assessment of the confidence in the forecast he has just heard … This illustrates that the D-Day forecast is a significant early example of decision-making under meteorological uncertainty.


by Mary Glackin, AMS President

In normal times, our thousands of AMS professionals and colleagues are completely dedicated to helping people make the best possible weather-, water-, and climate-related decisions. In this COVID-19 period, we’re not just providing critical information; we are also receiving it. We are each of us following guidance from public health experts and local officials so that we can keep ourselves, our families, and our friends safe and well. We’re joining in the national and global efforts to “flatten the curve.”

We all continue to work, but these duties are now competing with new ones: caring for children who would normally be in school, searching for basic necessities that would routinely be in stock on supermarket shelves, protecting elderly friends and family members. With campuses and laboratories shut down, professors and students have scrambled to adjust to online teaching and to reimagine plans for field experiments. Nonetheless, critical weather and hydrologic services are being provided with sharp eyes for spring floods and convective weather. Preparations for the coming hurricane season are moving forward.

COVID-19 doesn’t “slightly tweak” the task of building a Weather-Ready Nation; it completely rearranges the landscape. Goals of shelter-in-place and evacuation have to be reconfigured for a world where we are advised by health experts to maintain physical separation from others—more than a challenge in a communal evacuation center.

COVID-19 provides a unique learning opportunity for all of us in the Enterprise. We can experience firsthand how even the best-intended top-down risk communication can sound to someone in harm’s way—and step up our own communications accordingly.

Finally, it’s worth noting as AMS embarks on its second century that our founding coincided with the 1918-19 influenza pandemic. The link between weather, water, climate, and public health (enshrined in the AMS seal) has been integral to building a sustainable and resilient world, and it will likely play a larger role in the future.

Thank you for maintaining essential services and supporting research and education during such a critical, difficult time. Stay well, and stay safe—and at the same time, stay focused on our contributions to a safer, healthier world.


by Keith L. Seitter, CCM, AMS Executive Director

One of the AMS Core Values is: “We believe that a diverse, inclusive, and respectful community is essential for our science.”

AMS lives this value, which is articulated in the Centennial Update to the AMS Strategic Goals. We work to foster a culture that celebrates our diversity, strives for equity in all we do, and encourages inclusion across all activities so that everyone can experience a sense of belonging in the Society.

To formalize these efforts and provide a clearer path for providing resources toward them, the Council approved the creation of a new entity in AMS in fall 2019. At its meeting this past January, the Council approved the terms of reference for this new component of the Society’s structure and that Dr. Melissa Burt would serve as its first chair. This Culture and Inclusion Cabinet (CIC) has the following charge:

To accelerate the integration of a culture of inclusion, belonging, diversity, equity, and accessibility across the AMS and evaluate and assess progress towards culture and inclusion strategic goals within the Society. Meaningful integration into all areas and components of the AMS will require time and sustained effort. Fully integrating diversity, equity, inclusion, and belonging (DEIB) will result in an organizational culture that is accessible, advances science, serves society, and is responsive to social justice.

The Council designates this new body as a “Cabinet” to reinforce that it is not quite like any of the other entities making up the volunteer structure of the Society (council, commission, board, committee, task force, etc.). The CIC will play a unique role and therefore was given a unique name.

The CIC sits at the highest level of the organizational structure for AMS save the Council itself, to which it reports directly. Being at this level it can more readily ensure that issues of diversity, equity, inclusion, accessibility, social justice, and belonging are addressed throughout all AMS programs and activities.

The CIC does not replace any of the other components of the Society that work in these arenas—most notably the Board on Women and Minorities (BWM), which has a long record of addressing equity and inclusion issues in AMS. The BWM will continue to oversee specific programs aimed at diversity, equity, and inclusion, and will likely expand its role in AMS programs as the CIC helps integrate those efforts more broadly in the Society.

AMS has a strong record of addressing diversity and equity issues and a culture of inclusivity that other organizations could learn from. The creation of the CIC builds on those strengths and puts AMS in a position of leadership among scientific organizations in elevating these issues to the highest levels so that they can be threaded through every program in foundational ways.

For many of us, the sense of belonging in AMS is an important part of what makes the Society so special, and we want everyone in the community to feel that sense of belonging as an intrinsic aspect of the AMS culture. I am confident the new Culture and Inclusion Cabinet will take us there and will assist our entire community in creating an even more inclusive environment—strengthening our enterprise in the process.


Undergrads at Penn State recently took to their cellphones to mingle with and snap pics of tiny snowflakes to reinforce meteorological concepts. The class, called “Snowflake Selfies” and described in a new paper in BAMS, was designed to use low-cost, low-tech methods that can be widely adapted at other institutions to engage students in hands-on field research.

In addition to photographing snow crystals, students measured snowfall amounts and snow-to-liquid ratios, and then gained meteorological insight into the observations using radar data and thermodynamic soundings. The goal of the course was to reinforce concepts from their other undergraduate meteorology courses, such as atmospheric thermodynamics, cloud physics, and radar and mesoscale meteorology.
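For readers unfamiliar with the measurement, a snow-to-liquid ratio is simply the depth of new snow divided by its melted liquid-water equivalent. A quick sketch with hypothetical observations:

```python
# Snow-to-liquid ratio (SLR): depth of new snow divided by the liquid
# water it yields when melted. The observations below are hypothetical.
snow_depth_in = 10.0    # inches of new snow measured on a snow board
liquid_equiv_in = 0.8   # inches of water after melting the snow core

slr = snow_depth_in / liquid_equiv_in
print(f"SLR = {slr:.1f}:1")  # SLR = 12.5:1
```

Higher ratios indicate fluffier, drier snow; the oft-quoted 10:1 rule of thumb is only an average, which is part of what students explore by relating their measurements to soundings and radar data.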

As a writing-intensive course at Penn State that meets the communication skills requirement of the AMS guidance for a Bachelor’s Degree in Atmospheric Science, “Snowflake Selfies” also was designed to help students communicate meteorological science. Students shared their observations with the local National Weather Service office in State College, wrote up their work in term papers, and presented their pics and findings to the class.

Snow crystal photographs taken by students in the "Snowflake Selfies" class.

 

Of course, to have such a class you need snow, and “the relative lack of snowfall events during the observational period” in winter 2018 was definitely a challenge for students, the BAMS paper states. Pennsylvania’s long winters often offer many opportunities to photograph snow, but the course creators caution that a longer observational period may be needed in case nature doesn’t cooperate. It also would give students enough time to closely observe snowflakes while juggling their other classes and activities.

A survey conducted at the end of the class found that “Snowflake Selfies” was well received by students, engaging them and encouraging their introduction to field science. And they “strongly agreed [it] helped reinforce their understanding of cloud physics and physical meteorology compared to” a previous such course where students designed, built, and deployed their own 3-D printed rain gauges to measure precipitation.

Actually, that previous course sounds like a lot of fun, too!
