New Map Peeks Under Antarctic Ice

With data compiled from a number of satellite, aircraft, and surface-based surveys, the recently completed Bedmap2 project comprises three datasets that map the ice-covered continent of Antarctica: surface elevation, ice thickness, and bedrock topography. The new dataset updates 2001’s original Bedmap compilation with tighter grid spacing, millions of additional data points, and extensive use of GPS data—enhancements that have improved the dataset’s resolution, coverage, and precision. For example, it depicts many surface and sub-ice features that were too small to be visible in the original Bedmap. Data from Bedmap2 reveal that Antarctica’s average bedrock depth, deepest point, and ice thickness estimates are all greater than those recorded in the original Bedmap.
As outlined in the NASA video below, the updated information obtained from Bedmap2 should enhance currently limited data on the continent’s ice thickness and what is beneath the ice, which could help researchers better understand how Antarctica will respond to a changing climate. It also “will be an important resource for the next generation of ice sheet modelers, physical oceanographers, and structural geologists,” according to the British Antarctic Survey’s Peter Fretwell, lead author of a recently published article on Bedmap2 that appeared in The Cryosphere. The article and Bedmap2’s data can be accessed here.
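For grounded ice, the three Bedmap2 quantities fit together through a simple identity: bed elevation equals surface elevation minus ice thickness. The sketch below, in Python, illustrates that bookkeeping with toy grids; the values, shapes, and variable names are illustrative only, not the actual Bedmap2 rasters.

```python
import numpy as np

# Toy 3x3 elevation grids in meters (illustrative values only).
surface_elev = np.array([[2800., 2750., 2600.],
                         [2900., 2820., 2700.],
                         [3000., 2950., 2850.]])
ice_thickness = np.array([[2500., 2600., 2400.],
                          [2700., 2750., 2650.],
                          [2600., 2800., 2900.]])

# Where the ice is grounded, bed elevation = surface - thickness.
bed_elev = surface_elev - ice_thickness

print(f"Mean bed elevation: {bed_elev.mean():.0f} m")
print(f"Deepest bed point:  {bed_elev.min():.0f} m")
print(f"Mean ice thickness: {ice_thickness.mean():.0f} m")
```

The same subtraction, applied to the real kilometer-scale grids, is what yields headline numbers like the continent’s mean bed depth and deepest point.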

Remembering the Battle . . . and the Weather

As Gettysburg National Military Park commemorates the 150th anniversary of the momentous and bloody battle fought there (it took place July 1-3, 1863), we can look back and examine the role weather played in those three days of conflict (and AccuWeather already has, in this interesting post). This is possible thanks in large part to a local man, Rev. Dr. Michael Jacobs, who took weather observations three times a day, even as the fighting raged around him. His notes, which can be seen here, show that temperatures were slightly below average for all three days, and that cloud cover was considerable much of the time. This benefited the soldiers, who would have been most uncomfortable in their wool uniforms during extreme heat and/or humidity. Late on July 3, a thunderstorm broke out, and it is a testament to the ferocity of the battle that Jacobs noted the thunder “seemed tame” after the nonstop cacophony of gunfire that echoed throughout that afternoon.
As the battle wound down, the weather intensified: on July 4, the day after the combat had ended, rain fell throughout the day (a total of 1.39 inches, according to Jacobs). The inclement weather turned out to be significant, as some wounded soldiers were still lying on the battlefield; tragically, those in low-lying areas drowned when the rainfall caused Plum Run to overflow its banks. The rains also added insult to injury for the retreating Confederate army–the dirt roads they traveled rapidly became treacherous, and as they moved southward they were trapped for a time on the north side of the Potomac after the river swelled, making it temporarily impassable. They weren’t able to cross until July 13.

The Other Science for Broadcast Meteorologists: Psychology!

The agenda of the 41st AMS Broadcast Meteorology Conference, held today through Friday in Nashville, covers a wide range of topics in weather and meteorology. Not surprisingly, there’s a lot about psychology, too–including one presentation advising on-air meteorologists on “How to Develop Alligator Skin in order to ‘Survive.'”
The following column, by Rob Haswell, a Certified Broadcast Meteorologist in Milwaukee, delves into the ins and outs of that second science all too familiar to weathercasters.

Let me start by saying I love being on TV. I love what I do and everything that comes along with it–even the bad stuff. Sure, I’d like to make more money, I wish I didn’t work such crazy hours, and sometimes I’d like to be able to shop for groceries without dealing with that guy who has to grab my arm and pronounce, “Hey, you’re that guy on TV!” But otherwise, I love broadcast meteorology.

With that on the record, I have to say there are times when I wish I could just forecast and not worry about how the forecast is heard by the viewing and listening public. How a forecast is received depends on numerous variables outside the realm of atmospheric science. Viewers have a form of selective listening that causes them to hear what they want–or not hear you at all. They want specifics but demand we generalize everything, and they suffer from severe long-term memory loss that causes them to relate only to what is happening in the present or the very near past and future.
Perhaps the biggest challenge to any broadcast meteorologist is that of selective listening on the part of viewers or radio listeners. It’s similar to the selective hearing that children have: they can’t hear their own name shouted from the front porch but can make out the bells of an ice cream truck from miles away. Let’s take, for example, a viewer who has weekend plans to play golf or attend a wedding. When a forecaster says there is a chance of rain on Saturday, a pessimist will hear, “Your golf game will be rained out,” while an optimist (or perhaps someone in denial) will hear, “Your wedding will be beautiful and dry.” None of that changes what the actual chance of rain is for that area.
The human ear is capable of taking in everything we’re saying, but the human brain tends to lean toward the dramatic. So when a forecaster calls for 7-15 centimeters of snow in an area, the viewer will typically pick up on the higher number and forget the lower one. When the storm passes with an average of 7-9 centimeters, the viewer accuses us of exaggerating for ratings. This becomes an even bigger challenge when a broadcaster covers a large area with a variety of microclimates, or one that is affected by systems and fronts differently. If I feel a storm will leave 10-15 centimeters in our northern coverage area, I will tell viewers to expect that snow north of a major east-to-west route just south of the snowfall forecast area. Nonetheless, a viewer well south of that route might just hear “15 centimeters” and then cry foul when his or her area does not get that much snow! Then, when we explain to that viewer that his or her area was never forecast to get that much snow, they might accuse us of “massaging our numbers” or simply deny we ever said such a thing. Are they delusional? No. They heard what they heard, and that is their own reality.
Of course we could combat this by providing a more detailed forecast. In some ways the internet enables us to do that. We can put more detailed information online than we can present on the air. However, despite viewers’ demand for accuracy, they also demand brevity and generalities. Yes, a growing group of weather junkies love it when I break out the water vapor imagery or talk about vorticity, but the much larger group simply wants to know if it is going to rain or snow. We have to cater to the crowd that wants to know if it is sweater weather or t-shirt weather. We must remember that we are only part of a program whose main goal is to attract a large audience, not necessarily to teach the viewer about the intricacies of the atmosphere. We can’t expect a television or radio station to devote enough time for a complete, in-depth forecast discussion in each and every quarter hour.
So, viewers demand that I tell them the amount of snow in their driveway to the centimeter or the exact high temperature to within a degree for their backyard, but at the same time they want me to tell them in a brief, generalized manner that doesn’t overly tax their brains. In a sense, they are their own worst enemies if they want a more accurate forecast.
Lastly, the broadcast meteorologist is up against the viewer’s memory. Today’s viewer lives in the now. Our fathers’ and grandfathers’ generations were more connected to the world around them in their daily lives. They seemed to remember what last winter brought and what an average spring is like. That was partly because families weren’t as mobile then. Nowadays it’s common to move across the country or across the world, and as a result people don’t know their local climate. Moreover, the average person–in particular those from post-GenX generations–has a short attention span, which leads to confusion about climate, particularly when discussing climate change.
Take, for example, the colder- and snowier-than-average February and March in much of the Great Lakes region. Due to some late-season snowfalls and cold snaps, viewers were convinced that this was the harshest winter on record. They were incapable of remembering the well-above-average December or the nearly snowless January. This climate amnesia is a “what have you done for me lately?” mindset.
This memory problem—this “now” focus—reaches a fever pitch on the issue of global climate change, which, sadly, is so contentious that very few on-air meteorologists will even touch it publicly. If a few days in a row are unseasonably cold, it won’t be long before the broadcast meteorologist has to contend with e-mails or Facebook posts snarking, “Where’s Global Warming now?!” Or if we manage, as we did here in Wisconsin, to have a couple of below-average months back to back, you’ll hear calls of “Global Warming Fraud” because viewers have forgotten the numerous consecutive months of above-average temperatures, not to mention the deadly heat and extensive drought of the previous summer.
There you have it. The broadcast meteorologist is up against not only the scientific challenges of forecasting but also the challenges of psychology. We’re speaking to an audience of selective listeners who hear what they want to hear. A group of folks who want spot-on accuracy delivered in broad strokes and witty banter. And an audience that seems to relate only to what is happening in the world around them at this very moment.
So do we give up and just assume we’ll never get through to them? No. These are just challenges, not insurmountable obstacles. Broadcast meteorologists need to use all the tools at their disposal to provide specifics and focus their audience on what they need to know. Use Twitter and Facebook to engage the viewer and keep the forecast up to the minute. Take advantage of the internet to post more detailed data for those who crave it, and use the on-air portion of our job to create more weather junkies who will consume that data. We need to keep it simple while not falling into the traps of oversimplification. We need to use climate as a history lesson for the viewer, reminding them over and over what the world outside has been like, so as to put today’s weather in context.
Lastly, we need to grow a thick skin. For no matter how much we work at educating, informing, and entertaining, some viewers will always revel in what they see as our shortcomings. Remember the old saying, “Weep for the weather forecaster. When he’s wrong, no one forgets. When he’s right, no one remembers.”

Pubs Department Gets the Word(s) Out

At last month’s meeting of the AMS Publications Commission–which mostly comprises the Chief Editors of the Society’s scientific journals–many of the attendees were concerned that their fellow AMS members might not be aware of the many recent notable achievements by AMS Publications. So with that in mind, here’s a quick list of some of those accomplishments over the past year:

Of course, all of this was done in addition to the continued publication of AMS’s journals, six of which were ranked in the top 20 for impact factor in the most recent edition of the Thomson-ISI rankings.
And out of everything discussed at the meeting, the Publications Commission was perhaps most adamant about publicizing the rapidly improving production times of those journals. Production times for AMS journals (the number of days between when a paper is accepted and when it is published in final form) have been declining for years: in May of this year the average production time across all journals was just over 150 days, down from around 280 days in January 2008. That improvement has been accomplished despite a continuing rise in article submissions, which reached an all-time high of 2,999 in 2012. AMS journals published almost 27,000 pages last year.
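The metric itself is simple date arithmetic. A minimal sketch, with hypothetical dates:

```python
from datetime import date

def production_days(accepted: date, published: date) -> int:
    """Days from a paper's acceptance to its final publication."""
    return (published - accepted).days

# Hypothetical paper: accepted in early January, published in June.
print(production_days(date(2013, 1, 7), date(2013, 6, 10)))  # 154
```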
“Publishing in AMS journals now is much faster, more efficient, and streamlined than even just a few years ago, and the word seems to be getting out; submissions are at an all-time high and continuing to increase,” says AMS Director of Publications Ken Heideman. “Our goal is not just to get you to publish with us but to keep you publishing with us!”
At the meeting, Heideman underscored the ongoing commitment of his staff to continually reduce production times, and highlighted a number of initiatives for the department in the upcoming year.
Keep an eye out for an article in BAMS later this year with more detailed highlights from the Publications Commission meeting.

A Scientist's Scientist

Joseph Farman–the man who found the ozone hole–had a very straightforward, unglamorous way of describing the work of  a scientist:

Science is thinking you know how things work and so you make something work and it either works as you think it does or it doesn’t work as you think it does and now you move on.

Farman, who passed away last month at the age of 82, reported the existence of the ozone hole in a 1985 paper based on in situ measurements made with Brian Gardiner and Joe Shanklin in Antarctica.  Despite the renown that followed this discovery, Farman’s legacy will stand–as he wished–on a dogged ability to follow his simple model of research at the highest level.
An employee of the British Antarctic Survey from 1956 until his retirement in 1990, Farman ventured to Antarctica at the beginning of his career and studied the atmosphere over that continent for 25 years, assigning other scientists to continue measurements after he returned to Britain in 1959. His superiors questioned his indefatigable efforts to compile ground-based ozone readings–after all, NASA satellites were already monitoring the ozone over Antarctica. Farman told the BBC:

The long-term monitoring of the environment is a very difficult subject. There are so many things you can monitor. And basically it’s quite expensive to do it. And, when nothing much was happening in the environmental field, all the politicians and funding agencies completely lost interest in it. And there was a huge struggle to keep going. And in fact we could have been closed down with our ozone measurements the year before we actually published our paper.

But Farman was a strict proponent of the simple scientific act of collecting data–“just doing a little job, and persevering at it,” as described by Sharon Roan, author of Ozone Crisis: The 15-Year Evolution of a Sudden Global Emergency. This commitment to scientific principles made him “a model scientist,” according to Roan.
Faithful dedication to the scientific process yielded momentous results. Isn’t that how it’s supposed to work?
This New York Times obituary tells a more complete story of Farman’s achievements, and for those who really want to delve into his life and sometimes controversial views, there is this interview collected by the British Library (audio version available here).

Twister That Killed 4 Storm Chasers Was Widest Ever

The tornado that killed 18 people in and around El Reno, Oklahoma, on Friday, including three professional tornado researchers and an amateur storm chaser, was a record 2.6 miles wide, according to the National Weather Service (NWS).

[Image: Path of the May 31, 2013 tornado in El Reno, Oklahoma. (Source: NWS Forecast Office, Norman, Oklahoma)]
The NWS in Norman, Oklahoma posted the image above to its Facebook page Tuesday. In addition to being the widest tornado in U.S. history, the El Reno tornado was also rated an EF-5 with winds “well over 200 mph,” the Norman NWS stated on Facebook.
According to a blog post by Jason Samenow of the Washington Post’s Capital Weather Gang, the previous record width for a tornado was 2.5 miles, belonging to the Wilber-Hallam, Nebraska, twister of May 22, 2004. That storm was rated F4 in Hallam, south of Lincoln, where it damaged or destroyed about 95 percent of the village of 200 people, killing one person and injuring 37.
Friday’s tornado in El Reno, a small city just west of Oklahoma City, was upgraded to an EF-5 on the 0-5 Enhanced Fujita Scale not because of its size but because of radar-measured winds of nearly 300 mph in its enormous vortex.
According to Samenow’s post, radar teams headed by renowned tornado researchers Howard Bluestein of the University of Oklahoma and Josh Wurman of the Center for Severe Weather Research were near the El Reno tornado gathering data. Bluestein said two of his graduate students measured winds of 296 mph in the tornado’s funnel, while Wurman’s team observed winds of 246-258 mph. Both teams were scanning the tornado with mobile Doppler radars, but from different locations.
The violent and deadly El Reno tornado occurred less than two weeks after, and a mere 20 miles from, the EF-5 tornado that devastated Moore, Oklahoma, on May 20. Two dozen people lost their lives in that tornado. It brought hard luck and hard lessons back to Moore, crossing the path of the infamous F5 tornado of May 3, 1999. Wurman’s Doppler on Wheels radar clocked winds in the 1999 Moore tornado at over 300 mph.
Over the weekend, numerous media outlets (KFOR-TV, CNN, The Weather Channel), cable TV channel websites (NatGeo, The Discovery Channel), and blog posts (Capital Weather Gang, Weather Underground) covered the shocking news of the first-ever deaths of storm chasers by a tornado. Tim Samaras, a professional storm chaser and tornado researcher for nearly 30 years and an Associate Member of the AMS, was killed along with his photographer son Paul and researcher Carl Young when their chase vehicle was violently thrown and mangled by the El Reno tornado. The Daily Oklahoman reported Tuesday that amateur storm chaser Richard Charles Henderson was killed the same way. His pickup truck was overrun by the tornado winds moments after he sent a friend a cellphone photo of the El Reno tornado.

A New Metric for Hurricane Destruction Potential

Hurricanes Katrina (2005), Ike (2008), and Sandy (2012) have proven that the Saffir-Simpson Scale is inadequate for expressing hurricane destructiveness. This is especially true for storm surge, which the original Category 1-5 scale, a rating of wind damage potential, wasn’t designed to classify.
As another Atlantic hurricane season begins, a study now accepted for publication in Monthly Weather Review introduces a new metric for measuring the destructive potential of tropical cyclones: Track Integrated Kinetic Energy. TIKE builds on the earlier concept of Integrated Kinetic Energy (IKE), which represents destructive potential by integrating the kinetic energy of a storm’s sustained wind field quadrant by quadrant. Summing the IKE values at each point along the tropical cyclone’s track over its entire lifecycle more accurately determines the potential for destruction, the study concludes.
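To make the bookkeeping concrete, here is a minimal sketch of that summation, assuming idealized 6-hourly advisories that report the outer radius of a fixed wind threshold in each of the four storm quadrants. The uniform-wind, quarter-disk geometry and the track values are simplifications for illustration, not the authors’ actual method.

```python
import math

RHO = 1.15  # near-surface air density, kg m^-3

def ike_snapshot(quadrant_radii_km, wind_ms, depth_m=1.0):
    """IKE (joules) at a single advisory time.

    Each of the four storm quadrants (NE, SE, SW, NW) is treated as a
    quarter disk of the given radius over which the threshold sustained
    wind is assumed uniform; a crude stand-in for integrating
    0.5 * rho * U^2 over the volume of the wind field.
    """
    ike = 0.0
    for r_km in quadrant_radii_km:
        quarter_disk_m2 = 0.25 * math.pi * (r_km * 1000.0) ** 2
        ike += 0.5 * RHO * wind_ms ** 2 * quarter_disk_m2 * depth_m
    return ike

def tike(track):
    """Track Integrated Kinetic Energy: IKE summed over the whole track."""
    return sum(ike_snapshot(radii, wind) for radii, wind in track)

# Hypothetical two-point, 6-hourly track: quadrant radii (km) at which
# 18 m/s sustained winds are found.
track = [
    ([220.0, 200.0, 150.0, 170.0], 18.0),
    ([260.0, 240.0, 180.0, 200.0], 18.0),
]
print(f"TIKE = {tike(track):.3e} J")
```

Because IKE weights the entire wind field rather than the peak wind, a sprawling storm of modest intensity can out-score a compact, violent one, which is exactly the property the authors argue a destructive-potential metric needs.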
Additionally, TIKE can be accumulated for all of a tropical cyclone basin’s storms in a given year to create “an important metric of that season,” the authors write in a summary of their research (to appear in an upcoming issue of the Bulletin of the AMS).
Vasu Misra, lead author of the study “The Track Integrated Kinetic Energy of the Atlantic Tropical Cyclones,” adds:

Existing metrics such as Accumulated Cyclone Energy (ACE) or the Power Dissipation Index (PDI) only consider the peak wind in the storm, which is difficult to measure and typically only covers a very small area and contributes little to storm surge and wave damage.  TIKE takes into account the wind forcing over a large area surrounding the storm and is therefore much more reliable as an objective measure of hurricane destructive potential. In effect TIKE accounts for the intensity, duration, size, and structure of the tropical cyclones.

The study by Misra and his colleagues also looks at seasonal, season-to-season, and geographic variations in TIKE. Among its findings:

  • TIKE peaks in September along with hurricane season overall, since that’s when the Atlantic Ocean is warm enough to fuel large and long-lived storms;
  • Very active hurricane seasons such as 2005 may not be the most destructive since some large and powerful hurricanes may be short-lived;
  • Annual variations in TIKE are related to sea surface temperature variations in both the equatorial Pacific (warmer temperatures there correspond to lower TIKE in the Atlantic) and the Atlantic itself (warmer temperatures there correspond to higher TIKE).

The MWR article abstract is open to all readers, while subscribers can read the full Early Online Release on the AMS journals website.

Crowdsourcing the Search for Carbon Dioxide Emissions

According to Arizona State University (ASU) Professor Kevin Gurney, there are approximately 30,000 power plants throughout the world, and collectively they account for close to half of all fossil-fuel CO2 emissions. As a modeler of these emissions, Gurney is trying to learn as much as possible about every one of the plants. Where are they? What kind of fuel does each plant use? How much CO2 does each one release into the atmosphere?
Obtaining this kind of data, however, is a monumental task. There is no worldwide database with all of the power plant information Gurney is looking for, and even after enlisting a number of undergraduate students in his lab to scour Google Earth for the locations of the largest plants, in six months they were able to identify the locations of only 500 across the globe. Realizing that the effort was “like looking for 25,000 needles in a giant haystack,” as Gurney described it, he has now taken another approach by creating an online game that utilizes contributions from the general public to pinpoint the locations of power plants and hopefully quantify the amount of CO2 each releases into the atmosphere.
The project is called Ventus, which is Latin for “wind.” In the game, players are asked for four pieces of information: the location of the power plant (within a few hundred meters), the type of fuel used at the plant, the amount of electricity the plant generates, and the amount of CO2 it emits. Participants can contribute as much information as they have by placing pins on a Google map at the locations of the plants. When the game concludes in 2014, the person who contributed the largest amount of usable information will be declared “Supreme Power Plant Emissions GURU!” and will receive a trophy, as well as become a coauthor on a scientific paper about crowdsourcing in scientific research.
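In data terms, each submission boils down to a map pin plus up to three attributes, with players free to leave fields blank. Here is a minimal sketch of such a record; the class and field names are hypothetical, not the project’s actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlantReport:
    """One crowdsourced power-plant submission (hypothetical schema)."""
    latitude: float                        # pin placed within a few hundred meters
    longitude: float
    fuel_type: Optional[str] = None        # e.g., "coal" or "natural gas"
    generation_mw: Optional[float] = None  # electricity generated
    co2_tonnes_per_year: Optional[float] = None  # estimated CO2 emitted

# A partial report is still useful; players contribute what they know.
report = PlantReport(latitude=33.45, longitude=-112.07, fuel_type="natural gas")
print(report)
```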
“Our logic is that for every power plant in the world, there are probably at least a dozen people who live near it, work at it, or know someone who works at it,” explained ASU’s Darragh O’Keefe, who built the website. “With the proliferation of phones and GPS, it makes it pretty easy to locate things.”
Early response to the game was enthusiastic, with Gurney reporting that people had logged on from almost every country in the world within a day of its mid-May launch.
“I’m always surprised by how fast this type of thing moves around the planet,” he told the Los Angeles Times.

Avoiding Toaster Strudel Exchanges

by Keith L. Seitter, CCM, AMS Executive Director
Those of us who have siblings know that the relationship is built, in part, on needling.
When my two sons, Kevin and Matt, were eight and three years old, respectively, Kevin enjoyed Toaster Strudel® as an occasional breakfast treat. Matt, meanwhile, was just beginning to learn the joys of thoroughly annoying a sibling and was quickly becoming quite good at it.  One weekend morning, the following exchange took place:

Matt (to Kevin): We don’t have any Toaster Strudel.
Kevin:  Yes we do.
Matt:  No we don’t.
Kevin:  We do.  Mom picked some up at the store.
Matt:  No we don’t.
Kevin (becoming annoyed):  Matt, we do have some, I saw mom put it in the freezer!
Matt (remaining completely calm and collected):  No we don’t.
Kevin (stomping to the freezer and pulling the box out):  See!  We do have it!
Matt (still calm and collected):  No we don’t.

At about this point, when Kevin was clearly exasperated, I think I did the parental thing and intervened to calm things down.
I relay this little story because some of the “debate” on climate change seems to be taking on the character of this Toaster Strudel exchange.  And it is far less amusing when it is happening among adults in the media and in the blogosphere.
Frequent readers of my monthly column in BAMS will know that I have long been advocating for open and respectful dialogue on the science of climate change, with all parties recognizing that as scientists it is our job to be skeptical and require solid theory and evidence to back up claims.  We must always be cognizant of how hard it is to keep our intrinsic values from triggering confirmation bias as we review research results or listen to alternative explanations for observational evidence.  Our training as scientists, however, makes it clear that our goal must always be the objective truth — whether it supports our belief system or not.  We must all strive for that level of integrity.
I continue to feel that with open and respectful dialogue on the various complex issues involved in climate change we can achieve greater understanding within our community and less divisiveness.  We have to recognize, however, that “Toaster Strudel exchanges” are not about the evidence.  They have an entirely different goal from finding the objective truth, and failing to recognize that will only lead to frustration.

Hard Luck, Hard Lessons in Moore

While we hope, pray, and provide for survivors of Monday’s tragedy in Moore, Oklahoma, it is impossible to ignore the terrible turn of bad luck this tornado represents.
In 1999 Moore was struck by what has been considered the most powerful tornado ever observed on radar–winds over 300 miles an hour aloft. That was a billion-dollar disaster that claimed 36 lives. Then, in 2003, the same path of destruction was crossed again–fortunately claiming no lives that time, though the powerful twister was rated an F-4 on the old version of the Fujita scale. And this week…unspeakable destruction and loss of loved ones as a mile-wide-plus tornado—an EF-5 on the Enhanced Fujita scale—yet again crossed the benighted path through Moore.
People in tornado country are vulnerable. It should be as simple as that. But the people of Moore are being tested beyond any threshold of resilience we might expect from the odds.
Clearly, lightning can strike in the same place twice. The people of Moore, and of Oklahoma in general, understood that, and have been open to the advice of the weather and climate community. For example, in 2002 the greater Oklahoma City metropolis spent $4.5 million to upgrade and expand its warning siren system. The Moore area alone has a network of 36 sirens and apparently took full advantage of the 16-minute warning lead time. Furthermore, a report from the Oklahoma Climatological Survey’s Andrea Dawn Melvin revealed the terrible vulnerability of schools in the 1999 disaster (she presented these findings at the 2002 AMS Annual Meeting and at the symposium for the one-year anniversary of the tornado, in Norman). In response, school districts in the state have taken her advice to heart, revising emergency plans and in some cases building or reinforcing shelters.
But making good luck out of bad is an unceasing and apparently unforgiving task for meteorologists and citizens alike. Preparations are rarely perfect. Even though Melvin’s report helped spur Oklahoma City and other jurisdictions to create safe rooms in schools, other cities, like Moore, did not go this far in their safety preparations. The two schools damaged in 1999 were rebuilt with safe rooms, but the other schools in the district–including those destroyed on Monday–were not upgraded in this manner.
Furthermore, while the 1999 tornado was among the most thoroughly analyzed severe storms in history, lessons drawn about building safety were not always heeded. A Weather and Forecasting paper by engineer and storm chaser Tim Marshall showed how the damage from the 1999 Moore tornado looked like the work of extreme winds until you examined how the houses had been built. Connections between frame and foundation, and between roof and walls, had been compromised easily because of poor construction practices. Garage doors had been uncommonly weak points, forcing otherwise sound houses to yield to the storm. Marshall concluded, “Houses with F4 or F5 damage likely failed when wind gusts reached F2 on the original F scale.”
And yet, inspecting the rebuilding in Moore 40 days after the disaster, Marshall already found numerous instances of the same construction mistakes being repeated. It was rare for builders to exceed code standards in order to strengthen houses for a repeat tornado.
Unfortunately, nature did repeat. While construction improvements would not prevent the failure of a house in the worst-case scenario, there are many tornadoes in which safety can be improved by using the right kinds of fasteners, improving shelters, updating sirens, and the like. Monday’s disaster goes far beyond the placement of hardware and planks, but that is not the point. These tornadoes are a reminder that all this happened before and can happen again.
Pray that hard luck finally ends for Moore, but remember that we are a community that must keep on learning hard lessons.