Is It Just Us…or Was That the BBQ Talking?

So many conversations at the 2018 AMS Annual Meeting started–and ended–on the same note, and Dakota Smith captures it just right in his “Weather Nerds Assemble” vlog:

Communication is a huge aspect in this field….If a forecast is a hundred percent accurate, but no one understands it, it’s not a useful forecast. That in a nutshell was what this meeting was about.

According to Smith, all that geeked-out conversation amongst 4,200 weather, water, and climate nerds added up to at least these four lessons:

  1. The future is bright: “I talked with so many intelligent, bright, passionate students who are bound to make an impact on our community. Keep up the grind!”
  2. Meteorologists are incredibly strong: The communications workshop reflecting on the experience of Harvey, Irma, and Maria showed that “meteorologists across the country used…love and passion to fuel them through this relentless hurricane season.”
  3. Austin has incredible BBQ.
  4. Meteorologists are awesome. “I already knew this before…we love weather, and we love science!”

The last two are obvious, right? The first two make our day. Share your own take-away points; meanwhile, you owe yourself an injection of enthusiasm from Smith’s vlog, just in case you got lost in the trees since returning home.


Revisiting the Hurricane Town Hall Meeting

The all-star panel at Monday’s special Town Hall Meeting on the 2017 hurricane season provided a riveting discussion of the science, communication, and impacts of Harvey, Irma, and Maria, highlighted by Ada Monzón’s emotional talk about the devastating effects Maria has had on Puerto Rico. The session created a buzz among #AMS2018 attendees.


The entire session has now been posted to the AMS YouTube channel.

With Floods or Baseball, It's a Game of Percentages

Perhaps no one thought that Game 5 of the World Series would end the way it did. It started with two of the game’s best pitchers facing off; a low-scoring duel seemed likely. But the hitters gained the upper hand. In the extra-inning slugfest the score climbed to 13-12.
If you started that game thinking every at-bat was a potential strikeout, and ended it thinking every at-bat was a potential home run, then you’ll understand the findings about human expectations demonstrated in a new study in the AMS journal Weather, Climate, and Society. University of Washington researchers Margaret Grounds, Jared LeClerc, and Susan Joslyn shed light on the way our shifting expectations of flood frequency are based on recent events.
There are two common ways to quantify the likelihood of flooding. One is to give a “return period,” which tells (usually in years) how often a flood of a given magnitude or greater occurs in the historical record. It is an “average recurrence interval,” not a consistent pattern. Yet too many people believe a 10-year return period means flooding happens on schedule, every 10 years, or that every 10-year period will contain exactly one flood that meets or exceeds that water level. The University of Washington authors note that the expression “almost invites this misinterpretation.”
Grounds et al. write:

This misinterpretation may create what we refer to as a ‘‘flood is due’’ effect. People may think that floods are more likely if a flood has not occurred in a span of time approaching the return period. Conversely, if a flood of that magnitude has just occurred, people may think the likelihood of another similar flood is less than what is intended by the expression.

In reality, floods that great can happen more frequently, or less frequently, over a short set of return periods. But in the long haul, the average time between floods of that magnitude or greater will be 10 years.
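A back-of-the-envelope calculation (ours, not the study’s) makes the point concrete: if each year independently carries a 1-in-10 chance of such a flood, the probability of at least one in any given decade is 1 − (1 − 1/10)^10, or about 65 percent. Roughly a third of all decades will pass with no 10-year flood at all, while others will squeeze in two or more.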
One might think the second common method of communicating about floods corrects for this problem: giving something like a batting average, a statistical probability that a flood exceeding a named threshold will occur in any given time period (usually a year). Based on the same numbers as a return period, this statistic helps convey the idea that, in any given year, a flood “might” occur. A 100-year return period translates to a 1% chance of a flood in any given year.
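For readers who want to play with the numbers, here is a minimal sketch of the standard conversion between the two formats (our illustration, not anything from the study, and it assumes flood years are independent):

```python
# Convert a flood return period to the equivalent annual percent chance,
# then show the chance of at least one such flood over a longer horizon.

def annual_chance(return_period_years: float) -> float:
    """Annual exceedance probability implied by a return period."""
    return 1.0 / return_period_years

def chance_within(return_period_years: float, horizon_years: int) -> float:
    """Probability of at least one exceedance in horizon_years,
    treating each year as an independent draw."""
    p = annual_chance(return_period_years)
    return 1.0 - (1.0 - p) ** horizon_years

for T in (10, 100, 500):
    print(f"{T}-year flood: {annual_chance(T):.1%} per year; "
          f"{chance_within(T, 30):.0%} chance in a 30-year mortgage")
# 10-year flood: 10.0% per year; 96% chance in a 30-year mortgage
# 100-year flood: 1.0% per year; 26% chance in a 30-year mortgage
# 500-year flood: 0.2% per year; 6% chance in a 30-year mortgage
```

The long-horizon numbers are the unintuitive ones: a “100-year flood” has roughly a one-in-four chance of striking during a 30-year mortgage.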
Grounds and her colleagues, however, found that people have variable expectations due to recent experience, despite the numbers. The “flood is due” effect is remarkably resilient.
The researchers surveyed 243 college students. Each student was shown just one of the three panels of flood information below for a hypothetical creek in the American West:
[Figure: the three flood-information panels shown to participants]
Each panel labeled flooding differently (panel A showed return periods; panel B, percent chance of flooding; panel C offered no quantification, simply marking levels A, B, and C). The group for each panel was further subdivided into two subgroups: one was told a flood at the 10-year (or 10%, or “A”) marker had occurred last year; the other was told such a flood had last occurred 10 years ago. This detail affected the students’ assessments of the relative likelihood of another flood soon (they marked these assessments proportionally on an unlabeled number line, which the researchers translated into probabilities).
[Figure: participants’ flood-likelihood assessments under each condition]
Notice that the group on the right, which did not deal with quantified risks (merely A-B-C), assessed a higher imminent threat if a flood had occurred last year. This “persistence” effect is as if a home run last inning made another home run seem more likely this inning. The opposite “flood is due” effect appeared, as expected, for the group evaluating return-period statistics. Participants dealing with percentage chances of floods were least prone to either effect.
This first test gave participants a visualization and did not quantify water levels. Realizing both conditions might have thrown the results a curve ball, the researchers ran a second survey with 803 people (recruited through Amazon.com) to control test conditions. The same pattern emerged: an even bigger flood-is-due effect in the group evaluating return periods, a bigger persistence effect in the group with unquantified risks, and neither bias in the group assessing percentage risks.
In general, that A-B-C (“unquantified”) group again showed the highest estimation of flood risk. The group with percentage risk information showed the least overestimation of risk, but still tended to exaggerate this risk on the scales they marked.
Throughout the tests, the researchers also had subjects rank their concern for the hypothetical flood-prone residents, because flood communication ends not at understanding but at the concern that motivates a response. Grounds et al. conclude:

Although percent chance is often thought to be a confusing form of likelihood expression…the evidence reported here suggests that this format conveys the intended likelihood information, without a significant loss in concern, better than the return period or omitting likelihood information altogether.

How concerned these participants felt watching the flood of hits in the World Series…well, that depended on which team they were rooting for.

Persisting Gender Gap for Weathercasters

While the ranks of women weathercasters are growing slowly, they continue to lag behind their male colleagues in job responsibilities.
A new study to be published in the Bulletin of the American Meteorological Society shows that 29% of weathercasters in the 210 U.S. television markets are female. This percentage has roughly doubled since the 1980s.
Despite this progress, only 8% of chief meteorologists are female. Meanwhile, 44% of the women work weekends while 37% work mornings. Only 14% of the women work the widely viewed evening shifts.
[Figure: weathercaster shift assignments by gender]
Last year, 11% of evening or primetime weathercasters were female. That’s only about one-third the percentage reported in a study published in 2008, suggesting a possible decline in female representation in this high-profile broadcast slot.
Educational levels differed by gender as well: 52% of the women had meteorology degrees, compared with 59% of the men.
Alexandra Cranford of WWL-TV in Slidell, Louisiana, the author of the new study, gathered her data in 2016 from TV station websites. She compiled biographical information for 2,040 weathercasters, making the study the largest of its kind. Because it relied on self-reported, publicly available information, Cranford suggests there may be some underrepresentation of various factors: online bios might tend to omit information that makes broadcasters look less qualified or experienced. The bios of males were somewhat less likely to omit their exact position at the station, while the bios of females were somewhat less likely to omit education information.
Previous studies have shown that viewers tend to perceive men as more credible and thus more suited for a variety of broadcast roles, from serious news situations to commercial voice-overs. This perception was one of the motivations for Cranford’s study and may be weighing on hiring and assignment practices for weathercasters. In the article, Cranford notes that past research had indicated,

Constructs including the “weather girl” stereotype and gender-based differences in perceived credibility could potentially contribute to the percentage of women in broadcast meteorology remaining low, especially in chief and evening positions.

Based on the new data, Cranford concludes,

Additional research should explore if factors such as persistent sexism in hiring practices or women’s personal choices could explain why fewer female weathercasters have degrees and why women work weekend shifts while remaining underrepresented in chief meteorologist and evening positions.


Peer Review Week 2017: 4. Shifting Demands of Integrity

For Journal of Hydrometeorology Chief Editor Christa Peters-Lidard, peer review upholds essential standards for a journal. “Maintaining that integrity is very important to me,” she said during an interview at the AMS Publications Commission meeting in Boston, in May. Historically, the burden of integrity has fallen on editors as much as authors and reviewers.

The peer review process we follow at the AMS is an anonymous process. Authors do not know who their reviewers are. So it’s really up to us, as the editors and chief editors, to ensure that the authors have a fair opportunity to not only get reviews that are constructive and not attacking them personally, but also by people that are recognized experts in the field.

Even when reviewers are not experts, “they know enough about it to ask the right questions, and that leads the author to write the arguments and discussion in a way that, in the end, can have more impact because more people can understand it.”
Anonymity has its advantages for upholding integrity, especially in a relatively small field like hydrometeorology. Peters-Lidard pointed out that anonymity helps reviewers state viewpoints honestly and helps authors receive those comments as constructive rather than personal.
“In my experience there have been almost uniformly constructive reviews,” Peters-Lidard said, and that means papers improve during peer review. “Knowing who the authors are, we know what their focus has been, where their blind spots might be, and how we can lead them to recognize the full scope of the processes that might be involved in whatever they’re studying. Ultimately that context helps the reviewer in making the right types of suggestions.”
But the need for integrity is subtly shifting its burden more and more heavily onto authors. Peters-Lidard spoke about the trend in science toward end-to-end transparency in how conclusions are reached. She sees this movement in climate assessment work, for example, where the policy implications are clear. Other peer-reviewed research is developing the same way.

We’re moving in the direction where ultimately you have a repository of code that you deliver with the article….Part of that also relates to a data issue. In the geosciences we speak of ‘provenance,’ where we know not only the source of the data—you know, the satellite or the sensor—but we know which version of the processing was applied, when it was downloaded, and how it was averaged or processed. It’s back to that reproducibility idea a little bit but also there are questions about the statistical methods….We’re moving in this direction but we’re not there yet.

Hear the full interview:

Peer Review Week 2017: 3. Transparency Is Reproducibility

David Kristovich, chief editor of the Journal of Applied Meteorology and Climatology, explains how the AMS peer review process, though somewhat private, ultimately produces the necessary transparency.
Peer review is an unpublished exchange among authors, editors, and reviewers, meant to assure quality in a journal. At the same time, Kristovich noted, peer review is not just something readers trust blindly. Rather, the process is meant to lead to a more fundamental transparency: it produces papers that reveal enough to be reproducible.
“Transparency focuses on the way we tend to approach our science,” Kristovich said during an interview at the AMS Publications Commission meeting in Boston in May. “If someone can repeat all of the steps you took in conducting a study, they should come up with the same answer.”
“The most important part of a paper is to clearly define how you did all the important steps. Why did I choose this method? Why didn’t I do this, instead?”
Transparency also is enhanced by revealing information about potential biases, assumptions, and possible errors. This raises fundamental questions about the limits of information one can include in a paper, to cover every aspect of a research project.
“Studies often take years to complete,” Kristovich pointed out. “Realistically, can you put in every step, everything you were thinking about, every day of the study? The answer is, no you can’t. So a big part of the decision process is, ‘What is relevant to the conclusions I ended up with?’”
The transparency of scientific publishing thus depends on peer review to uphold this standard, even though the process of science is inherently opaque to researchers themselves while they are doing the work.
“The difference between scientific research and development of a product, or doing a homework assignment—thinking about my kids—is that you don’t know what the real answer is,” Kristovich said. Science “changes your thinking as you move along, so at each step you’re learning what steps you should be taking.”
You can hear the entire interview here.

Peer Review Week 2017: 2. What Makes a Good Review?

At the AMS Annual Meeting panel on Peer Review last January, journal editors Tony Broccoli, Carolyn Reynolds, Walt Robinson, and Jeff Rosenfeld spoke about how authors and reviewers together make good reviews happen:
Robinson: If you want good reviews, and by good I mean insightful and constructive and that are going to help you make your paper better, the way to do that is to write a really good paper. Make sure your ducks are in a row before you send it in. You should have read over that and edited it multiple times. I’m going to, at some point in my life, write a self-help book in which the single-word title is “Edit!” because it applies to many parts of life. Have your colleagues—not even the co-authors—look at it. Buy the person in the office next door a beer to look over the paper and get their comment. There may be problems with the science–and none of our science is ever perfect–but if it’s a really well constructed, well formulated, well written paper, that will elicit really good reviews.
The flip side of that is, if the paper is indecipherable, you’ll get a review back saying, “I’m trying to figure this out” with a lot of questions, and often it’s major revisions. (We don’t reject that many things out of the box.)
The problem is, the author goes back and finally brings the paper up to the standard it should have met in the first place. It goes back to the reviewer, and then the reviewer understands the paper and comes back and starts criticizing the science. Then the author gets angry: “You didn’t bring that up the first time!” Well, that’s because the reviewer couldn’t understand the science the first time. So, if you want good, constructive reviews, write good papers!
Reynolds: You want to make things as easy as possible for the reviewers. Make the English clear, make the figures clear. Allow them to focus on the really important aspects.
Broccoli: I would add, affirming what Walt said, that the best reviews constructively give the authors ideas for making their papers better. Some reviewers are comfortable taking on the role of gatekeeper, trying to say whether a paper is good enough to pass muster. But then maybe they aren’t as strong as they need to be at explaining what needs to be done to make the paper good enough. The best reviews are ones that apply high standards but also try to be constructive. They’re the reviewers I want to go back to.
Rosenfeld: I like Walt’s word, “Edit.” Thinking like an editor when you are a reviewer has a lot to do with empathy. In journals, generally, the group of authors is identical or nearly the same as the group of readers, so empathy is relatively easy. It’s less true in BAMS, but it still applies. You have to think like an editor would, “What is the author trying to do here? What is the author trying to say? Why are they not succeeding? What is it that they need to show me?” If you can put yourself in the shoes of the author—or in the case of BAMS, in the shoes of the reader—then you’re going to be able to write an effective review that we can use to initiate a constructive conversation with the author.
Broccoli: That reminds me: Occasionally we see a reviewer trying to coax the author into writing the paper the reviewer would have written, and that’s not the most effective form of review. It’s good to have diverse approaches to science. I would rather the reviewer try to make the author’s approach to the problem communicated better and more sound than trying to say, “This is the way you should have done it.”

Peer Review Week 2017: 1. Looking for Reviewers

It’s natural that AMS–an organization deeply involved in peer review–participates in Peer Review Week 2017. This annual reflection on peer review was kicked off today by the International Congress of Peer Review and Scientific Publication in Chicago. If you want to follow the presentations there, check out the videos and live streams.
Since peer review is near and dear to AMS, we’ll be posting this week about peer review, in particular the official international theme, “Transparency in Review.”
To help bring some transparency to peer review, AMS Publications Department presented a panel discussion on the process in January at the 2017 AMS Annual Meeting in Seattle. Tony Broccoli, longtime chief editor of the Journal of Climate, was the moderator; other editors on the panel were Carolyn Reynolds and Yvette Richardson of Monthly Weather Review, Walt Robinson of the Journal of Atmospheric Sciences, and Jeff Rosenfeld of the Bulletin of the American Meteorological Society.
You can hear the whole thing online, but we’ll cover parts of the discussion here over the course of the week.
For starters, a lot of authors and readers wonder where editors get peer reviewers for AMS journal papers. The panel offered these answers (slightly edited here because, you know, that’s what editors do):
Richardson: We try to evaluate what different types of expertise are needed to evaluate a paper. That’s probably the first thing. For example, if there’s any kind of data assimilation, then I need a data assimilation expert. If the data assimilation is geared toward severe storms, then I probably need a severe storms expert too. First I try to figure that out.
Sometimes the work is really related to something someone else did, and that person might be a good person to ask. The papers an author cites can also be a good place to look for reviewers.
And then I try to keep reaching out to different people and keep going after others when they turn me down…. Actually, people are generally very good about agreeing to do reviews and we really have to thank them. It would all be impossible without that.
Reynolds: If you suggest reviewers when you submit to us, I’ll certainly consider them. I usually won’t pick just from the reviewers suggested by the authors. I try to go outside that group as well.
Broccoli: I would add, sometimes if there’s a paper on a topic where there are different points of view, or the topic is yet to be resolved, it can be useful to pick as at least one of the reviewers someone who you know may have a different perspective on that topic. That doesn’t mean you’re going to believe the opinion of that reviewer above the opinion of others, but it can be a good way of getting perspective on a topic.
Rosenfeld: Multidisciplinary papers can present problems for finding the right reviewers. For these papers, I do a lot of literature searching and hunt for that key person who happens to somehow intersect, or be in between disciplines or perspectives; or someone who is a generalist in some way, whose opinion I trust. It’s a tricky process and it’s a double whammy for people who do that kind of research because it’s hard to get a good evaluation.

•  •   •

If you’re interested in becoming a reviewer, the first step is to let AMS know. For more information read the web page here, or submit this form to AMS editors.

Disaster Do-Overs

[Satellite image of Hurricane Irma (NOAA)]
Ready to do it all over again? Fresh on the heels of a $100+ billion hurricane, we very well may be headed for another soon.
As Houston and the Gulf Coast begin a long recovery from Hurricane Harvey, Hurricane Irma is now rampaging through the Atlantic. With 185 mph sustained winds on Tuesday, Irma became the strongest hurricane in Atlantic history outside of the Caribbean and Gulf. The hurricane made its first landfall early Wednesday in Barbuda and still threatens the Virgin Islands, Puerto Rico, Cuba, and the United States.
[Figure: NHC five-day forecast cone for Irma]
If Irma continues along the general path of 1960’s Hurricane Donna, it could easily tally $50 billion in damage. This estimate, from a study by Karen Clark and Co. (discussed recently on the Category 6 Blog), is already four years old and thus likely too low. Increased building costs (which the report notes rise “much faster” than inflation) and continued development could drive recovery costs even higher.
[Figure: track of 1960’s Hurricane Donna]
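As a rough illustration of why a four-year-old estimate understates things, here is a toy compounding sketch (ours alone; the growth rates below are hypothetical, not figures from the Karen Clark report):

```python
# Escalate a dated damage estimate forward under assumed annual growth
# in building/recovery costs (illustrative rates only).

def escalate(estimate_billions: float, annual_rate: float, years: int) -> float:
    """Compound an old damage estimate forward by an assumed growth rate."""
    return estimate_billions * (1.0 + annual_rate) ** years

base = 50.0  # the study's original estimate, in billions of dollars
for rate in (0.03, 0.06):  # hypothetical cost-growth rates
    print(f"at {rate:.0%}/yr growth: ${escalate(base, rate, 4):.0f} billion today")
# at 3%/yr growth: $56 billion today
# at 6%/yr growth: $63 billion today
```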
In short, as badly as Houston is suffering, there are do-overs on the horizon: a magnitude of repeated damage costs unthinkable not long ago, before Katrina ($160 billion) and Sandy ($70 billion).
Repeated megadisasters yield lessons, some of them specific to locale and circumstances. In Miami after Hurricane Andrew, the focus was on building codes as well as the variability of the winds within the storms. After Hurricane Rita, the focus was on improving policies on evacuation. After Hurricane Katrina, while the emergency management community reevaluated its response, the weather community took stock of the whole warnings process. It was frustrating to see that, even with good forecasts, more than a thousand people lost their lives. How could observations and models improve? How could the message be clarified?
Ten years after Katrina, the 2016 AMS Annual Meeting in New Orleans convened a symposium on the lessons of that storm and of the more recent Hurricane Sandy (2012). A number of experts weighed in on progress since 2005. It was clear that challenges remained. Shuyi Chen of the University of Miami, for example, highlighted the need for forecasts of the impacts of weather, not just of the weather itself. She urged the community to base those impacts forecasts on model-produced quantitative uncertainty estimates. She also noted the need for observations to initialize and check models that predict storm surge, which in turn feeds applications for coastal and emergency managers and planners. She noted that such efforts must expand beyond typical meteorological time horizons, incorporating sea level rise and other changes due to climate change.
These life-saving measures are part accomplished and part underway—the sign of a vigorous science enterprise. Weather forecasters continue to hone their craft with so many do-overs. Some mistakes recur. As NOAA social scientist Vankita Brown told the AMS audience about warning messages at the 2016 Katrina symposium, “Consistency was a problem; not everyone was on the same page.” Katrina presented a classic problem: the intensity of the storm, as measured on the oft-communicated Saffir-Simpson scale, was not the key to catastrophe in New Orleans. Mentioning categories can actually create confusion. The same problem recurred in Hurricane Harvey, where the threat to convey was the rainfall, not just the wind or storm surge. Communications expert Gina Eosco noted that talk about Harvey being “downgraded” after landfall drowned out the critical message about floods.
Hurricane Harvey poses lessons more fundamental than the warnings process itself, and they are eerily reminiscent of the Hurricane Katrina experience: the state of coastal wetlands, of infrastructure, of community resilience before emergency help can arrive. Houston, like New Orleans before it, will be considering development practices, concentrations of vulnerable populations, and more. There are no quick fixes.
In short, as AMS Associate Executive Director William Hooke observes, both storms challenge us to meet the same basic requirement:

The lessons of Houston are no different from the lessons of New Orleans. As a nation, we have to give priority to putting Houston and Houstonians, and others, extending from Corpus Christi to Beaumont and Port Arthur, back on their feet. We can’t afford to rebuild just as before. We have to rebuild better.

All of these challenges, simple or complex, stem from an underlying issue that the Weather Channel’s Bryan Norcross emphatically delineated when evaluating the Katrina experience back in 2007 at an AMS Annual Meeting in San Antonio:

This is the bottom line, and I think all of us in this business should think about this:  The distance between the National Hurricane Center’s understanding of what’s going to happen in a given community and the general public’s is bigger than ever. What happens every time we have a hurricane—every time–is most people are surprised by what happens. Anybody who’s been through this knows that. People in New Orleans were surprised [by Katrina], people in Miami were surprised by Wilma, people [in Texas] were surprised by Rita, and every one of these storms; but the National Hurricane Center is very rarely surprised. They envision what will happen and indeed something very close to that happens. But when that message gets from their minds to the people’s brains at home, there is a disconnect and that disconnect is increasing. It’s not getting less.

Solve that, and facing the next hurricane, and the next, will get a little easier. The challenge is the same every time, and it is, to a great extent, ours. As Norcross pointed out, “If the public is confused, it’s not their fault.”
Hurricanes Harvey and Katrina caused catastrophic floods for different reasons. Ten years from now we may gather as a weather community and enumerate unique lessons of Harvey’s incredible deluge of rain. But the bottom line will be a common challenge: In Hurricane Harvey, like Katrina, a city’s–indeed, a nation’s–entire way of dealing with the inevitable was exposed. Both New Orleans and Houston were disasters waiting to happen, and neither predicament was a secret.
Meteorologists are constantly getting do-overs, like Irma. Sooner or later, Houston will get one, too.

The Trouble with Harvey: Hurricane Intensification Dilemmas

Hurricanes like rapidly changing Harvey are still full of surprises for forecasters.
The remnants of Caribbean Tropical Storm Harvey made a startling burst Thursday from a tropical depression with 35 mph winds to an 85 mph hurricane in a little more than 12 hours. It has been moving steadily toward a collision with the middle Texas coast, with landfall expected later Friday. If intensification continues at the same rate, Harvey is likely to be a major hurricane by then, according to a Thursday afternoon advisory from the National Hurricane Center, with sustained winds of 120-125 mph and even higher gusts.
That’s a big “if.”
The drop in central pressure, which had been precipitous all day—a sign of rapid strengthening—had largely slowed by Thursday afternoon. Harvey’s wind speed jumped 50 mph in fits during the same time, but leveled off by late afternoon at about 85 mph. Harvey was a strong Category 1 hurricane on the Saffir-Simpson Hurricane Wind Scale by dinnertime.
The slowdown, it turned out, was only temporary.
Many signs pointed to continued rapid intensification: a favorable, low-shear environment; expanding upper-air outflow; and warm sea surface temperatures. Overnight and Friday morning, Harvey continued to traverse an eddy of water with high oceanic heat content that had detached from the warm Gulf of Mexico Loop Current and drifted westward toward the Texas coast. Its impact is apparent: the pressure has resumed its plunge, and the winds have responded, blowing Friday morning at a steady 110 mph with higher gusts.
Further intensification is possible.
In fact, the SHIPS (Statistical Hurricane Intensity Prediction Scheme) Rapid Intensification indices “are incredibly high,” Hurricane Specialist Robbie Berg wrote in the Thursday morning forecast discussion. Guidance from the model then showed a 70 percent chance of another 50 mph jump in wind speed prior to landfall. The afternoon guidance lowered those odds a bit, but still showed a 64 percent probability of a 35 mph increase.
It wouldn’t be the first time a hurricane has intensified rapidly so close to the Texas coast. In 1999, Hurricane Bret did it, ramping up to Category 4 intensity with 140 mph winds before crashing into sparsely populated Kenedy County and the extreme northern part of Padre Island.
Hurricane Alicia exploded into a major hurricane just prior to lashing Houston in 1983. And 2007’s Hurricane Humberto reached 90 mph just 19 hours after being designated a tropical depression off the northern Texas coast that morning, a boost in intensity similar to Harvey’s, before crashing ashore, where the loss of its warm-water energy source capped its intensity.
Rapid intensification so close to landfall is a hurricane forecasting nightmare. An abundance of peer-reviewed papers reveals that there’s a lot more we need to learn about tropical cyclone intensity, with more than 20 papers published in AMS journals this year alone. Ongoing research into rapidly intensifying storms like Harvey, including recent cases such as Typhoon Megi and Hurricane Patricia, is helping solve the scientific puzzle. Nonetheless, despite strides in predicting storm motion in past decades, intensification forecasting remains largely an educated guessing game.