The all-star panel at Monday’s special Town Hall Meeting on the 2017 hurricane season provided a riveting discussion of the science, communication, and impacts of Harvey, Irma, and Maria, highlighted by Ada Monzón’s emotional talk about the devastating effects Maria has had on Puerto Rico. The session created a buzz among #AMS2018 attendees.

The entire session has now been posted to the AMS YouTube channel, and you can also watch it below.


A good writer inevitably is also a good listener, always mining every conversation and interaction for the next gem that could be used in their work. Authors of AMS books are no exception, and this week in Austin you could be the person to provide one of them with a new idea or angle. A collection of authors will be reading from their works and participating in Q&A sessions with meeting attendees, providing you with an opportunity to discuss your interests with them and learn more about the writing process.

The events will take place on Tuesday and Wednesday at the AMS Resource Center in the Exhibit Hall. The Tuesday session will feature historical topics, with Bob Reeves exploring the history of long-range forecasting (4:00 PM), Jen Henderson speaking about Ted Fujita (4:20), Paul Menzel (4:40) and John Lewis (4:55) discussing Verner Suomi, and Lourdes Avilés looking back at the Great New England Hurricane (5:10).

Wednesday’s event will focus on science and society: Matt Barlow will speak about his forthcoming handbook for atmospheric dynamics (4:00 PM), Bob Henson will discuss climate change science and policy (4:20), and Bill Hooke (4:40) and Bill Gail (5:00) will consider the human relationship to climate.

A unique aspect of AMS books is the collegiality between the authors and their readers, and with this event we invite you to get to know some of them better and perhaps even help them with their craft. It’s the collaborative process at work!


From his book and PBS-TV series, “Earth: The Operator’s Manual,” to his renowned lectures at Penn State, Dr. Richard Alley is known for his humorous descriptions of serious science. Today he is the featured speaker as the 98th AMS Annual Meeting begins in Austin, Texas.

At the 18th AMS Presidential Forum (4 PM, Ballroom D), Dr. Alley will use his unique brand of communication to discuss why communicating science to the public is no longer optional, but imperative.

Dr. Alley, a renowned glaciologist and climate scientist, has a way with words. His colorful metaphors–like The Two-Mile Time Machine, the title of his award-winning popular book about ice cores–put complex scientific issues into a comfortable perspective for perplexed audiences.

Last May, a couple of months before a large piece of Antarctica’s Larsen C ice shelf broke off, Rolling Stone published an article about the potential catastrophic collapse of West Antarctica’s ice. In it, Dr. Alley explained that the Larsen C breakage would not necessarily be an “end-of-the-world screaming hairy disaster conniption fit.”

And here, transcribed from a 2012 talk at the Smithsonian, is Dr. Alley explaining the impact of burning fossil fuels and releasing CO2:

“You fill up a car and it’s a fairly big tank–you’re putting in a hundred pounds of gasoline. If you had to bring it home in gallon jugs it’d be a different world. But you drive off with it. And when you burn it — you add oxygen — and that makes CO2, and it goes out the tailpipe and you don’t see that 300 pounds per fill-up. Now, our students really get a kick out of it: at this point you say okay, suppose that our transportation system packaged the CO2 in a way we could see it … as horse ploppies…It’s a pound per mile driven for a typical vehicle in the fleet at this point. Ya know … Nnnnn — thffft. Nnnnn — thffft …. Our CO2 turned to the density of horse ploppies and spread over the roads of America would cover every road in America an inch deep every year. On average. Okay. In a decade … there are no joggers. We’d all be cross-country skiers. If we saw this it would be a completely different world. But it just drifts away and we don’t even see it.”

Sunday’s keynote talk at the Presidential Forum is likely to be just as, ummm, vivid. Simple but powerful. Definitely memorable.

It’s exactly the way Alley envisions engaging the public: by building the broad understanding necessary to make science actionable.


Perhaps no one thought that Game 5 of the World Series would end the way it did. It started with two of the game’s best pitchers facing off; a low-scoring duel seemed likely. But the hitters gained the upper hand. In the extra-inning slugfest the score climbed to 13-12.

If you started that game thinking every at-bat was a potential strikeout, and ended it thinking every at-bat was a potential home run, then you’ll understand the findings about human expectations demonstrated in a new study in the AMS journal Weather, Climate, and Society. University of Washington researchers Margaret Grounds, Jared LeClerc, and Susan Joslyn shed light on how recent events shift our expectations of flood frequency.

There are two common ways to quantify the likelihood of flooding. One is to give a “return period,” which tells (usually in years) how often a flood of that magnitude or greater occurs in the historical record. It is an “average recurrence interval,” not a consistent pattern. Too many people believe a 10-year return period means flooding happens on schedule, every 10 years, or that every 10-year period will contain exactly one flood that meets or exceeds that water level. The University of Washington authors note that a return period “almost invites this misinterpretation.”

Grounds et al. write:

This misinterpretation may create what we refer to as a ‘‘flood is due’’ effect. People may think that floods are more likely if a flood has not occurred in a span of time approaching the return period. Conversely, if a flood of that magnitude has just occurred, people may think the likelihood of another similar flood is less than what is intended by the expression.

In reality, floods of that magnitude can happen more frequently, or less frequently, over any short stretch of years. But over the long haul, the average time between floods of that magnitude or greater will be 10 years.
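
To see why, here is a minimal sketch (an illustration, not anything from the study) that treats a 10-year flood as an independent 10% chance in each year and counts how many such floods land in each simulated decade. The per-decade counts scatter widely even though the long-run average is one flood per decade:

```python
import random

# Minimal sketch, not from the Grounds et al. study: treat a "10-year flood"
# as an independent 10% chance in each year and count floods per decade.
random.seed(1)
ANNUAL_CHANCE = 0.10   # 10-year return period ~ 10% chance in any given year
N_DECADES = 10_000

counts = [
    sum(1 for _ in range(10) if random.random() < ANNUAL_CHANCE)
    for _ in range(N_DECADES)
]

# Roughly a third of decades see no flood at all, while some see two or more,
# yet the long-run average works out to about one flood per decade.
for k in range(4):
    print(f"{k} flood(s) in a decade: {counts.count(k) / N_DECADES:.1%}")
print(f"Average floods per decade: {sum(counts) / N_DECADES:.2f}")
```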

One might think the second common method of communicating about floods corrects for this problem. That is to give something like a batting average–a statistical probability that a flood exceeding a named threshold will occur in any given time period (usually a year). Based on the same numbers as a return period, this statistic helps convey the idea that, in any given year, a flood “might” occur. A 100-year return period would look like a 1% chance of a flood in any given year.
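
Under the simplifying assumption that each year is an independent draw, converting between the two formats is straightforward arithmetic. The sketch below (again, an illustration rather than the study’s method) shows the conversion and why even a “100-year” flood is far from guaranteed to wait a century:

```python
def annual_chance(return_period_years: float) -> float:
    """Convert an average recurrence interval into the chance of at least
    one such flood in any given year (e.g., a 100-year flood -> 1%)."""
    return 1.0 / return_period_years

def chance_within(years: int, return_period_years: float) -> float:
    """Chance of at least one such flood over a span of years, assuming
    each year is an independent draw."""
    p = annual_chance(return_period_years)
    return 1.0 - (1.0 - p) ** years

print(f"100-year flood, any single year:  {annual_chance(100):.1%}")     # 1.0%
print(f"100-year flood, within 30 years:  {chance_within(30, 100):.1%}")  # ~26%
print(f"10-year flood, within any decade: {chance_within(10, 10):.1%}")   # ~65%
```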

Grounds and her colleagues, however, found that people have variable expectations due to recent experience, despite the numbers. The “flood is due” effect is remarkably resilient.

The researchers surveyed 243 college students. Each student was shown just one of the three panels of flood information below for a hypothetical creek in the American West:

[Figure: the three panels of flood information shown to participants]

Each panel labeled flooding differently (panel A showed return periods; panel B, percent chance of flooding; panel C had no quantification, just levels marked A, B, and C). The group for each panel was further divided into two subgroups: one subgroup was told a flood at the 10-year (or 10%, or “A”) marker had occurred last year; the other was told such a flood had last occurred 10 years ago. This fact affected the students’ assessments of the relative likelihood of another flood soon (they marked these assessments proportionally on an unlabeled number line, which the researchers translated into probabilities).

[Figure: participants’ flood likelihood estimates for each information format and recency condition]

Notice, the group on the right, who did not deal with quantified risks (merely A-B-C), assessed a higher imminent threat if a flood had occurred last year. This “persistence” effect is as if a home run last inning made another home run seem more likely this inning. The opposite, “flood due” effect, appeared as expected for the group evaluating return period statistics. Participants dealing with percentage chances of floods were least prone to either effect.

This first test gave participants a visualization and did not quantify water levels. Realizing both conditions might have thrown the results a curve ball, the researchers ran another survey with 803 people (recruited through Amazon.com) to control test conditions. The same pattern emerged: an even bigger flood-is-due effect in the group evaluating return periods, a bigger persistence effect in the group with unquantified risks, and neither bias in the group assessing percentage risks.

In general, that A-B-C (“unquantified”) group again showed the highest estimation of flood risk. The group with percentage risk information showed the least overestimation of risk, but still tended to exaggerate this risk on the scales they marked.

Throughout the tests, the researchers also had subjects rank their concern for the hypothetical flood-prone residents, because flood communication aims not merely at understanding but at concern that motivates a response. Grounds et al. conclude:

Although percent chance is often thought to be a confusing form of likelihood expression…the evidence reported here suggests that this format conveys the intended likelihood information, without a significant loss in concern, better than the return period or omitting likelihood information altogether.

How concerned these participants felt watching the flood of hits in the World Series…well, that depended on which team they were rooting for.

 


While the ranks of women weathercasters are growing slowly, they continue to lag behind their male colleagues in job responsibilities.

A new study to be published in the Bulletin of the American Meteorological Society shows that 29% of weathercasters in the 210 U.S. television markets are female. This percentage has roughly doubled since the 1980s.

Despite this progress, only 8% of chief meteorologists are female. Meanwhile, 44% of the women work weekends while 37% work mornings. Only 14% of the women work the widely viewed evening shifts.

[Figure: weathercaster shift assignments by gender]

Last year, 11% of evening or primetime weathercasters were female. That’s only about one-third the percentage reported in a study published in 2008, suggesting a possible decline in female representation in this high-profile broadcast slot.

Educational levels were gender dependent as well: 52% of the women had meteorology degrees, compared with 59% of the men.

Alexandra Cranford of WWL-TV in Slidell, Louisiana, the author of the new study, gathered her data in 2016 from TV station websites. Biographical information was compiled for 2,040 weathercasters, making the study the largest of its kind. Because it relied on self-reported, publicly available information, Cranford suggests some factors may be underreported: online bios might tend to omit information that makes broadcasters look less qualified or experienced. Bios of men were somewhat less likely to omit their exact position at the station, while bios of women were somewhat less likely to omit education information.

Previous studies have shown that viewers tend to perceive men as more credible and thus better suited for a variety of broadcast roles, from serious news situations to commercial voice-overs. This perception was one of the motivations for Cranford’s study and may be weighing on hiring and assignment practices for weathercasters. In the article, Cranford notes that past research has indicated,

Constructs including the “weather girl” stereotype and gender-based differences in perceived credibility could potentially contribute to the percentage of women in broadcast meteorology remaining low, especially in chief and evening positions.

Based on the new data, Cranford concludes,

Additional research should explore if factors such as persistent sexism in hiring practices or women’s personal choices could explain why fewer female weathercasters have degrees and why women work weekend shifts while remaining underrepresented in chief meteorologist and evening positions.

 


by Keith Seitter, AMS Executive Director

In carrying out its mission, AMS provides a broad range of support for the science and services making up the atmospheric and related sciences. As a part of this support, AMS has a long history of being a voice on behalf of science and the scientific method—as do most other scientific societies such as AAAS, AGU, Sigma Xi, and many, many others. This past year has been especially challenging for all of us as pressures and outright attacks on science have become far more prevalent. AMS has always been careful to be nonpartisan, to avoid being policy prescriptive, and to really focus on science. We have not, however, shied away from taking strong positions on behalf of the integrity of science.

The hope is that the community and society will view AMS journals, statements, and other material as reliable sources of information on the scientific disciplines AMS covers. AMS statements, in particular, are developed with the goal of being broadly accessible to those seeking credible summaries of current scientific knowledge and understanding on various topics. Beyond being a resource, however, it is vital that AMS proactively stand up for the integrity of science and the scientific process—especially when it is mischaracterized in ways that might impact policy decisions or mislead the public.

There is an extraordinary amount of misinformation being disseminated through many outlets on a variety of topics (but perhaps most notably those associated with climate change)—far more than one can effectively monitor or hope to address. With so many incorrect or misleading statements out there, it can be hard to know when to jump into the discussion. Recognizing that we cannot address all instances of misinformation, AMS has focused instead on taking a more public stance when policy makers in leadership positions make statements that mischaracterize the science. Thus, this past year for example, AMS has sent letters to the EPA administrator and the Secretary of Energy (see the “AMS Position Letters” for an archive of all letters that have been sent by AMS).

Protecting the academic freedom of researchers, and the freedom to present their scientific results broadly and without censorship, intimidation, or political interference, has also been important to AMS for many years. These fundamental precepts upon which scientific advancements depend have come under attack before, and AMS has maintained a strong “Statement on the Freedom of Scientific Expression” for a number of years to make the Society’s position clear.

Scientific advance requires that all data and methodologies leading to research results be openly and freely available to others wishing to replicate or assess that research. That said, AMS has spoken out to protect the confidentiality of discussions among researchers as they develop ideas and critically assess the work of others. These candid discussions are essential and must be able to happen without fear among those involved that comments might be taken out of context to attack the research or the researchers.

AMS membership is diverse and not all members have been supportive of these efforts. I can appreciate the concerns some may feel, and know there is a danger of acting out of bias, despite our putting a lot of time and energy into avoiding biases. I know, as well, how easily inherent biases can color the way one might read these statements or letters. I also know, however, that to remain silent in the face of clear mischaracterization of science or to fail to defend the scientific process is wholly inconsistent with the AMS mission of “advancing the atmospheric and related sciences, technologies, applications, and services for the benefit of society.” I’m proud to be part of an organization that has such a strong history of standing up for the integrity of science.

(Note: This letter also appears in the September 2017 issue of BAMS.)



by Douglas Hilderbrand, Chair, AMS Board on Enterprise Communication

Early August seems forever ago. Hurricanes Maria, Irma, and Harvey were only faint ripples in the atmosphere. The nation was getting increasingly excited for the solar eclipse of 2017; the biggest weather question was where clear skies were expected later in the month.

During this brief period of calm in an otherwise highly impactful weather year, leaders and future leaders from the Weather, Water, and Climate Enterprise gathered at the AMS Summer Community Meeting in Madison, Wisconsin, to better understand how “The Enterprise” could work in more meaningful, collaborative ways to best serve communities across the country and the world. Consisting of the government, industry, and academic sectors, the Enterprise plays a vital role in protecting lives, minimizing impacts from extreme events, and enhancing the American economy.

The AMS Summer Community Meeting is a unique time when the three sectors learn more about each other and about physical and social science advances, and discuss opportunities for collaboration. Strengthening relationships across the Enterprise leads to collaboration on joint efforts, coordination that improves communication and consistency of message, and discussion of issues on which those in the room may not always see eye to eye. Every summer, one theme rises to the top: the three sectors that make up the Weather, Water, and Climate Enterprise are stronger working together than when it’s “everyone for themselves.”

This truism becomes most evident during extreme events, such as the trifecta of devastating hurricanes that struck communities from Texas to Florida to Puerto Rico and the U.S. Virgin Islands. The AMS Summer Community Meeting (full program and recorded presentations now available) featured experts on weather satellites, radar-based observations, applications that bring together various datasets, communications, and even the science behind decision making. As Hurricanes Harvey, Irma, and Maria formed, strengthened, and tracked toward land, topics discussed at the Summer Community Meeting were applied under the most urgent of circumstances. GOES-16, though its data were still “preliminary and non-operational,” delivered jaw-dropping imagery and critical information to forecasters. As Harvey’s predicted rainfall totals created a dire flooding threat, the entire Enterprise rallied to set the expectation that the flooding in eastern Texas and southwestern Louisiana would be “catastrophic and life threatening.” This consistent, forceful messaging likely saved countless lives, due in part to the Enterprise coming together a month before Harvey to stress the importance of consistency in messaging during extreme events.

If you are unfamiliar with the AMS Summer Community Meeting and are interested in participating in the summer of 2018, take some time to click through the recorded presentations from the past few years (2017, 2016, 2015). In 2016, the Enterprise met in Tuscaloosa, Alabama, home of NOAA’s National Water Center, and discussed recent advances in water forecasting and the launch of the National Water Model. A year earlier, in Raleigh, North Carolina, the discussion focused on future advances across the entire end-to-end warning paradigm.

We don’t know when or what the next big challenge will be for the Weather, Water, and Climate Enterprise, but a few things are certain… The state of our science — both physical and social —  will be tested. Communities will be counting on us to help keep them safe. And to maximize the value chain across the Weather, Water, and Climate Enterprise, we will need government, industry, and academia continuing to work together and rely on each other. These certainties aren’t going away and provide the impetus for you to consider participating in future AMS Summer Community Meetings.


For Journal of Hydrometeorology Chief Editor Christa Peters-Lidard, peer review upholds essential standards for a journal. “Maintaining that integrity is very important to me,” she said during an interview at the AMS Publications Commission meeting in Boston, in May. Historically, the burden of integrity has fallen on editors as much as authors and reviewers.

The peer review process we follow at the AMS is an anonymous process. Authors do not know who their reviewers are. So it’s really up to us, as the editors and chief editors, to ensure that the authors have a fair opportunity to not only get reviews that are constructive and not attacking them personally, but also by people that are recognized experts in the field.

Even when reviewers are not experts, “they know enough about it to ask the right questions, and that leads the author to write the arguments and discussion in a way that, in the end, can have more impact because more people can understand it.”

Anonymity has its advantages for upholding integrity, especially in a relatively small field like hydrometeorology. Peters-Lidard pointed out that anonymity helps reviewers state viewpoints honestly and helps authors receive those comments as constructive rather than personal.

“In my experience there have been almost uniformly constructive reviews,” Peters-Lidard says, and that means papers improve during peer review. “Knowing who the authors are, we know what their focus has been, where their blind spots might be, and how we can lead them to recognize the full scope of the processes that might be involved in whatever they’re studying. Ultimately that context helps the reviewer in making the right types of suggestions.”

But the burden of integrity is subtly shifting more and more onto authors. Peters-Lidard spoke about the trend in science toward end-to-end transparency in how conclusions are reached. She sees this movement in climate assessment work, for example, where the policy implications are clear. Other peer-reviewed research is developing the same way.

We’re moving the direction where ultimately you have a repository of code that you deliver with the article….Part of that also relates to a data issue. In the geosciences we speak of ‘provenance,’ where we know not only the source of the data—you know, the satellite or the sensor—but we know which version of the processing was applied, when it was downloaded, and how it was averaged or processed. It’s back to that reproducibility idea a little bit but also there are questions about the statistical methods….We’re moving in this direction but we’re not there yet.

Hear the full interview:


David Kristovich, chief editor of the Journal of Applied Meteorology and Climatology, explains how the AMS peer review process, though largely private, ultimately produces the necessary transparency.

Peer review is an unpublished exchange among authors, editors, and reviewers, meant to ensure quality in a journal. At the same time, Kristovich noted that peer review is not just something readers trust blindly. Rather, the process is meant to lead to a more fundamental transparency—namely, papers that reveal enough to be reproducible.

“Transparency focuses on the way we tend to approach our science,” Kristovich said during an interview at the AMS Publications Commission meeting in Boston in May. “If someone can repeat all of the steps you took in conducting a study, they should come up with the same answer.”

“The most important part of a paper is to clearly define how you did all the important steps. Why did I choose this method? Why didn’t I do this, instead?”

Transparency also is enhanced by revealing information about potential biases, assumptions, and possible errors. This raises fundamental questions about the limits of information one can include in a paper, to cover every aspect of a research project.

“Studies often take years to complete,” Kristovich pointed out. “Realistically, can you put in every step, everything you were thinking about, every day of the study? The answer is, no you can’t. So a big part of the decision process is, ‘What is relevant to the conclusions I ended up with?’”

The transparency of scientific publishing thus depends on peer review to uphold this standard, while recognizing that the process of science is inherently opaque even to the researchers themselves while they’re doing their work.

“The difference between scientific research and development of a product, or doing a homework assignment—thinking about my kids—is that you don’t know what the real answer is,” Kristovich said. Science “changes your thinking as you move along, so at each step you’re learning what steps you should be taking.”

You can hear the entire interview here.


[Image: Peer Review Week banner]

At the AMS Annual Meeting panel on Peer Review last January, journal editors Tony Broccoli, Carolyn Reynolds, Walt Robinson, and Jeff Rosenfeld spoke about how authors and reviewers together make good reviews happen:

Robinson: If you want good reviews, and by good I mean insightful and constructive and that are going to help you make your paper better, the way to do that is to write a really good paper. Make sure your ducks are in a row before you send it in. You should have read over that and edited it multiple times. I’m going to, at some point in my life, write a self-help book in which the single word title is, “Edit!” because it applies to many parts of life. Have your colleagues—not even the co-authors—look at it. Buy the person in the office next door a beer to look over the paper and get their comment. There may be problems with the science–and none of our science is ever perfect–but if it’s a really well constructed, well formulated, well written paper, that will elicit really good reviews.

The flip side of that is, if the paper is indecipherable, you’ll get a review back saying, “I’m trying to figure this out” with a lot of questions, and often it’s major revisions. (We don’t reject that many things out of the box.)

The problem is, the author goes back and finally makes the paper at a standard he or she should have sent in the first time. It goes back to the reviewer, and then the reviewer understands the paper and comes back and starts criticizing the science. Then the author gets angry…”You didn’t bring that up the first time!” Well, that’s because the reviewer couldn’t understand the science the first time. So, if you want good, constructive reviews, write good papers!

Reynolds:  You want to make things as easy as possible for the reviewers. Make the English clear, make the figures clear. Allow them to focus on the really important aspects.

Broccoli: I would add, affirming what Walt said, that the best reviews constructively give the authors ideas for making their papers better. Some reviewers are comfortable taking the role as the gatekeeper and trying to say whether this is good enough to pass muster. But then maybe they aren’t as strong as need be at explaining what needs to be done to make the paper good enough. The best reviews are ones that apply high standards but also try to be constructive. They’re the reviewers I want to go back to.

Rosenfeld: I like Walt’s word, “Edit.” Thinking like an editor when you are a reviewer has a lot to do with empathy. In journals, generally, the group of authors is identical or nearly the same as the group of readers, so empathy is relatively easy. It’s less true in BAMS, but it still applies. You have to think like an editor would, “What is the author trying to do here? What is the author trying to say? Why are they not succeeding? What is it that they need to show me?” If you can put yourself in the shoes of the author—or in the case of BAMS, in the shoes of the reader—then you’re going to be able to write an effective review that we can use to initiate a constructive conversation with the author.

Broccoli: That reminds me: Occasionally we see a reviewer trying to coax the author into writing the paper the reviewer would have written, and that’s not the most effective form of review. It’s good to have diverse approaches to science. I would rather the reviewer try to make the author’s approach to the problem communicated better and more sound than trying to say, “This is the way you should have done it.”
