The Forecaster's Dilemma: Evaluating forecasts of extreme events

April 10, 2017

When it comes to extreme events, public discussion of forecasts often focuses on predictive performance. After the international financial crisis of 2007, for example, the public paid a great deal of attention to economists who had correctly predicted the crisis, attributing their success to superior predictive ability. However, restricting forecast evaluation to subsets of extreme observations has unexpected and undesirable effects, and is bound to discredit even the most expert forecasts. In a recent article, statisticians Dr. Sebastian Lerch and Prof. Tilmann Gneiting (both affiliated with HITS and the Karlsruhe Institute of Technology), together with their coauthors from Norway and Italy, analyzed and explained this phenomenon and suggested potential remedies, drawing on theoretical arguments, simulation experiments and a real-data study of economic variables. The article has just been published in the peer-reviewed journal Statistical Science and is based on research funded by the Volkswagen Foundation.

Predicting calamities every time - a worthwhile strategy?

Forecast evaluation is often only conducted in the public arena if an extreme event has been observed, in particular if forecasters have failed to predict an event with high economic or societal impact. An example of what this can mean for forecasters is the devastating L'Aquila earthquake of 2009, which caused 309 deaths. In the aftermath, six Italian seismologists were put on trial for not predicting the earthquake. They were found guilty of involuntary manslaughter and sentenced to six years in prison before the Supreme Court in Rome acquitted them of the charges.

But how can scientists and outsiders, such as the media, evaluate the accuracy of forecasts predicting extreme events? At first sight, the practice of selecting extreme observations while discarding non-extreme ones and proceeding on the basis of standard evaluation tools seems quite logical. Intuitively, accurate predictions on the subset of extreme observations may suggest superior predictive abilities. But limiting the analyzed data to selected subsets can be problematic. "In a nutshell, if forecast evaluation is conditional on observing a catastrophic event, predicting a disaster every time becomes a worthwhile strategy," says Sebastian Lerch, member of the "Computational Statistics" group at HITS. Given that media attention tends to focus on extreme events, expert forecasts are bound to fail in the public eye, and it becomes tempting to base decision making on misguided inferential procedures. "We refer to this critical issue as the 'forecaster's dilemma,'" adds Tilmann Gneiting.
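
The effect is easy to reproduce in a small simulation. The sketch below is purely illustrative and is not the authors' code; the data-generating process, the extreme-event threshold of 3 and the "alarmist" forecast value of 4 are assumptions chosen for the example. It compares an informed point forecaster with one who always predicts a calamity, first over all observations and then only over the extreme ones.

```python
# Purely illustrative sketch (not the authors' code): why conditioning the
# evaluation on extreme observations rewards an "always predict disaster"
# strategy. Data-generating process and thresholds are assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Observations are signal plus noise, so the best point forecast is the signal.
signal = rng.normal(0.0, 1.0, size=n)
y = signal + rng.normal(0.0, 1.0, size=n)

informed_forecast = signal              # uses all available information
alarmist_forecast = np.full(n, 4.0)     # always predicts a calamity

def mae(forecast, obs, mask=None):
    """Mean absolute error, optionally restricted to a subset of cases."""
    if mask is not None:
        forecast, obs = forecast[mask], obs[mask]
    return np.mean(np.abs(forecast - obs))

extreme = y > 3.0  # keep only cases where an extreme event was observed

print("All observations:")
print("  informed forecaster:", round(mae(informed_forecast, y), 3))
print("  alarmist forecaster:", round(mae(alarmist_forecast, y), 3))
print("Extreme observations only (y > 3):")
print("  informed forecaster:", round(mae(informed_forecast, y, extreme), 3))
print("  alarmist forecaster:", round(mae(alarmist_forecast, y, extreme), 3))
```

Over the full sample the informed forecaster has a far lower error, but once the comparison is restricted to cases where an extreme event actually occurred, the alarmist appears superior.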

Avoiding the forecaster's dilemma: Method is everything

This predicament can be avoided if forecasts take the form of probability distributions, for which standard evaluation methods can be generalized to allow for specifically emphasizing extreme events. The paper uses economic forecasts of GDP growth and inflation rates in the United States between 1985 and 2011 to illustrate the forecaster's dilemma and how these tools can be used to address it.
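
One such generalization discussed in the paper is the use of proper weighted scoring rules, for example the threshold-weighted continuous ranked probability score (CRPS), which concentrates the evaluation on a region of interest without rewarding alarmism. The sketch below is a rough illustration under assumed settings (Gaussian predictive distributions, a threshold of 2, a numerical integration grid up to 10) and is not the authors' implementation.

```python
# Rough illustration (not the authors' code): threshold-weighted CRPS with
# the indicator weight w(z) = 1{z >= r}, which emphasizes the upper tail.
# All parameter values below are assumptions chosen for the example.
import numpy as np
from scipy import stats

def tw_crps_gaussian(mu, sigma, y, r, z_max=10.0, n_grid=2000):
    """Approximate twCRPS(F, y) = int_r^z_max (F(z) - 1{y <= z})^2 dz
    for a Gaussian predictive distribution N(mu, sigma^2)."""
    z = np.linspace(r, z_max, n_grid)
    F = stats.norm.cdf(z, loc=mu, scale=sigma)  # predictive CDF
    H = (y <= z).astype(float)                  # step function at the observation
    dz = z[1] - z[0]
    return np.sum((F - H) ** 2) * dz            # simple numerical integration

rng = np.random.default_rng(0)
y_obs = rng.normal(0.0, 1.0, size=5000)  # observations from a standard normal
r = 2.0                                  # threshold defining the region of interest

# A calibrated forecaster issues the true distribution N(0, 1); an alarmist
# always centres the predictive distribution deep in the tail, at N(4, 1).
calibrated = np.mean([tw_crps_gaussian(0.0, 1.0, y, r) for y in y_obs])
alarmist = np.mean([tw_crps_gaussian(4.0, 1.0, y, r) for y in y_obs])

print("mean twCRPS, calibrated forecast:", round(calibrated, 4))
print("mean twCRPS, alarmist forecast:  ", round(alarmist, 4))
```

Because the weighted score remains proper, averaging it over many cases still favours the well-calibrated forecaster over the alarmist, in contrast to an evaluation restricted to extreme observations.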

The results of the study are especially relevant for scientists seeking to evaluate the forecasts of their own methods and models, and for external third parties who need to choose between competing forecast providers, for example to produce hazard warnings or make financial decisions.

Although the research paper focused on an economic data set, the conclusions are important for many other applications: The forecast evaluation tools are currently being tested for use by national and international weather services.
-end-

Publication:

Lerch, S., Thorarinsdottir, T. L., Ravazzolo, F. and Gneiting, T. (2017). Forecaster's dilemma: Extreme events and forecast evaluation. Statistical Science, in press.

DOI: 10.1214/16-STS588

Link: http://projecteuclid.org/euclid.ss/1491465630

Scientific Contact:

Prof. Dr. Tilmann Gneiting
Computational Statistics (CST) group
HITS Heidelberg Institute for Theoretical Studies
tilmann.gneiting@h-its.org
Schloss-Wolfsbrunnenweg 35
69118 Heidelberg

Press Contact:

Dr. Peter Saueressig
Head of Communications
Heidelberg Institute for Theoretical Studies (HITS)
Phone: +49 6221 533 245
peter.saueressig@h-its.org

About HITS

The Heidelberg Institute for Theoretical Studies (HITS) was established in 2010 by the physicist and SAP co-founder Klaus Tschira (1940-2015) and the Klaus Tschira Foundation as a private, non-profit research institute. HITS conducts basic research in the natural sciences, mathematics and computer science, with a focus on the processing, structuring, and analysis of large amounts of complex data and on the development of computational methods and software. The research fields range from molecular biology to astrophysics. The shareholders of HITS are the HITS Stiftung, which is a subsidiary of the Klaus Tschira Foundation, Heidelberg University, and the Karlsruhe Institute of Technology (KIT). HITS also cooperates with other universities and research institutes and with industrial partners. The base funding of HITS is provided by the HITS Stiftung with funds received from the Klaus Tschira Foundation. The primary external funding agencies are the Federal Ministry of Education and Research (BMBF), the German Research Foundation (DFG), and the European Union.
