How Repeatable is Quantitative Risk Analysis – European Benchmark Studies

Posted: June 30, 2012
I was at PSAM11/ESREL12 this week, and I plan to write a few posts on topics I found interesting. First, though, there were some papers that I mentioned during my own talk that are not in the bibliography of the accompanying paper. Normally, this would cause me to start questioning just how comprehensive my work was – after all, I was making claims about the lack of empirical evaluation of quantitative analysis, and here are three relevant papers that I hadn't even seen. On the other hand, no one in my research group had heard of the studies either, and they aren't well known in the wider Probabilistic Risk Assessment community. Important work, bypassing its key audience. Maybe the research councils have a point when they insist on an impact plan in grant applications.
These papers report three investigations into variation in risk assessment. In the first study, 11 teams were asked to quantify the risk presented by an ammonia plant. All teams had access to the same information, yet their results were spread evenly over six orders of magnitude. In the second study, the researchers tried to pin down what was causing the variation. At several stages during this study they reduced the variation by standardising assumptions and methods, but the teams (using similar source information to the first study) still spread over four orders of magnitude. Once you take into account the reductions in variation achieved along the way, this was a good (but not completely independent) replication of the first study's results.
The third study concerns the ability of reviewers to interpret studies: multiple reviewers were given the same study and asked to draw conclusions from it.
Anyway, the three papers are: