The Risks of Risk Communication (for Italian Earthquakes)


The Basic Facts

In October 2008, the city of L’Aquila in Italy began experiencing earth tremors. Given that the city has been pretty much destroyed by earthquakes twice in the past, this was a matter of some concern. For the next six months the tremors continued. On 31 March 2009 the National Committee for the Prediction and Prevention of Major Risks held a meeting, and a civil service spokesman reassured the public that there was no immediate danger. Within a week, more than 300 people were killed when a major earthquake struck.

Seven members of the National Committee were charged with manslaughter for providing the public with information that was “inexact, incomplete and contradictory”, and on Monday this week they were sentenced to jail terms of six years. There are many misleading summaries of this case available, perhaps because the prosecution and defence cases were largely tangential to each other. The defence, both in court and in public, maintained that it was not reasonable to expect scientists to predict the timing of an earthquake. However, that isn’t what the prosecution was alleging. The prosecution case was that the scientists didn’t communicate the risk appropriately. According to the prosecution, at least 29 of the victims stayed in an unsafe situation because they had received misleading information about the risk they faced.

Two news stories that portray the case accurately:

There are three underlying questions here:

1) Can you do risk estimation for an earthquake?
2) How should you communicate risk information of this sort?
3) Should you get six years in jail if you do either of these incorrectly?

Risk Estimation for Earthquakes

Unlike most infrequent high-consequence events, the risk of an earthquake in certain areas does build up over time. So while it is nonsense to say that we are “overdue” for a big storm, a flood, or a stock-market crash, it does actually make some sense to say that an area is overdue for an earthquake. The probabilities are still low though – Southern California, for example, is waiting for “the next big one”, and even there that only amounts to around a 2% chance of a major quake in the next thirty years.
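
The “overdue” intuition can be illustrated with a toy model. A minimal sketch, using made-up numbers rather than real seismic estimates: in a memoryless model the chance of a quake in the next thirty years is the same however long you have waited, whereas in a renewal model with an increasing hazard the conditional chance grows with the time since the last event.

    # Toy comparison: "overdue" only makes sense when the hazard rate rises with time.
    # All numbers are illustrative placeholders, not real seismic estimates.
    import math

    def p_event_within(waited, horizon, mean=500.0, shape=1.0):
        """P(event in the next `horizon` years, given none in the last `waited` years),
        for a Weibull renewal model; shape=1.0 is the memoryless exponential case."""
        scale = mean / math.gamma(1.0 + 1.0 / shape)
        survival = lambda t: math.exp(-((t / scale) ** shape))
        return 1.0 - survival(waited + horizon) / survival(waited)

    for waited in (0, 150, 300):
        memoryless = p_event_within(waited, 30, shape=1.0)
        ageing = p_event_within(waited, 30, shape=2.0)
        print(f"waited {waited:3d} years: memoryless {memoryless:.1%}, rising hazard {ageing:.1%}")

With a rising hazard the conditional probability climbs as the quiet period stretches on; with the memoryless model it never moves, which is exactly why “overdue” is meaningless for storms and stock-market crashes.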

Most earthquake science is focussed on modelling what’s likely to happen during an earthquake, rather than when the earthquake will come. This makes sense, because the most effective protective measures are long-term anyway: good building design; removing or protecting buildings built before modern building codes; and teaching people appropriate earthquake behaviour. By understanding the range of scenarios that are likely or possible, we can put appropriate protection in place.

As far as short-term prediction goes, big earthquakes do often have foreshocks – smaller movements leading up to the highest-energy event. These can, on a scale of a few minutes to a couple of days, give warning of increased risk. There is a lot of uncertainty involved, and it’s an area where more research and more computing power may improve predictions in future. At the moment, it’s akin to clouds as a predictor of rain. You aren’t totally safe if the sky is clear, and seeing clouds doesn’t tell you that it’s going to rain, but looking at the sky does help you determine the odds.
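
To put the clouds-and-rain analogy in numbers, here is a back-of-envelope Bayes update. The base rate and the likelihood ratio below are placeholders I have invented for illustration, not values from the L’Aquila case:

    # Back-of-envelope Bayes update: a tremor swarm raises the odds of a major
    # quake without getting anywhere near a prediction. All numbers are made up.
    def update(prior, likelihood_ratio):
        """Posterior probability from a prior and a likelihood ratio
        P(evidence | quake) / P(evidence | no quake)."""
        prior_odds = prior / (1.0 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    base_rate = 1e-4   # assumed chance of a major quake in the next week
    lr_swarm = 20.0    # assume a swarm is 20x more likely before a big quake
    print(f"before the swarm: {base_rate:.3%}, after: {update(base_rate, lr_swarm):.3%}")

A twenty-fold increase in the odds still leaves the weekly probability at a fraction of a percent: a real rise in risk, but nothing like a forecast.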

The Southern California Earthquake Center has good general information on earthquake modelling, as well as a handy guide to living in an earthquake-prone area.

Risk Communication

Risk communication is a whole scientific field in its own right, with a compelling central message: it isn’t enough to simply tell the truth about risk, you have to tell it in a way that people can make sense of. An Australian earthquake expert, Professor Paul Somerville, said that the Italian team had a case to answer because of the way they communicated. As far as I can tell from the news reports though, his answer is that the scientists should stick to reporting numbers and let society do the interpretation. This isn’t what the research on risk communication suggests.

The archetypal example is doctors telling patients about the risks of treatment. Simply quoting numbers like one in a thousand or one in ten thousand patients experiencing a side-effect is meaningless without context. Patients can’t make useful sense of these numbers. Comparisons, instead of or as well as the raw numbers, are often much more useful: “This drug is about as dangerous as Panadol, and much safer than doing nothing”; or “There is a small chance that this drug will make things worse – about the same chance as you being in a car accident in the next year”.
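
That comparative framing can even be mechanised. A minimal sketch, assuming a couple of rough order-of-magnitude baseline risks that I have made up for illustration (they are not medical or actuarial figures):

    # Sketch: frame a probability by comparison with familiar risks.
    # The baseline figures are rough placeholders, not real data.
    FAMILIAR_RISKS = {
        "being injured in a road accident this year": 1 / 1_000,
        "being struck by lightning at some point in your life": 1 / 100_000,
    }

    def framed(p, event):
        lines = [f"The chance of {event} is about 1 in {round(1 / p):,}."]
        for name, q in FAMILIAR_RISKS.items():
            ratio = p / q
            if 0.5 <= ratio <= 2.0:
                lines.append(f"That is about the same as {name}.")
            elif ratio < 0.5:
                lines.append(f"That is roughly {round(1 / ratio)} times less likely than {name}.")
            else:
                lines.append(f"That is roughly {round(ratio)} times more likely than {name}.")
        return "\n".join(lines)

    print(framed(1 / 10_000, "a serious side-effect from this drug"))

The point isn’t the exact numbers; it’s that the output gives the listener something familiar to anchor against.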

One consistent result in risk communication research is that deliberately being inaccurate in order to reassure people doesn’t work. The situation in Italy was complicated by scare-mongering from various parties, including one seismic technician giving warnings through a megaphone. The scientists rightly thought that it was important for people to have a realistic view of the risk. With hindsight, it’s easy to put better words in their mouths, such as: “The tremors are a sign of increased risk, but they don’t tell us whether or not a big earthquake is about to happen. Citizens should, as always, make sure that they have followed the steps in the earthquake preparedness booklet, and know what to do if an earthquake does happen”.

The work of Professor Terje Aven at the University of Stavanger is a good starting point for reading about risk and risk communication.

Jail for Poor Risk Communication?

Personally, I think that not being able to properly communicate the risk of a major hazard is a pretty big deal when your job description is to communicate the risk of major hazards. The committee deserved to be dragged over the proverbial coals whether or not the earthquake happened.

Drawing a causal link between their inept communication and people dying is another thing altogether. It’s easy to say afterwards that person X would not have stayed in their shaky house if they’d known the true risk. That’s rather hard to prove, though, given that people had been living in poor housing in an earthquake zone for months and years. Yes, some people might have skipped town for a couple of days, but that wouldn’t have been a rational response to the risk. Those same people would still have been killed if the earthquake had happened a week later, or two weeks later, or a month later.
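
The arithmetic of a short, untimed evacuation makes the point. A rough worked example, with a daily probability I have picked purely for illustration (it is not an estimate for L’Aquila):

    # Rough arithmetic: leaving town for a few days only helps if you can time it.
    # The daily probability is a placeholder, not an estimate for L'Aquila.
    p_day = 0.001          # assumed chance per day of a major quake during the swarm
    days_away = 2
    days_of_swarm = 180    # the swarm had already been running for about six months

    risk_avoided = 1 - (1 - p_day) ** days_away         # only if the quake falls in those days
    risk_over_swarm = 1 - (1 - p_day) ** days_of_swarm  # exposure from staying throughout
    print(f"risk avoided by two days away: {risk_avoided:.2%}")
    print(f"risk over the whole swarm:     {risk_over_swarm:.2%}")

Unless you happen to be away on exactly the right days, a two-day absence removes only a sliver of the total exposure, which is why the court’s counterfactual leans so heavily on luck.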

Essentially, the court has found that a combination of proper risk communication, irrational response and good luck would have saved lives, and that the scientists are therefore at fault. Hopefully they win their appeal. I don’t think that this will have a chilling effect, however. I hope and believe that it will make more scientists realise that communication is an important part of their role, and requires as much care and attention as the research itself.

Other Links

The Professor Paul Somerville interview is here.


The Avengers Initiative – A System of Systems Safety Challenge


“And there came a day, a day unlike any other, when Earth’s mightiest heroes and heroines found themselves united against a common threat. On that day, the Avengers were born—to fight the foes no single super hero could withstand! Through the years, their roster has prospered, changing many times, but their glory has never been denied! Heed the call, then—for now, the Avengers Assemble!”

System Definition

“The Avengers” is a team of super-heroes from the Marvel Comics universe. As with any hazard identification, it is important to be precise about what is considered in and out of the system. This is complicated by the changing composition of the super-hero team, including multiple incarnations in parallel continuities. There are also interfaces with other organisations such as S.H.I.E.L.D. and with non-aligned super-heroes. In fact, it may be more useful to consider the Avengers as a “System of Systems” (SoS).

Hall-May describes SoS as “systems whose constituent components are sufficiently complex and autonomous to be considered as systems in their own right and which operate collectively with a shared purpose”[1]. This would certainly apply to the Avengers.

Held [2] says that a System of Systems has the following characteristics:

  1. The system can be subdivided into independently operating systems. The independent systems must themselves be systems.
  2. The system does not depend on all elements for survival. For example, if the rudder on a 747 fails, the aircraft is very likely to crash, destroying the rest of the nodes and ceasing to be a system as a whole. The 747 cannot be an SoS. The airport, however, will continue to operate. The airport can be an SoS.
  3. Systems in an SoS have some form of communication. Communication is any form of information passing, regardless of intent. For example a deer showing a white tail while running is passing the information of danger to any other observing deer. The intent of the deer was to run in fear, not to communicate the danger.
  4. Elements have a common mission. A mission can be described which encapsulates the behavior of the group.

This definition also applies to the Avengers. My own view [3] is that “System of Systems” is not an absolute category, but a label that should be applied when it is useful to do so. One case where it is definitely useful is when systems have a fluid configuration, with limited information about future configurations. Again, this suggests that a System of Systems treatment of the Avengers is appropriate.
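
Held’s four characteristics read naturally as a checklist. A minimal sketch of applying it, where the class and field names are my own rather than anything from Held:

    # Sketch: Held's four SoS characteristics as a checklist (structure is mine).
    from dataclasses import dataclass

    @dataclass
    class CandidateSoS:
        name: str
        independently_operating_systems: bool  # 1. decomposes into systems in their own right
        survives_loss_of_elements: bool        # 2. does not depend on every element
        systems_communicate: bool              # 3. some form of information passing
        common_mission: bool                   # 4. a mission encapsulating group behaviour

        def is_system_of_systems(self):
            return all([self.independently_operating_systems,
                        self.survives_loss_of_elements,
                        self.systems_communicate,
                        self.common_mission])

    avengers = CandidateSoS("The Avengers", True, True, True, True)
    boeing_747 = CandidateSoS("Boeing 747", True, False, True, True)
    print(avengers.is_system_of_systems(), boeing_747.is_system_of_systems())  # True False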

Configurations

Each Avengers Ensemble will comprise between four and six super-hero systems, selected from the following set. Each system may have multiple operating modes.

  • Iron Man / Tony Stark (Powered flying suit and operator | Genius Technologist)
  • Hulk / Bruce Banner (Enraged Green Monster | Gamma-Ray Scientist)
  • Thor (Norse God)
  • Henry Pym (Ant-Man | Giant Man | Wasp)
  • Captain America (Super Soldier)
  • Hawkeye (Archer)
  • Quicksilver (Fast Moving Mutant)
  • Scarlet Witch (Magic Wielding Mutant)
  • Black Widow (Spy)
  • Beast (Scientist)

Configurations may also include up to two other external super-hero systems. Modes and capabilities of these other systems cannot be fully anticipated. A stereotypical example is Spiderman. Spiderman will be used throughout this analysis as a test case for the ability of the System of Systems to incorporate an external super-hero system.

Each configuration will typically include one or more vehicles.
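
The configuration rule above can be sketched as a quick validity check. The roster names come from the list, but the encoding of the rule is my own:

    # Sketch of the configuration rule: four to six roster systems plus up to two
    # external systems. The rule encoding is illustrative.
    from itertools import combinations

    ROSTER = ["Iron Man", "Hulk", "Thor", "Henry Pym", "Captain America",
              "Hawkeye", "Quicksilver", "Scarlet Witch", "Black Widow", "Beast"]

    def valid_ensemble(members, externals=()):
        return (4 <= len(members) <= 6
                and set(members) <= set(ROSTER)
                and len(externals) <= 2)

    print(valid_ensemble(["Thor", "Hulk", "Hawkeye", "Black Widow"]))                 # True
    print(valid_ensemble(["Thor", "Hulk", "Hawkeye", "Black Widow"], ["Spiderman"]))  # True
    print(sum(1 for k in (4, 5, 6) for _ in combinations(ROSTER, k)))                 # 672

Even ignoring external heroes, vehicles and operating modes, that is 672 possible roster configurations, which illustrates the “fluid configuration” point above.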

Lifecycle

An important aspect of a system-of-systems is that the lifecycle of the SoS does not neatly align with the lifecycles of the component systems. All of the initial component super-hero systems were specified, designed and implemented before the assembly of the Avengers system. As a consequence it was not possible to incorporate features into the design of each of them to support combined Avengers operations. A key example of this is inter-operability of equipment and power sources. Thor’s main weapon system is incompatible with everyone but Thor. Iron Man’s entire weapons and propulsion platform uses bespoke technology.

Contrast this with a lifecycle where the Avengers concept was determined before the design of the super-heroes. At the very least, dangerous fashion incompatibility could have been avoided.

Top Level Hazards (TLH)

With any system-of-systems there are a standard set of hazards that may be applicable. In fact, with the Avengers we find that all of them are applicable.

TLH1: Fratricide

Fratricide, also known as “friendly fire” or “blue on blue” incidents, typically results from misidentification of targets, incorrect aiming of direct or indirect fire, or failure to establish and enforce zones with clear rules of engagement. The mix of indirect-fire weapons (ants, lightning), direct-fire weapons (shield, hammer, arrows, guns, magnetic repulsors) and melee suggests that Avenger fratricide is a risk which must be carefully managed.

The Avengers have rightly determined that procedural mitigations are insufficient here. It’s one thing to tell Iron Man, Thor or Hulk to carefully identify targets; it is another thing to actually expect them to do so. An alternative strategy is to mitigate the consequences of fratricide by designing defensive capability to be universally stronger than the offensive systems. The main mechanisms of these defensive systems are:

  • Invulnerability  (Thor, Hulk)
  • Physical Protection (Iron Man, Captain America)
  • Agility (Beast, Quicksilver, Black Widow)

The fratricide risk to the remaining super-hero systems stays high-likelihood, high-consequence. Blind luck and comic-book physics are the only explanations for the continued survival of Henry Pym, Hawkeye and the Scarlet Witch.
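
The “defence beats offence” rule can be expressed as a pairwise check. A minimal sketch, with attack and defence ratings that I have invented purely for illustration:

    # Sketch of the "defence beats offence" rule for fratricide: a unit is exposed
    # whenever another unit's strongest attack exceeds its protection. Ratings are invented.
    ATTACK = {"Thor": 9, "Hulk": 9, "Iron Man": 8, "Hawkeye": 5, "Henry Pym": 4}
    DEFENCE = {"Thor": 10, "Hulk": 10, "Iron Man": 8, "Hawkeye": 2, "Henry Pym": 2}

    def fratricide_exposures(attack, defence):
        """Pairs where an accidental hit from one unit could overwhelm another."""
        return [(a, d) for a, hit in attack.items()
                       for d, armour in defence.items()
                       if a != d and hit > armour]

    for attacker, victim in fratricide_exposures(ATTACK, DEFENCE):
        print(f"{attacker} could accidentally take out {victim}")

Run over the full roster, every pair in that list is a fratricide hazard that procedure alone is being trusted to manage.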

TLH2: Collision

Combat frequently requires fast-moving objects to co-ordinate their movements in close proximity. Whilst in many respects the speed and mass of the super-hero units make collision a greater threat than fratricide, the mitigations are the same. Ideally, zones of operation and movement would be determined and enforced through non-procedural means. In practice, ambiguous verbal communication (“look out”, “gang way”, “duck”) seems to be the main strategy for collision avoidance. For units such as Hulk, Thor or Iron Man this level of mitigation is acceptable, because the consequences of collision are minor. In the case of Ant-Man, his small size not only magnifies the consequence but reduces the effectiveness of the mitigation: other units are unlikely to see Ant-Man, and his own voice requires magnification to be heard.

TLH3: Resource Competition

Typical finite resources which must be considered in System of Systems are:

  • Communication channels
  • Logistic support channels
  • Fuel
  • Ammunition
  • Movement corridors
  • Shared support facilities

Fortunately, none of the super-hero units is heavily resource dependent. Units such as Hawkeye have ammunition restrictions, but these restrictions are typically plot-imposed rather than dependent on logistic organisation. In most respects the lack of weapon-system compatibility is an advantage here, as no pair of units shares common consumable supplies or parts.
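
That last claim is a pairwise disjointness check. A minimal sketch, with consumable lists that are illustrative rather than canon:

    # Sketch: resource competition exists only where two units share a consumable.
    # The consumable lists are illustrative, not canon.
    CONSUMABLES = {
        "Iron Man": {"arc reactor charge", "suit spares"},
        "Hawkeye": {"arrows"},
        "Black Widow": {"9mm ammunition"},
        "Thor": set(),  # Mjolnir needs no resupply
    }

    shared = [(a, b, CONSUMABLES[a] & CONSUMABLES[b])
              for a in CONSUMABLES for b in CONSUMABLES
              if a < b and CONSUMABLES[a] & CONSUMABLES[b]]
    print(shared or "No pair of units competes for a common consumable.")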

TLH4: Mission Capability Shortfall

The fluid configuration of a system-of-systems presents a risk that a vital capability will be lacking in some configurations. To account for this hazard we must consider what the vital capabilities are, and how they may be provided.

For a super-hero team, plotlines typically require:

  • Investigative ability
  • Novel solutions to complex problems
  • Transport to and from the location of incidents
  • Combat capability

Full details are not provided here, but a simple table of each hero-unit’s capabilities shows that not all combinations of hero-units can deliver all of the required capabilities. For example, a team consisting of Thor, Henry Pym as Giant Man, Quicksilver and Bruce Banner in Enraged Green Monster mode would have considerable combat capability, but so little intellectual capability that it would present an enormous threat to the general public. On the other hand, a team consisting of Tony Stark in Genius mode, Bruce Banner in Gamma Scientist mode, Black Widow and Henry Pym as Ant-Man would have insufficient firepower for most plot challenges, presenting serious risk to themselves and to unprotected bystanders.
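
That capability table, and the coverage check it implies, can be sketched in a few lines. The capability assignments below are my own illustrative reading of the two example teams, not an official matrix:

    # Sketch of the TLH4 coverage check. Capability assignments are illustrative.
    REQUIRED = {"investigation", "novel solutions", "transport", "combat"}

    CAPABILITIES = {
        "Thor": {"combat", "transport"},
        "Giant Man (Pym)": {"combat"},
        "Quicksilver": {"combat", "transport"},
        "Hulk (enraged)": {"combat"},
        "Tony Stark (genius)": {"investigation", "novel solutions"},
        "Bruce Banner (scientist)": {"investigation", "novel solutions"},
        "Black Widow": {"investigation"},
        "Ant-Man (Pym)": {"investigation"},
    }

    def shortfall(team):
        """Required capabilities that this team configuration fails to provide."""
        provided = set().union(*(CAPABILITIES[member] for member in team))
        return REQUIRED - provided

    print(shortfall(["Thor", "Giant Man (Pym)", "Quicksilver", "Hulk (enraged)"]))
    # missing: investigation and novel solutions -- all muscle, no brains
    print(shortfall(["Tony Stark (genius)", "Bruce Banner (scientist)", "Black Widow", "Ant-Man (Pym)"]))
    # missing: combat and transport -- insufficient firepower

Anything other than an empty set is a mission capability shortfall, and the check can be run over every allowed configuration before an ensemble is assembled.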