Normal Accident Theory Explained

by Emma Parkinson


Live events are complex. Really, really complex. So far there are no prizes for stating the blindingly obvious. However, when it comes to making sense of these situations, academia has some interesting perspectives to offer the events industry.

As far back as 1984, Charles Perrow was trying to understand the disaster at Three Mile Island. In doing so he established what he called Normal Accident Theory: a situation where the systems involved were so complex and so tightly coupled that an accident was, perhaps, the inevitable outcome. By “complexity”, Perrow meant any system where two or more discrete failures might interact in unexpected ways. By “coupling”, he meant a system where one element might have a “prompt and major impact” on another.

So how do we get from nuclear meltdown to live events? Once you start to dissect them, many of the events we manage exhibit both of these features in abundance.

Day in, day out, our industry deals with crowd dynamics, foul weather, temporary infrastructure and communications, staff unfamiliar with their role or location, large-scale deployment of team members with low levels of training, audiences who aren’t straight or sober, and so much more. Any single one of these elements has the potential to suffer a failure that might interact unexpectedly with another part of the system. There are so many variables for an event control room to deal with – especially when you add the unpredictability of humanity into the mix – that it’s easy to imagine the interaction of dozens of potential accident scenarios over the course of a single evening. It’s Perrow’s “complexity” in a nutshell.

Now consider the ways in which the worst accidents in our industry have seen people lose their lives: structural collapse, crushing injuries, compressive asphyxia, drug reactions and malicious acts of violence. The commonality? The shocking speed with which these events unfold, and in particular the way that a problem in one part of the system can bear down on another with ferocious rapidity. Remember Hillsborough: a multitude of errors compounded to produce the tragic outcome, but the key factor was that compressive asphyxia takes hold in less than two minutes. A deadly crowd surge can build from nowhere in seconds and dissipate just as quickly. A lone actor takes moments to wield a firearm with devastating results.

Framed like this, it doesn’t take much to conceptualize large-scale events as exhibiting the facets of Normal Accident Theory. But there’s plenty that we can do to protect our co-workers and audiences from that which Perrow thought so inevitable.

Step one is to reduce the complexity wherever it can be found: if the audience is moving between areas in large volumes, do everything possible to stagger those moves – overlap similar artists to reduce the potential for a mass crowd shift at the end of each set, or separate key attractions in such a way as to make the move between them unattractive. Using a mixture of police and private security providers? Make sure they have discrete responsibilities, to prevent confusion over who is doing what. Got a centralized control function? Consider devolving control of individual areas to dedicated teams who can concentrate on their area alone, without the distraction of the wider site. Simplify, simplify, simplify, so that one malfunction doesn’t have the opportunity to interact with another.

Step two is to ensure that where fast-paced coupling exists, the systems designed to manage it can react at a matching speed to any unfolding incident. One of the key features of Hillsborough was the rigidly hierarchical command structure, with a single police officer taking all significant decisions. By the time information about the problem reached him, people were dying: in the time it took him to react and provide a response, people were dead. Communications were of poor quality, runners were used in place of radios, and people felt that they couldn’t act without the Commander’s say-so: a single person became another weak point in a chain that was already badly compromised.

To resolve such issues in tightly coupled organizations it’s vital to consider an approach of “subsidiarity”, in which decisions are taken at the lowest appropriate level and coordination happens at the highest necessary level. In such systems, staff at all relevant levels are trained and – just as importantly – empowered and supported to take potentially life-saving decisions, unencumbered by a management hierarchy that might slow them down while they wait for advice or a response. Consider the well-trained pit-spotter with the power to stop the largest show, or the low-level security officer trained to move a crowd to safety in the event of an attack: by shaving seconds or minutes off a response, a critical incident might be forestalled.

Of course this is easy to write and far harder to achieve: such is the complexity of the systems in which we operate that decisions taken by individuals throughout the chain can have far-reaching effects of their own, themselves adding to the problems we seek to resolve. Through planning and training, however, key roles and positions can be identified where fast-paced coupling can be matched. We might not be able to cheat the frailty of the human bodies we’re tasked with looking after, but that’s no reason not to do our best to outrun the reaper.
