How hard is it to assess which risks to mitigate? It turns out to be pretty hard.
Let’s start with a model of risk so simplified as to be completely unrealistic, yet one that still retains a key feature. Suppose that we managed to translate every risk into a single normalized unit of “cost of expected harm”. Let us also suppose that we could bring together all of the payments that could be made to avoid risks. Given these simplifications, a mitigation policy seems easy: just buy each of the “biggest for your dollar” risk reductions.
Not so fast.
The problem with this is that many risk mitigation measures are discrete. Either you buy the air filter or you don’t. Either your town filters its water a certain way or it doesn’t. Either we have the infrastructure to divert the asteroid or we don’t. When risk mitigation measures are discrete, allocating the costs becomes trickier. Given a budget of 80 “harms” to reduce, and risks of 50, 40, and 35, buying the 50 first means neither remaining measure fits, leaving you 25 “harms” worse off than buying the 40 and the 35 together.
Alright, so how hard can it be to sort this out? After all, just because going big isn’t always the best use of your budget doesn’t mean the right allocation is hard to figure out. Unfortunately, this problem is also known as the “0–1 knapsack problem”, which computer scientists know to be NP-complete. This means that there is no known algorithm that finds exact solutions in time polynomial in the size of the input; in the worst case, one must search through a good portion of the possible combinations, which takes exponential time.
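To make this concrete, here is a minimal sketch in Python using the hypothetical numbers from the example above, and assuming, as the simplified model does, that each measure’s cost equals the harm it averts. It compares the greedy “biggest first” strategy against an exhaustive search over subsets:

```python
from itertools import combinations

# Hypothetical numbers from the example above. In this simplified model,
# each measure's cost equals the expected harm it averts.
risks = [50, 40, 35]
budget = 80

# Greedy: repeatedly buy the biggest measure that still fits the budget.
remaining, greedy_total = budget, 0
for r in sorted(risks, reverse=True):
    if r <= remaining:
        remaining -= r
        greedy_total += r

# Exact: enumerate every subset. Fine for three measures, but for n
# measures this is 2**n subsets -- the exponential blow-up that makes
# the general 0-1 knapsack problem hard.
best_total = max(
    sum(combo)
    for k in range(len(risks) + 1)
    for combo in combinations(risks, k)
    if sum(combo) <= budget
)

print(greedy_total)  # 50: buy the biggest, and nothing else fits
print(best_total)    # 75: buying the 40 and the 35 together does better
```

Exhaustive search is fine for three measures; the trouble is that nothing fundamentally better is known once the list of measures grows long.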
What does this tell us? First, it means we can’t expect every individual, organization, or government to make accurate comparative risk assessments for themselves; but neither should we discount the work they have already done. Accurate risk comparisons are hard won, and many time-honed cautions are embedded in our insurance policies and laws.
However, as a result of this difficulty, we should expect certain short-cuts to be taken, particularly cognitive short-cuts: sudden losses are felt more sharply, and have more clearly identifiable culprits, than slow shifts that erode our capacities. We should therefore expect our laws and insurance policies to be biased towards sudden, unusual losses, such as car accidents and burglaries, as opposed to a gradual increase in surrounding pollutants or a gradual decrease in salary as a profession becomes obsolete. Rare events may likewise be left out, since legal and financial adaptation works through repeated experience. We should also expect these policies to pay more attention to risks we have no “control” over, even when the activities we do control are actually more dangerous. We should therefore be particularly careful of extreme risks that move slowly and depend upon our own activities, as we are naturally biased to ignore them in favor of flashier, more sudden events. For this reason, models, games, and simulations are very important tools for risk policy. For one thing, they make slow shifts perceivable by compressing them. Further, they can move longer-term events into the short-term view of our emotional responses. However, these tools are only as good as the information they include, so we also need design methodologies that aim to discover information broadly enough to avoid these biases.
The discrete, “all or nothing” character of some mitigation measures has another implication: we cannot make implicit assessments of how much individuals of different income levels value their lives from the amounts they are willing to pay to avoid risks. Suppose that we have some number of relatively rare risks, each having a prevention stage, in which the risk has not manifested in any way, and a treatment stage, in which it has started to manifest. Even if the expected value favors prevention over treatment in every case, if one cannot pay for all such prevention, the best course in some cases is to pay for very few preventions and keep a pool of resources available to treat whatever does manifest, which we do not know ahead of time.
The implication for existential and other extreme risks is that we should be very careful to articulate clearly what the warning signs for each of them are, and when it is appropriate to shift from acts of prevention to acts of treatment. In particular, we should move promptly to mitigate the cases where the best available theories suggest there will be no further warning signs. With existential risks, the boundary between remaining flexible and needing to commit demands sharply different responses, but with unknown tipping points, the location of that boundary is fuzzy. And since a lack of knowledge admits no prevention and will always manifest, only treatment is feasible there, so acting quickly to build our theories is vital.
We can draw another conclusion by expanding on how the model given at the beginning is unrealistic. There is no such thing as a completely normalized harm, as there are tradeoffs between irreconcilable criteria, the evaluation of which changes with experience, both across and within individuals. Even if we temporarily limit an analysis to standard physical criteria (say, lives), rare events pose a problem for actuarial assessment: few occurrences give poor bounds on likelihood. Existential risks provide no direct frequencies, nor any opportunity for an update in Bayesian belief, so we are left with an inductive assessment of the risk’s potential pathways.
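As a small illustration of how few occurrences give poor bounds, here is a sketch that samples the posterior over an event’s rate after k events in n trials, under an assumed uniform Beta(1, 1) prior; the counts are made up:

```python
import random

# How well do k observed events in n trials pin down an event's rate?
# Posterior is Beta(k + 1, n - k + 1) under a uniform prior; we estimate
# a 95% credible interval by Monte Carlo sampling.
random.seed(0)

def rate_interval(k: int, n: int, draws: int = 100_000):
    samples = sorted(random.betavariate(k + 1, n - k + 1) for _ in range(draws))
    return samples[int(0.025 * draws)], samples[int(0.975 * draws)]

print(rate_interval(2, 100))      # roughly (0.006, 0.08): an order-of-magnitude spread
print(rate_interval(200, 10_000)) # roughly (0.017, 0.023): far tighter with more data
```

Two events in a hundred trials leave the plausible rate spread across an order of magnitude; the same proportion at a hundred times the data pins it down far better.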
However, there is also no single pool for mitigation measures. People will form and dissolve different pools of resources for different purposes as they are persuaded and dissuaded. Therefore, those who take it upon themselves to investigate the theory behind rare and one-pass harms, for whatever reason, provide a mitigation effort we might not rationally undertake for ourselves. It is my particular bias to think that information systems for aggregating these efforts and interrogating their findings, and methods for asking about further phenomena still, are worth the expenditure, and thus the loss in overall flexibility. This combination of our biases amounts to a randomized strategy for investigating unknown risks.
In my view, the Lifeboat Foundation works from a similar strategy as an umbrella organization: one doesn’t yet have to agree that any particular risk, mitigation approach, or desired future is the one right thing to pursue, which of course can’t be known. It is merely the bet that pooling those pursuits will serve us. I have some hope this pooling will lead to efforts that inductively combine the assessments of disparate risks and potential mitigation approaches.
A great assessment, John.
An additional perspective you’ve drawn into this concept is the tendency to ignore ‘drift in prevailing conditions’. Although the meme of the boiling frog still does the rounds, it’s fairly inaccurate for most frogs: they DO get out of the water, unlike humans, who have developed a reliance on indicators other than those coming from their biology.
In Risk Management, risks are typically seen in isolation (what some call silos or stovepipes) and, from a computational standpoint, are not normally considered in combination.
At the same time, Risk Management tends to ignore the impact of positive actions that are already, by default, mitigating some of the risk potential. The question filter is usually ‘if we don’t do X, what are the risks?’ as opposed to ‘are there any actions we are already taking that might mitigate some or all of the risk we are assessing?’
And in many commercial settings this leads to a naive approach: if we don’t insure the risk, what will it cost us to recover? In the end, the fiscal costs (of policy or action insurance) seem to be the determinant: where the policy cost is high and the recovery cost not much higher, no insurance is taken out.
As for life conditions — on this website much of the conversation tends to be about ‘what will the conditions be in the case of X event’ as opposed to ‘what will the recovery conditions be in the case of X event’.
Avoiding the perils of the amygdala is difficult, and approaching actuarial endeavors mathematically at least gives one the feeling of making a good-faith effort to rise above one’s simian roots. So this aspect of your post is highly admirable.
When you say that the problem of actuarial estimation is NP-complete, and therefore our hands are to be wrung mightily, there is better news on the horizon. Many NP-complete problems have been beaten upon using Monte Carlo methodologies. While these will not give you precise answers, they often give you an idea of what better answers might be. If you are seriously interested in estimating risk, therefore, my suggestion is to chase the Monte Carlo simulation game, and perhaps you can come up with some pretty good solutions, if not perfect ones.
In the meantime, the folks in Washington will continue to chase the collective amygdala… to our less than optimal outcome…
-Kelly
You deal with risk as if it is all or nothing. Part of fighting the asteroid is building fallout-style shelters and food storage stocks; preparing for a star exploding somewhat too close would mean, besides deep shelters, sperm, egg, and embryo banks to deal with too much radiation.
In an overwhelming disaster, would humans feel some responsibility to see that some Earth life survives?
A more unified view of living systems might consider Earth life one creature (or two, if underwater life is another) with very scattered brain bits.
The following post by me is in part about risk:
http://www.phillyimc.org/en/bee-colony-collapse-and-dealing-disaster
Markus: I’m glad you enjoyed the post, and that you highlighted ‘drift in conditions’ as being too easily ignored. In general, distinguishing between background conditions, change, and noise is a fundamental challenge. I think this is one area where tools can help us: to discover trends present at different time-scales that we might not otherwise notice.
I tend to be very sympathetic to the challenges risk management faces. The cost of investigating and managing the information about a particular risk will likely tend to truncate the investigation prematurely. I have hopes that providing the kind of information architecture that lets the results of one risk investigation transfer to the next can help extend this scope.
I also think that insurance regulation plays a more interesting role than it might initially seem: for which risks do we say “you have the right to do that, but you bear the consequences” versus “this is an ordinary risk in the course of what we are doing together, so we’ll help you in the case of harm”. Catastrophic risks in particular seem to pit mitigation alternatives with uncertain rewards and heavy short-term individual costs under specific regulatory frameworks against scientifically-uncertain irreversible long-term public costs that span borders. Figuring out when to hedge on the benefits while sharing the risk is tricky.
But, overall, you’re certainly right: there are probably many overlapping advantages of mitigation measures that go untapped due to current limitations in information tools and discovery processes.
Kelly: Your suggestion is a good one, as I enjoy the use of Monte Carlo approaches, though in a different way than you might suspect. You are entirely right that many great approximations to NP-complete problems are available, including randomized approaches such as Monte Carlo. However, I suspect that the hard part is in getting to this risk-comparison stage and knowing when you’ve gotten there. Severe challenges lurk in discovering the right information and integrating cross-domain preferences and expertise into a comparable specification.
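To illustrate the kind of randomized approach Kelly describes, here is a minimal Monte Carlo sketch applied to the toy knapsack framing from the post; the measures, costs, and sampling budget are all hypothetical:

```python
import random

# Monte Carlo search over the 0-1 knapsack framing from the post:
# sample random subsets of mitigation measures and keep the best
# feasible one found. Not guaranteed optimal, but often close.
random.seed(0)
costs  = [50, 40, 35, 20, 15, 10]  # hypothetical cost of each measure
values = [50, 40, 35, 20, 15, 10]  # expected harm averted (equal to cost here)
budget = 80

best_value, best_pick = 0, []
for _ in range(10_000):  # sampling budget
    pick = [i for i in range(len(costs)) if random.random() < 0.5]
    cost = sum(costs[i] for i in pick)
    value = sum(values[i] for i in pick)
    if cost <= budget and value > best_value:
        best_value, best_pick = value, pick

print(best_value, best_pick)  # the best feasible bundle found by sampling
```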
In this way, machine learning might provide a better metaphor than other algorithms: in addition to “Given this input, by what process do I find the right conclusion?”, one can also ask “Given the input I have so far, how good are my conclusions? What more information do I need, and where do I look first so that I can learn the most?” This is where Monte Carlo can serve us again. Given developments in non-parametric Bayesian inference, we can use Monte Carlo to help ask “Given the information we’ve seen so far, what underlying random processes most likely produced it, and how much further do we have to look before discovering something important that we didn’t already know?” I think there is great potential for design tools that incorporate this kind of discovery support.
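As a toy version of that last question, here is a sketch under a deliberately strong assumption: that discoveries arrive according to a Dirichlet process, in its Chinese restaurant process form, with a made-up concentration parameter:

```python
import random

# Under a Chinese restaurant process with concentration alpha, the
# probability that the (n+1)-th observation is a previously unseen
# category is alpha / (alpha + n).
def crp_new_category_prob(n_observed: int, alpha: float) -> float:
    return alpha / (alpha + n_observed)

# Simulate the process to watch discovery of new categories slow down.
def simulate_crp(n: int, alpha: float, seed: int = 0) -> int:
    rng = random.Random(seed)
    counts = []  # observations per category seen so far
    for i in range(n):
        if rng.random() < alpha / (alpha + i):
            counts.append(1)  # a brand-new category
        else:
            # join an existing category, proportionally to its size
            j = rng.choices(range(len(counts)), weights=counts)[0]
            counts[j] += 1
    return len(counts)

alpha = 2.0  # hypothetical concentration parameter
print(crp_new_category_prob(100, alpha))  # ~0.02: new things get rarer
print(simulate_crp(1000, alpha))          # distinct categories found so far
```

Under that assumption, the chance that the next observation is something genuinely new falls as alpha/(alpha + n), which gives a crude handle on how much further one has to look.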
I personally suspect that the amygdala is foundationally important; from what I’ve read, individuals who have damaged emotional systems are not hyper-rational but are completely indecisive, unable to make choices of any kind. However, I do think we are wise to frame risk issues such as to avoid unhelpful associations, and to look where framing may be doing us harm.
Richard: you are right to notice that I used asteroid strike mitigation in a rhetorical way. Be reassured that I think looking at it more broadly in terms of its consequences is entirely appropriate.
You are also right to bring up the value that people have for criteria other than their lives, including the survival of earth life. Whether or not people should have such a value, it’s clear that some certainly do, and that should be included in any proper assessment. How that value ends up being assessed is an open question. Some have raised the question of whether it’s appropriate to have people act as a proxy for nature, or whether nature would be better represented by its own agents. I myself remain very suspicious of this notion’s plausibility, but it is worth entertaining.
Unfortunately, I find it very difficult to discuss risk matters in ordinary text in any cohesive way, as any of these factors could lead off on many different tangents. This is one reason why I am proceeding from the subjects of methodology, information science, and cognitive limits: it would be a shame to have a foundation of such diverse talents without the information architecture to support their work effectively.