To follow up on my previous post on Nassim Taleb and his work, there is a critical but often overlooked distinction between risk and uncertainty.* Risk is quantifiable: whether or not it has been properly measured, it refers to something that is measurable. Uncertainty is not quantifiable. Risk can be "bought down" or hedged in ways that uncertainty cannot. A standard example of risk is the sort of game you find at a casino, like roulette, where the odds are clearly known. The state of affairs that will result from a war (the outcome or outcomes), on the other hand, is uncertain: it involves too many variables, complex interactions, and unknown unknowns to be meaningfully quantified. The outcomes of risks have known probability distributions. Not so with uncertainties. Here's an example of these concepts in action, in the context of baseball.
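To make the "risk" half of the distinction concrete, here is a minimal Python sketch of the roulette case (the single-zero wheel and the payouts are standard European roulette, but the code itself is my illustration, not anything from Taleb): because the probability distribution is fully known, the expected loss per bet can be computed exactly. There is no analogous calculation for the outcome of a war.

```python
# A minimal sketch of "risk" in the roulette sense: the probability
# distribution is fully known, so the expected outcome can be computed
# exactly rather than estimated. Assumes a single-zero European wheel.
from fractions import Fraction

POCKETS = 37  # pockets 0-36 on a European wheel

def expected_value(win_probability: Fraction, payout: int, stake: int = 1) -> Fraction:
    """Expected profit per bet when the odds and payout are known exactly."""
    return win_probability * payout * stake - (1 - win_probability) * stake

# A straight-up bet on a single number pays 35:1 and wins 1 time in 37.
straight_up = expected_value(Fraction(1, POCKETS), payout=35)

# An even-money bet (e.g., red) pays 1:1 and wins 18 times in 37.
even_money = expected_value(Fraction(18, POCKETS), payout=1)

print(f"Straight-up bet EV per $1: {float(straight_up):+.4f}")  # about -0.027
print(f"Even-money bet EV per $1:  {float(even_money):+.4f}")   # about -0.027
```

Every bet on the wheel carries the same, exactly computable expected loss of one unit in thirty-seven; that is what it means for the distribution to be known.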
This conceptualization of risk and uncertainty is sometimes mapped onto the "known unknown" vs. "unknown unknown" divide, with known unknowns characterized as risks and unknown unknowns as uncertainties. But the distinction between known unknowns and unknown unknowns is based on the knowledge of the observer, while the distinction between risk and uncertainty is in some respect a difference in "knowability." There are plenty of "known uncertainties" in the "known unknown" category: cases where we are perfectly capable of identifying something we don't know, but whose probability distribution we do not, or cannot, know.
Part of Taleb's argument with regard to the financial crisis could be framed in these terms. Financial managers thought they had transformed some types of uncertainty into risk that could be reliably estimated through new statistical techniques. But they were operating with unwarranted assumptions about the underlying probability distribution of events, which means that all they had actually done was conceal significant uncertainties (unquantifiable indeterminacies) that ultimately came back to bite them. (See this Wired Magazine article for a fascinating description of how this came about.) The more general form of this argument is that we tend to operate as though we are facing risks, rather than uncertainties, in part as a result of our psychological biases. (Taleb touched on a variant of this in the podcast I referred to in the earlier post.)
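As a rough illustration of that distributional point (this is my own sketch, not Taleb's analysis, and every parameter in it is made up), compare two return series with identical means and variances, one thin-tailed and one fat-tailed. A risk model calibrated only to the variance treats them as equivalent, yet the chance of an extreme loss differs by orders of magnitude:

```python
# Illustrative sketch: an unwarranted thin-tailed assumption hides tail risk.
# Two simulated daily return series share the same mean and variance; one is
# Gaussian, the other Student-t (fat-tailed). All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
sigma = 0.01          # 1% daily volatility for both series
df = 3                # degrees of freedom for the fat-tailed case

gaussian = rng.normal(0.0, sigma, n)

# Scale the Student-t draws so their variance also equals sigma**2
# (the variance of a t distribution with df > 2 is df / (df - 2)).
fat_tailed = rng.standard_t(df, n) * sigma / np.sqrt(df / (df - 2))

threshold = -5 * sigma  # a "five-sigma" daily loss
print("Std dev (gaussian):  ", gaussian.std())
print("Std dev (fat-tailed):", fat_tailed.std())
print("P(loss worse than 5 sigma), gaussian:  ", np.mean(gaussian < threshold))
print("P(loss worse than 5 sigma), fat-tailed:", np.mean(fat_tailed < threshold))
```

With these made-up parameters, the Gaussian series essentially never produces a five-sigma daily loss in a million draws, while the fat-tailed series does so on the order of a thousand times; a model that sees only the matching variances misses exactly the events that matter.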
This may all seem rather esoteric, but in a world with a fresh appreciation for the limits of statistical knowledge, there needs to be a way to act under uncertainty. The elemental caution reflected in Taleb's advice on operating in what he would call the "fourth quadrant," and what I would call conditions dominated by uncertainty, is as reasonable a response as any to decision-making in this type of environment. When faced with less-structured problems, dominated by uncertainties and unknown unknowns, highly structured analytic tools are frequently neither appropriate nor helpful. Gaming can be one of the least-structured analytic approaches, which limits its outputs but allows it to constructively address issues characterized by deep uncertainty.
* These definitions do not conform to popular usage of either term, but they are generally used in this way in policy analysis and represent a helpful way to characterize different types of indeterminacy. Risk in this case is not limited to costs or negative events, but instead applies to probabilistic outcomes both positive and negative.