She said it’s really not my habit to intrude
Furthermore, I hope my meaning
Won’t be lost or misconstrued
But I’ll repeat myself
At the risk of being crude
There must be
Fifty ways to leave your lover
Fifty ways to leave your lover
— Paul Simon, “Fifty Ways to Leave Your Lover”
In the unlikely event Your Humble Blogosopher were ever called upon, O Indulgent Ones, to develop a required reading syllabus on risk management for senior executives and government regulators of financial institutions, I am untroubled to admit I would fail to include even one textbook or how-to manual of the mathematical ilk which purports to convey the principles of that noble calling. There are plenty of such already to be found on the bookshelves of industry practitioners and regulators, replete with concise derivations of Itō’s lemma, elegant mathematical notation, and perfunctory handwaving about the real behavior of real human beings in real market contexts. As I am not eager to advance the wisdom common to most of this literature—that greed, fear, and other messy human behaviors can be ignored in favor of a market model based on the diffusion of gas molecules in a box—I would search for inspiration elsewhere.
Instead, I would be much more interested in collating writings and writers who I feel might inculcate a healthy fear of hubris, an aversion to overconfidence, and a deeply uncertain view of the ontological underpinnings of our epistemological beliefs. In other words, Dear Readers, I would like to scare the everlovin’ bejesus out of anyone who has the presumption to be a risk manager.
Leading candidates for my syllabus would include cautionary authors and works like David Hume, Oedipus Rex, War and Peace, and anything by Montaigne. And, I am pleased to say, this little gem of a commencement address by Atul Gawande:
Scientists have given a new name to the deaths that occur in surgery after something goes wrong—whether it is an infection or some bizarre twist of the stomach. They call them a “failure to rescue.” More than anything, this is what distinguished the great from the mediocre. They didn’t fail less. They rescued more.
This may in fact be the real story of human and societal improvement. We talk a lot about “risk management”—a nice hygienic phrase. But in the end, risk is necessary. Things can and will go wrong. Yet some have a better capacity to prepare for the possibility, to limit the damage, and to sometimes even retrieve success from failure.
When things go wrong, there seem to be three main pitfalls to avoid, three ways to fail to rescue. You could choose a wrong plan, an inadequate plan, or no plan at all. Say you’re cooking and you inadvertently set a grease pan on fire. Throwing gasoline on the fire would be a completely wrong plan. Trying to blow the fire out would be inadequate. And ignoring it—“Fire? What fire?”—would be no plan at all.
There is a fourth major pitfall, one which Mr. Gawande is perhaps too diplomatic to mention: insisting, in the face of incontrovertible evidence to the contrary, that the something going wrong is only doing so because other people (or things) are not acting according to plan. The common corollary to this position is that it is not your theory which is wrong, but rather that nature is behaving unexpectedly or people are acting “irrationally.” We have seen this video before, and it wasn’t convincing the first time.
As the cleverer among you might suspect, I have said similar things at this location in the past. The point of risk management is not to prevent failure, for that is impossible. The point is to have a plan ready to manage and control failure when it inevitably comes.
It is my belief that many quants, hedge fund managers, and investment bankers came to believe—consciously or not—that, by explicitly embracing and accounting for chance, they had tamed it. They spent countless millions of man hours designing and implementing elaborate mathematical models and risk control systems based on aleatory principles that could predict, with remarkable accuracy, the variation in return and behavior of securities and derivatives under normal circumstances. They spoke confidently about “value at risk” and “maximum expected daily trading loss” as if they knew what they were talking about. As if those terms actually meant anything. And then they trotted off to their bank, or their prime broker, or the Discount Window to borrow a couple more turns of leverage against their proprietary positions.
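For readers who have never met the term in the wild, a 95% one-day “value at risk” is nothing grander than a quantile of an assumed return distribution. A minimal sketch, using invented numbers and the very normality assumption I am complaining about, might look like this:

```python
from scipy.stats import norm

# Hypothetical trading book (all numbers invented for illustration).
portfolio_value = 1_000_000_000   # a $1 billion book
daily_mu = 0.0                    # assumed mean daily return
daily_sigma = 0.01                # assumed daily volatility: 1%
confidence = 0.95

# The 95% one-day VaR is simply the 5th-percentile loss of the *assumed*
# P&L distribution: the loss you expect to exceed roughly 1 day in 20.
var_95 = -(daily_mu + daily_sigma * norm.ppf(1 - confidence)) * portfolio_value
print(f"95% one-day VaR: ${var_95:,.0f}")

# Note what this number is not: it is not a maximum loss, and it says
# nothing about how large the losses are on the days that breach it.
```

Every figure downstream of that calculation inherits whatever you assumed about the volatility input and the shape of the distribution.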
But you cannot tame chance. That is what makes it chance. At base, implicitly attributing the kind of predictability these individuals seemed to ascribe to chance was a fundamental error, a category-mistake.
To use an example from the not-so-distant past, could the principals at now-defunct hedge fund Long Term Capital not see that pegging the odds of losing all their capital in one year at 10²⁴-to-1 against was ludicrous on its face? (And I am not arguing that Myron Scholes and the other LTCM propeller heads picked the wrong distribution for their probability estimates, as if settling on a Lévy skew alpha-stable distribution with α = 1.8 and β = 0.931 would have been more accurate than a lognormal one.) In all intellectual honesty, how could they possibly know? Hubris, yes, but more importantly epistemic blindness was at play here.
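To make the epistemic point concrete, and not to nominate a better distribution, here is a toy calculation (my parameters, nobody’s actual model) showing how violently a tail-probability estimate swings with the choice of distribution, even when both candidates are calibrated to the same measured volatility:

```python
import numpy as np
from scipy.stats import norm, t

# Toy example: probability of a single 10-standard-deviation daily loss
# under two distributions calibrated to the SAME volatility.
sigma = 1.0
threshold = -10 * sigma

# Thin-tailed assumption: normally distributed returns.
p_normal = norm.cdf(threshold, loc=0, scale=sigma)

# Fat-tailed assumption: Student-t with 3 degrees of freedom, rescaled so
# its standard deviation also equals sigma (std of t(df) is sqrt(df/(df-2))).
df = 3
scale = sigma / np.sqrt(df / (df - 2))
p_t = t.cdf(threshold, df, loc=0, scale=scale)

print(f"P(10-sigma loss) if normal    : {p_normal:.1e}")
print(f"P(10-sigma loss) if Student-t : {p_t:.1e}")
print(f"Ratio                         : {p_t / p_normal:.1e}")
```

Two models that agree on every parameter you can actually measure can disagree by many orders of magnitude about the odds of ruin. Which is precisely the point: no amount of calibration tells you which tail you are actually living in.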
For even if you have guessed (or calculated) the probabilities correctly, giving one-in-ten-million odds that a life-destroying asteroid will hit Earth in the next ten years does you no good when a Manhattan-sized meteorite is discovered hurtling toward Rio de Janeiro the following day. In retrospect, it seems pretty clear that it is far more important to plan how you intend to deal with an unlikely event when and if it does happen than to shrug and say it will probably never happen. Disaster planning and scenario testing are far more valuable risk management practices than fine-tuning the estimated volatility inputs to your CDO trading model.
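What does scenario testing actually look like? Nothing fancier than revaluing the book under a handful of named disasters and asking whether the firm survives each one, regardless of how “unlikely” anyone believes them to be. The positions, shocks, and capital figure below are invented purely for illustration:

```python
# Stress-test sketch (all positions, shocks, and capital figures invented).
# Sensitivities are in $ millions of P&L per 100% move in each risk factor.
positions = {"equities": 400, "credit": 300, "rates": -200}

# Each scenario is a fractional move in each risk factor.
scenarios = {
    "1987-style equity crash": {"equities": -0.25, "credit": -0.05, "rates": 0.02},
    "credit crunch":           {"equities": -0.10, "credit": -0.40, "rates": 0.05},
    "flight to quality":       {"equities": -0.05, "credit": -0.10, "rates": 0.10},
}

capital = 150   # $ millions of capital available to absorb losses

for name, shocks in scenarios.items():
    pnl = sum(exposure * shocks.get(factor, 0.0)
              for factor, exposure in positions.items())
    verdict = "survives" if capital + pnl > 0 else "does not survive"
    print(f"{name:25s} P&L {pnl:8.1f}mm -> {verdict}")
```

The question each line answers is not “how likely is this?” but “what do we do if it happens anyway?”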
Doctors like Mr. Gawande seem to have an instinctive handle on how to cope with the unexpected, probably in large part because they have seen or heard of so many “impossible” complications arising in a hospital context. They have a healthy respect for the unpredictability of the human body, and a healthy appreciation of the limits of their own knowledge and ability to predict its behavior. In one respect they benefit from a larger dataset of experiences to call upon than financial risk managers: people have heart attacks, surgical complications, and rare diseases with much higher frequency than we suffer from globe-rattling financial collapses. But the so-called Masters of the Universe could learn a lot from the humility of doctors, not only about how to prevent disasters from occurring, but more importantly how to recover from them when they do.
... we need to rediscover a little more respect (and fear) for the ineluctable and irreducible operations of chance in our lives, including in the markets. We need to keep reminding ourselves that having a 95% confidence level that our hedge fund will not lose more than $100 million in a day does not mean it won’t lose $500 million tomorrow, or $75 million a day for ten days in a row. We need to remember that even well-understood probabilities assert themselves only over the long run, so we must make sure the whipsaw of short-term events does not blow us up before we can profit on our longer-term investments.
And it’s a good idea to have a plan, a direction in which you’d like to go. But it’s always a better idea to have back-up plans as well, alternate routes you have mapped out in case your main chance doesn’t work out as expected. Keep those in your back pocket, so you don’t frighten the Congressmen or limited partners you rely on into paralyzed immobility. But keep them nevertheless.
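About that 95% confidence figure: to put crude numbers on it (all of them simulated, none of them anyone’s actual P&L), take a hypothetical fund with fat-tailed daily returns whose 95% one-day figure sits in the neighborhood of $100 million, and look at what the days that breach it actually cost:

```python
import numpy as np

rng = np.random.default_rng(7)

# Ten years of simulated daily P&L for a hypothetical fund with fat-tailed
# (Student-t, 3 degrees of freedom) returns. All figures in $ millions.
days = 2520
pnl = 42.0 * rng.standard_t(df=3, size=days)

# The 95% one-day VaR is just the 5th percentile of the simulated P&L.
var_95 = -np.percentile(pnl, 5)
breaches = pnl[pnl < -var_95]

print(f"95% one-day VaR        : ${var_95:,.0f}mm")
print(f"Worst single day       : ${-pnl.min():,.0f}mm loss")
print(f"Days breaching the VaR : {len(breaches)} out of {days}")
print(f"Average breach         : ${-breaches.mean():,.0f}mm loss")
```

The 95% number is not a ceiling on anything; it is merely a line that fat-tailed reality crosses routinely, and sometimes spectacularly.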
We probably have more than enough risk managers in the global financial system nowadays.
But we sure as hell need a lot more risk doctors.
Related reading:
Why So Serious? (December 10, 2008)
Nobody Expects the Spanish Inquisition (May 24, 2007)
P(x) = 1/1,000,000,000,000,000,000,000,000 (May 9, 2007)
© 2012 The Epicurean Dealmaker. All rights reserved.