The Black Swan
An analysis of high-impact, unpredictable events and how they shape our world more than we typically acknowledge.
This is all the more worrisome when we engage in deadly conflicts: wars are fundamentally unpredictable (and we do not know it). Owing to this misunderstanding of the causal chains between policy and actions, we can easily trigger Black Swans thanks to aggressive ignorance—like a child playing with a chemistry kit.
The strategy for the discoverers and entrepreneurs is to rely less on top-down planning and focus on maximum tinkering and recognizing opportunities when they present themselves. So I disagree with the followers of Marx and those of Adam Smith: the reason free markets work is that they allow people to be lucky, thanks to aggressive trial and error, not by giving rewards or “incentives” for skill. The strategy is, then, to tinker as much as possible and try to collect as many Black Swan opportunities as you can.
History and societies do not crawl. They make jumps. They go from fracture to fracture, with a few vibrations in between. Yet we (and historians) like to believe in the predictable, small incremental progression.
So we can learn a lot from data—but not as much as we expect. Sometimes a lot of data can be meaningless; at other times one single piece of information can be very meaningful. It is true that a thousand days cannot prove you right, but one day can prove you wrong.
An eyewitness to a crime might be drunk. But it remains the case that you know what is wrong with a lot more confidence than you know what is right. All pieces of information are not equal in importance.
Popper introduced the mechanism of conjectures and refutations, which works as follows: you formulate a (bold) conjecture and you start looking for the observation that would prove you wrong. This is the alternative to our search for confirmatory instances. If you think the task is easy, you will be disappointed—few humans have a natural ability to do this. I confess that I am not one of them; it does not come naturally to me.
The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.
By a process called reverberation, a memory corresponds to the strengthening of connections from an increase of brain activity in a given sector of the brain—the more activity, the stronger the memory.
Much of the trouble with human nature resides in our inability to use much of System 2, or to use it in a prolonged way without having to take a long beach vacation. In addition, we often just forget to use it.
We favor the sensational and the extremely visible. This affects the way we judge heroes. There is little room in our consciousness for heroes who do not deliver visible results—or those heroes who focus on process rather than results.
The neglect of silent evidence is endemic to the way we study comparative talent, particularly in activities that are plagued with winner-take-all attributes. We may enjoy what we see, but there is no point reading too much into success stories because we do not see the full picture. Recall the winner-take-all effect discussed earlier.
Those who spend too much time with their noses glued to maps will tend to mistake the map for the territory.
For many people, knowledge has the remarkable power of producing confidence instead of measurable aptitude.
We certainly know a lot, but we have a built-in tendency to think that we know a little bit more than we actually do, enough of that little bit to occasionally get into serious trouble.
What matters is not how often you are right, but how large your cumulative errors are. And these cumulative errors depend largely on the big surprises, the big opportunities.
We humans are the victims of an asymmetry in the perception of random events. We attribute our successes to our skills, and our failures to external events outside our control, namely to randomness. We feel responsible for the good stuff, but not for the bad. This causes us to think that we are better than others at whatever we do for a living.
The term serendipity was coined in a letter by the writer Horace Walpole, who derived it from a fairy tale, “The Three Princes of Serendip.” These princes “were always making discoveries by accident or sagacity, of things which they were not in quest of.”
We forget about unpredictability when it is our turn to predict. This is why people can read this chapter and similar accounts, agree entirely with them, yet fail to heed their arguments when thinking about the future.
Poincaré became a prolific essayist in his thirties. He seemed in a hurry and died prematurely, at fifty-eight; he was in such a rush that he did not bother correcting typos and grammatical errors in his text, even after spotting them, since he found doing so a gross misuse of his time. They no longer make geniuses like that—or they no longer let them write in their own way.
This multiplicative difficulty leading to the need for greater and greater precision in assumptions can be illustrated with the following simple exercise concerning the prediction of the movements of billiard balls on a table. I use the example as computed by the mathematician Michael Berry. If you know a set of basic parameters concerning the ball at rest, can compute the resistance of the table (quite elementary), and can gauge the strength of the impact, then it is rather easy to predict what would happen at the first hit. The second impact becomes more complicated, but possible; you need to be more careful about your knowledge of the initial states, and more precision is called for. The problem is that to correctly compute the ninth impact, you need to take into account the gravitational pull of someone standing next to the table (modestly, Berry’s computations use a weight of less than 150 pounds). And to compute the fifty-sixth impact, every single elementary particle of the universe needs to be present in your assumptions! An electron at the edge of the universe, separated from us by 10 billion light-years, must figure in the calculations, since it exerts a meaningful effect on the outcome.
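The mechanism behind Berry's example is that each collision multiplies the uncertainty in your initial measurement, so the precision you need grows exponentially with the number of impacts you want to predict. A minimal sketch of that compounding, with an assumed (illustrative) amplification factor per impact rather than Berry's actual figures:

```python
# Illustrative sketch of error amplification in billiard-ball prediction.
# The amplification factor and initial uncertainty below are assumptions
# chosen for illustration, not Berry's actual computed values.

AMPLIFICATION = 30.0     # assumed growth factor of angular error per impact
initial_error = 1e-10    # assumed uncertainty (in radians) of the opening shot

error = initial_error
for impact in range(1, 10):
    error *= AMPLIFICATION
    print(f"impact {impact}: angular error roughly {error:.3e} rad")

# By the ninth impact the error exceeds a full radian: the trajectory
# is, for practical purposes, unpredictable from the initial data.
```

The point is not the particular numbers but the geometry of the growth: a fixed multiplicative factor per step means each additional impact you want to forecast demands another order-of-magnitude improvement in your knowledge of the initial state.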
As individuals we should love free markets because operators in them can be as incompetent as they wish.
Consider that before we knew of bacteria and their role in disease, doctors rejected the practice of hand washing because it made no sense to them, despite the evidence of a meaningful decrease in hospital deaths.
Randomness, in the end, is just unknowledge. The world is opaque and appearances fool us.
This idea that in order to make a decision you need to focus on the consequences (which you can know) rather than the probability (which you can’t know) is the central idea of uncertainty.
In the end we are being driven by history, all the while thinking that we are doing the driving.
Remember that for an event to be a Black Swan, it does not just have to be rare, or just wild; it has to be unexpected, it has to lie outside our tunnel of possibilities.
The same illusion of concreteness affects what we call “standard” deviations. Take any series of historical prices or values. Break it up into subsegments and measure its “standard” deviation. Surprised? Every sample will yield a different “standard” deviation. Then why do people talk about standard deviations? Go figure.
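The experiment described above is easy to run. A minimal sketch, using a fat-tailed Pareto draw as a stand-in for a historical price series (the distribution, sample size, and segment count are all illustrative assumptions):

```python
import random
import statistics

# Sketch: the "standard" deviation of a fat-tailed series is unstable.
# A Pareto draw (tail exponent 1.5, so infinite variance) stands in for
# a series of historical prices; all parameters are illustrative.
random.seed(42)
series = [random.paretovariate(1.5) for _ in range(10_000)]

# Break the series into ten subsegments and measure each one's "standard" deviation.
chunk = len(series) // 10
devs = [statistics.pstdev(series[i * chunk:(i + 1) * chunk]) for i in range(10)]
print([round(d, 2) for d in devs])
```

Each subsegment yields a noticeably different "standard" deviation, because a handful of extreme observations dominate whichever segment they land in; the statistic never settles on a stable value.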