It should be obvious that the death of an individual human being isn’t as bad as the death of all humankind. But that’s only true if you accept the following premise laid out by Nassim Nicholas Taleb in his upcoming book, Skin in the Game:
I have a finite shelf life; humanity should have an infinite duration. Or I am renewable, not humanity or the ecosystem.
The quotation actually comes from a draft version of one chapter available here. The book is not yet out.
But what does this mean in practical terms? The simple answer is that human societies should not engage in activities that risk destroying all of humanity. Nuclear war comes to mind. Most, if not all, people recognize that a nuclear war would not only result in unthinkably large immediate casualties, but might also threaten all life on Earth with a years-long nuclear winter.
But are we humans risking annihilation through other activities? Climate change comes to mind. So do our perturbations of the nitrogen cycle, which we are only beginning to understand. In addition, the introduction of novel genes into the plant kingdom through genetically engineered crops, with little testing, poses unknown risks not only to food production, but also to biological systems everywhere.
The thing that unites these examples is that they represent an introduction of novel elements (artificial gene combinations not seen in nature) or vast amounts of non-novel substances (carbon dioxide, nitrogen compounds and other greenhouse gases) into complex systems worldwide. Scale, it turns out, matters. A population of only 1 million humans on Earth living with our current technology would almost certainly not threaten climate stability or biodiversity.
But there is one technology I mentioned that might threaten even this small-scale human society: genetically engineered plants. That’s because those plants have one characteristic that the other two threats I alluded to do not: plants are self-propagating. They can spread everywhere without humans making any additional effort beyond introducing them into the environment. (For a discussion of why this matters, see “Ruin is forever: Why the precautionary principle is justified.”)
Here is the essential finding of the chapter in Taleb’s new book: If you keep repeating an action that has a nonzero chance of killing you, then over time you will almost surely end up in the grave. If society does the same thing, it risks the same fate. Individuals may die before risky behavior catches up with them. Societies live on to repeat actions that risk ruin. And we as a society are engaging in multiple activities that risk our ruin as a species, a circumstance in which the dangers don’t merely add up: our chances of surviving each one multiply together, so the overall risk of ruin compounds.
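A quick back-of-the-envelope sketch makes the arithmetic concrete. (The percentages below are mine, invented purely for illustration; they are not Taleb’s numbers or estimates of any real-world risk.)

```python
# Illustrative sketch: how a small per-round chance of ruin behaves under repetition.
# The probabilities below are invented for illustration, not estimates of real risks.

def survival_probability(p_ruin_per_round: float, rounds: int) -> float:
    """Chance of never being ruined across independent repetitions."""
    return (1.0 - p_ruin_per_round) ** rounds

# A 1% chance of ruin per round seems tolerable -- until you repeat it.
for n in (1, 10, 100, 1000):
    print(f"{n:>4} rounds: {survival_probability(0.01, n):.5f} chance of survival")
# 1 round: 0.99000; 100 rounds: ~0.36603; 1000 rounds: ~0.00004

# Several simultaneous risks compound: the survival chances multiply.
risks = [0.01, 0.005, 0.002]  # hypothetical per-round ruin chances
combined = 1.0
for p in risks:
    combined *= (1.0 - p)
print(f"Surviving one round of all three risks at once: {combined:.5f}")
```

A risk that looks negligible on any single occasion becomes a near-guarantee of ruin once the occasions pile up.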
As Taleb notes in his book chapter:
If you climb mountains and ride a motorcycle and hang around the mob and fly your own small plane and drink absinthe, your life expectancy is considerably reduced although not a single action will have a meaningful effect. This idea of repetition makes paranoia about some low probability events perfectly rational.
The confusion about risk has two parts: 1) the inability to see that we are piling danger upon danger and 2) the failure to understand that we have been assessing risk in the wrong way.
Taleb illustrates the two kinds of assessments:
Consider the following thought experiment.
First case, one hundred persons go to a casino to gamble a certain set amount each and have complimentary gin and tonic…. Some may lose, some may win, and we can infer at the end of the day what the “edge” is, that is, calculate the returns [for the casino] simply by counting the money left with the people who return. We can thus figure out if the casino is properly pricing the odds. Now assume that gambler number 28 goes bust. Will gambler number 29 be affected? No.
You can safely calculate, from your sample, that about 1% of the gamblers will go bust. And if you keep playing and playing, you will be expected to have about the same ratio, 1% of gamblers over that time window.
Now compare to the second case in the thought experiment. One person, your cousin Theodorus Ibn Warqa, goes to the casino a hundred days in a row, starting with a set amount. On day 28 cousin Theodorus Ibn Warqa is bust. Will there be day 29? No. He has hit an uncle point; there is no game [any] more.
What Taleb is explaining is the difference between what he calls ensemble probability (involving the 100 gamblers) and time probability (involving repeated actions by one gambler). He asserts that almost all of social science research and economic theory is tainted by conflation of the two. Social scientists and economists don’t realize that they are using ensemble probability (something like a snapshot, which ignores tail risks) to gauge risk where time probability is appropriate (more like a long-running movie, one that takes tail risks into account).
In the example, you as an individual do not have a 1 in 100 chance of going broke at a casino unless you never walk into a casino again after the day you and 99 others are observed in the above experiment. If you visit the casino often enough, it is almost certain that you will lose all your money (at least what you’ve decided to set aside for gambling). Casinos finance themselves on this certainty. The house always has an edge, or no one would ever build a casino. If you play against the house continuously, it may take some time, but you will be ruined.
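A small simulation captures the difference between the two probabilities. (The bankroll, bet size, win odds and session length below are arbitrary assumptions chosen for demonstration, so the bust rates won’t match the 1% in Taleb’s story.)

```python
# Illustrative simulation of ensemble vs. time probability.
# Bankroll, bet size, win odds and session length are arbitrary assumptions.
import random

def gamble_session(bankroll: int = 300, bet: int = 10,
                   p_win: float = 18 / 38, rounds: int = 100) -> int:
    """Make fixed even-money bets until broke or out of rounds; return final bankroll."""
    for _ in range(rounds):
        if bankroll < bet:
            return 0  # bust: the "uncle point"
        bankroll += bet if random.random() < p_win else -bet
    return bankroll

random.seed(1)

# Ensemble probability: 100 different gamblers, one session each.
busts = sum(gamble_session() == 0 for _ in range(100))
print(f"Ensemble: {busts} of 100 gamblers went bust today")

# Time probability: one gambler who returns day after day.
# A bust on day N means there is no day N+1 -- the sequence simply stops.
day = 0
while gamble_session() > 0 and day < 100_000:  # cap guards against an endless run
    day += 1
print(f"Time: the lone gambler was ruined on day {day + 1}")
```

Run this sketch over and over, and the ensemble count stays small and fairly stable, while the lone gambler’s ruin always arrives eventually; only the date changes.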
The fact that casino gambling in the long run is unprofitable (for the players, not the owners) is known in advance. It is not hidden from the players. The payoff for placing a bet on the winning number in roulette is 35 to 1. If you get lucky and win, you keep the chip you placed on the winning number and receive 35 additional ones. Trouble is, there are 38 numbers on an American roulette wheel. (You probably didn’t remember 0 and 00.) If you were to place a chip on every number so that you could win on every turn of the wheel, you’d get back 36 chips and thus lose two of the 38 you placed each time. The house has a mathematical advantage you cannot beat over time.
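For the skeptical, here is that arithmetic spelled out; it is nothing more than the paragraph above restated as a few lines of code.

```python
# The roulette arithmetic from the paragraph above, made explicit.
# American wheel: 38 pockets (1-36 plus 0 and 00); single-number payout is 35 to 1.
pockets = 38
payout = 35  # chips won per chip staked on a single-number win (stake is kept)

# Cover every number with one chip: exactly one number wins each spin.
chips_placed = pockets      # 38 chips out
chips_back = payout + 1     # 35 won plus the winning chip itself = 36
print(f"Net per spin covering the whole board: {chips_back - chips_placed} chips")  # -2

# Equivalently, the expected value of a single one-chip bet:
ev = (1 / pockets) * payout - ((pockets - 1) / pockets) * 1
print(f"EV per chip bet: {ev:.4f} (house edge of about {-ev:.2%})")  # ~5.26%
```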
In the realm of complex Earth systems, however, the odds of ruin cannot be calculated. In fact, how the actions we take might cause ruin cannot be known for certain. Much about these systems is hidden from us. What we do know is that our very survival depends on their proper functioning, and that, therefore, perturbing them as little as possible makes sense. We cannot be certain exactly what actions at what scale will, for example, create runaway global warming.
Our situation is worse than that of the heedless gambler. At least he or she knows exactly what will lead to ruin and therefore what must be done (or not done) to prevent it. With regard to climate and other Earth systems, we are largely in the dark. We do know with unusual certainty that human activities are warming the planet. We have a good idea about what those activities are. What we don’t know is precisely how the climate will change as a result of our actions except for one thing: Our models have been too conservative about the extent and pace of warming. Climate change is moving faster than expected.
So there is one final, crucial aspect of the dangers we are creating for the Earth’s systems. The precise extent and nature of the systemic, planet-wide risks we are taking in fiddling with those systems are hidden. We cannot know all the interactions in the atmosphere, in the soil, in the oceans or in the plant kingdom that result from our actions. That means our models cannot capture all the possibilities the way a gambler can precisely calculate the odds of winning at roulette. And this is emphatically a reason for us to be very, very careful. That’s because we know that what we are doing could lead to the ruin of our civilization and its people. In fact, we know ruin becomes ever more likely over time because we keep repeating acts that have a nonzero chance of causing systemic ruin while increasing their scale.
And yet, we continue to release warming gases and pollutants into the atmosphere across the planet. We continue to release novel genes into the wild across the planet with little testing, genes which can and will self-propagate. We continue to disrupt the nitrogen cycle. Repeating such practices only brings us closer to ruin.
When someone says that genetically engineered crops have been around for years and no catastrophe has occurred, that person is either a spokesperson for the industry or simply doesn’t understand that we are courting hidden risks under time probability. The same can be said for those downplaying the risks of climate change, except that the industry in question will, of course, be the fossil fuel industry.
So many of our institutions and arrangements have been premised on the idea that we are exposed only to risk under ensemble probability. The banking, finance and investment industry comes to mind. And yet, the 2008 financial crisis showed that we are, in fact, exposed to risk in that industry under time probability. Hidden risks blew up the world financial system only a year after the head of the U.S. Federal Reserve at the time pronounced the system sound.
Without understanding the difference between the two kinds of risk, we will continue to take actions and build institutions for which we believe we understand the associated dangers, when in fact we remain entirely in the dark about the chances we are taking and the stakes involved.
Image: Gambling. George Cruikshank (1848) in “Drunkard’s Children” via Wikimedia Commons.