Thoughts on The Black Swan
*This post has had very little editing; it is written to reinforce my new knowledge, and is therefore not that valuable to the public. I publish it to start a habit of reflection that I hope becomes valuable to more than just me.*
Taleb describes the complexity of randomness as it is: unknowable. True randomness isn't statistical randomness, or what he calls "ludic" randomness, some Platonic probability. Statistical randomness only occurs within manufactured contexts like casinos or politics. These contexts fit within Mediocristan, which I take to refer to the mediocre impact of randomness there: outcomes are somewhat predictable, a bit like a fractal pattern. The unknown unknowns are true randomness. They are not just the map, our interpretation of reality; they are the territory. They are the Black Swan.

The analogy of the black swan: for centuries we knew only of white swans, i.e. the "Truth" was swans = white, until black swans were discovered in Australia and the truth changed dramatically. This demonstrates two things: the asymmetry of knowledge (or truth), and the fallacy of treating absence of evidence as evidence of absence. The first, asymmetry, means 100 pieces of evidence for a fact can be outweighed by a single piece of contrary evidence. Taleb suggests we should be sceptical of evidence, of truth, and of reliance on the past.

An analogy is the Thanksgiving turkey (or the family chicken): for 100 days the turkey lives a happy life, until the farmer kills it for dinner. From the turkey's perspective, the probability of such an event occurring, based on past events, was 0. From the farmer's perspective it was certain. This is the crux of Taleb's argument against depending on models built on the statistical bell curve: they work until they don't. I'm still a little hung up on the flaws, or inadequacy, of this argument. I can see the appeal, since the model works in most contexts, but Taleb's argument is that it only works within Mediocristan, not Extremistan. There is no evidence in Mediocristan that a black swan event will occur until it actually occurs, as an Extremistan event.
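The Mediocristan/Extremistan contrast can be made concrete with a toy simulation. This is my own illustration, not anything from Taleb's book: the distributions, parameters, and variable names are all assumptions chosen for the sketch. In a thin-tailed world (heights, modelled here as a Gaussian), no single observation can meaningfully move the total; in a fat-tailed world (wealth-like, modelled here as a Pareto distribution with a tail exponent near 1), a single draw can dominate everything seen before it.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

N = 10_000

# Mediocristan: thin-tailed, e.g. human height in cm.
# The largest individual is a negligible share of the total.
heights = [random.gauss(170, 10) for _ in range(N)]
height_share = max(heights) / sum(heights)

# Extremistan: fat-tailed (Pareto, alpha = 1.1, roughly wealth-like).
# One extreme draw can account for a large share of the total.
wealth = [random.paretovariate(1.1) for _ in range(N)]
wealth_share = max(wealth) / sum(wealth)

print(f"Mediocristan: largest height is {height_share:.4%} of the total")
print(f"Extremistan:  largest fortune is {wealth_share:.4%} of the total")
```

The point of the sketch is the turkey's mistake in statistical form: sampling more of Mediocristan-style history tells you almost nothing about what a single Extremistan observation can do to your totals.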
We are narrative machines. We fail to predict black swan events (truly random events) because our interpretation of the territory (reality) stays within the bounds of Mediocristan philosophy and models. History, and society, change through large shifts, not small iterations, and those large shifts are only recognised in retrospect. We give reasons to events after the fact, pointing out the signs and causes, when in fact they were probably random.
Part of this thinking has changed the way I view luck. Luck is often misattributed to some form of conscious agency, a power to choose, when randomness is occurring all the time. For every Elon Musk there are at least 100 others, just as capable, who were never noticed. People who prevent events are never remembered, because the events never occurred; only those who fix things after an impactful event are. I guess this plays into humans being fantastic narrative machines: we make sense of the world through order and linearity. We struggle with the unknown unknowables, like the deaths a technology quietly prevented, versus the clear, known prevention of deaths after an event of that nature has occurred.
It’s best to view these thoughts as formed from both The Black Swan and Antifragile; the two books share the same philosophy of randomness and uncertainty. Antifragile says more about the danger of depending on the certainty that randomness cannot occur. If you don’t account for the possibility of randomness you are fragile, like systems of power, e.g. politics. Taleb argued that introducing randomness, or allowing it within systems, creates antifragile systems. Seeking out seemingly robust models made institutions more susceptible to crises like the financial crash.