Unknown Knowns 1 – Probability: A Series of Excerpts from “Vital Foresight”, Chapter 5, by David Wood
Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.
– Donald Rumsfeld
Editor’s Note: In the above quote, then-Secretary Rumsfeld forgot perhaps the most important category: “unknown knowns”. Chapter 5 of David Wood’s Vital Foresight discusses many items in this final category. The following highly-condensed 5000-character excerpt includes some of my favorite paragraphs, connected in a more-or-less logical manner. Much of the material supporting these paragraphs was removed for the sake of quick digestibility, and the reader would be remiss not to purchase and read David’s full work, and to leave a 5-star review. This series should comprise several short essays, but David’s full work can be found here.
~ Zach Richardson, Director of Publication, United States Transhumanist Party, January 2022
…In brief: to evaluate the likelihood of a state of affairs, such as someone being ill with a given disease, or them being guilty of a particular crime, you need more than just some evidence that seems to confirm that state of affairs. You also need to know the probability of a “false positive”, in which the evidence has arisen even without the state of affairs applying. That probability of a false positive depends, in turn, on assumptions about various background statistics known as “prior probabilities”. In effect, people often naively assume that these prior probabilities should be split 50:50. But the case of wrongly insisting that someone should quarantine, based on what was a false positive test, shows this to be a mistake.
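To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The prevalence, sensitivity, and specificity figures are illustrative assumptions, not numbers taken from the book:

```python
# A minimal sketch of the false-positive calculation via Bayes' Theorem.
# The 1% prevalence, 99% sensitivity, and 95% specificity below are
# illustrative assumptions, not figures from Vital Foresight.

def posterior_probability(prior: float, sensitivity: float, specificity: float) -> float:
    """P(ill | positive test), computed with Bayes' Theorem."""
    true_positive = sensitivity * prior                # P(positive and ill)
    false_positive = (1 - specificity) * (1 - prior)   # P(positive and not ill)
    return true_positive / (true_positive + false_positive)

# Naive 50:50 prior: a positive test looks overwhelmingly convincing.
print(posterior_probability(prior=0.5, sensitivity=0.99, specificity=0.95))   # ~0.95

# Realistic 1% prevalence: most positive results are false positives.
print(posterior_probability(prior=0.01, sensitivity=0.99, specificity=0.95))  # ~0.17
```

The contrast between the two outputs is the whole point: the same test result warrants very different conclusions depending on the prior probability, which is exactly what the naive 50:50 assumption gets wrong.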
Because many judges and lawyers lack an understanding of Bayes’ Theorem, questionable verdicts have been reached in a number of court cases. It is likely that other cases feature similarly unsafe conclusions whose dubious nature has not even been noticed; such is the extent of poor understanding of probabilities.
Thoughtful application of Bayes’ Theorem has produced some stunning results. Following the unexplained 2009 disappearance of Air France flight AF 447 while it flew over the Atlantic from Rio de Janeiro toward Paris, teams of wreckage recovery experts had failed on four occasions over the course of two years to locate the remains of the airplane. Then a team of statisticians were brought into the project, to reconsider all the information from the past failed searches. Putting the data into formulae that included Bayes’ Theorem, the team gave their reasons for focusing the search in a particular area of the seabed. Within two weeks of this new search, the wreckage of the flight was found – along with the black-box recorder and the critical information it included.
A previous application of those same search methods had discovered the wreckage of a ship that had been missing since 1857, the SS Central America. Because that ship was carrying gold worth fifty million dollars at present-day prices, numerous searches had been conducted for it over the decades. Finally, a young mathematician called Lawrence Stone used Bayes’ Theorem, along with results from all previous searches, to narrow down the search region, leading to the recovery in September 1988 of more than one ton of gold bars and coins. It was the same researcher, Stone, who headed the team of statisticians who found AF 447. Stone has stressed the advantages of rigorous mathematical approaches over “ad hoc methods” that “delayed success” by years. Our intuitions, expressed in these “ad hoc methods”, are too prone to mislead us in complicated situations.
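The core move in this style of Bayesian search can be sketched in a few lines: after a search of one region fails, that region’s probability is scaled down by the chance the search would have detected the wreck there, and the whole map is renormalised. The regions and detection probabilities below are invented for illustration; Stone’s actual models were far more sophisticated:

```python
# A toy sketch of the Bayesian search update -- not Stone's actual model,
# whose priors and detection probabilities were far richer.

def update_after_failed_search(priors, searched_region, detection_prob):
    """Shrink the searched region's probability by the chance the search
    would have found the wreck there, then renormalise the map."""
    posteriors = priors.copy()
    posteriors[searched_region] *= (1 - detection_prob)
    total = sum(posteriors.values())
    return {region: p / total for region, p in posteriors.items()}

# Illustrative prior beliefs over three seabed regions (assumed numbers).
beliefs = {"A": 0.5, "B": 0.3, "C": 0.2}

# Region A is searched with 90% detection probability; nothing is found.
beliefs = update_after_failed_search(beliefs, "A", 0.9)
print(beliefs)  # mass shifts away: A ~0.09, B ~0.55, C ~0.36
```

This is how a “failed” search still carries information: each unsuccessful pass reshapes the probability map, which is why feeding the results of all previous searches into the model paid off.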
Our assessment of the credibility of forecasts of the future likewise needs to move beyond ad hoc intuitions to a more rigorous basis. If the BBC weather forecast says there’s a 71% probability of rain tomorrow, but the day stays dry throughout, does this mean we should stop listening to BBC weather forecasts? After all, the forecast seemed confident.
Again, if a military specialist forecasts a 15% chance of nuclear war happening in the next ten years, but no such war breaks out during that time, does this mean we should disregard any future predictions the same forecaster makes regarding an increased likelihood of nuclear war?
The Bulletin of the Atomic Scientists doesn’t give numerical probabilities for its forecasts, but instead uses the metaphor of the hands on a clock. In 1947, it launched its “Doomsday Clock”, with its hands set to seven minutes before midnight, apparently close to an imminent global catastrophe. In 1949, after the Soviet Union tested its first atomic bomb, the hands of the clock were advanced four minutes. Then in 1953, the Soviet Union tested a hydrogen bomb – which is much more powerful than an atomic bomb – much sooner than western-based observers had anticipated. The Korean War was still in progress, and US President Harry Truman had been considering the use of nuclear weapons. The Bulletin of the Atomic Scientists nudged the hands of the clock forward by another minute, leaving just two minutes before annihilation. Given that no such annihilation has come to pass, can we now ignore any subsequent updates from that Bulletin?
Not so fast. This is not a matter of truth versus falsity. It’s a matter of statistics. When the weather forecast predicts rain with a probability of 71% on many different occasions, the actual proportion of those occasions on which rain occurs is pretty close to 71%. We would be foolish to follow any instincts that told us to ignore a forecaster just because of one variant outcome. Even if there are many cases of apparently wrong predictions, that record would still be compatible with the forecaster being reliable, provided they gave a low probability each time (a calibration check, sketched after the list below, makes this concrete). Even if the probability is low, we should still pay attention to that forecast if:
- The predicted impact is high;
- The forecasters have followed appropriate processes in reaching their prediction;
- The forecasters have updated their models in the light of what happened since their earlier forecasts.
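One way to make the calibration point rigorous is to group past forecasts by the quoted probability and compare each group against the observed frequency of the event. A minimal sketch, using made-up forecast data:

```python
# A minimal calibration check on (quoted_probability, outcome) pairs.
# The forecast history below is made up for illustration.
from collections import defaultdict

def calibration_table(forecasts):
    """Group forecasts by quoted probability and report the observed
    frequency of the event within each group."""
    buckets = defaultdict(list)
    for quoted, occurred in forecasts:
        buckets[quoted].append(occurred)
    return {q: sum(outcomes) / len(outcomes) for q, outcomes in sorted(buckets.items())}

# A well-calibrated forecaster's 71% calls should come true roughly
# 71% of the time, even though many individual calls "fail".
history = ([(0.71, True)] * 7 + [(0.71, False)] * 3
           + [(0.15, True)] * 1 + [(0.15, False)] * 6)
print(calibration_table(history))  # {0.15: ~0.14, 0.71: 0.7}
```

On this view, a single dry day after a 71% rain forecast tells us almost nothing; only the aggregate record can show whether a forecaster deserves our attention.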
… Alongside our inability to think seriously about exponentials and probabilities, we have inherited some dangerous mental assumptions regarding the reliability of our own thinking processes. We are over-confident, insufficiently willing to seek evidence that would challenge our current ideas, and too prone to spend time exchanging self-reinforcing views with people who think the same as us.
David Wood is Chair of the London Futurists.