Monday, April 5, 2010

Two Fallacies

After reading a huge chunk of Nassim Nicholas Taleb's famous magnum opus "The Black Swan" (which alternates between brilliant and rage-inducing), I was thinking about his chapter on "silent evidence": the evidence that never makes it into the pages of history and thus distorts our view of it. One example of silent evidence Taleb gives is Casanova, or rather, those like him who never became famous. Casanova believed that his "étoile" (star) pulled him out of tricky situations, and indeed he weathered numerous setbacks only to come back as strong as ever. But consider a large number of potential Casanovas, each of whom, upon suffering a setback, has probability p of bouncing back (in reality each setback is different, but we're keeping it simple so it's easy to follow). Many are eliminated when they fail to bounce back (probability 1 - p at each setback, for each potential Casanova), and in the end only those who made it through every setback make it into the pages of history. A lucky one-in-a-million streak will therefore appear regularly when there are enough potential Casanovas, even in the absence of any outside guiding influence. It only seems miraculous and probabilistically impossible because we read about the surviving Casanovas, and not the ones who were pulled under.
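The arithmetic is easy to check with a quick simulation (a sketch in Python, with numbers made up purely for illustration: a million candidates, ten setbacks each, and a coin-flip chance p = 0.5 of surviving any one setback):

```python
import random

def surviving_casanovas(n_candidates, n_setbacks, p_bounce_back, seed=0):
    """Count how many candidates survive every setback, where each setback
    is survived independently with probability p_bounce_back."""
    rng = random.Random(seed)
    survivors = 0
    for _ in range(n_candidates):
        if all(rng.random() < p_bounce_back for _ in range(n_setbacks)):
            survivors += 1
    return survivors

# With a million candidates and a 50% chance of weathering each of ten
# setbacks, roughly n * p**k = 1_000_000 * 0.5**10, i.e. close to a
# thousand, survive every setback.
print(surviving_casanovas(1_000_000, 10, 0.5))
```

Any individual survivor beat odds of about one in a thousand, yet hundreds of them exist; read only the survivors' memoirs and the "étoile" looks very real.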

This came back to me later, when Taleb veered into the infuriatingly wrong in a paragraph about how casinos prepare for all the wrong tragedies. His example: casinos guard against probabilistic variability and cheating through diversification and surveillance, yet the things that actually cost them are freak events like tigers mauling performers. His conclusion is that the majority of risks fall outside the casinos' models. But this falls victim to his own 'silent evidence' fallacy! Variability doesn't harm casinos precisely BECAUSE they guard against it - so those potential losses never make it into the books. They are silent evidence. So is cheating: cheaters get caught or scared away, so the potential risk is entirely unobserved.

Unfortunately, this analysis could itself fall prey to another fallacy - the so-called 'Elephant Repellant' fallacy. The idea goes like this: a man walking along a road in England sees a farmer spraying a very smelly substance all over his fields. Naturally curious, he goes over and, holding his nose, asks what he is spraying. "Elephant repellant," replies the farmer. "But there are no elephants anywhere near here!" protests the man. "I know," says the farmer, "it really works!" The point of the fable is clear: anecdotally, a preventative measure can only ever be shown to be ineffective - when disaster strikes anyway. If nothing happens, it is not clear whether the prevention was responsible or whether nothing was going to happen anyway. (A rigorous scientific test can distinguish effective prevention from ineffective, but sometimes, as with terrorism, controlled studies are impossible, and in other cases narrative bias gets in the way regardless.)

This might explain the popularity of curative methods over preventative ones: we like being able to judge effectiveness. Thus we labor over a pill to cure obesity and a cure for cancer, while preventative methods go underutilized, placing a huge strain on healthcare.

We thus have two fallacies that work in opposite directions: whether you assume there is no silent evidence or assume that it exists, you expose yourself to the risk of faulty analysis. So what's to be done? Here's a suggestion in anecdotal form:

In 1973, Israeli intelligence observed major movements by the Egyptian army. The military intelligence bureau, Aman, was certain this was just an exercise. The Egyptians had done the same thing the previous year, there had been no war, just as Aman had predicted, and nothing seemed different. The actual result is of course famous: Egypt pushed into the Sinai, and for a few days it seemed as if Israel was going to lose a decisive ground confrontation to its Arab neighbors, until it was able to mobilize its reserves and push the invading forces back.

Here is an alternative to Aman's decision-making method that seems to me far more rational than its modus operandi of trying to read Arab intentions: prepare for everything. If there is a significant probability of war (>1% in my view, but I'm not a military analyst), mobilize enough forces to repulse any initial offensive. "But you're only recommending this with hindsight!" I hear you say. "You knew there would be an attack! This tells us nothing!" No. That is not my methodology. I recommend preparing for any movement capable of producing an attack. Israel should have put its active forces on alert and prepared to mobilize the reserves even in '72, when the Arabs didn't invade. Sure, you wouldn't know whether your preventative measures stopped an actual attack or you were just using elephant repellant, so to speak. But you would know you were secure against any possible attack. The enemy is inherently unpredictable; ANY factor depending on human beings with free will is inherently unpredictable. "No battle plan survives contact with the enemy." However, given the level of intelligence available to the IDF, the capabilities of the enemy were known - so prepare for any attack up to the level they are capable of. In this way, enemy intent is removed from the analysis and it is impossible for them to catch you off guard.
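The asymmetry behind that >1% threshold can be made concrete with a back-of-the-envelope expected-cost comparison (a sketch with entirely hypothetical cost units - the real figures are unknowable):

```python
def expected_costs(p_war, cost_of_mobilizing, cost_of_unprepared_war):
    """Expected cost of each policy, in arbitrary units.

    Mobilizing costs the same whether or not war comes; standing down
    costs nothing unless war comes, in which case it costs far more.
    """
    mobilize = cost_of_mobilizing
    stand_down = p_war * cost_of_unprepared_war
    return mobilize, stand_down

# Hypothetical numbers: mobilization costs 1 unit, losing the opening
# battle unprepared costs 1000 units, and war has a mere 1% probability.
mobilize, stand_down = expected_costs(0.01, 1.0, 1000.0)
print(mobilize, stand_down)
```

Even at a 1% probability of war, standing down costs ten times as much in expectation as mobilizing - which is why trying to read intentions is a losing game when the stakes are this lopsided.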

I will illustrate my point with another example:

A teacher tells his students that there will be a pop quiz next week, intended to surprise them. The students reason as follows: if the teacher puts the quiz on Friday, they will obviously be expecting it, since it's the only day left once Thursday passes with no quiz. So the last day the teacher could hold the quiz is Thursday. But if the teacher puts it on Thursday, by the same logic they'll be expecting it then too. And so on - the teacher cannot surprise the students, even though the students don't know when the quiz will be. And indeed the teacher cannot surprise smart students. Why? Let's see what this logic prescribes: if Thursday passes with no quiz, prepare, for it will be on Friday. If Wednesday passes with no quiz, prepare, for the teacher will not ruin the surprise by putting it on Friday, so it will come on Thursday. And so on. The general advice: be prepared for the quiz EVERY DAY. You cannot surprise someone who is prepared every day.

And this is indeed my recommendation for users of 'elephant repellant': figure out how capable the elephants are of getting into your fields and trampling your crops, and buy repellant accordingly. Do not try to read the minds of any elephants capable of coming in: 'they might not want to' is not good preparation. If there is a credible threat, and the costs of unpreparedness are high enough, as was the case for Israel in '73, you must prepare for it. Otherwise, you expose yourself to a (negative) Black Swan that may not be quite as unlikely as you think.
