Signal and the Noise Ch07

The Signal and the Noise chapter seven opens with an examination of the "Swine Flu Fiasco." No, not the more recent one from this decade, but the original scare in the 1970s. It played out much the same way as the recent one, though: someone contracted H1N1, the news outlets exploded, and the outbreak was predicted to be the next "Spanish Influenza" event, which never happened. The virus didn't become a pandemic; it died out rather uneventfully. The prediction's most vigorous champion was President Gerald Ford, whose secretary of health forecast over one million American deaths from the new strain of flu. Ford pushed for a two-hundred-million-dollar vaccination budget while the media buzz was still building, even though some medical experts put the chance of a pandemic at only 2 to 35 percent and saw no cause for major alarm or for the hasty vaccination plans. Silver also notes the more recent return of swine flu, whose severity was again overestimated, though not by such a huge margin this time. Both outbreaks still spurred overwhelmingly wrong predictions about their outcomes.

Silver explains these failures in prediction by stating that some things are simply intrinsically very hard to predict, and that the forecasters trusted too much in extrapolation, the assumption that a current trend will continue indefinitely into the future. This is especially true in new situations, such as the introduction of the H1N1 virus into a community. Because the virus had never been studied before, there was no existing data to draw upon and anchor any extrapolation that may have been carried out. Silver draws on the case of the AIDS outbreak, in which the early data points did not allow for good predictions through extrapolation.
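To make the fragility of extrapolation concrete, here is a minimal sketch in Python. The case counts are made up purely for illustration (they are not Silver's figures, nor real epidemiological data); the point is only that an exponential trend fitted to a few early observations, then projected years ahead, swings wildly when the estimated growth rate shifts even slightly.

```python
# Minimal sketch of why extrapolating from a few early data points is fragile.
# The case counts below are hypothetical, chosen only for illustration.

import numpy as np

# Hypothetical early observations: (year, cumulative cases)
years = np.array([0, 1, 2, 3])
cases = np.array([100, 240, 580, 1400])

# Fit an exponential trend, cases ~ a * exp(b * t), via linear regression on log(cases).
b, log_a = np.polyfit(years, np.log(cases), 1)
a = np.exp(log_a)

# Extrapolate ten years out under the fitted trend...
t = 10
print(f"Fitted growth rate b = {b:.2f}")
print(f"Projected cases at year {t}: {a * np.exp(b * t):,.0f}")

# ...and see how sensitive that projection is to a small change in the rate.
for b_alt in (b - 0.1, b + 0.1):
    print(f"With b = {b_alt:.2f}: {a * np.exp(b_alt * t):,.0f}")
```

With these toy numbers, nudging the growth rate by 0.1 in either direction moves the ten-year projection by several-fold, which is the sort of sensitivity that made the early AIDS and swine flu forecasts so unreliable.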

Silver also warns against "self-fulfilling" and "self-cancelling" predictions. These are predictions that, merely by being made, can have a hand in shaping the outcome. They affect human behavior, which can make the outcome more likely to confirm or nullify the prediction. This can happen in surveys: a surveyor expecting a certain kind of answer might ask leading questions without meaning to, skewing the results so that respondents are more likely to confirm the surveyor's prediction.

The main point of the chapter is very Socratic in nature. Silver argues that it is often harmful to pretend you can make a good prediction when in fact you cannot, or, in the words of that famous philosopher, to pretend to be wise when you are not. Instead, you should consider a wide range of possibilities, and also weigh the consequences of making a poor prediction. He brings ethics into the equation, noting that medical professionals in particular have a responsibility to make accurate predictions or face very real repercussions. Imagine if the swine flu situation had been reversed, and the outbreak actually had happened after we predicted it would not. The consequences would have been absolutely dire.
