Public Lecture | The Island of Knowledge: The Limits of Science and the Search for Meaning

Last month, I attended a talk at the American Museum of Natural History on the limits of science. Physicist Marcelo Gleiser, who had just published a book on the subject, The Island of Knowledge: The Limits of Science and the Search for Meaning, asked how much we can really know about the world given the limitations of our measuring instruments and current knowledge. For Gleiser, the boundaries of science will always expand with our increased understanding; each new discovery only brings more questions to answer, which, in effect, renders the scientific endeavor a sort of Sisyphean cycle of problem solving and problem creation. The blurb of his book captures the source of these ideas in a nutshell:

[The] limits to our knowledge arise both from our tools of exploration and from the nature of physical reality: the speed of light, the uncertainty principle, the impossibility of seeing beyond the cosmic horizon, the incompleteness theorem, and our own limitations as an intelligent species.

In other words, our ability to model natural laws is limited by the imperfections in the tools we use to measure them. Even as technology improves, new tools will still have a threshold beyond which they are no longer useful. In the 19th century, observing a molecule was beyond the capabilities of microscopes. Today, microscopes can in fact view single molecules, but they cannot view atoms in motion during a chemical reaction. Each iterative technological improvement presents us with new gaps in understanding to consider. In his talk, Gleiser likened these scientific boundaries to the coast of a volcanic island: just as a growing island acquires a longer coastline, as the body of science grows, so too does the possibility for discovery. So, in effect, we can never truly know all there is to be known.

Gleiser’s ideas make sense when taken in the context of his field of study. Physicists devote a great deal of energy to assessing the degree of uncertainty in their measurements; from significant figures to standard uncertainties to graphing with error bars, organized methods of error quantification are necessary for a science that must mathematically predict the goings-on of the world around it. This concern with error is much more pronounced in physics than in, say, chemistry or pure mathematics. In mathematics, a theorem either is or is not valid. Likewise, in chemistry a compound is either cis, trans, or neither. So, when looked at through the lens of a physicist, where error is such a central focus, it might seem that there will always be a degree of uncertainty in scientific assertions.
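To make the error bookkeeping concrete, here is a toy sketch of the kind of routine uncertainty quantification described above: adding two measured quantities and combining their standard uncertainties in quadrature, the usual rule for independent errors. The numerical values are invented for the illustration.

```python
import math

# Two hypothetical length measurements with standard uncertainties (meters).
a, sigma_a = 12.3, 0.2
b, sigma_b = 4.56, 0.05

# The sum of the measurements, with uncertainties combined in quadrature
# (square root of the sum of squares), the standard rule when the two
# errors are independent.
total = a + b
sigma_total = math.sqrt(sigma_a**2 + sigma_b**2)

print(f"{total:.2f} ± {sigma_total:.2f} m")  # prints: 16.86 ± 0.21 m
```

Note that the combined uncertainty (≈0.21 m) is dominated by the larger of the two input uncertainties, which is exactly why physicists track where their error comes from.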

Of course, the idea sounds sensible when considered off the cuff; however, when put through more rigorous analysis (as we scientists are prone to do, I’m afraid), Gleiser’s argument does not stand up to scrutiny. Firstly, never is a strong word to use in math and science. Never holds behind it the weight of eternity; it means not now, not in 100 years, not in 1 million years, not even in many billions of years. In science and math, never means absolutely, unequivocally, never. So saying “we will never know all there is to know” is a huge statement to make. In his discussion and his book, Gleiser uses the trials of modern science, a 300-year-old institution, to extrapolate a trend of how human understanding will persist into infinity. If ever there were an example of the fallacy of hasty generalization, this would be it. In the context of infinity, or even the 5 billion years until the universe decays through entropy, our 300-year age of scientific reason pales in comparison. If 5 billion years could be scaled to the size of a meter stick, the 300-year period between the Age of Enlightenment and the 21st century would be a sliver of a human hair at the end of that stick. That is to say, it is an extremely small span of time from which to make a judgment.
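A quick sanity check on the meter-stick analogy bears this out. Using the 5-billion-year and 300-year figures quoted above (the hair width of roughly 70 micrometers is an assumed typical value, not from the text):

```python
# Scale 5 billion years down to a one-meter stick and ask how long
# 300 years of modern science is on that stick.
universe_years = 5e9
science_years = 300
stick_length_m = 1.0

sliver_m = stick_length_m * science_years / universe_years
print(f"{sliver_m * 1e9:.0f} nanometers")  # prints: 60 nanometers

# Compare to an assumed typical human-hair width of ~70 micrometers.
hair_width_m = 70e-6
print(f"{sliver_m / hair_width_m:.4f} of a hair's width")
```

Three hundred years scales to about 60 nanometers, less than a thousandth of a hair's width, so if anything the analogy understates just how thin a slice of time we are generalizing from.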

Notwithstanding developments in other technologies, computation and artificial intelligence alone promise machines that can not only think coherently but entertain multiple ideas at once at femtosecond speeds, perhaps within the next hundred years. If mankind survives the next billion years, there’s no telling what amazing tools might be at its disposal. The face of science might indeed change by then into a higher form of reasoning, in the same way the mysticism of alchemy transitioned into chemistry. Perhaps then mankind will understand the universe to an extent that we can call “complete.”

Notice that I use words like “might” and “perhaps” in the last paragraph. That is because I am not making my own assertion, a more “correct” idea that should stand to replace Gleiser’s. Rather, what I am saying is that the far-off future is entirely up in the air. In the end, Gleiser might be right, I might be, or some third party might be. Simply because he uses a fallacy (hasty generalization) to justify his ideas does not mean the ideas themselves are wrong. What I take issue with is not Gleiser’s theory but his certainty in it. The only scientifically responsible statement one can make about humanity in the far-flung future billions of years from now is that we do not know what will happen. Gleiser has an idea, but really, his guess is as good as anyone’s. In a sense, Gleiser was right. He only neglected to take his argument to its ultimate logical end: there is an inherent uncertainty in everything, including whether uncertainty itself will persist indefinitely into the future.
