Sunday, September 20, 2009
On Intelligent Design
Consider the following reasoning I witnessed a while back at a parent meeting. Some trees are so tall that there is no scientific explanation for the fact that they can pump water from their roots all the way up to their tops (I seriously doubt that there is no explanation, but let's ignore that for now). Therefore there exists an upward force that pushes things in the opposite direction of gravity...
What is the problem here? I claim there are two fallacies in the logic. Firstly, the fact that we currently don't have an explanation for some phenomenon in terms of ordinary physics doesn't mean there is none. A scientist should readily admit that many measurable phenomena remain unexplained today. It doesn't mean that we have to introduce a new mysterious force. Secondly, the "upward force" feels like a valid explanation, but we have merely given the unexplained phenomenon a new name. Instead of saying "I don't know the answer to this question" we say "I know the answer, it is X", which turns out to be just another way of saying the same thing. Giving names to things generates the illusion that we understand them. And people really don't like not understanding things. Our survival depends on understanding, so to give ourselves peace of mind we simply delude ourselves.
Isn't this even true in real science? Apparently, when Feynman asked his father why objects tend to persist in their straight-line motion, his father said something of the sort: "That's because momentum is conserved." But he warned the young Feynman that giving something a distinguished name isn't the same as explaining it.
When does something become an explanation then? Isn't all we are doing just giving names to things? Well no, the distinguishing factor is prediction. When you can genuinely predict a new phenomenon, then you have found useful structure in the world. Whatever your naming conventions, that predictive power is useful and real.
Now to intelligent design (ID). In ID, the most scientific concept seems to be "irreducible complexity" (IC). This idea should be given a chance. If someone could compute the IC of a complex system as the gap in complexity between the system under study and the next simplest version of the system that still performs some useful function, then that would seem like a genuine advance. Note though that this is extremely difficult, because one would have to search over all possible systems that could lead to the system under study, and one would have to have a notion of what it means to be useful. I believe these problems basically make the concept of irreducible complexity a non-starter, but we want to be extremely open-minded here.
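Just to make the bookkeeping concrete, here is a toy sketch of what "computing the gap" might look like. Everything in it is invented for illustration: the string representation of a "system", length as a crude complexity proxy, and the is_useful predicate; a serious version would have to solve exactly the two problems mentioned above.

    # Toy sketch only: a "system" is a string of component symbols.

    def is_useful(system: str) -> bool:
        # Hypothetical stand-in for "still performs some useful function";
        # defining this properly is the genuinely hard, unsolved part.
        return "ABC" in system

    def complexity(system: str) -> int:
        # Crude proxy: just count the components.
        return len(system)

    def ic_gap(target: str) -> int:
        # Gap between the target and the simplest still-useful variant found
        # by deleting one contiguous run of components. A real search would
        # have to range over *all* possible precursor systems.
        variants = [target[:i] + target[j:]
                    for i in range(len(target))
                    for j in range(i + 1, len(target) + 1)]
        useful = [v for v in variants if is_useful(v)]
        if not useful:
            return complexity(target)  # nothing simpler works: maximal gap
        return complexity(target) - min(complexity(v) for v in useful)

    print(ic_gap("XXABCXYZ"))  # -> 3: three components can be dropped at once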
Where things go desperately wrong is when we think we have found a system with high irreducible complexity and then exclaim that it must therefore have been designed by an intelligent designer (a.k.a. God). This is just like the tree example: we have shown that contemporary science cannot explain the phenomenon, and therefore we give the problem another name, namely God. However, stating that God designed certain things in the world does not make us understand, and thus predict, the phenomenon any better. Our attitude should be: 1) maybe Darwin's version of evolution needs to be improved to properly explain structures with high irreducible complexity, or 2) perhaps our estimate of the irreducible complexity was wrong and we need to look harder for simpler functional structures that could evolve into the "problematic" structure.
Personally, I am very tolerant of religion. It brings some good and some bad things to people, but everyone should make up their own mind on these issues. It would be arrogant to claim there is no God. My hope is that we can teach religion in schools on an objective level. I favor teaching all religions to our children, not just one. However, I do not believe science and religion should be mixed. They live on different planes. Trying to prove the existence of God is a lost battle. Trying to prove that someone performed three miracles and then declaring that person a saint is simply silly (at the level of Santa Claus). Declaring the existence of God based on the fact that you (supposedly) cannot explain some natural phenomenon using modern science is equally backward.
Saturday, September 19, 2009
Jared Diamond's Psychohistory of the Human Race
In his Foundation Trilogy, Isaac Asimov describes a new kind of science: "psychohistory". The idea of this discipline is to predict the course of human history through mathematical analysis. The fundamental assumption is that the impact of individuals is "washed out" due to the law of large numbers: if you have enough elements, it is only their average behavior that counts. The same idea underlies thermodynamics and statistical mechanics: with a *very* large number of individual atoms behaving chaotically, we have no chance of predicting the properties of individual atoms. However, new emergent properties such as temperature and pressure are predictable.
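A toy simulation (with made-up numbers) shows the intuition: each individual is unpredictable, yet the population average becomes ever more stable as the population grows.

    import random

    # Each "individual" behaves randomly; only the average is predictable.
    for n in (10, 1_000, 1_000_000):
        population = [random.gauss(0.0, 1.0) for _ in range(n)]
        print(n, sum(population) / n)  # hugs 0 ever more tightly as n grows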
Humans are not atoms, of course, and although in Asimov's universe there are many more humans to average over than the 6 billion alive today, the appearance of "The Mule" does cause a breakdown of the predictions. Interestingly, one can relate this idea to recent insights from physics and mathematics suggesting that societies are in a state of "self-organized criticality". At its core this means that small disturbances can have large consequences that propagate through the entire system. To give an example from human history: the invention of the atomic bomb by a few scientists changed history radically.
Despite these potential objections, Jared Diamond's book "Guns, Germs and Steel" is the best psychohistory of the human race I have read so far. In fact, I personally find it the best popular-science read on my list, right next to "The Selfish Gene" by Richard Dawkins.
What are Diamond's claims? He claims that despite the sometimes large impact of individuals, there are also very important and predictive regularities in human history that depend on geography. Why did European colonists basically wipe out the native Indian population of America, and not the reverse? The core reason, he argues, lies in the fact that massive food production was first invented in the Fertile Crescent (the Iraq-Iran region). The climate there was ideal for many species of plants and animals to become domesticated. Moreover, the east-west orientation of Eurasia made it easy for inventions to spread to Europe (and as far as China, although China seems to have invented food production around the same time). Food production made it possible to switch from a hunter-gatherer lifestyle to a farming lifestyle, which in turn made it possible for many people to specialize in things other than farming. This way, larger cities and states started to emerge, with a specialized fighting caste. These "successful" states then spread either by conquest or by simply producing more offspring.
But surprisingly, even more important than having powerful new weapons was the fact that cattle generate diseases, and living among cattle thus causes a population to become resistant to lethal diseases such as smallpox, measles and so on. (Of course the price paid for this resistance was a high death toll, because for a population to become resistant an awful lot of cruel "survival of the fittest" has to take place first.) In the Americas, food production was much tougher due to climate issues and due to the north-south axis, which prevented the effective spreading of inventions. Also, very few large animals were available for domestication (perhaps they were all killed when the first people settled there long ago). So when the Spanish arrived, they not only had superior weapons and administration skills, they carried many more lethal diseases with them as well.
Diamond goes on to tell the amazing story of many continents: how the Polynesians were former Chinese evicted from the mainland, how the Aboriginals of Australia were decimated by Europeans, how the Bantu people spread over much of Africa by using superior farming practices. If you need to remember one thing it is this: it is massive food production that has caused all the major migrations and colonizations of populations. And, perhaps equally important, the fact that one civilization ended up dominating another has nothing to do with race; it has to do with geography.
So it seems some form of psychohistory is still possible, even though we are averaging over tiny numbers compared to the numbers Asimov had in mind. I warmly recommend this book to anyone who is even vaguely interested in how the world has become what it is today.
Friday, September 4, 2009
Quantum Mechanics is not the final answer.
The first thing I was told when starting my course on quantum mechanics (QM) was: "if you think you understand QM, you don't understand QM". It turned out to be true. I could master the rules of the game, but things always struck me as fundamentally weird. Information is carried by wave functions that collapse into definite states when you decide to look at them?? "Oh well, that is because we were not evolved to think about the very tiny," I kept telling myself.
For decades very few serious physicists would dare to propose something more interpretable and fundamental than QM. The main reason is the Bell inequalities, which say that QM cannot be explained by a deterministic hidden-variable theory, i.e. a more fundamental, deterministic, causal and local theory at a smaller scale that would give rise to QM at a larger scale. It turned out that to do that, one would need a non-local or a non-causal theory.
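For concreteness, the best-known form of this result is the CHSH inequality: for any local deterministic hidden-variable theory, the measured correlations E for detector settings a, a' and b, b' must satisfy

    S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad |S| \le 2,

whereas quantum mechanics predicts values of |S| up to 2\sqrt{2}, and experiments observe such violations.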
Now all of that seems to be changing. It takes a genius with a Nobel prize at the end of his career to stick out his neck. Gerard 't Hooft of Utrecht University has started a lonely uphill battle to come up with, yes, a deterministic hidden-variable theory of QM. So how does he propose to deal with the Bell paradox? The gist of the argument is that QM is an emergent theory, in a similar way as thermodynamics is an emergent theory of many particles that move chaotically. The concepts of pressure, temperature, entropy, etc. only make sense if we are talking about the average behavior of very many particles that move about in a way that is unpredictable at the level of an individual particle. However, treated as a group, new structure emerges. That is what we call thermodynamics.
In a similar sense, QM could emerge from a more fundamental, deterministic and interpretable theory at a smaller scale. A first draft of such a theory is based on the idea of cellular automata. Basically, imagine that one can split the world up into small boxes and that every box can be in a certain state. The time evolution of a particular box is governed by rules that generate a new state (deterministically!) as a function of the states in its direct neighborhood. However, there is a twist. New states are created on two-dimensional surfaces such as the horizon of a black hole. Nonlinear dynamics tells us that "information" can be destroyed by what is known as "chaos": any information about a system, e.g. where its constituent particles are located, is lost very quickly under the evolution of the system. In 't Hooft's universe information is lost in the three-dimensional interior, because two different states can evolve into a single state. The idea that the information of a universe is encoded on two-dimensional surfaces is known as the "holographic principle", another brainchild of 't Hooft.
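As a toy illustration of such information loss (my own sketch, not 't Hooft's actual model), consider a one-dimensional cellular automaton whose update rule is deterministic and local but many-to-one, so that different initial states quickly merge into the same state.

    # Toy sketch: each cell becomes the AND of itself and its right
    # neighbour (periodic boundary). The rule is deterministic and local,
    # but it is many-to-one, so it destroys information about the past.

    def step(state):
        n = len(state)
        return tuple(state[i] & state[(i + 1) % n] for i in range(n))

    a = (1, 0, 1, 1, 0, 1)
    b = (1, 0, 0, 1, 0, 1)   # differs from a in one cell
    for _ in range(3):
        a, b = step(a), step(b)
    print(a, b, a == b)       # after a few steps the two histories have merged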
So how then do we beat the Bell inequalities? Here is the intuition. The large-scale theory known as QM defines variables that are functions of the more fundamental states. However, the way this works out according to 't Hooft is that a QM state at time "t" is an aggregation of all the states that will eventually evolve into a single state. This definition, however, is non-causal, because it involves knowing which states in the future will collapse into a single state. Hence, while the fundamental theory is causal and local, the emergent theory at the larger scale can exhibit strange features, because the variables defined by us mix up the present and the future.
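Again as a toy sketch (my own, with a made-up rule, not 't Hooft's construction): take a handful of deterministic microstates and group them into classes according to the state they eventually merge into. The grouping is defined in terms of each microstate's future, which is exactly the non-causal ingredient.

    # Toy sketch: group microstates by the state they end up in far in the
    # future under a deterministic, many-to-one update rule.

    def evolve(state, rule, steps=100):
        for _ in range(steps):
            state = rule[state]
        return state

    # Made-up deterministic rule on six microstates (many-to-one).
    rule = {0: 1, 1: 1, 2: 1, 3: 4, 4: 4, 5: 4}

    classes = {}
    for s in rule:
        fate = evolve(s, rule)          # where this microstate eventually ends up
        classes.setdefault(fate, []).append(s)

    print(classes)  # microstates sharing a common fate form one "macro" state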
I may have missed a point or two in trying to translate these ideas, but I certainly think this is a very exciting new development. Einstein may turn out to be right after all in saying that "God does not play dice". These bold ideas definitely give me a sense of relief that it was OK to feel dissatisfaction with the strangeness of QM. Between string theory and deterministic QM, I will bet on the latter. Has 't Hooft ever been wrong?