Saturday, January 10, 2009


I remember reading "The First Men in the Moon" by H.G. Wells quite a long time ago. Wells describes a machine with which a scientist travels to the moon. In this machine the scientist is pulled towards the rear end of the rocket (the side facing the earth) for the first two-thirds of the trip. At two-thirds he is momentarily weightless, but after that he is pulled towards the front of the rocket (the side facing the moon). Now in this story the rocket wasn't propelled by any real laws of physics, but it makes you wonder: what happened to Armstrong, Aldrin and Collins when they flew to the moon?

Well, clearly, when they took off and the engines were firing, they were pushed hard towards the earth side of the rocket. But once in space they turned off their engines. Was there a slight residual pull towards the rear of the rocket, because the earth was much closer than the moon? Take a minute to think about this before you read on.

Gravitation pulls exactly as hard on the rocket as on the astronauts, and therefore both slow down by exactly the same amount. So inside the rocket you won't notice a thing until you turn on your engines again (or hit an object in space.) The same thing happens to astronauts in the international space station. In this case, the space station is in an eternal fall towards the earth. However, through its forward speed it keeps missing the earth! Since the space station and the astronauts fall at the same rate, they do not notice their fall.

But the story gets better. The gravitational force between two objects is F = G * M * m / R^2, where G is the gravitational constant, M the mass of the earth, m the mass of the space station or astronaut, and R the distance between them. Thus, the force on the space station is stronger than the force on the astronaut because its mass (m) is much bigger. Now, when we compute the motion of the station or astronaut through space, we need to equate this force with "m * a", where "m" is again the mass and "a" the acceleration. Setting G * M * m / R^2 = m * a gives a = G * M / R^2. If this is hard to follow, forget about the equations and remember this: the mass "m" drops out of the equation, and the orbit one computes is independent of the mass m!
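To make the cancellation concrete, here is a tiny Python sketch. The numbers are rough textbook values (ISS altitude of about 400 km) and the function is my own illustration, but the point is exactly the one above: the acceleration comes out the same whatever mass you plug in.

```python
# Check that the motion is independent of the falling mass m:
# the force is F = G*M*m / R^2, Newton's second law says F = m*a,
# so the acceleration a = G*M / R^2 contains no m at all.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the earth, kg
R = 6.771e6          # orbital radius of the ISS (~400 km up), m

def acceleration(m):
    """Acceleration of a body of mass m towards the earth."""
    force = G * M_earth * m / R**2   # F = G M m / R^2
    return force / m                 # a = F / m = G M / R^2

# A 420,000 kg space station and a 100 kg astronaut fall alike:
a_station = acceleration(420_000)
a_astronaut = acceleration(100)
print(a_station, a_astronaut)  # both about 8.7 m/s^2, the same rate
```

Station and astronaut therefore share one and the same orbit, which is why neither feels the other being "pulled harder".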

This magic cancellation is kind of coincidental, or suspicious if you want. Perhaps there is a simpler principle at work here. Einstein thought so too. He imagined a version of the following thought experiment. Imagine you are knocked unconscious for a full year and wake up in an elevator. You feel a downward pull on your body, and your companion tells you this is because you have been abducted and now live on a bigger planet with more mass and thus more gravitational pull. He assures you the elevator is not moving. You are not so sure, because isn't a simpler explanation that you are accelerating upwards in a building? The point is that there is no way to tell the difference between gravitation and acceleration, and Einstein concluded that if there is no measurement to tell the difference, well, then there might be no difference... It's the Turing test for gravity!

In Einstein's general relativity there is no gravitational force and no mysterious cancellation. Mass and energy (remember E = M*c^2) bend space and time in such a way that what used to be straight lines become curved trajectories. Every object travels through this space in exactly the same way, without any forces acting on it. The only thing that has changed is that a straight freeway became a roller coaster, but this change is identical for all objects traveling on it, big or small. You only experience a "force" when you accelerate out of these free-fall geodesics. For instance, simply standing on the surface of the earth blocks the natural geodesic that would move you towards the center of the earth. In this way the earth accelerates you upward, as if you were firing the engines of a rocket. And so what you would classically call gravitational pull becomes upward acceleration, just like the man in the elevator.

Thursday, January 8, 2009


Alan Turing, the father of computer science, proposed a test to determine whether an artificial system could be intelligent. He proposed to play a game in which a human interrogator asks questions of either another human or a machine, hidden out of sight (behind a brick wall or on the other side of the earth). If the human interrogator is fooled 30% of the time, the system passes the test and ought to be called intelligent.

Now, this is a somewhat arbitrary line in the sand. Why 30%? How many humans must it fool? Moreover, why would we be so concerned with human intelligence? Can't we imagine other types of intelligence that would not be able to fool us? I don't think we should take too seriously the claim that this ought to be the working definition of intelligence; rather, it is a very useful illustration of what may be needed to build robots that behave in ways similar to humans. Turing believed that by the year 2000 such a machine would be reality. It didn't turn out that way.

The Turing test did spark a lot of heated debate about whether we could ever build machines that think in the same way humans do. Most importantly, can we ever build robots that have a conscious mind? John Searle was the most vocal opponent of this idea. He invented interesting thought experiments which, according to his reasoning, make it clear that such a thing could not happen. He imagined a "Chinese Room" with a human in it who receives questions in Chinese. S/He also has access to an enormous library with a (Chinese) answer to every conceivable (Chinese) question. Mechanically, the person looks up the answer, perhaps picking randomly from a large set of possible answers, and returns it to the interrogator. Clearly this person has no understanding of Chinese, even though s/he would pass this variant of the Turing test.

One objection could be that it is really somewhat unrealistic to ask for a library with answers to every possible question; it would require more atoms to store than the universe has to offer. So perhaps we should restrict ourselves to systems that are limited in their capacity to store data. This seems like a workable refinement of the Turing test, but I am pretty sure someone else will come up with a system that beats the Turing test in yet another way, which would then require additional changes, because "surely that system could not be really intelligent"!

In another famous thought experiment (the "brain prosthesis experiment" by Clark Glymour) one imagines replacing the neurons in one's brain one by one with electrical circuits. Each circuit models the input-output function of a neuron perfectly. By the time we are done we have replaced the entire brain by a computer. We can do it again in reverse order to rebuild the human. Assuming for a moment this can be technically achieved, would we then have altered the human mind in any way? Many people (among them Searle) believe so. They believe that during the process one will gradually lose all conscious experience of the world, yet one's behavior will remain unaltered.

How about this thought experiment then? In 3000 AD a smart scientist comes up with a really clever way to build an intelligent machine. S/He first assembles a long strand of nucleotides, which s/he places in a small protein cover. The strand copies itself and codes for new proteins to be assembled. The proteins become a robotic body (made mostly of carbohydrates and water) with a brain. S/He then decides to raise this robot among her biological children. This robot has a normal life because it looks so much like a real human being. And some day it starts referring to itself as "me". It looks in the mirror and understands who that person is. When published, people cry out: "this is not a real robot, it was just a copy of a human being grown in a test tube". So the scientist develops new techniques to copy the exact same process, but now with synthetic molecules that have the same properties as carbon, hydrogen, iron etc. People are still not satisfied, until the same procedure is repeated with electronic circuits. Now what? Intelligent machine or not?

For me these examples drive home the fact that a mind has nothing to do with the substrate it is made from. It's about computation and information. And therefore, if our goal is creating human intelligence, then yes, we will be able to build intelligent machines that may have conscious minds, real emotions and all the other good and bad things that make us human. These are not magical, but rather functional properties. We evolved them for a purpose. They are fascinating, and utterly un-understood today, but perhaps not un-understandable or non-replicable. Our brain is extremely complex, propelled by principles that we don't understand today. It therefore seems far too premature to claim that a conscious mind is something reserved uniquely for human beings.

Monday, January 5, 2009

Our Unconscious Mind

Everyone is faced with making difficult decisions: buying a house, changing jobs, moving to a new place, another addition to the family perhaps. For a long time the going theory was that we make these decisions consciously, in full control of our faculties, logically weighing pros and cons. However, this "Mr. Spock view" of ourselves is shifting. Perhaps the most influential recent work in social psychology in this respect is a book by Timothy Wilson, Strangers to Ourselves, where he argues that most decisions are actually made unconsciously. In fact, the argument goes that our "gut feeling" is very good at making these kinds of complex decisions involving many sources of information. A similar story can be read in Damasio's "Descartes' Error", where he describes a patient who has lost this ability, with disastrous results.

In Wilson's book one can read an endless list of examples of how we seem to perform much of our "reasoning" unconsciously. The one that stood out most to me was an experiment by Lewicki, Hill and Bizot. Here, subjects get to see the symbol "X" in one of four quadrants of a computer screen. Their task is to report as quickly as they can where they spotted the X. The sequence of X locations follows a complex but predictable pattern. Over time, subjects learn these patterns and improve their response time. However, when asked why they improved, they have no clue. Better yet, after a while the sequence reverts to a random one, resulting in a sharp drop in performance. The subjects notice the drop in their performance but cannot figure out what caused it! They do not know that the sequence suddenly became unpredictable (or that there was a predictable sequence in the first place.)

What conclusion can be drawn from this? Wilson argues that the pattern was learned completely unconsciously. We constantly pick up on regularities in the world around us. As argued in an earlier post, prediction is crucial for our survival, so we tend to be really good at it. As a child, we have learned thousands if not millions of regularities about our world: water makes me wet and moreover I cannot walk on it, fire is really hot and hurts my body, when someone frowns at me s/he is angry, etc. Some of them have become conscious over time, but you just wonder how many of these rules remain unconscious for our entire lifetime.

I like the idea of our consciousness as a kind of "searchlight". We can monitor some decisions that are being made unconsciously, but we have much less conscious influence over this process than we like to think. Our mind tricks us into thinking we are in control, but in reality we mostly observe. There are indeed a number of experiments which confirm this point of view to some extent. One can measure skin conductance, which apparently changes when a decision has been made. Skin conductance seemed to change before people reported having made a decision. I don't quite think our consciousness is useless, as the searchlight hypothesis seems to suggest. Otherwise, why did it evolve? Perhaps it can be used to guide our long-term goals in life.

The searchlight hypothesis may be disconcerting to some. If we make decisions unconsciously, can we still be held accountable for them? Emphatically: yes! Fear of punishment does change our behavior, conscious or unconscious. We don't need to understand the details of our decision-making process to see that the Law works (to some extent), and it will keep working even after we do understand it. Another disconcerting thought is that we have become a little less "human". After the realization that God's creation, ship earth, is not at the center of the universe (hell, not even at the center of our solar system), and the fact that man evolved from ape-like ancestors a few million years ago, we now have to face the fact that our waking mind is mostly an observer with her/his hands tied down! Personally I don't feel that way. An experience is an experience, and an emotion remains an emotion, whatever the underlying mechanisms are that generate it.

Sunday, January 4, 2009

Adam and Eve

All humans descended from Adam and Eve. That's the biblical story at least. The scientific community has its own Adam and Eve: Y-chromosomal Adam and Mitochondrial Eve, to be more precise. They did not live in the garden of Eden, but probably in Africa. They didn't live 6000 years ago, but roughly 60,000 years ago (Adam) and 140,000 years ago (Eve). Who are these people, or rather, how are they defined?

Let's focus on Eve. She is defined as the most recent common female ancestor of all humans alive today, if we only consider female descent. Ok, let's explain that a bit more. Consider all living humans today. Next consider all their mothers, then all their mothers, etc. The crucial point is that we only consider women, and moreover that everyone has precisely one mother, so there can never be more mothers than children. However, mothers can have more than one child (and typically will), while other females will have no children at all. So we expect that the set of our great-great-etc. grandmothers will actually shrink as we extrapolate back in time, until there is only a single mother left.
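The shrinking mother-set is easy to simulate. Below is a toy, Wright-Fisher-style sketch in Python (the population size, seed and function name are my own choices, not real demographic numbers): we trace every living woman's maternal line backwards, merging lineages whenever two of them pick the same mother, and count the generations until only one mother is left.

```python
import random

def generations_until_eve(population=500, seed=1):
    """Trace maternal lines backwards through a constant population.
    Each generation, every surviving lineage picks a mother uniformly
    at random from the previous generation; lineages that pick the
    same mother merge. Returns the number of generations back until
    a single common mother ("Eve") remains."""
    rng = random.Random(seed)
    lineages = set(range(population))   # one lineage per living woman
    generation = 0
    while len(lineages) > 1:
        # every lineage chooses a mother; duplicates collapse in the set
        lineages = {rng.randrange(population) for _ in lineages}
        generation += 1
    return generation

print(generations_until_eve())
```

In this simple model the expected coalescence time is on the order of twice the population size in generations, which gives a feel for why, with real human population sizes, Eve lived so long ago.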

This reasoning can easily lead to confusion. For instance, it doesn't mean that Eve lived alone, nor that she was the sole female of her time. In fact, many other women lived in her time. More importantly, they may well have been our ancestors, through their sons' lineages. Eve is defined through her daughters alone: to be Eve, you need to be the only female from which a path of daughters exists all the way to modern times (in fact, to all humans). The all-female paths from all her female contemporaries end prematurely.

Mitochondrial DNA (mitochondria are the power plants of our cells) happens to be passed along the female line, and so we can claim that everyone today inherits their mitochondrial DNA from Eve (not so for the DNA residing in our cell nucleus.) A similar effect occurs for the Y-chromosome, which is passed along the male line.

Perhaps more bizarrely, the honor of being Eve changes hands continuously. Imagine that Eve lived in 140,000 BC, and that in 130,000 BC there happened to be two female descendants of Eve who were, through female lineage, the two mothers of all modern humans. One day the last descendant of mother A will die, and at that point in time mother B takes over the title "Eve" (she is then more recent than the former Eve living in 140,000 BC). Thus, the honor is only bestowed retrospectively.

The mathematics of the argument depends on the fact that it is likely that some two mothers in one generation are daughters of a single mother in the previous generation. But it is not necessary. If two populations are well separated, it will take many steps of going back in time to "coalesce" two mothers into one, so the "mother-set" can remain stable at two for a long time. However, if we go far enough back in time, it seems very unlikely that two species of human evolved separately from fish! In the extreme case, to argue for two Eves, we would have to argue that life started twice on earth. Not impossible, but very unlikely if we consider how much we look alike.

Then finally, we should not confuse Eve with the "most recent common ancestor", who has all humans as descendants. This person (male or female) lived much later (perhaps even 3000 years ago), because his or her descendants are defined through both male and female lineages. Moreover, this person's contemporaries may also have many descendants today (just not everyone). Going back in time, one can show that there must be a point (the "identical ancestors point") where each person who lived then has either everyone or no one alive today as a descendant.

Friday, January 2, 2009

The Baldwin Effect in Evolution

Evolution is the search for a genetic code that produces a phenotype that can produce lots of offspring in a particular environment. The disadvantage is that it can be quite slow. Variation in the genotype is generated through crossover of chromosomes and, to a lesser extent, mutations of the genetic code. Problem is, to come up with a good solution to a problem there needs to exist a series of very small changes that are all advantageous. Can we go faster?

Lamarck proposed that experience could directly affect the genotype and as such guide evolution. It is now widely assumed that this does not happen. But there is another way to ameliorate the negative effects of a mutation, and that is through phenotypic plasticity, or learning. Imagine a new adaptation is desperately needed to deal with new threats (e.g. new predators). If available suboptimal mutations can be more easily adapted to the specific problem under selective pressure, then they will help the survival of those individuals who use them. In turn, once these individuals have started to utilize this adaptation to their advantage (and perhaps teach their children how to use it as well), evolution can further improve it by selecting individuals who have more advanced adaptations and know how to use them.

More generally, evolution is more effective in individuals who can learn. Thus, if learning is not too costly (in terms of energy consumption), evolution will probably select for those genotypes that develop brains. Hence evolution improves learning and learning improves evolution. This positive feedback is what I understand as the Baldwin effect.
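This feedback can actually be watched in simulation. The sketch below is my own stripped-down variant of Hinton and Nowlan's classic 1987 experiment (all parameter values are mine): genomes must match a hidden all-ones target, a wrong hard-wired bit leaves only baseline fitness, and plastic bits can be "learned" during a lifetime of random guessing. Learning rewards genomes that are merely close to the target, which gives evolution the gradient it was missing.

```python
import random

L, TRIALS = 10, 500   # loci per genome, learning trials per lifetime

def fitness(genome):
    """A genome is a list of L alleles: 1 (correct, fixed), 0 (wrong,
    fixed) or None (plastic, guessed anew on each learning trial).
    Finding the all-ones target early in life earns extra fitness;
    a wrong fixed allele means the target can never be found."""
    if 0 in genome:
        return 1.0                      # baseline fitness only
    for trial in range(TRIALS):
        guess = [1 if g == 1 else random.randint(0, 1) for g in genome]
        if all(guess):
            return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
    return 1.0

def evolve(pop_size=200, generations=30, seed=0):
    """Fitness-proportional selection with one-point crossover."""
    random.seed(seed)
    pop = [[random.choice([0, 1, None]) for _ in range(L)]
           for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(g) for g in pop]
        parents = random.choices(pop, weights=weights, k=2 * pop_size)
        pop = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, L)
            pop.append(a[:cut] + b[cut:])
    return pop

final = evolve()
hardwired = sum(g.count(1) for g in final) / (len(final) * L)
print(f"fraction of loci hard-wired to the correct allele: {hardwired:.2f}")
```

In Hinton and Nowlan's original runs, the population first fills up with learners and then slowly replaces plastic alleles with hard-wired correct ones, which is exactly the genetic assimilation phase.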

There seems to be a second phase to the Baldwin effect: over time, when the environment remains stable, the learning element is largely removed from the equation. Where the early versions of an adaptation relied on some learning (either from parents or by reinventing the wheel every time), the later version is hardwired into the genes. The idea being that this solution is more robust, albeit less flexible.

Thursday, January 1, 2009

Simpson's Paradox

As I argued in my previous blog on three principles of learning, paradoxes are a great way to sharpen one's intuition. The following paradox was brought to my attention by Hans Welling in Evora, Portugal (incidentally, my brother.) Here is the most intuitive version I could think of.

Imagine two driving schools advertising by publishing their success rates. School A reports a success rate of 65% (720 of their 1100 students passed their driving exam), while school B reports 35% (only 380 out of 1100 students passed.) Clear evidence that school A is better than school B, right? Or not?

Turns out school A had 1000 young students under the age of 30, of which 700 passed (70%), and only 100 older students over the age of 50, of which 20 passed (20%). School B, on the other hand, had only 100 young students, of which 80 passed (80%), and 1000 older students, of which 300 passed (30%). Here is the surprise: school B performed better for both the young students (80% versus 70%) and the older students (30% versus 20%). Which school would you pick now? The paradox is resolved by realizing that school B had to deal with many more older students, who on average have much lower passing rates.
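The arithmetic is easy to check. Here is a minimal Python sketch with the numbers from the example (the dictionary layout is just my own bookkeeping):

```python
# The driving-school numbers from the text, checked explicitly.
schools = {
    "A": {"young": (700, 1000), "old": (20, 100)},   # (passed, total)
    "B": {"young": (80, 100),   "old": (300, 1000)},
}

for name, groups in schools.items():
    passed = sum(p for p, _ in groups.values())
    total = sum(t for _, t in groups.values())
    young = groups["young"][0] / groups["young"][1]
    old = groups["old"][0] / groups["old"][1]
    print(f"school {name}: young {young:.0%}, old {old:.0%}, "
          f"overall {passed}/{total} = {passed / total:.0%}")

# school A: young 70%, old 20%, overall 720/1100 = 65%
# school B: young 80%, old 30%, overall 380/1100 = 35%
# B wins in every subgroup, yet loses overall: Simpson's paradox.
```

The overall rate is a weighted average of the subgroup rates, and school B's weights sit almost entirely on the hard (older) subgroup.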

This seems harmless as long as you know which subgroups you are dealing with. But now imagine a drug test: does drug A or drug B work better? How do you know you didn't accidentally have a high percentage of subjects in group A with gene X that makes them react much better to drug A? You don't, and there is an enormous number of possible subgroups that may randomly appear in your sample.

The best one can do is to make sure the subjects were chosen from the same population, with no hidden biases in selecting subjects for drug A or drug B. For instance, combining two results from the literature is dangerous, because one group may be English while the other is American, or one group may have lived 20 years earlier than the other. But even so, to claim statistical significance for drug A being better than drug B (or vice versa), one would have to correct for the possibility of randomly selecting unbalanced subgroups that react differently to one of the drugs. That seems a pretty daunting task, and I am not so sure it is routinely done.