Friday, October 23, 2009

Leaf Blowers (Suck)



Saturday morning 8:00am. The leaf blowers come. Now, we sleep with our windows wide open. So we wake up. Because leaf blowers make a lot of noise. We call that "noise pollution". Especially Saturday morning at 8:00am.

Now let's review the leaf blower. It moves leaves from A to B. In fact, it moves leaves away from the place where they belong, namely on the ground between your plants. Once blown into a pile, they can be shipped off. Now, leaves make humus (degraded organic material in soil). Yes, they rot, but the rotting doesn't smell bad. It's natural (really). And as an additional bonus: it makes your soil fertile.



Leaf blowing removes leaves. So now we have to ship in degraded organic material from elsewhere (which actually does smell strongly and was probably made from your leaves in the first place). We call that mulch. In summary: we use a very noisy machine (at 8am) that burns gas to remove organic material, which is then turned into mulch that needs to be shipped back in from far away, once again burning gas. How stupid is that, given that nature has figured out how to do all of this by itself?

However, what I experienced recently went even further. The leaf-blowing humanoid was directing his all-powerful blowing machine into a tree (he was a tree blower). Presumably he wanted the leaves that had turned a little brown to fall off the tree early. As it turns out, trees have evolved to figure this out all by themselves; they do not need help with that (certainly not at 8am).

One more example of how we have alienated ourselves from nature. Half a year ago or so it actually rained in Southern California. I picked up the following phrase when entering a restaurant: "I am going to move back to the desert". Rain is good for plants. In fact, rain is good for people too. So don't complain if it rains; you only have to endure it about twice per year. Try living in Sudan (no rain) or Ireland (always rain).

I worry: when my oldest child was (consciously) confronted with rain for the first time and I explained that rain makes plants grow, she said in surprise: "no daddy, sprinklers make plants grow". Yes, that was actually correct. 60% of our water usage goes to watering our lawns. That means we have to drain the Colorado River to the last drop before it reaches Mexico, just to water our lawns (and wash our SUVs). Actually, we also drain the west side of the Sierra Nevada, resulting in "salt storms" in those regions. So let's get rid of those lawns now and replace them with native plants (which need almost no watering). Additional advantage: lots of hummingbirds in your garden!

Sunday, October 18, 2009

What Language Does Our Universe "Speak"?



Many profound physicists have come to think of physics at the very tiniest length scale (the Planck length) as a computer or information processor. To name a few: John A. Wheeler, Richard Feynman, Roger Penrose, Gerard 't Hooft. Wheeler expressed this view as "it from bit".

One of the main reasons for this view is the realization that physics at that scale will have to be discrete. If not, it becomes very hard to reconcile relativity and quantum mechanics into one theory: in the continuous domain the calculations simply blow up; they cannot be renormalized. In addition, the uncertainty principle of quantum mechanics implies that we cannot even pin things down to such precision without creating a black hole, which would immediately render any measurement at a scale smaller than its horizon impossible. So these physicists think that physics at that scale is some sort of cellular automaton.
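To make the phrase "cellular automaton" a bit more concrete, here is a minimal Python sketch of an elementary cellular automaton (Wolfram's rule 110, chosen purely for illustration; none of the physicists above is committed to this particular rule). Each cell is updated from nothing but its own state and the states of its two neighbors:

```python
def step(cells, rule=110):
    # Each cell's next state depends only on itself and its two neighbors
    # (periodic boundary); `rule` encodes the 8-entry update table in Wolfram's convention.
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 40 + [1] + [0] * 40     # start from a single "on" cell
for _ in range(20):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The point of the toy is only that an utterly simple, local, discrete rule can already generate surprisingly rich behavior, which is what makes the "it from bit" idea at least conceivable.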

Around the end of every century some people seem to feel the need to make the rather absurd claim that science is coming to an end (I believe we have barely started, but anyway). This time around the view is expressed in the book "The End Of Science: Facing The Limits Of Knowledge In The Twilight Of The Scientific Age" by John Horgan. He argues that there are four recent theories that have shown the fundamental limitations of science:
1. Relativity: anything inside the horizon of a black hole will never get out. So we cannot study the inside of a black hole.
2. Quantum Mechanics: the world is irreducibly random.
3. Chaos: the dynamics of many real physical phenomena display extreme sensitivity to initial conditions.
4. Complexity Theory: Gödel's theorem on the incompleteness of formal systems.

Let's see how these theories would fare in the face of a fundamental theory of "Physics as Computation" (PAC). I think the black hole issue is already close to being resolved. A quantum mechanical treatment of BHs will involve BH radiation (Hawking radiation). As such, in-falling matter will cause disturbances on the surface of the BH horizon that encode the information of the in-falling matter and will eventually be radiated out again. No information is lost in the process. (Every BH will eventually die in an explosion more violent than the most energetic supernova, but it takes a while.) For an observer who stays outside the BH, the BH horizon is the edge of the universe in a very real sense. He will see his colleague who falls into the BH freeze onto the horizon, get disintegrated, and eventually be radiated out again in bits and pieces. For the in-falling observer the edge of the universe is not the BH horizon but the singularity at the center of the BH. In this case we do have to deal with a singularity, but it seems evident to me that the final PAC theory will describe that singularity not as an infinitely dense point but as a sensible finite object.

How the irreducible randomness of quantum mechanics may be resolved in terms of a cellular automaton was described in my previous blog, "Quantum Mechanics is not the Final Theory".

The phenomenon of chaos in nonlinear dynamical systems makes claims about unpredictability of a more everyday nature: for instance, weather patterns are unpredictable because a small error in the initial conditions may result in large differences a few days later (except in California, where we don't need weather forecasting). The canonical example is the shift map x[t+1] = 2*x[t] mod 1. If we write x in binary, every iteration moves all digits one place to the left and resets the digit in front of the dot to 0; for example 0.10110... becomes 0.0110... . After t iterations the t-th binary digit of the initial condition has become the leading one, so any tiny error in the initial condition doubles at every step and soon dominates the outcome.
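For the skeptical reader, here is a small Python sketch of that shift map, iterated in exact rational arithmetic so that the divergence you see comes from the dynamics itself and not from floating-point round-off; the two trajectories start a mere 10^-12 apart:

```python
from fractions import Fraction

x = Fraction(1, 3)
y = x + Fraction(1, 10**12)          # perturb the initial condition by 10^-12
for t in range(1, 41):
    x, y = (2 * x) % 1, (2 * y) % 1  # the shift map x -> 2x mod 1
    if t % 10 == 0:
        print(t, float(abs(x - y)))  # the gap roughly doubles every step
```

After about 40 iterations the 10^-12 error has grown to order one and the two trajectories have nothing to do with each other anymore.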

Finally, Gödel's theorem. It says that within any sufficiently powerful (and consistent) formal system there are true statements that cannot be proved within that system. I am still thinking about these issues, but I have trouble with the notion of "a true theorem". Truth can only acquire meaning through an interpretation of the formal system (say, mapping symbol sequences to mathematical or physical "truths"). But mathematics is itself a formal system. Truth does not exist outside an axiomatic system, and the interpretation that Gödel's theorem shows truth to be bigger than formal reasoning just doesn't sit well with me. Anyway, some future blogs will unquestionably be devoted to these deep issues.

It will be very interesting to be able to answer the question: "what is the complexity class of the sequences generated by the cellular automaton that governs our universe?" Or, phrased more informally: "What language does our universe speak?" Here is my prediction: Dutch ;-) (or maybe a language of the same complexity). Dutch seems to be more complex than context-free languages due to its cross-serial dependencies, but it is still decidable in polynomial time. It represents a possible level of complexity where things are not too regular but also not too unwieldy. Anyway, my prediction here should be taken with a huge grain of salt, of course.
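As a toy illustration (and only a toy; real Dutch grammar is of course far richer), cross-serial dependencies are usually formalized as patterns of the form a^n b^m c^n d^m: the a's pair up with the c's and the b's with the d's, crossing each other. That pattern is not context-free, yet it is trivially decidable in linear time, as this hypothetical little recognizer shows:

```python
import re

def cross_serial(s: str) -> bool:
    """Recognize a^n b^m c^n d^m (n, m >= 1): a toy model of cross-serial dependencies."""
    m = re.fullmatch(r"(a+)(b+)(c+)(d+)", s)
    if not m:
        return False
    return len(m.group(1)) == len(m.group(3)) and len(m.group(2)) == len(m.group(4))

print(cross_serial("aabbbccddd"))   # True:  2 a's / 2 c's and 3 b's / 3 d's
print(cross_serial("aabbbccdd"))    # False: 3 b's but only 2 d's
```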


Soooo, the universe is a huge computer that is computing "something". It is our task as scientists to figure out what and how it is computing. Actually, we already know the answer: 42 ;-). But what was the original question? Let's leave that to religion.

Wednesday, October 7, 2009

Complexity



In the previous blog I argued that a truly random sequence can (and should) be defined as an incompressible one: a sequence from which nothing can be learned. Moreover, it can be shown (mathematically) that certain deterministic chaotic dynamical systems can indeed generate such random sequences.

If randomness is a slippery subject, complexity is worse. Complexity has turned into an entire research field of its own, yet it is criticized for not even having a good definition of its central term. In fact, there seem to be 31 definitions floating around. Complexity is one of those things for which we seem to have a clear intuition but which is hard to capture in math.

Let's try to build some intuition first. Consider the tree image above and compare it to, say, a completely white image and an image of white noise (your television screen when it receives no signal). We feel that the white image is boring because it is utterly predictable, but the noise image is also boring because there is nothing to predict about it. The tree image, on the contrary, seems interesting: there is lots of structure. We want our complexity measure to peak at the tree image but vanish for boring images and random images. Therefore, complexity should not be equated with randomness.

An attractive definition was given by Murray Gell-Mann (Nobel laureate and inventor of the quark). I will discuss a slightly adapted version of his story. We return to the MDL principle (minimum description length). Given a string of symbols, we wish to compress it optimally. That means we want to build a model, with a bunch of parameters P, and use that model to predict most of the symbols in the sequence correctly. The symbols that were predicted wrongly will have to be stored separately. One can imagine a communication game in which I want to send the string to you. Instead of sending the string, I send the model and only those symbols that were predicted wrongly by the model. If you run my model on your end to generate the predictable symbols, you have all you need to reconstruct the original string (use the model at the locations where its predictions are correct, and use the separate list of corrections where it was wrong). The trick is now to find the model with the optimal complexity, in the sense that the total information necessary to encode the model and the unpredictable symbols is minimized. In other words, I want the procedure that maximally compresses the original string.
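Here is a deliberately tiny Python sketch of this communication game, under the made-up assumption that the model class consists of repeating patterns of some period k: the cost of the model is the pattern itself, and every wrongly predicted symbol costs roughly log2(n) bits to point at.

```python
import math

def description_length(s, period):
    # Two-part code: send the repeating pattern (the "model") plus the positions
    # where the pattern's prediction is wrong (the "corrections").
    model_bits = period                                    # one bit per pattern symbol (binary alphabet)
    errors = sum(1 for i, c in enumerate(s) if c != s[i % period])
    correction_bits = errors * math.log2(len(s))           # ~log2(n) bits to name each wrong position
    return model_bits + correction_bits

s = "HTHTHTHTHT" * 10                                      # a very regular string of 100 symbols
best_k = min(range(1, 11), key=lambda k: description_length(s, k))
print(best_k, description_length(s, best_k), len(s))       # period 2, about 2 bits, versus 100 raw symbols
```

For a string of fair coin flips no period helps: every candidate model ends up costing more than just sending the 100 raw symbols, which is exactly the point.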

There is something fundamentally important about this procedure, because it guarantees that the model will be optimally predictive of future data. If your model is too small, there was more structure in the string that you could have predicted. If, on the other hand, you use a model that is too complex, you will have "modeled" the randomness of the input string (called overfitting), and that is obviously not helpful either. Living things survive because they are able to predict the world (and leverage those predictions). It is our business to compress the input stream of our senses!

We are now ready to define complexity: it is the information necessary to encode the model (the parameters), excluding the residual random bits of the string that remain unpredictable. A white image contains very little information to start with, and all of it can be predicted by our model (no random bits), so the model part is very small. A random string, on the other hand, is completely unpredictable, and even though it carries a lot of information (à la Shannon), the model part is zero. Both cases have vanishing complexity, as we wanted, while the tree image has lots of complexity.

Complexity is a relative quantity. For instance, it depends on factors such as the length of the string: for short strings it doesn't pay off to use a complicated model. This is not so bad in my opinion, because imagine a primordial world in which all that has ever been created is the string "GOD". The string is just three symbols embedded in nothingness. It only acquires its structure (meaning) after we read the bible.

Another, more serious objection is the fact that the true complexity is incomputable, because one cannot decide, in general, whether the model you have is really the optimal one. Imagine the fractal image below. The image looks wonderfully complex. Who would have thought that there is a really simple algorithm to produce it? So, practically speaking, complexity is defined relative to how good a modeler you are.
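For concreteness, here is roughly how simple that "really simple algorithm" is; a bare-bones Python sketch that prints a coarse ASCII rendering of the Mandelbrot set (the resolution and iteration cap are arbitrary choices):

```python
def escape_time(c, max_iter=50):
    # Iterate z -> z^2 + c and count how long |z| stays below 2.
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

for i in range(24):                       # rows sweep the imaginary axis
    y = 1.2 - i * 0.1
    print("".join("#" if escape_time(complex(-2.1 + j * 0.05, y)) == 50 else "."
                  for j in range(64)))    # columns sweep the real axis
```

A dozen lines of description, yet the boundary it draws is endlessly intricate; the prescription is tiny, the effort is in the computation.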

Then there is the issue of scale. If we view the world at a large scale, it may look very predictable. However, the world is composed of quadrizillions of elementary particles that are moving chaotically. At the small scale the world is random, but at a larger scale order emerges through averaging. This may be best illustrated by going back to the coin toss. If we are asked to predict the next outcome of a coin flip, we are at a loss. However, if we are asked to predict the total number of heads over the next 100 coin tosses, we can make a good guess (50 for a fair coin). This is because, even though every sequence of 0's and 1's is equally likely, there are many more sequences with 50 heads and 50 tails than there are sequences with, for instance, only heads.
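A quick back-of-the-envelope check of that claim in Python (the 45-55 window is an arbitrary choice):

```python
from math import comb

n = 100
# Probability that a fair coin gives between 45 and 55 heads in 100 tosses.
p = sum(comb(n, k) for k in range(45, 56)) / 2 ** n
print(round(p, 3))   # about 0.73: each flip is a toss-up, yet the aggregate is quite predictable
```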

Finally, there is the issue of the time needed to compress and decompress the string. We may have found a great model that does a wonderful job of compressing the input string but requires a very long computation time for encoding and decoding. This matters, at least biologically: if I have a model of lions that takes 5 hours to use (i.e. to make predictions with), then it is not very useful in terms of my chances of survival. As another example, consider the Mandelbrot fractal again. There is a very small prescription to generate it, but one needs considerable time to actually compute the image (in fact, this connects to the previous point, because you would first have to specify the scale at which you want to stop, otherwise it would take infinitely long).

In machine learning, researchers have also started to realize the importance of computation in addition to prediction quality. They acknowledge that modern-day problems are so large that a really good model may take too long to train and to use for predictions. So the computational budget is factored into the total equation, favoring faster algorithms and models over slower ones even though the latter may be more accurate (it is in this sense that machine learning really differs from statistics, and in fact the reason why machine learning lives in the computer science department and not in the mathematics department).

Should we adapt our definition of complexity to accommodate running time? Gell-Mann thinks so, and he argues for a bound on the computation time. There are indeed definitions of complexity, such as "logical depth", that do just that. But if we include computation time, shouldn't we also include the amount of memory we need during the computation (since the two can be traded off)? Uh-oh, we are descending into the bottomless pit called complexity research, with its 31 definitions. This may be a good point to stop. :-)

Sunday, October 4, 2009

Is A Coin Toss Random?



Randomness is a slippery concept. When you teach it in an AI class you are quick to point out that what we call random is really a matter of ignorance: because we don't know and/or model all the details of the world, we lump together everything that is outside the scope of our models and call it a random effect. Under this reasoning the toss of a coin is *not* really random in a philosophical sense.

Unless, of course, quantum mechanics has anything to do with it. Imagine we toss a coin through a quantum measurement. Now we are confident to claim true randomness: the quantum world is, in a nice phrase, "irreducibly random", meaning that there is no deeper theory from which quantum mechanics emerges and in which the coin toss is once again the result of our ignorance. However, as I reported in an earlier blog, there may even be ways out of the seemingly "irreducibly random" nature of quantum mechanics. In that view we are back to square one: what, then, is randomness?

There is a beautiful definition of randomness in a mathematical discipline called algorithmic information theory (the theory of Kolmogorov complexity). To understand this definition, consider a very long sequence of coin flips. Now let's try to compress a sequence like "HTTTHHHTTHTTTTHHHTHTHTHHH" into a smaller description by building a model. If the sequence were just "HTHTHTHTH..." things would be easy: I would simply state that heads and tails alternate and that the sequence starts with H (I would have to count the symbols of that sentence as the cost of encoding it). That sequence is not random, because I could compress it. The definition of randomness is thus: a sequence is random if it cannot be described (encoded) by a shorter sequence. In terms of learning theory: one cannot learn anything from a random sequence.
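The true minimal description is not computable, but an off-the-shelf compressor gives a crude feel for the idea. A small Python sketch, with zlib standing in (very imperfectly) for the ideal compressor:

```python
import zlib, os

def compressed_size(seq: str) -> int:
    return len(zlib.compress(seq.encode()))

regular = "HT" * 5000                                               # perfectly predictable
noisy = "".join("H" if b % 2 else "T" for b in os.urandom(10000))   # genuinely unpredictable flips
print(len(regular), compressed_size(regular))   # 10000 symbols shrink to a few dozen bytes
print(len(noisy), compressed_size(noisy))       # stays near one bit per flip: nothing to learn
```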

Using this definition, a sequence can be random and at the same time be generated by a fully deterministic process. Such a process has to be "chaotic", which we will not define here, but which has nothing to do with quantum mechanics. In fact, it fits the definition of a random coin toss snugly. For instance, if you toss a coin into a deep tank of moving water and record whether it lands heads or tails on the bottom, then I am pretty sure that the details of the trajectory of the coin are chaotic. Hence, there is no model that can predict anything about the final outcome. This is not because we don't know how to model moving water in a tank, but because the trajectories of coins in such tanks have something inherently unpredictable about them.
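I don't have a tank of water handy, but a deterministic chaotic map makes the same point. Below, the logistic map x -> 4x(1-x) (a standard textbook chaotic system, standing in here for the coin in the tank) generates a heads/tails sequence that the compressor from the previous sketch can do nothing with:

```python
import zlib

x, flips = 0.3141592653589793, []
for _ in range(10000):
    x = 4.0 * x * (1.0 - x)                # a fully deterministic update, yet chaotic
    flips.append("H" if x > 0.5 else "T")
seq = "".join(flips)
print(len(seq), len(zlib.compress(seq.encode())))   # compresses about as poorly as genuine coin flips
```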

I therefore conclude that the flip of a coin can be truly random and deterministic at the same time. We don't need quantum mechanics for true randomness; we have something far better: chaotic dynamics. In fact, I predict that the so-called "irreducible randomness" of quantum mechanics will in the end be reduced to randomness derived from chaotic dynamics.