Wednesday, December 25, 2013

My Xmas Dream

Xmas is a time of reflection, of meditation on how the world is faring. But the situation for many is dire. In the United States one out of seven people lives in poverty, getting their food from food banks whose budgets are now being cut by the government. In Syria, hundreds of thousands if not millions live in tents, with hardly enough blankets and clothes to keep them warm. In Central Africa rebel bands kill and rape in ways beyond our worst imaginations. In the meantime, a few fortunate bankers, traders, investors and CEOs of mega-companies make millions or even billions. This unequal distribution of wealth is perhaps the most disturbing fact of all.

At Xmas time we buy off our conscience by giving to charity. But most big charities are run as full-scale companies these days. For instance, Salil Shetty, the Secretary-General of Amnesty International, earns $305,000 a year. I wonder: how do you justify asking Joe the carpenter, who earns a tenth of that amount per year, to give generously? This world is deeply twisted.

So then here is my Xmas dream. Let's build a company with the brightest minds in the world. Everyone who joins cannot earn more than a modest salary (or if you do this on the side, you will earn nothing at all). We will join forces to develop the most amazingly intelligent algorithms to suck the financial markets dry. (Our secret algorithms will of course be based on deep learning.) We will drive the large trading firms into bankruptcy. With the billions of dollars that we earn from trading we will refrain from buying ourselves that Ferrari; instead we will save the rainforest, educate the world, and provide food and shelter for those who need it.

That is my Xmas dream. Who wants to join?

Tuesday, December 17, 2013

The Role of Corporate Involvement at NIPS

I was conference chair for NIPS 2013 together with Zoubin Ghahramani (but I only speak for myself in what I write below). Mark Zuckerberg visited NIPS and created quite a stir, and discussion about the role of companies at NIPS. Here is my perspective on the issue.

First, I think it's great to have this discussion, because it's clear that the field is rapidly changing. I view corporate involvement of this magnitude as an interesting experiment that we have to work through and reevaluate every year. As with everything there are advantages and disadvantages to corporate involvement. Here are a few I can see:

-Industry is an integral part of our ecosystem from which we hugely benefit. Our students have good jobs (with higher salaries than mine). I have never heard of anyone being worried that their students could not find a job. That's a huge blessing.

-Industry helps us with funding our fundamental research and our conferences. There are many faculty grants that support fundamental research that only needs to be tangentially related to the main mission of the company. Industry also pays for a good chunk of our registration fees, meals, student awards etc.

-Some companies, in particular Microsoft, operate as an open research lab, allowing their employees to serve the community as reviewers, area chairs, workshop organizers and even program chairs (see last year's program chairs at NIPS 2012). Their research papers are just as interesting as papers coming out of academia.

-Industry labs may accelerate the pace at which our field develops. We may actually need their resources (data and computational infrastructure) to make the next leap in building truly intelligent AI systems. These successes may generate enthusiasm among our new students entering the field, further accelerating progress. 

-Reproducibility. There is a certain danger that more datasets and algorithm details will be locked behind the company's firewall. It's not good if talks have classified elements, or if certain questions cannot be answered. Mark Zuckerberg announced that Facebook has developed the best ever face recognition system in just under three months. That's super exciting, but will we have access to it (either to the parameters of the deep net, or to the dataset and algorithm used to train it)? In other words, can we reproduce their results?

-The Google question (soon to be renamed the Facebook question). I am often reviewed by a mix of astronomers, mathematicians and computer scientists (the Netherlands is a small country..). I have been confronted more often than I like with what has become known as "the Google question": if this research can be done at Google with many more resources, why should we grant you this money? So here our close connection to industry backfires, because the work is not considered "fundamental research" by many who don't understand how we operate.

-Is it desirable that big companies lure our best students away to improve ad-placement, where they could also have contributed to curing some of the horrible diseases that plague mankind? Perhaps not, but people have the right to make their own decisions.

As to Mark Zuckerberg's visit: I found it super exciting! I think it's a small price to pay that certain workshops were temporarily under-attended. But I agree that NIPS should not be turned into a publicity and recruitment event for big companies. We should acknowledge that our field is a complex ecosystem that involves both academia and industry. Our challenge is to find the right balance.

Monday, September 9, 2013

No-X Theorems

In physics there are a few no-X theorems that seem rather suspicious, in the sense that they point to cracks in our current understanding of nature. One of them is Penrose's cosmic censorship hypothesis, which translates to a "no-naked-singularity" conjecture. The most famous examples are at the centers of black holes, where there are supposedly singularities. But we will never know about them because they are shielded by black hole horizons (unless you are willing to kill yourself and fly inside the black hole; however, you will not be able to report back to us). That to me sounds a bit suspicious.

Here is another one: the no-communication theorem in quantum mechanics. When two particles are entangled they become correlated with each other over arbitrarily large distances. If Alice measures the spin of one of the particles here on earth, collapsing the joint wave function into, say, a spin-up state, then *instantaneously* Bob's particle at the other end of the universe collapses into a spin-down state. Special relativity forbids anything moving faster than the speed of light, because if something did, for some observers the signal would arrive before it was sent! Similarly here: for certain observers Bob checks his spin before Alice has measured hers, and therefore those observers conclude that Bob collapsed the wave function and not Alice!

The only way out is that these two interpretations are actually consistent, and this means that no information can be sent between Alice and Bob. Alice cannot force the particle into an up state (or a down state for that matter), so that's no use. But perhaps she can transmit information by simply collapsing a superposition into one of the two pure states (irrespective of whether it's up or down)? Bob only has to determine whether the wave function is in the superposition A+B or in one of the pure states A or B. Alas, Bob cannot do this with a single measurement, because his measurement will collapse the wave function into either A or B. Now what if he could make N copies of his wave function? Then by measuring all N copies he would find either that all of them are in state A or all in B, or he would find some in A and some in B, indicating that the original wave function was still a superposition. Unfortunately for Bob, the no-cloning theorem comes to quantum mechanics' rescue: it says that you cannot make copies of wave functions.
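For the skeptical reader, Bob's predicament is easy to see in a toy simulation. This is a classical caricature of my own making, not real quantum mechanics; it only mimics the measurement statistics of a perfectly anti-correlated spin pair:

```python
import random

def run_trials(alice_measures, n=100_000, seed=0):
    """Toy model of an entangled spin pair: if Alice measures first, her
    random outcome fixes Bob's to be opposite; if she doesn't, Bob's own
    measurement collapses the pair and his outcome is random."""
    rng = random.Random(seed)
    bob_up = 0
    for _ in range(n):
        if alice_measures:
            alice = rng.choice(["up", "down"])        # Alice collapses the pair
            bob = "down" if alice == "up" else "up"   # perfect anti-correlation
        else:
            bob = rng.choice(["up", "down"])          # Bob collapses it himself
        if bob == "up":
            bob_up += 1
    return bob_up / n

# Bob's marginal statistics are 50/50 either way: from his own outcomes
# alone he cannot tell whether Alice measured, so no signal is sent.
print(run_trials(alice_measures=True))   # ~0.5
print(run_trials(alice_measures=False))  # ~0.5
```

Whether or not Alice measures, Bob sees spin-up half the time; his local statistics carry no trace of her action.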

Feeling uneasy? To me this seems like a theory that is trying to rescue itself. Not really the most concise explanation. We need multiple no-X theorems to wiggle ourselves out of difficult questions. What this points to, in my opinion, is that the current theory (quantum mechanics) is an unnatural (but still accurate) theory of nature. We reach the right conclusions but through weird, complicated reasoning. Very similar to Ptolemy's model of the universe, which made the correct predictions but was complicated and difficult to interpret. The new theory replacing quantum mechanics will hopefully act as Occam's razor and bring natural explanations for quantum weirdness.

Wednesday, July 24, 2013

Discrimination = Overgeneralization

Why is it that we get slightly nervous when we see a man with a long beard clothed in a dress sitting next to us in the plane? We overgeneralize. We associate terrorism with Islam and Islam with bearded people in dresses. But clearly, the number of peaceful, well-meaning Muslims far exceeds the number of Muslim terrorists.

If a horrific terrorist attack by white Caucasians were to take place tomorrow, we would not suddenly get nervous about every white Caucasian sitting next to us in our plane. Perhaps the somewhat obvious reason is that while we tend to ignore all the peaceful Muslims in our society, we do have one important counterexample to the hypothesis that all Caucasians are terrorists: ourselves. Any viable hypothesis should not implicate ourselves, so we start looking for more subtle attributes: does the person look like a slob? Or perhaps do they have long hair, etc.?

Our tendency to overgeneralize directly causes us to discriminate. Studies reveal that we (white people) are more afraid to be mugged by black people than by white people, even when we believe we hold no such bias! In machine learning we call this "underfitting". Our theories about the world are too simple, our predictions about the world are poor, and we draw conclusions too fast.
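The underfitting analogy can be made concrete with a small sketch (entirely hypothetical data): a model with too little capacity, here a single global average, is the statistical equivalent of a sweeping generalization and misses the structure of a curved world.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + rng.normal(0.0, 0.1, x.size)  # curved world plus noise

# A degree-0 "theory" (one global average) vs. a degree-5 polynomial.
too_simple = np.full_like(y, y.mean())
flexible = np.polyval(np.polyfit(x, y, 5), x)

err_simple = np.mean((y - too_simple) ** 2)
err_flexible = np.mean((y - flexible) ** 2)
print(err_simple > 5 * err_flexible)  # True: the one-number theory underfits
```

The too-simple model draws the same conclusion everywhere, just as a prejudice does.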

What can we do about this? Firstly, being aware of these inborn tendencies can help, at least at a conscious level. But it's not enough. My recommendation: build appreciation of other cultures by organizing multicultural parties in your community. Experience many positive counterexamples to counteract your prejudices. Don't let unfounded hypotheses based on a single negative example cloud your judgement.

Thursday, April 19, 2012

The Hunger Games

A group of 24 kids between 12 and 18 (I believe) from 12 districts are selected randomly each year to fight to the death in some large arena. The reason for these hunger games is some past uprising of these 12 poor districts against the almighty capitol. This book and the movie are extremely powerful, intriguing and generally well done. That is, for adults. Because seeing children kill other children is about as sad as it gets. Adults killing other adults is one thing, but a small girl being speared down by another small boy (and many other such instances) has a huge impact on anyone's mind.

But why on earth is this a book and movie for children? Do we really need to feed our kids' brains this kind of material? I think not, because children don't see the deeper layers of this brilliant book; they cannot see it in the perspective that we can. They simply absorb the images and emotions that it conjures up, and who knows how this gets processed. I dread the day we read in the newspaper about kids being inspired by this book/movie to act out its violence.

But the book also represents an extremely powerful reminder of who we once were. The ideas must have been inspired by the Roman empire, which dragged slaves from conquered lands to its own little arena to fight to the death. But on deeper inspection, one can argue that a version of this still continues today. The West exploits the developing countries with unfair trade agreements, forcing them to open their markets. Products from the developing countries can by no means compete with the much cheaper, often heavily subsidized products from the West. As a result, many people die of hunger. Not in an arena, but in their own homes. I feel that the Hunger Games has these deeper layers and exposes them very powerfully. As such it is a brilliant book (and movie). But for adults, not for 12-year old kids.

Saturday, October 15, 2011

Microscopic Black Holes and Cosmic Censorship

There is a nice argument that combines general relativity and quantum mechanics to show that our ideas of space and time start to break apart at the Planck scale (1E-35 meter). Imagine I try to confine anything (say an elementary particle) inside a box of size 1E-35m x 1E-35m x 1E-35m (a *very* small box that I will call a Planck box).

According to quantum mechanics, if we want to contain anything inside a very tiny region of space, then its momentum (mass x velocity) becomes highly uncertain. The more we constrain the position, the more uncertain the momentum becomes (the Heisenberg uncertainty principle). This means that if I were actually to measure the momentum, the outcome could be extremely large. Since momentum carries energy, the energy could become so large that, just as with collapsing stars, a black hole forms. This happens exactly when the box is a Planck box.
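Up to factors of order one (which I drop here), the argument can even be run numerically: a box of size L forces a momentum uncertainty of about hbar/L, and a black hole forms when the Schwarzschild radius of the corresponding energy reaches L; solving for L gives the Planck length.

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# Heisenberg: a box of size L forces momentum p ~ hbar / L, hence an
# energy E ~ p*c and an effective mass m = E / c^2.  A black hole forms
# when the Schwarzschild radius ~ G*m/c^2 reaches L; solving
# L ~ G*hbar/(L*c^3) for L gives the Planck length.
l_planck = math.sqrt(hbar * G / c**3)
print(l_planck)  # ~1.6e-35 m
```

The 1E-35 meter quoted above is exactly this combination of the three fundamental constants.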

What does this say? To me it seems a really nice argument to show that at this scale the concept of space starts to break down. We can simply not contain anything inside a Planck box, for if we do, nature closes the box and we will not be able to peek inside it! A black hole is in a real sense the boundary of the universe, so if we try to look at scales smaller than the Planck scale we will be looking at the end of the universe. (As a kid I always tried to imagine what the end of the universe looked like -- maybe it's all around us.)

Roger Penrose has introduced a name for this trick that nature plays on us: "cosmic censorship". This principle says that wherever there is a singularity in our theories, a horizon will form around it so that we cannot see it! Maybe mother nature doesn't want us to see something :-)

Sunday, June 19, 2011

The Illusion of Free Will

We like to think that every decision we make is made out of free will. While the concept of free will seems to make some sense at an intuitive level, it becomes rather slippery when one tries to define it. What entity other than our brain is making the decision, and on what grounds, based on what input? And what are the laws that determine how that entity makes a decision?

It seems rather dreadful that decisions are made according to "an algorithm". In particular, a *deterministic* algorithm is rather unappealing because it implies that our genetic make-up, in combination with everything we have experienced in our lifetime plus the environmental factors that are in play right now, are input to a function that deterministically outputs a decision:

DECISION = FUNCTION(genes, lifetime experience, current environment)

It seems to imply that we cannot be held responsible for our actions: they are simply a deterministic function of our history and the decision was predetermined anyway. We have no free will to change that outcome.

Despite its unattractive philosophical implications, I think this is exactly what is going on. We have no issue accepting this point of view for plants, in which case the FUNCTION is rather simple. Even lower animals such as fish or crocodiles seem highly predictable in their responses to the environment. In the case of humans this is definitely not true. Our responses are (fortunately) partly predictable but also partly unpredictable. There may be a good reason for a certain amount of unpredictability in nature. Imagine a cheetah chasing a gazelle. If the swerving movements of the gazelle were predictable to the cheetah, then the cheetah could anticipate them and easily catch the gazelle. It is therefore likely that the gazelle has developed an algorithm that is very hard for the cheetah to predict, i.e. a seemingly random strategy for swerving left or right. Apes living in large social communities were probably subject to similar evolutionary pressures: predictable responses can lead to exploitation and manipulation by others and thus have negative fitness value.

Seemingly random behavior does not mean this behavior is not deterministic. The decision process can become so complex that the tiniest changes in the environment cause a completely different decision. This sensitivity or instability is the definition of "chaos", and according to some, unpredictability should be the correct definition of randomness, irrespective of whether something is deterministic or not. It is not even quite clear what true randomness means, to be honest. Perhaps quantum mechanics is the only theory that claims randomness at a fundamental level (i.e. not caused by chaos), but even here the jury is still out. And even if our behavior is partly random, what does that solve in terms of free will? A decision does not become more free if it is random.
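A minimal illustration of deterministic unpredictability is the logistic map in its chaotic regime: a one-line rule with no randomness anywhere, yet two all-but-identical starting points part ways almost immediately.

```python
def logistic(x, r=4.0):
    # Chaotic regime of the logistic map: a fixed, deterministic rule.
    return r * x * (1.0 - x)

# Two histories differing by one part in a billion.
a, b = 0.4, 0.4 + 1e-9
steps = 0
while abs(a - b) < 0.1 and steps < 1000:
    a, b = logistic(a), logistic(b)
    steps += 1

# After a few dozen iterations the two trajectories bear no resemblance:
# deterministic, yet unpredictable without perfect knowledge of the start.
print(steps)
```

The gazelle needs nothing more exotic than this: a deterministic rule whose output the cheetah cannot track.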

To me the only logical conclusion is that our behavior is deterministic, albeit in a very complex and unpredictable way. There is some interesting evidence for such a theory. Experiments show that decisions are made in the brain even before we become aware of them. This means that at the very least a significant fraction of the decisions we make are made completely unconsciously, and our body only fools us into thinking that we made those decisions consciously.

Predicting human behavior (decision making) may turn out to be impossible even with the fastest supercomputers. This feels like good news, because it would be very unsettling to have a clone built after you that can perfectly predict what you will do 1 second from now. But ultimately we may have to accept that we can build robots that display equally complex behavior and are in no way inferior to us.

Finally, a word on the legal implications of a theory of this kind. Does this mean we cannot send anyone to prison anymore because s/he committed a murder? Of course not! Whether actions are predetermined or not has nothing to do with this. The reason we send people to prison is that we don't want the person to do it again, and to deter others from doing it. These functions of punishment remain perfectly valid. We should never punish out of revenge or retribution. It is useless and serves no function in society.

Saturday, March 5, 2011

An Imagined World

Over the past centuries science has pushed humankind from its pedestal several times. First Copernicus showed that the earth is not at the center of the universe, then Darwin showed that humans are the product of evolution and direct descendants of the apes. What else awaits us? With the advance of faster and more intelligent computation will come the realization that computers can be far smarter than humans. We have passed a few thresholds already: Deep Blue beating Kasparov at chess, and now Watson becoming the new champion on the television show Jeopardy. As time goes on, we will see many more of these landmarks. There will be a time when we are second to computer systems in almost any imaginable task. Are there even stranger revolutions that await us?

Although it is exceedingly difficult to look into a crystal ball, there are signs that an even bigger philosophical shock awaits us. Physicists believe that the true ontological degrees of freedom are far fewer than the ones we usually entertain to describe our world. In fact, there are signs that we could pack all the degrees of freedom onto a two-dimensional plane, instead of into a three-dimensional world. What then are these surplus, unphysical degrees of freedom? My claim is that they are imagined: they live in our heads in order to make sense of our world. Remember that all our brain is concerned with is predicting the future. If you can predict better, you have an edge in survival. Now imagine that the ontological degrees of freedom have very complicated laws of dynamics, i.e. their future is very hard to predict from their past. Then let's imagine that by introducing a bunch of auxiliary variables this prediction task becomes easier. This is not such a far-fetched thought. In statistics, people do it all the time: adding variables can simplify the description of a problem. In physics, too, this is a well-understood phenomenon. Almost any modern theory has so-called "gauge symmetries". These are transformations that change one description of the world into another description without changing the actual state of the world. For instance, Einstein's general relativity allows one to transform between two frames of reference (observers) that accelerate relative to each other. One observer interprets the state of the world as "gravitational pull" while the other interprets it as acceleration.

These symmetries lead to conservation laws (Noether's famous insight). Conservation laws are constraints between variables. They simply express that we have used too many variables to describe the state of the world, and hence some variables can be solved from other variables (and in fact removed). So there are two types of variables in a theory: variables whose state can only be solved from the state of the world in the past, and variables whose state can be solved from the state of other variables at the same time. The second type is redundant, but often very useful for writing down nice, concise equations to describe our world. What I predict will happen is that we have completely underestimated the number of these spurious variables in our theories. I believe, supported by the holographic principle, which states that all real degrees of freedom can be stored on a surface, that there are vastly more unphysical degrees of freedom in our theories than physical ones.
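The second type of variable is easy to demonstrate with a toy example of my own (a simulated harmonic oscillator, the simplest system with a conserved energy): the conservation law lets the speed be solved from the position at the same instant, rather than tracked as an independent variable.

```python
import math

# Unit-mass, unit-frequency harmonic oscillator: x'' = -x.
# The energy E = (v^2 + x^2) / 2 is the conserved (Noether) quantity.
x, v = 1.0, 0.0
E = 0.5 * (v**2 + x**2)

dt = 1e-4
for _ in range(20_000):   # integrate for 2 time units
    v -= x * dt           # symplectic Euler keeps the energy nearly exact
    x += v * dt

# The conservation law is a constraint: given x (and the sign of v),
# the speed is not an independent variable -- it can be solved away.
speed_from_constraint = math.sqrt(2 * E - x**2)
print(abs(abs(v) - speed_from_constraint))  # ~0: |v| is redundant given x and E
```

The simulated speed and the speed solved from the constraint agree; one of the two variables was never really needed.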

Now let's take this one step further. Our brain is also in the business of making models of our world. Every one of us is a physicist, whether you like it or not. I now propose the following leap of faith: the way we view the world is also largely made up of unphysical degrees of freedom. We have evolved to use these over-parametrized models because they lead to easier prediction at the macroscopic scale at which we live and survive. But they are largely an illusion, a fantasy of our minds that we all share (like the ability to speak language, this illusion has been hardwired into our brains through evolution). This is the new revolution that I anticipate: we will come to realize we live in a fantasy world.

What are the potential consequences, if what I propose is true? While the auxiliary variables may work well at the macroscopic level, they may not work all that well in the microscopic world. I believe the brain has introduced new variables that follow simple laws of dynamics themselves. In particular, together with the real degrees of freedom they make up a consistent system in which (usually) cause precedes effect. However, for the unphysical degrees of freedom there is no reason why this should be enforced. In general, there may be glitches in this framework in situations that are not important to survival. These glitches in consistency may for instance involve apparent reversed causality for the unphysical degrees of freedom, but in such a way that they do not affect the strict causality necessary for the physical degrees of freedom. We should not be able to receive a message from our yet-to-be-born daughter instructing us to kill ourselves so she will not be born (unless all the degrees of freedom that govern this daughter are unphysical, of course).

All of this is complete speculation, and I make no claims that there is evidence for it. But oftentimes a half-true story can help one keep an open mind to explore or embrace new ideas.

Saturday, January 29, 2011

Salaries of Charity CEO's

Ever given the salaries of the CEOs of well-known charities a second thought? Well, it came as a shocker to me. Here are a few almost random picks from "charity navigator":

Amnesty International: Larry Cox, Executive Director $210,000
American Red Cross: Gail J. McGovern, President, CEO $446,867
Food for the Poor: Robin G. Mahfood, President, CEO $345,245
American Cancer Society: John Seffrin, Chief Executive Officer $685,884
American Cancer Society: Donald Thomas, Deputy CEO $1,027,306
Children International: James Cook, Chief Executive Officer: $423,114

Do I need to say more? This maddens me. Why would I support these salaries with my gift? How can they ask people with small incomes to give when at the same time the CEO's leading these organizations earn such outrageous salaries? The figures heading these organizations should lead by example. Clearly, they do not invest their time because they care. Very disappointing. Next time they call you for a pledge, first ask what their CEO earns.

Monday, November 1, 2010

The Critical Brain

We live in a world that is neither completely static and stable, nor completely noisy and unpredictable. As argued in previous blogs, we live in a "complex" world, somewhere between too stable and too random.

This is very similar to what is known as a phase transition in physics. Take ice: it is in a highly structured state with all molecules neatly organized in a lattice. When we heat it up, the molecules start to move around chaotically and break up the nice ordered structure: ice becomes water. The transition point is a phase transition, sitting right between order and chaos. (Strictly speaking, melting is a first-order transition; the textbook second-order example is a magnet at its critical temperature, but the picture of a knife's edge between order and disorder is the same.)

People have also argued that computation is best performed on the edge of chaos. A particularly outspoken figure in this respect is Stephen Wolfram. The idea here is that a system in the ordered regime can store patterns in memory, but it is so stable that it is impossible to manipulate these patterns. On the other end of the spectrum there are large amounts of noise and/or chaos, which simply prevent one from storing any patterns stably. Again, we need something right on the edge.

Since brains are computing devices, one can ask if brains are also in a critical state. And indeed, evidence has been found that this is the case. In particular, if you take a patch of brain (from a dead animal, but kept in a solution such that it still behaves somewhat "normally") and stimulate random neurons, you will very often see very small groups of neighboring neurons respond. However, rarely you will also see the entire patch become active temporarily. It's just like earthquakes: there are innumerable small ones but rarely a really big one hits (note: a quake of magnitude 7 releases about 32 times more energy than a quake of magnitude 6; the famous factor of 10 refers to the amplitude of the seismic waves).
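The avalanche statistics can be mimicked with a critical branching process, a standard toy model (the parameters below are my own illustrative choices, not measured neural data): each active unit excites two neighbours with probability 1/2, so on average activity neither grows nor dies out.

```python
import random

def avalanche_size(rng, p=0.5, branches=2, cap=10_000):
    """One avalanche: each active unit activates each of `branches`
    neighbours with probability p.  At p = 1/branches the branching
    ratio is exactly 1: the critical point."""
    active, size = 1, 0
    while active and size < cap:
        size += active
        active = sum(1 for _ in range(active * branches) if rng.random() < p)
    return size

rng = random.Random(0)
sizes = [avalanche_size(rng) for _ in range(5_000)]
small = sum(1 for s in sizes if s <= 3)
large = sum(1 for s in sizes if s > 1000)
print(small, large)  # very many tiny avalanches, only a handful of big ones
```

The resulting size distribution is heavy-tailed, just like the earthquake statistics: mostly tiny events, with the occasional patch-wide cascade.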

Researchers have argued that a critical brain is a wonderful thing to have. To name a few benefits: there are (optimally) many meta-stable states that it can represent. Moreover, this memory can be quickly accessed. Also, criticality maximizes the dynamic range of the "senses", in the sense that the brain can respond both to very faint signals and to signals that are many orders of magnitude larger. This "input gain control" is necessary because the world around us is complex, and thus itself in a critical state, and therefore transmits signals with wildly varying magnitudes. Finally, the brain needs both to integrate its many parts and to allow for many different brain states (segregation).

A telltale signature of criticality is very long-range interactions between units that are only locally connected. This holds both in space (all regions of the brain are correlated with each other) and in time (very long memory). In fact, almost anything you measure, including these long-range dependencies, follows a power-law distribution. Without going into technical details, this means that there is no characteristic length scale at which things are correlated. A good example of this is the size of objects in an image. You will find very many extremely small objects (perhaps even the size of 1 pixel) and few very large objects. You can't say: all objects have a size of roughly between 90 and 100 pixels.

But for me, perhaps the most interesting point is this. By adapting to our environment we are forced to add new patterns to our brain and forget others. We are constantly maintaining the memory content of our brain. A brain that is sub-critical is too stable: it is very hard to erase memories and imprint others. A brain that is too chaotic and noisy will not hold memories at all. Moreover, this learning process is highly dynamic and needs to happen quickly. It seems our ability to adapt and learn, and our need to predict the world around us, is key to understanding why we have critical brains. A lot still needs to be understood here, but the outlook seems promising.