The first thing I was told when starting my course on quantum mechanics (QM) was: "if you think you understand QM, you don't understand QM". It turned out to be true. I could master the rules of the game, but things always struck me as fundamentally weird. Information is carried by wave functions that collapse into definite states the moment you decide to look?? "Oh well, that is because we did not evolve to think about the very tiny," I kept telling myself.
For decades very few serious physicists would dare to propose something more interpretable and fundamental than QM. The main reason is the Bell inequalities, which say that QM cannot be explained by a local deterministic hidden-variable theory, i.e. a more fundamental, deterministic, causal and local theory at a smaller scale that would give rise to QM at a larger scale. It turns out that to reproduce QM one would need either a non-local or a non-causal theory.
Now all of that seems to change. It takes a genius with a Nobel prize at the end of his career to stick out his neck. Gerard 't Hooft from Utrecht University has started a lonely uphill battle to come up with, yes, a deterministic hidden-variable theory of QM. So how does he propose to deal with the Bell inequalities? The gist of the argument is that QM is an emergent theory, in a similar way as thermodynamics is an emergent theory of many particles that move chaotically. Concepts such as pressure, temperature, and entropy only make sense if we are talking about the average behavior of very many particles, each of which moves about in a way that is unpredictable individually. Treated as a group, however, new structure emerges. That is what we call thermodynamics.
In a similar sense, QM can emerge from a more fundamental, deterministic and interpretable theory at a smaller scale. A first draft of such a theory is based on the idea of cellular automata. Basically, imagine one can split the world up into small boxes, and every box can be in a certain state. The time evolution of a particular box is governed by rules that generate a new state (deterministically!) as a function of the states in its direct neighborhood. However, there is a twist. In 't Hooft's universe, information is lost in the three-dimensional interior: two different states can evolve into a single state, so distinctions between states get erased, much as fine-grained information about a chaotic nonlinear system, e.g. where its constituent particles are located, is scrambled very quickly under the system's evolution. Genuinely new states are created only on two-dimensional surfaces such as the horizon of a black hole. This idea, that the information of a universe is encoded on two-dimensional surfaces, is known as the "holographic principle", another brainchild of 't Hooft.
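To make the "information loss" idea concrete, here is a toy illustration (not 't Hooft's actual model): a deterministic one-dimensional cellular automaton whose update rule is many-to-one, so two distinct states can evolve into the same state after a single step. The rule and the specific states are invented for this sketch.

```python
def step(state):
    """Deterministic update: each cell becomes the AND of itself
    and its right neighbour (periodic boundary). This rule is
    many-to-one, so information about the past is destroyed."""
    n = len(state)
    return tuple(state[i] & state[(i + 1) % n] for i in range(n))

a = (0, 1, 0, 1, 1, 0)
b = (0, 0, 0, 1, 1, 0)   # differs from `a` in the second cell

# Two different micro-states, one future:
print(step(a))  # -> (0, 0, 0, 1, 0, 0)
print(step(b))  # -> (0, 0, 0, 1, 0, 0)
```

After one step an observer can no longer tell whether the universe started in `a` or `b`; the evolution is perfectly deterministic, yet not reversible.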
So how, then, do we beat the Bell inequalities? Here is the intuition. The large-scale theory known as QM defines variables that are functions of the more fundamental states. However, the way this works out according to 't Hooft is that a QM state at time "t" is an aggregation of all the fundamental states that will eventually evolve into a single state. This definition is non-causal, because it involves knowing which states will merge in the future. Hence, while the fundamental theory is causal and local, the emergent theory at a larger scale can exhibit strange features, because the variables defined by us mix up the present and the future.
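The aggregation step can be sketched with the same toy automaton: enumerate every micro-state of a tiny deterministic system, run each one far into the future, and group the micro-states by the state they eventually settle into. Each group plays the role of one emergent state, and note that building it requires knowing the future of the dynamics, which is where the non-causal flavour comes from. The rule and system size here are illustrative choices, not part of 't Hooft's construction.

```python
from itertools import product

def step(state):
    """Many-to-one AND rule, as in the previous sketch."""
    n = len(state)
    return tuple(state[i] & state[(i + 1) % n] for i in range(n))

def eventual_state(state, n_steps=16):
    """Run the deterministic dynamics far into the future."""
    for _ in range(n_steps):
        state = step(state)
    return state

# Partition all 2^4 micro-states of a 4-cell automaton by their fate.
classes = {}
for state in product((0, 1), repeat=4):
    classes.setdefault(eventual_state(state), []).append(state)

for fate, members in classes.items():
    print(fate, "<-", len(members), "micro-states")
```

For this rule only two emergent classes survive: the all-ones state maps to itself, and every other micro-state eventually decays to all zeros. The sixteen micro-states collapse into two distinguishable "large-scale" states.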
I may have missed a point or two in trying to translate these ideas, but I certainly think this is a very exciting new development. Einstein may turn out to be right after all in saying that "God does not play dice". These bold ideas definitely give me a sense of relief that it was OK to feel dissatisfaction with the strangeness of QM. Between string theory and deterministic QM, I will bet on the latter. Has 't Hooft ever been wrong?