Monday, November 1, 2010
The Critical Brain
We live in a world that is neither completely static and stable, nor completely noisy and unpredictable. As argued in previous posts, we live in a "complex" world somewhere between too stable and too random.
This is reminiscent of what physicists call a phase transition. Take ice: it is a highly structured state with all molecules neatly organized in a lattice. When we heat it up, the molecules start to move around chaotically and break up the nice ordered structure: ice becomes water. Strictly speaking, melting is a "first order" transition; the "second order" (continuous) transitions, such as a magnet losing its magnetization at the Curie temperature, are the interesting ones here, because exactly at the transition point fluctuations of all sizes appear and the system sits right between order and chaos.
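To make this a bit more concrete, here is a small sketch (my own illustration, not part of the original post) of the textbook continuous transition, the 2D Ising model: well below the critical temperature the spins align into an ordered state, far above it they flip around randomly, and right around Tc ≈ 2.27 the system hovers between the two regimes.

```python
# A minimal 2D Ising model simulated with the Metropolis algorithm
# (an illustration of a continuous phase transition, not from the post).
import numpy as np

def metropolis_sweep(spins, T, rng):
    """One Metropolis sweep over an L x L lattice with periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbours.
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb          # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

def mean_abs_magnetization(T, L=24, sweeps=300, seed=0):
    """Equilibrate at temperature T and return |magnetization| per spin."""
    rng = np.random.default_rng(seed)
    spins = rng.choice(np.array([-1, 1]), size=(L, L))
    for _ in range(sweeps):
        metropolis_sweep(spins, T, rng)
    return abs(spins.mean())

if __name__ == "__main__":
    for T in (1.5, 2.27, 3.5):               # ordered, near-critical, disordered
        print(f"T = {T:4.2f}   |m| = {mean_abs_magnetization(T):.2f}")
```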
People have also argued that computation is best performed on the edge of chaos. A particularly outspoken figure in this respect is Stephen Wolfram. The idea is that a system in the ordered regime can store patterns in memory, but it is so stable that it is nearly impossible to manipulate these patterns. At the other end of the spectrum there is so much noise and/or chaos that no pattern can be stored stably at all. Again, we need something right on the edge.
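Since this argument is usually made with cellular automata, here is a tiny sketch of an elementary cellular automaton (again my own illustration). Rule 110 is the standard example of complex, edge-of-chaos behaviour; swapping in a rule like 250 gives rigid order and rule 30 gives apparent randomness, so one toy program covers all three regimes.

```python
# A toy elementary cellular automaton in the spirit of Wolfram's classification.
# Rule 110 shows complex "edge of chaos" behaviour; try 250 (order) or 30 (chaos).
def step(cells, rule):
    """Apply an elementary CA rule once, with periodic boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right   # 3-bit neighbourhood code
        out.append((rule >> index) & 1)               # look up that bit of the rule
    return out

def run(rule, width=64, steps=32):
    cells = [0] * width
    cells[width // 2] = 1                             # single seed cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

if __name__ == "__main__":
    run(110)
```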
Since brains are computing devices, one can ask whether brains are also in a critical state. And indeed, evidence has been found that this is the case. In particular, if you take a patch of brain tissue (from a dead animal, but kept in a solution such that it still behaves somewhat "normally") and stimulate random neurons, you will very often see only small groups of neighboring neurons respond. Occasionally, however, the entire patch becomes active for a moment. It's just like earthquakes: there are innumerable small ones, but only rarely does a really big one hit (note: a magnitude 7 quake produces 10 times the ground motion of a magnitude 6 quake and releases roughly 30 times more energy).
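This "many tiny events, rare huge ones" statistic can be reproduced with a toy model. The sketch below (mine, not the actual slice experiment) runs a branching process in which each active "neuron" triggers on average sigma others; at sigma = 1, the critical point, avalanche sizes follow a power law.

```python
# A minimal branching-process sketch of neuronal avalanches (an illustration,
# not the actual experiment). Each active unit has two potential targets,
# each activated with probability sigma / 2, so sigma is the average number
# of units activated by one active unit.
import random
from collections import Counter

def avalanche_size(sigma, max_size=10_000):
    """Total number of activations started by a single seed unit."""
    active, size = 1, 1
    while active and size < max_size:
        new = sum(1 for _ in range(2 * active) if random.random() < sigma / 2)
        size += new
        active = new
    return size

if __name__ == "__main__":
    random.seed(0)
    for sigma in (0.8, 1.0):                     # sub-critical vs critical
        sizes = Counter(avalanche_size(sigma) for _ in range(20_000))
        total = sum(sizes.values())
        big = sum(c for s, c in sizes.items() if s >= 100)
        print(f"sigma = {sigma}: fraction of avalanches with size >= 100: "
              f"{big / total:.4f}")
```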
Researchers have argued that a critical brain is a wonderful thing to have. To name a few benefits: there are (optimally) many meta-stable states it can use to represent things, and this memory can be accessed quickly. Criticality also maximizes the dynamic range of the "senses", in the sense that the brain can respond both to very faint signals and to signals that are many orders of magnitude larger. This "input gain control" is necessary because the world around us is itself complex, and thus in a critical state, and therefore sends us signals of wildly varying magnitude. Finally, the brain needs to integrate activity across its many parts while still allowing for many different brain states (segregation).
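The dynamic-range claim in particular can be illustrated with a simple simulation. The sketch below is my own simplification in the spirit of excitable-network models (all parameter values are illustrative choices): resting units are excited either by an external stimulus of rate h or by active neighbours, and when the branching ratio K * p is tuned to 1 the network produces a measurable response over several orders of magnitude of stimulus strength.

```python
# A rough sketch of dynamic range in an excitable network (illustrative only).
# Resting units are excited by an external stimulus of rate h or by active
# neighbours with probability p; the branching ratio is K * p.
import numpy as np

def response(h, N=2000, K=10, p=0.1, steps=2000, seed=0):
    """Mean fraction of excited units at external stimulus rate h."""
    rng = np.random.default_rng(seed)
    state = np.zeros(N, dtype=int)              # 0 resting, 1 excited, 2 refractory
    inputs = rng.integers(0, N, size=(N, K))    # K random presynaptic units each
    fractions = []
    for t in range(steps):
        excited = state == 1
        n_exc = excited[inputs].sum(axis=1)                  # excited inputs per unit
        p_neighbour = 1.0 - (1.0 - p) ** n_exc               # excitation via neighbours
        p_total = 1.0 - (1.0 - p_neighbour) * np.exp(-h)     # ... or via the stimulus
        new_excited = (state == 0) & (rng.random(N) < p_total)
        state = np.where(state == 1, 2, 0)                   # excited -> refractory -> rest
        state[new_excited] = 1
        if t > steps // 2:                                   # discard the transient
            fractions.append(excited.mean())
    return float(np.mean(fractions))

if __name__ == "__main__":
    # With K * p = 1 (critical) the response stays measurable across
    # several orders of magnitude of stimulus strength.
    for h in (1e-4, 1e-3, 1e-2, 1e-1):
        print(f"h = {h:.0e}   response = {response(h):.4f}")
```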
A telltale signature of criticality is very long range interactions between units that are only locally connected. This holds both in space (all regions of the brain are correlated with each other) and in time (very long memory). In fact, almost anything you measure, including these long range dependencies, follows a power-law distribution. Without going into technical details, this means there is no characteristic length scale at which things are correlated. A good example of this is the size of objects in an image: you will find many extremely small objects (perhaps even the size of 1 pixel) and only a few very large objects. You can't quite say: all objects have a size roughly between 90 and 100 pixels.
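A few lines of code make the "no characteristic scale" point tangible. The sketch below (my illustration; the exponent 1.5 and the sample size are arbitrary choices) compares power-law samples with exponential samples of the same mean: the exponential has a typical size, the power law does not.

```python
# Compare a power law with an exponential of the same mean: the exponential
# has a characteristic scale, the power law keeps producing values across
# many orders of magnitude. (Exponent and sample size are arbitrary.)
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

alpha = 1.5
power = (1.0 - rng.random(n)) ** (-1.0 / alpha)      # Pareto samples, minimum 1
expo = rng.exponential(scale=power.mean(), size=n)   # exponential, same mean

for name, x in (("power law", power), ("exponential", expo)):
    print(f"{name:12s} median = {np.median(x):6.1f}   max = {x.max():10.1f}   "
          f"P(x > 100) = {np.mean(x > 100):.5f}")
```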
But for me, perhaps the most interesting point is this. By adapting to our environment we are forced to imprint new patterns in our brain and forget others; we are constantly maintaining the memory content of our brain. A brain that is sub-critical is too stable: it is very hard to erase memories and imprint new ones. A brain that is too chaotic and noisy will not hold memories at all. Moreover, this learning process is highly dynamic and needs to happen quickly. It seems our ability to adapt and learn, and our need to predict the world around us, is key to understanding why we have critical brains. A lot still needs to be understood here, but the outlook seems promising.