Neuron photos courtesy of Clint Sprott's web server in the Physics Department at the University of Wisconsin-Madison.

Computational Neuroscience and Its Paradoxes

Fred Hapgood

Neuroscience is unlike every other science in that we cannot wish it success with an unqualified heart. There is no reason not to want to know the last secret of the earth's weather or geology, but it is easy to imagine that knowing exactly how our brains work might well bear with it some darker consequence. Such a piece of information might well end up altering our sense of ourselves and of each other, and not in the direction of increased comfort.

At the moment, of course, we cheer the team on. The scary part of this adventure seems far off because the job seems so immense. The brain has a hundred billion neurons, each with a thousand or so synaptic connections, all organized into a thousand organs. Not much is known about any of it. Right now repairing our ignorance mostly means working with live cells with all their cantankerous unpredictabilities. Concerns about what might happen when this knot is finally unwoven are easy to put off.

But it is worth noting that neuroscientists have a tool in their kit that just might drop this problem in our lap a lot sooner than we think. That tool is the computer simulation, or in this context, the brain model. Computer simulations are a tool of immense importance in almost every science, from cosmology to economics. They permit the testing of theories of operation comprising immense numbers of variables (a cosmology simulation might contain billions of objects) and the subjection of these theories to "what-if" experiments at a rate, variety, and economy that would be impossible either in vitro or in vivo. They constitute the seeds of a new publication medium, integrating results from many laboratories into a coherent presentation that is more accessible, because more interactive, than print. (They are for the same reason a fabulous educational tool.) And best of all, "virtual" experiments are never derailed by a bit of dirt in a poorly washed vessel, a confusion over the exact sequence of steps in a procedure, or an instrument breaking at the wrong moment. (Granted, there are software bugs to deal with, but that swap is usually well worth making.)

In no field are these virtues more relevant than in neuroscience. If the landscape were any more delicate, noisy, inaccessible, and complex, nobody would have the heart to even try to explore it. Simulations simplify access to such realities. In the case of neuroscience, the behavioral details of simulated neurons are infinitely easier to record, inspect, and test than anything in a Petri dish could ever be. To take just one instance, biological neurons run at only two speeds: their own and dead. Simulated neurons run at any speed that is convenient.

Right now simulations are used mostly to make high-cost live-cell laboratory work more efficient. For instance, Professor Nathan Urban, Ph.D., of Carnegie Mellon, together with G. Bard Ermentrout, Ph.D., Professor of Computational Biology at the University of Pittsburgh and a specialist in the mathematics of biological synchrony, and Roberto Galan, Ph.D., a postdoctoral researcher at CMU, recently needed a way to test a theory they had developed about neural self-organization.

They might have tested this theory by going into the lab, plating out some cells, sticking electrodes in them, introducing signals, and recording the response. Such experiments are, as stated, major investments: frustrating, fragile, and very time-consuming. Typically they yield a farrago of data that is very hard to decode, especially when, as here, the researchers have no idea what the hypothesis they are testing would actually look like in real life.

So instead the group wrote a piece of virtual nervous tissue: a system of interacting formulae, each of which accepted inputs crafted to resemble real pulses from real synapses (on measures like timing, amplitude, and frequency), processed them in ways that mimicked the behavior of real neurons, and passed the resulting outputs to other formulae/neurons. The hope of the enterprise was that if the virtual cells expressed the theory in question, the group would have a little more confidence that they were on to something real, and they would know a little better what to look for.
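To make the idea concrete, here is a minimal sketch of what such virtual tissue can look like: a handful of leaky integrate-and-fire neurons passing pulses to one another. The equations, parameters, and weights below are illustrative assumptions, not the Urban group's actual model, but they show the basic shape of the approach: formulae that accept synapse-like inputs, integrate them, and emit spikes that become inputs to other formulae.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 5               # number of model neurons (illustrative)
    dt = 0.1            # time step, ms
    tau = 20.0          # membrane time constant, ms
    v_thresh = 1.0      # firing threshold (arbitrary units)
    drive = 0.06        # constant external input
    w = 0.05 * rng.random((N, N))   # synaptic weights between neurons
    np.fill_diagonal(w, 0.0)        # no self-connections

    v = rng.random(N)   # initial membrane potentials

    for step in range(2000):
        spikes = v >= v_thresh                  # which neurons fire this step
        v[spikes] = 0.0                         # reset the ones that fired
        pulses = w @ spikes.astype(float)       # pulses delivered to their targets
        v += dt * (-v / tau + drive) + pulses   # leaky integration of drive plus pulses
        if spikes.any():
            print(f"t = {step * dt:6.1f} ms   fired: {np.flatnonzero(spikes)}")

Every quantity in a run like this -- membrane potentials, spike times, synaptic weights -- can be recorded, replayed, or perturbed at will, which is exactly the advantage over the Petri dish described above.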

The general problem under investigation was how neurons organize themselves, perhaps the fundamental question of the profession. The specific case of organized behavior chosen here was synchrony, a rare behavior in which a subpopulation of neurons learns to fire in step. Synchrony has been associated with a number of brain functions, including the coding of sensory information and the formation of short-term memory. Professor Robert Desimone, Ph.D., Director of the McGovern Institute for Brain Research at the Massachusetts Institute of Technology, has suggested that synchrony might also allow local regions in the brain to attract the interest of the whole organ, on the model of a group of fans chanting in a stadium. However, for all these speculations, the processes that initiate and maintain synchrony are still largely unknown.

Usually in nature (or, for that matter, in the army), such behaviors are imposed by a central leader, a pacemaker, that acts like a clock and is directly connected, in parallel, to each of the agents to be synchronized. Yet so far nobody has found any "Master Sergeant" neurons organizing and maintaining synchrony in the brain. In most cases known to date, synchrony springs up from below, emerging from the interactions of the cells themselves. It was this bottom-up behavior that made synchrony such a useful model of neural self-organization.

The theory they were testing posited that each neuron in a synchronizing group has the ability to measure and remember whether its nearest neighbor fired just before, just after, or synchronously with its own signal. If the neighbor's signal arrives just before, the cell shifts its firing cycle backwards (the next spike is triggered a little sooner than usual, but thereafter the firing rate cycles as usual). If it arrives just after, the cell shifts its cycle ahead. If the two events occur together, nothing changes. Ermentrout had shown that, given certain plausible assumptions about the nature of these corrections, a group of neurons would converge on a single clock, continuously adjusting in the direction of synchrony. The neurons would have organized themselves by listening to each other, like an orchestra playing without a conductor.
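A toy version of this rule is easy to write down. The sketch below treats each neuron as a simple phase oscillator that fires when its phase wraps around, and applies a small correction to every other cell whenever one of them fires; the sinusoidal form of the correction and all of the numbers are illustrative assumptions rather than Ermentrout's actual equations. Run it, and the synchrony index (1.0 means perfect lockstep) climbs as the population pulls itself into step, with no conductor anywhere in the code.

    import numpy as np

    rng = np.random.default_rng(1)

    N = 8
    phase = rng.random(N)   # phases in [0, 1); a cell fires when its phase reaches 1
    rate = 0.01             # phase advance per time step
    nudge = 0.05            # strength of the correction toward a firing neighbor

    def synchrony(p):
        # length of the mean phase vector: 1.0 means perfect synchrony
        return abs(np.exp(2j * np.pi * p).mean())

    for step in range(20000):
        phase += rate
        fired = phase >= 1.0
        if fired.any():
            phase[fired] -= 1.0                 # the firing cells wrap around
            others = ~fired
            # cells just behind the firer jump ahead a little, cells just past
            # it fall back a little; cells far away in phase barely move
            delta = nudge * np.sin(-2 * np.pi * phase[others]) / (2 * np.pi)
            phase[others] = np.clip(phase[others] + delta, 0.0, 0.9999)
        if step % 4000 == 0:
            print(f"step {step:6d}   synchrony index = {synchrony(phase):.3f}")

Whatever form the real corrections take, the logic is the same: each cell adjusts only on the evidence of its neighbors' timing.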

The obvious next question was whether this theory could be found at work in the lives of real neurons. First, as stated above, the team tested the theory in a simulation. Sure enough, synchrony emerged. This success gave them the confidence to check the results against real tissue. They took a slice of tissue from the olfactory system of a mouse, applied the test stimulus, and looked for the resetting behavior called for by the theory and found in the simulation. It wasn't there.

Simulations fail such tests all the time, and when they do investigators have to decide whether the wetwork, the software, or the governing concept was the problem. All candidates are plausible. The cells in the Urban simulations were built around the properties of "average" human neurons. Perhaps mouse neurons didn't have enough in common with human neurons. The cycles generated by the computer model were all identical, with each spike arriving at exactly the same point in the neuron's cycle (except when it was resetting) and with exactly the same number of spikes per unit of time. In biological reality, these cycles are only roughly identical. Perhaps that roughness mattered. There were many other possibilities.

However, Urban's experience led him to trust the simulation. He decided the signal was still in there somewhere, that the baby was still in the bath, but that the in vitro experiment had just missed it. "The (simulated) phenomenon had been really robust," he says. "You could change lots of things and get the same results. This gave us the confidence to press on." The team started to analyze the lab experiment, particularly the finer details of the physical impulse supplied to the test neurons: the shape of the charge, the size of the peak, and so on. As part of this examination they passed a number of different frequencies for the stimulatory impulse through the model. Satisfyingly, some didn't work, reproducing the failure that had emerged in the lab. They then reshaped the real stimulus to match the impulse frequencies that did work in the simulation. This time the expected resetting emerged.

This episode shows the technique to advantage. Neuroscience advanced because one theory explaining the origin and management of a crucial behavior gained strength. When problems arose in the lab, the simulation was used as a kind of flashlight to probe the complications. And there was a bonus: the finer detail about stimulatory impulses that emerged could be folded back into the model of the neuron, making the entire simulation a bit more accurate and therefore a bit more useful, not just for the Urban team but for anyone in the world working on those kinds of neurons. The lab work and the simulations worked symbiotically, each advancing the other.

Episodes like these, in which brain models are used to lower the cost, narrow the focus, and accelerate the pace of experimental work, are unfolding everywhere across neuroscience. Todd Braver, associate professor of psychology at Washington University, is using simulations to pursue his interest in the phenomenon of self-control. As Braver points out, the concept seems weirdly self-referential. Who is the 'we' that is controlling 'us'? What happens in our brain when we finally learn to refuse that second helping or budget properly or write thank-you notes? In a sense, Braver is looking for the physiological underpinning of the species of wisdom we call "hard-won".

Earlier research in the field had found that when a person faces a conflict between impulses -- for instance, when the impulse to grab that slice of banana cake comes into conflict with the solemn vow to stay on a diet -- a certain specific region of the cortex (the anterior cingulate cortex, hereafter ACC) lights up. That research suggested that the ACC involves itself when two other regions are locked in a struggle over the control of behavior. The ACC attempts to resolve the conflict in line with the subject's "higher" wishes -- in the case of this example, to stick to the diet. (The ghost of Freud might claim the ACC as the seat of the superego.) Braver was interested in those cases in which the ACC initially fails, but then gets progressively better at asserting itself until finally (for instance) the cake is refused.

One theory as to how this works is that the ACC learns by registering and remembering "failures" (where a failure means going off the diet), the conflicts associated with those failures, and the regions responsible for those conflicts. Whenever this organ detects a conflict it looks at the history of the regions involved and recruits resources accordingly; the more failures, the harder it knows it has to work.

Braver had another thought: maybe the ACC monitors the ambient environment independently of those regions and learns to associate the presence of specific stimuli with a high probability of conflict. When it sees a cake swim into the subject's field of view, alarm bells go off. The theory is that over time the ACC's grip on the events associated with and preceding the conflict improves. The more it improves, the earlier it can mobilize and the more effective it is. If a person can't learn to resist when the cake is right in front of him, maybe he can learn to avoid the situations where cake is likely to present itself, perhaps by telling the waiter to be sure to keep the dessert tray on the other side of the room.

Braver's thought sounds plausible but it has a defect: there is zero neuroanatomical evidence of an actual connection between the ACC and the sensory regions of the brain. While not fatal -- the brain has lots of ways of moving signals around -- this absence definitely weighs against the theory. If Braver had had to use in vitro methods to test his idea he might well have passed, since such tests are so expensive and difficult that only the best-founded hypotheses are likely to be deemed worth the risk and investment.

Fortunately, there was a cheaper alternative. Braver and Joshua Brown, a postdoctoral researcher at Washington University, devised a grid of thirty-odd interconnected model neurons meant to represent the ACC. They then found ways of forcing that model to process conflicting tasks under limitations that ensured the model would sometimes fail. Finally they built two variants of the model, each representing one of the two competing theories: one variant could detect and keep track of failure histories, while the other could understand and react to (simulated) sensory cues that were strongly associated (but not perfectly so) with failure.
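The contrast between the two variants lends itself to a toy illustration. The little script below is an assumption-laden sketch, not the actual Brown and Braver network: variant A raises its activity only when a conflict is present, scaled by its memory of past failures, while variant B learns how well a sensory cue predicts failure and responds to the cue itself. The numbers (how often the cue appears, how often it breeds conflict, how often the conflict ends in failure) are invented for illustration.

    import random

    random.seed(0)
    LEARN = 0.1          # learning rate for both variants (illustrative)
    history_A = 0.0      # variant A: running estimate of failure given conflict
    cue_weight_B = 0.0   # variant B: learned link between the cue and failure

    records = []
    for trial in range(500):
        cue = random.random() < 0.5                # the "cake" comes into view
        conflict = cue and random.random() < 0.8   # the cue usually breeds conflict
        failure = conflict and random.random() < 0.4

        a = history_A if conflict else 0.0         # A needs a live conflict to act
        b = cue_weight_B if cue else 0.0           # B needs only the cue
        records.append((cue, conflict, a, b))

        # learning: A tracks failure given conflict, B tracks failure given cue
        if conflict:
            history_A += LEARN * ((1.0 if failure else 0.0) - history_A)
        if cue:
            cue_weight_B += LEARN * ((1.0 if failure else 0.0) - cue_weight_B)

    # the tell-tale trials: the cue is present but no conflict ever develops
    quiet = [(a, b) for cue, conflict, a, b in records if cue and not conflict]
    print(len(quiet), "cue-present, conflict-free trials")
    print("  mean variant A activity:", sum(a for a, _ in quiet) / len(quiet))
    print("  mean variant B activity:", sum(b for _, b in quiet) / len(quiet))

On those conflict-free trials variant A stays silent while variant B is clearly active, which is the same qualitative signature the fMRI study described below looked for in the real ACC.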

When they ran these models they found that the cells in the variant keyed to sensory data became significantly more active than the cells running in the competing model, even in runs where there was no failure or even any conflict (essentially because the processing of the sensory data drove increases in the "strength" of the simulated synapses). The researchers then tested this prediction with an fMRI study in which a number of human subjects performed the same tasks that had been presented to the models. The real ACCs showed roughly the same activity increases as the programs simulating the sensory cue theory. Among other implications, these results suggest that neuroanatomists now have a good reason to double-check the apparent absence of a connection between the ACC and the outside world.

In these examples Urban and Braver are using modelling as a cross-check on laboratory work, a cheap way of testing competing hypotheses against each other, and a source of provocative possibilities. This is pretty much how every other science uses the discipline. However, the relation between neuroscience and simulations is in one respect sui generis, and that respect opens the door to a much more ambitious and conflicted application.

In no other science is the target of the simulation itself a simulation. But in this case, the paradigm of brain nature in which we are all enmeshed holds that the organ is itself a computational model, an information-based simulator, a processor of symbols. This means that once a computer model is running the same symbols, the same codes, that model will have captured everything, the entire phenomenon. The explanation and the thing being explained will have become logically interchangeable. A simulation of a river or a cloud or a stream of traffic always runs off the tracks eventually, usually quite soon. A true simulation of a brain never would.

This conflicts radically with our intuitions; we feel in our depths that a brain made out of salt, sugar, fat, and water must be fundamentally different than one composed of silicon, aluminum, copper, and gold. However, if you believe in the symbol processing theory of brain operation, and frankly it is hard to even dream up a coherent alternative, then our intuitions must be misplaced. It must be possible, at least in theory, to have a complete model of a human brain (and therefore a human mind) running in a computer.

Urban and Braver's research, with its hard, neuron-by-neuron slog through the details of individual circuits, makes this day seem very far off, and of course it could be. But the profession might get lucky, if lucky is the word we want. Thirty years ago the great neuroscientist Vernon Mountcastle (currently University Professor of Neuroscience, Emeritus, at the Zanvyl Krieger Mind/Brain Institute of The Johns Hopkins University) suggested that all the functionalities of the cortex -- thought, perception, memory -- as different as they seem to us, are really just variations on a single operational theme. Everywhere the cortex is doing the same thing. "All" that remained was to figure out what that thing is.

Mountcastle's proposal helped explain the functional flexibility of the cortex, in which the same regions can be taught to handle the inputs of several different senses, depending on need, and its visual uniformity, which makes it look as if it was doing the same thing everywhere. (The degree to which the idea of a common unit of operation can be extended to the non-cortical brain, with its distinct evolutionary history, is an open question.)

It also made the brain pleasingly compatible with the rest of nature. Modularity is just what you would expect to find in an evolved organ. Nature is immensely fond of designing a single common unit and reusing it with simple variations. It never reinvents the wheel if it can shrink or expand or add a little color to one already in inventory. Heredity rests on basically a single molecule (with variations), the universe of proteins is made by shuffling and reshuffling a few amino acids, animal cells are much alike, and the communications of cells and neurons are likewise stereotyped. Evolution just doesn't have time to indulge itself with complications. Usually, if it can't find a simple way to get somewhere it doesn't go.

Ever since Mountcastle published his conjecture there have been efforts to prove (or disprove) it. One conceptual approach is to try to match some candidate core function with an anatomical feature that seems to repeat throughout the cortex. Examples of the former might be pattern matching or prediction. It is provoking to reflect that while neuroscience has been struggling to find a core function for the cortex, artificial intelligence has been trying equally hard, and failing equally egregiously, to find an algorithm for pattern matching that would allow computers to see as well as pigeons, or navigate obstacles as well as rats, or hear half as well as bats. Perhaps some day the same bright idea will light up both fields. (Jeff Hawkins argues exactly this point in his recent book On Intelligence, though he prefers prediction as the unit of cortical function.)

Recently Switzerland's Ecole Polytechnique Federale de Lausanne struck a deal with IBM to use its supercomputing platform "Blue Gene" to model cortical columns, which many, but not all, researchers think are the repeating organ in the sense discussed here. (Cortical columns are defined by the observation of relatively greater densities of vertical than of horizontal connections.) There might be about a million of these columns in the human cortex, each composed of 50,000 neurons; the Lausanne researchers plan to start by modelling rat columns, which are much smaller. While the function of the cortical column is not known, the hope is that building a model and then exposing it to a realistic environment will instruct us on the point.

Modularity is only one of the tricks evolution uses to simplify its life. Another is creating hierarchies of emergent order: when the pancreas wants to tell the liver to store more glucose, all it has to do is squirt a little insulin into the blood. It doesn't need to know anything about how the liver works. The communication is on the level of the organ, not the level of the cell, let alone the molecule. Each level in this hierarchy is insulated from the complexities of the one(s) below. If there are levels of emergent order in the brain -- and there almost certainly are -- then we should be able to simulate useful brain activity quite high up in the "stack," without worrying about neurons or circuits or perhaps even cortical columns.

In other words, the brain might not be anywhere near as complicated as we think it is. All we know for sure is that we are confused, and there is more than one explanation for that. Seventy years ago protein chemists were convinced the protein was the unit of inheritance, and they wandered around in circles a lot too. Then one day Watson and Crick turned on the lights and molecular genetics took off. There is a popular joke among neuroscientists: "It's a good thing the brain is so complicated, otherwise we'd never figure it out." Perhaps the joke is on us: maybe the brain really isn't that complicated, and that's why we haven't figured it out. But surely some day we will.

When and if large-scale brain simulations become possible they will raise quite intricate ethical questions. One use sometimes mentioned for full-brain simulations is as a testbed for brain treatments. Suppose you found a way to stimulate neural growth in ways that repaired memory deterioration (or just improved memory, period) in lab animals. The animals seem to tolerate the intervention just fine. In theory you should now be able to take your treatment into human testing, but in reality no human-subjects or bioethics committee (and especially not the FDA) would ever approve experiments that introduced changes in a human brain without a lot of data about long-term effects, data that cannot be collected until such experiments, or experiments that raise the same issues, are approved. This Catch-22 seems to stand between a very wide range of brain disorders and dysfunctions, from sensory loss to cognitive deterioration, from dementia to autism, and any hope of a cure. (Not to mention the even longer list of possible enhancements.)

In theory a fully functioning, validated brain model could be used to advance the testing process, putting off the need to use real brains for as long as possible. But if the model runs the same logical processes as real brains do, don't you run into the same problem? And if you modify the model to avoid this problem by making it less "real," doesn't it lose its usefulness? Maybe not, but how can you ever know?

An even more difficult question will be the nature of our relations to these models, these entities. By definition, a successful simulation will respond to (simulated) situations the way real humans would. It would be defensive, sympathetic, whimsical, impatient, interested, and generous (or not). If such models act just like humans -- asking us plaintively where they came from, developing interests and pursuing them, figuring out ways to prevent us from turning them off, and the like -- then it would seem they ought to be treated as human. Among other implications, we would become as reluctant to experiment on the models as we would on real brains. In every other science, the more accurate and comprehensive simulations get, the better. Here, the more progress neuroscience makes, the closer it gets to shutting a door on itself.

On the other hand, simulations are by their nature copyable entities. Push a button -- or maybe they would get on the internet and push the button themselves -- and you can get a million of them. That fact constitutes what seems like an uncrossable barrier to treating these entities as human, no matter how much like us they might be in other respects. Given infinite reproducibility, there are certain kinds of relations we will not be able to enjoy with our models, no matter how human they seem. Giving them the right to vote is the least of it.

From this distance these questions seem uniquely difficult. It is hard to conceptualize even candidate answers to such questions, but even harder to see why we won't have to.