Neuroscience Goes Big
Much has been made recently of President Obama’s announcement of the $100 million BRAIN initiative to…well, to do what exactly? Some scientists exude optimism about the project, perhaps because they’re simply heartened to hear there’s money on the table for research. Other scientists are highly critical, citing the initiative’s lack of focus. Will the BRAIN initiative mean big-government intervention in the process of science? Will it scavenge resources from other important scientific initiatives? Will it produce vast mounds of data that we do not yet have a coherent way of processing and analyzing? Maybe. It all depends on what the BRAIN initiative really is.

In an influential article in the journal Neuron, Paul Alivisatos and colleagues describe a “Brain Activity Map” (BAM) project that would allow “imaging every spike from every neuron.” They argue that such a technical capability would let neuroscientists observe complex interactions between neurons in ways never before possible, leading to a better understanding of how those interactions give rise to the emergent properties of our brains that give birth to our minds. Measuring every spike from every neuron is an ambitious goal that will require major technological advances, but supposing we could do it, should we? Partha Mitra has called the concept “ill-formed,” pointing out not only that we would struggle to interpret all the spike data without comprehensive information about inputs to the system (external stimuli) and outputs (behavior), but also that we could not, even in principle, derive information about circuit function – how particular neurons influence the spiking patterns of other neurons – without detailed connectivity maps. In contrast, connectivity maps, combined with exhaustive information about the physiology of all neuronal cell types, could in principle predict spike patterns (though achieving that goal would be hard too).
Due in part to the fierce criticism of the BAM proposal by Mitra and others, discussion of the official BRAIN initiative’s goals has backed off the “every spike from every neuron” paradigm, though without really specifying a replacement goal. So far, descriptions of BRAIN have dealt mostly in generalities. The name BRAIN itself does not pin things down: BRAIN stands for “Brain Research through Advancing Innovative Neurotechnologies.” The only clue this acronym offers as to BRAIN’s goals is that it will likely emphasize technology development. It is highly possible that such technology development will actually be right along the lines of the BAM proposals – and that’s not a bad thing. Yes, Partha Mitra is right that we will need information about cell types and connectivity to make sense of spike data, but parallel projects addressing those issues – the Allen Brain Atlas, the GENSAT project, and others – are underway and can be further developed (with adequate financial support). We don’t need to wait for perfect connectivity and physiology data to get to work decoding spikes, the language of the brain.
However, we also don’t need to be able to measure every spike from every neuron at the same time in order to understand how they all work together. Separate studies of individual circuit elements can be tied together by fitting real neural data into a computer model. Enter the Blue Brain Project, Henry Markram’s brainchild (pun…intended?), recently funded by the European Commission to the tune of 1 billion euros. The Blue Brain Project is at least as insanely ambitious as BAM. Its ultimate goal is to create a “complete virtual brain,” a model of the brain in silico down to the last molecular detail. Like BAM, the Blue Brain Project has faced intense skepticism about its scientific value. The main criticism is that there is not yet enough data to construct a reasonably realistic model. If the model isn’t accurate, then however pretty the neural patterns it produces, it won’t generate meaningful new concepts of brain function. Additionally, without the ability to plug in proper inputs and measure outputs, it’s hard to evaluate what the model is doing. That criticism applies to the Blue Brain Project’s first attempt to model a rat cortical column.
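To make the idea of “fitting real neural data into a computer model” concrete, here is a minimal sketch in Python – nothing like the Blue Brain Project’s actual detailed multicompartment simulations – of two leaky integrate-and-fire neurons, one driving the other. The time constant, external drives, and synaptic strength stand in for quantities that would be measured in separate physiology experiments; every number here is a hypothetical placeholder.

```python
# Toy sketch: two leaky integrate-and-fire neurons, A driving B.
# All parameter values are hypothetical stand-ins for measured quantities.
import numpy as np

def simulate_pair(tau_m=0.02, w_syn=0.4, dt=0.001, t_max=1.0):
    drive = np.array([1.2, 0.8])   # external drive: A suprathreshold, B subthreshold
    v = np.zeros(2)                # membrane potentials (spike threshold = 1.0)
    spikes = [[], []]              # spike times for A and B
    a_spiked_last_step = False
    for step in range(int(t_max / dt)):
        v += (dt / tau_m) * (drive - v)   # leaky integration toward the drive
        if a_spiked_last_step:
            v[1] += w_syn                 # A's spike gives B a voltage kick
        a_spiked_last_step = False
        for i in (0, 1):
            if v[i] >= 1.0:               # threshold crossing -> spike
                spikes[i].append(step * dt)
                v[i] = 0.0                # reset after the spike
                if i == 0:
                    a_spiked_last_step = True
    return spikes

spikes_a, spikes_b = simulate_pair()
print(f"A: {len(spikes_a)} spikes/s, B: {len(spikes_b)} spikes/s")
```

The point of the toy is not the model itself but the workflow: each parameter is something a separate experiment could pin down, and the simulation ties those separate measurements together.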
Despite these problems, the Blue Brain Project constitutes an important step forward in neuroscience. The brain, even a rat brain or a bird brain or a fish brain, is far too complicated for our brains to comprehend, so we are going to need to rely heavily on computers to make progress. Many Blue Brain critics seem put off by the idea of a comprehensive brain model because the model isn’t “real.” Perhaps they would be more receptive to the idea if they were to think of it as a theory, a current best guess about how the brain works that could be constantly updated and compared to new incoming data from real brains. Without such a unifying theory, in fact, neuroscience may get quite lost in “big data.”
Models have always been an indispensable part of the scientific method. Without them, even in the presence of many, many empirical observations, no progress can be made. Empirical observations need to be fit into some kind of theoretical framework to be interpretable. Karl Popper, the great philosopher of science, famously argued that science progresses not by verifying facts but by proposing falsifiable theories, which stand until they are proven false by disconfirming data. Science advances by an evolutionary process in which multiple hypotheses compete to explain a phenomenon, and those that best withstand rigorous attempts at refutation survive…at least until they are replaced by hypotheses that fit the data even better.
A “complete virtual brain” is simply an attempt to put a bunch of good neuroscience hypotheses together into a falsifiable theory of brain function. Doing so will not only organize our hypotheses for us; it will also help us expose contradictions that may have gone unnoticed. If our best data about the synaptic and cellular physiology of a neuronal subtype can’t accurately predict our in vivo spike data, we’d like to know about that failure and figure out why it happens.
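As a cartoon of that kind of consistency check, the sketch below compares a model’s predicted firing rates for a hypothetical cell type against equally hypothetical in vivo measurements and flags the conditions where the two disagree. The rates and the 20% tolerance are made-up placeholders, not real data.

```python
# Hypothetical consistency check: does the model's prediction for one cell type
# match what was actually recorded in vivo? All numbers are placeholders.
import numpy as np

predicted_hz = np.array([4.2, 11.0, 0.8])   # model prediction, one rate per condition
measured_hz = np.array([4.0, 18.5, 0.9])    # hypothetical in vivo recordings

relative_error = np.abs(predicted_hz - measured_hz) / measured_hz
for cond, err in enumerate(relative_error):
    status = "OK" if err < 0.20 else "MISMATCH -> revisit the model (or the data)"
    print(f"condition {cond}: error {err:.0%} {status}")
```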
Additionally, a complete virtual brain could help generate new falsifiable hypotheses that fit our empirical data better than the ones we have now. At the moment, because of our own limited working-memory capacity, we scientists tend to simplify our findings and models into terms that humans can most easily understand. Take the Basal Ganglia, for example. The Basal Ganglia are a set of highly interconnected subcortical brain nuclei that we think play some role in action selection and motor learning. Oftentimes scientists refer to the Basal Ganglia as having a “direct” and an “indirect” pathway, represented like this:
The reality, though, is a lot messier. The real circuit diagram looks more like this:
The problem with the real (or closer-to-real) diagram is that it’s way too complicated for our working memories. With the simple Basal Ganglia diagram, after the one tricky step of working out the double negative in the indirect pathway, we can easily point out how the direct and indirect pathways have opposite effects on the output of the circuit. With the complicated Basal Ganglia diagram, it’s almost impossible to gain an intuitive understanding of what’s going on. Forgetting about the details of the complicated diagram is tempting (believe me, I’ve done it a lot), but that doesn’t change the reality that in order to truly understand Basal Ganglia circuit function, we’re going to need to get our hands dirty.
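For readers who want that “double negative” spelled out, here is a tiny sketch of the textbook logic: treat each connection as +1 (excitatory) or -1 (inhibitory) and multiply the signs along a pathway to get its net effect on thalamus. This encodes only the simplified diagram, not the messy closer-to-real one.

```python
# Net sign of each simplified Basal Ganglia pathway: +1 excitatory, -1 inhibitory.

def net_sign(connections):
    """Product of connection signs along a pathway."""
    sign = 1
    for s in connections:
        sign *= s
    return sign

# Direct pathway: striatum -| GPi/SNr -| thalamus
direct = net_sign([-1, -1])
# Indirect pathway: striatum -| GPe -| STN -> GPi/SNr -| thalamus
indirect = net_sign([-1, -1, +1, -1])

print(f"direct pathway net effect on thalamus:   {'+' if direct > 0 else '-'}")   # + (disinhibition)
print(f"indirect pathway net effect on thalamus: {'+' if indirect > 0 else '-'}")  # - (net inhibition)
```

Multiplying two inhibitory links gives the “double negative” of the direct pathway (disinhibition of thalamus), while the extra links of the indirect pathway flip the net effect to inhibition – easy enough with two pathways, but hopeless to do in your head for the closer-to-real diagram.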
What are the tools we need to understand and test the function of the real Basal Ganglia? First, we are going to need BAM/BRAIN-like tools to separate out the various cell types (by gene expression or connectivity, or some combination thereof), track their spiking (preferably tracking multiple identified cell types in large numbers in the same test subject to increase the power of our experiments), and manipulate their spiking (to probe causal relations between the spiking of one cell or cell type, the spiking of all the other cells in the circuit, and the behavior of the test subject). Next, we are going to need a theoretical framework to fit all those empirical observations together. Say we learn that when an animal is performing a particular behavior, Basal Ganglia cell type A is spiking in pattern X, cell type B is spiking in pattern Y, and cell type C is spiking in pattern Z. Using a model of the complicated Basal Ganglia circuit, we can plug in patterns X, Y, and Z for cell types A, B, and C and ask: how would incoming spike patterns from cortex be transformed by such activity into outgoing spike patterns in the substantia nigra pars reticulata (SNr), one of the Basal Ganglia’s output nuclei? We can then compare the model’s predictions to the results of future experiments in which we manipulate spiking in cortex and measure spiking in SNr. If the theoretical and experimental values differ, we can play with the model, tweaking some of the parameters we were not measuring during our initial experiments, to get a better fit. We can then go back and specifically try to measure those key parameters we think might need to be adjusted in the model…and so on. A very productive back-and-forth between theory and experiment can be established.
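Here is a bare-bones sketch of that theory-experiment loop, with a deliberately crude linear rate “model” standing in for a realistic Basal Ganglia simulation; the firing rates, the weights, and the single unmeasured parameter are all hypothetical.

```python
# Hypothetical theory-experiment loop: fit one unmeasured model parameter so that
# predicted SNr output matches the measured value, then go measure that parameter.
import numpy as np

measured_rates = {"A": 12.0, "B": 5.0, "C": 30.0}   # spikes/s for cell types A, B, C
cortical_input = 20.0                                # spikes/s, hypothetical
measured_snr_output = 18.0                           # spikes/s, from the experiment

def model_snr_output(rates, ctx, w_unmeasured):
    """Toy linear model: SNr rate as a weighted sum of cortex and cell types A, B, C."""
    return (0.5 * ctx + 0.8 * rates["A"] - 0.6 * rates["B"]
            + w_unmeasured * rates["C"])

# Sweep the parameter we never measured and keep the value that best matches the
# experiment; a real study would fit many parameters against many conditions.
candidates = np.linspace(-1.0, 1.0, 201)
errors = [abs(model_snr_output(measured_rates, cortical_input, w) - measured_snr_output)
          for w in candidates]
best_w = candidates[int(np.argmin(errors))]
print(f"best-fitting unmeasured weight: {best_w:.2f}")
print("next experiment: measure that connection directly and update the model")
```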
The ability to establish many such interchanges addressing different aspects of brain function – especially as collaborations across labs – depends critically on how BRAIN and the Blue Brain Project are handled. New tools should be shared as widely and rapidly as possible. The large, complicated datasets generated by these new tools should be shared in their raw form. The central Blue Brain Project model should be curated carefully, with its source code open and available for neuroscientists everywhere to use for their own specific research goals. Done right, BRAIN and the Blue Brain Project can work synergistically to accelerate progress in neuroscience, improving our understanding of ourselves and, just maybe, leading to tangible benefits in the form of neural prosthetics and cures for the often heartbreaking afflictions of the mind, such as Alzheimer’s, autism, and schizophrenia.