© 2000 Nature America Inc. • http://neurosci.nature.com

history

The basic unit of computation

Anthony M. Zador
What is the basic computational unit of
the brain? The neuron? The cortical column? The gene? Although to a neuroscientist this question might seem poorly
formulated, to a computer scientist it is
well-defined. The essence of computation is nonlinearity. A cascade of linear
functions, no matter how deep, is just
another linear function—the product of
any two matrices is just another matrix—
so it is impossible to compute with a
purely linear system. A cascade of the
appropriate simple nonlinear functions,
by contrast, permits the synthesis of any
arbitrary nonlinear function, including
even that very complex function we use
to decide, based on the activation of
photoreceptors in our retinae, whether
we are looking at our grandmother. The
basic unit of any computational system,
then, is its simplest nonlinear element.
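The matrix identity behind this argument is easy to check numerically. The sketch below (plain NumPy, invented dimensions, nothing from the article assumed) composes two random linear maps and confirms that the cascade collapses to a single matrix, while a pointwise nonlinearity between the stages breaks the collapse.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))    # first linear stage
B = rng.standard_normal((3, 4))    # second linear stage
x = rng.standard_normal(5)         # input vector

# A cascade of linear stages collapses to the single matrix C = B A ...
C = B @ A
assert np.allclose(B @ (A @ x), C @ x)

# ... but a pointwise nonlinearity between the stages does not.
assert not np.allclose(B @ np.tanh(A @ x), C @ x)
```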
In a digital computer, the basic nonlinearity is of course the transistor. In the
brain, however, the answer is not as clear.
Among brain modelers, the conventional view, first enunciated by McCulloch
and Pitts1, is that the single neuron represents the basic unit. In these models, a
neuron is usually represented as a device
that computes a linear sum of the inputs
it receives from other neurons, weighted
perhaps by the strengths of synaptic connections, and then passes this sum
through a static nonlinearity (Fig. 1a).
From this early formulation, through the
first wave of neural models in the sixties2
and on through the neural network
renaissance in the eighties3, the saturating or sigmoidal relationship between
input and output firing rate has been
enshrined as the essential nonlinearity in
most formal models of brain computation4. Synapses are typically regarded as
simple linear elements whose essential
role is in learning and plasticity.
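The scheme of Fig. 1a reduces to a few lines of code. This is a generic sketch of a McCulloch–Pitts-style unit with a sigmoid output nonlinearity; the input rates and weights are invented purely for illustration.

```python
import numpy as np

def sigmoid_unit(inputs, weights):
    """Fig. 1a-style unit: a weighted linear sum of the inputs
    passed through a static, saturating nonlinearity."""
    s = np.dot(weights, inputs)          # linear synaptic summation
    return 1.0 / (1.0 + np.exp(-s))      # sigmoidal output 'firing rate'

# Illustrative values only: three presynaptic rates and synaptic weights.
rates = np.array([0.2, 0.9, 0.4])
w = np.array([1.5, -0.7, 2.0])
out = sigmoid_unit(rates, w)
assert 0.0 < out < 1.0                   # output saturates between 0 and 1
```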
Brain modelers have made some
attempts to elaborate the basic formulation of McCulloch and Pitts. Many neurons have complex spine-studded
dendritic trees, which scientists have
speculated might provide a substrate for
further linear5 or nonlinear6,7 processing
(see Koch and Segev, this issue). The focus of even these theories nevertheless remains on the postsynaptic element as the locus of computation.

Fig. 1. Location of the essential nonlinearity. (a) Standard model of processing. Inputs 1–n from other neurons are multiplied by the corresponding passive synaptic weights w, summed (Σ) and then passed through a nonlinearity (S). (b) An alternative model of processing in which the synapses themselves provide the essential nonlinearity.

The author is at Cold Spring Harbor Laboratory, 1 Bungtown Road, Cold Spring Harbor, New York 11724, USA. e-mail: [email protected]
Experimentalists have recognized for
decades that a synapse is not merely a
passive device whose output is a linear
function of its input, but is instead a
dynamic element with complex nonlinear behavior8. The output of a synapse
depends on its input, because of a host
of presynaptic mechanisms, including
paired-pulse facilitation, depression, augmentation and post-tetanic potentiation.
In many physiological experiments
designed to study the properties of
synapses, stimulation parameters are
chosen specifically to minimize these
nonlinearities, but they can dominate the
synaptic responses to behaviorally relevant spike trains9. Quantitative models
were developed to describe these phenomena at the neuromuscular junction
more than two decades ago10 and at central synapses more recently11,12.
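A first-order sketch of one such presynaptic nonlinearity, short-term depression, in the spirit of the resource-depletion models of refs. 11 and 12 (the parameter names and values here are my own illustrative choices, not taken from those papers):

```python
import math

def depressing_synapse(spike_times, U=0.5, tau_rec=300.0):
    """Resource-depletion sketch: each spike releases a fraction U of the
    available resources R, and R recovers toward 1 with time constant
    tau_rec (ms). Returns the relative response amplitude for each spike."""
    R, last_t, amps = 1.0, None, []
    for t in spike_times:
        if last_t is not None:                 # recovery since the last spike
            R = 1.0 - (1.0 - R) * math.exp(-(t - last_t) / tau_rec)
        amps.append(U * R)                     # response scales with resources
        R -= U * R                             # depletion by this spike
        last_t = t
    return amps

burst = depressing_synapse([0.0, 20.0, 40.0, 60.0])   # a rapid burst (ms)
assert burst[0] > burst[1] > burst[2] > burst[3]      # successive responses depress
```

Because each response depends on the preceding interspike intervals, the synapse's output is a nonlinear function of its input train rather than a fixed weight.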
There has been growing recognition
that these synaptic nonlinearities may be
important in computation. Nonlinear
synapses have been postulated to underlie specific functions, such as gain control12 or temporal responsiveness of
neurons in area V1 (ref. 13). They have
also been considered in the context of
more general models of network computation14–16, and it has been rigorously proven that such networks can
implement a very rich class of computations17. Common to all these models is
the notion that synapses do more than
just provide a substrate for the long-lasting changes underlying learning and
memory; they are critical in the computation itself.
What is the basic unit of computation
in the brain? For over five decades since
McCulloch and Pitts, neural models have
focused on the single neuron, but it is
interesting to speculate whether this is a
historical accident. If McCulloch and
Pitts had happened to have offices down
the hall from the synaptic physiology laboratory of Bernard Katz, might their
basic formulation have emphasized the
nonlinearities of the synapse instead? The
challenge now is to figure out which, if
any, of the experimental discoveries
made since McCulloch and Pitts are
actually important to how we formulate
our models of the networks that underlie
neural computation.
1. McCulloch, W. S. & Pitts, W. Bull. Math.
Biophys. 5, 115–133 (1943).
2. Rosenblatt, F. Principles of Neurodynamics
(Spartan, New York, 1962).
3. Hopfield, J. J. Proc. Natl. Acad. Sci. USA 79,
2554–2558 (1982).
4. Hertz, J., Krogh, A. & Palmer, R. G.
Introduction to the Theory of Neural
Computation (Addison-Wesley, Redwood City,
California, 1991).
5. Rall, W. in The Handbook of Physiology, The
Nervous System Vol. 1, Cellular Biology of
Neurons (eds. Kandel, E. R., Brookhart, J. M. &
Mountcastle, V. B.) 39–97 (American Physiol.
Soc., Bethesda, Maryland, 1977).
6. Shepherd, G. M. et al. Proc. Natl. Acad. Sci. USA
82, 2192–2195 (1985).
7. Mel, B. W. Neural Comput. 6, 1031–1085
(1994).
8. del Castillo, J. & Katz, B. J. Physiol. (Lond.) 124,
574–585 (1954).
9. Dobrunz, L. E. & Stevens, C. F. Neuron 22,
157–166 (1999).
10. Magleby, K. L. Prog. Brain Res. 49, 175–182
(1979).
11. Tsodyks, M. V. & Markram, H. Proc. Natl. Acad.
Sci. USA 94, 719–723 (1997).
12. Abbott, L. F., Varela, J. A., Sen, K. & Nelson,
S. B. Science 275, 220–224 (1997).
13. Chance, F. S., Nelson, S. B. & Abbott, L. F.
J. Neurosci. 18, 4785–4799 (1998).
14. Maass, W. & Zador, A. M. Neural Comput. 11,
903–917 (1999).
15. Liaw, J. S. & Berger, T. W. Proc. IJCNN 3,
2175–2179 (1998).
16. Little, W. A. & Shaw, G. L. Behav. Biol. 14,
115–133 (1975).
17. Maass, W. & Sontag, E. D. Neural Comput. 12,
1743–1772 (2000).
nature neuroscience supplement • volume 3 • november 2000
history

Computation by neural networks

Geoffrey E. Hinton
Networks of neurons can perform computations that have proved very difficult
to emulate in conventional computers.
In trying to understand how real nervous systems achieve their remarkable
computational abilities, researchers have
been confronted with three major theoretical issues. How can we characterize
the dynamics of neural networks with
recurrent connections? How do the
time-varying activities of populations
of neurons represent things? How are
synapse strengths adjusted to learn these
representations? To gain insight into
these difficult theoretical issues, it has
proved necessary to study grossly idealized models that are as different from
real biological neural networks as apples
are from planets.
The 1980s saw major progress on all
three fronts. In a classic 1982 paper 1,
Hopfield showed that asynchronous
networks with symmetrically connected neurons would settle to locally stable
states, known as ‘point attractors’, which
could be viewed as content-addressable
memories. Although these networks
were both computationally inefficient
and biologically unrealistic, Hopfield’s
work inspired a new generation of
recurrent network models; one early
example was a learning algorithm that
could automatically construct efficient
and robust population codes in ‘hidden’
neurons whose activities were never
explicitly specified by the training environment2.
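Hopfield's settling behavior is simple to demonstrate. The sketch below (an illustrative toy, not taken from the text) stores one pattern in symmetric Hebbian weights and recovers it from a corrupted cue by repeated asynchronous updates, which can only lower the network's energy.

```python
import numpy as np

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)   # symmetric Hebbian weights
np.fill_diagonal(W, 0.0)                       # no self-connections

state = pattern.copy()
state[:3] *= -1                                # corrupt three bits of the cue

for _ in range(5):                             # repeated asynchronous sweeps
    for i in range(len(state)):                # update one unit at a time
        state[i] = 1 if W[i] @ state >= 0 else -1

# The network has settled to the stored point attractor.
assert np.array_equal(state, pattern)
```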
The 1980s also saw the widespread
use of the backpropagation algorithm
for training the synaptic weights in both
feedforward and recurrent neural networks. Backpropagation is simply an
efficient method for computing how
changing the weight of any given
synapse would affect the difference
between the way the network actually
behaves in response to a particular
training input and the way a teacher
desires it to behave3.

The author is in the Gatsby Computational Neuroscience Unit, University College London, 17 Queen Square, London WC1N 3AR, UK. e-mail: [email protected]

Backpropagation is not a plausible model of how real
synapses learn, because it requires a
teacher to specify the desired behavior
of the network, it uses connections
backward, and it is very slow in large
networks. However, backpropagation
did demonstrate the impressive power
of adjusting synapses to optimize a performance measure. It also allowed psychologists to design neural networks
that could perform interesting computations in unexpected ways. For example, a recurrent network that is trained
to derive the meaning of words from
their spelling makes very surprising
errors when damaged, and these errors
are remarkably similar to those made by
adults with dyslexia4.
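Hinton's description of backpropagation (compute how changing the weight of any given synapse would affect the output error) can be made concrete in a small sketch. The network here, one tanh hidden layer with squared error, and all its values are invented; the backpropagated gradient is checked against a finite-difference estimate of the same derivative.

```python
import numpy as np

def forward(x, W1, W2):
    h = np.tanh(W1 @ x)                 # hidden-unit activities
    return h, W2 @ h                    # linear output units

def loss(y, t):
    return 0.5 * np.sum((y - t) ** 2)   # squared error vs. the teacher

rng = np.random.default_rng(0)
x, t = rng.standard_normal(3), rng.standard_normal(2)      # input, target
W1, W2 = rng.standard_normal((4, 3)), rng.standard_normal((2, 4))

# Backpropagation: apply the chain rule layer by layer, using the
# connections 'backward' to route the output error to every synapse.
h, y = forward(x, W1, W2)
dy = y - t                               # dE/dy
dW2 = np.outer(dy, h)                    # dE/dW2
dh = W2.T @ dy                           # error propagated back through W2
dW1 = np.outer(dh * (1 - h ** 2), x)     # through the tanh to dE/dW1

# Check one backpropagated derivative against a finite difference.
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
num = (loss(forward(x, W1p, W2)[1], t) - loss(y, t)) / eps
assert abs(num - dW1[0, 0]) < 1e-4
```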
The practical success of backpropagation led researchers to look for an
alternative performance measure that
did not involve a teacher and that could
easily be optimized using information
that was locally available at a synapse. A
measure with all the right properties
emerges from thinking about perception
in a peculiar way: the widespread existence of top-down connections in the
brain, coupled with our ability to generate mental images, suggests that the
perceptual system may literally contain
a generative model of sensory data. A
generative model stands in the same
relationship to perception as do computer graphics to computer vision. It
allows the sensory data to be generated
from a high-level description of the
scene. Perception can be seen as the
process of inverting the generative
model—inferring a high-level description from sensory data under the
assumption that the data were produced
by the generative model. Learning then
is the process of updating the parameters of the generative model so as to
maximize the likelihood that it would
generate the observed sensory data.
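The learning rule just stated, adjusting parameters to maximize the likelihood of the observed data, can be illustrated with the simplest generative model imaginable: a unit-variance Gaussian with unknown mean. Gradient ascent on the log likelihood drives the model mean toward the sample mean, which is the maximum-likelihood answer. (A toy of my own, not a model from the text.)

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=500)   # the 'sensory data'

mu = 0.0                        # generative model: data ~ N(mu, 1)
lr = 0.1
for _ in range(100):
    grad = np.mean(data - mu)   # gradient of average log-likelihood wrt mu
    mu += lr * grad             # gradient ascent: make the data more likely

# The maximum-likelihood mu is exactly the sample mean.
assert abs(mu - data.mean()) < 1e-3
```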
Many neuroscientists find this way
of thinking unappealing because the
obvious function of the perceptual system is to go from the sensory data to a
high-level representation, not vice versa.
But to understand how we extract the
causes from a particular image
sequence, or how we learn the classes of
things that might be causes, it is very
helpful to think in terms of a top-down,
stochastic, generative model. This is
exactly the approach that statisticians
take to modeling data, and recent
advances in the complexity of such statistical models5 provide a rich source of
ideas for understanding neural computation. All the best speech recognition programs now work by fitting a probabilistic generative model.
If the generative model is linear, the
fitting is relatively straightforward but
can nevertheless lead to impressive
results6,7. There is good empirical evidence that the brain uses generative
models with temporal dynamics for
motor control8 (see also ref. 9, this
issue). If the generative model is nonlinear and allows multiple causes, it can
be very difficult to compute the likely
causes of a pattern of sensory inputs.
When exact inference is unfeasible, it is
possible to use bottom-up, feedforward
connections to activate approximately
the right causes, and this leads to a
learning algorithm for fitting hierarchical nonlinear models that requires only
information that is locally available at
synapses10. So far, theoretical neuroscientists have considered only a few simple types of nonlinear generative model.
Although these have produced impressive results, it seems likely that more
sophisticated models and better fitting
techniques will be required to make
detailed contact with neural reality.
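For the linear case, 'inverting the generative model' is ordinary least squares. In the minimal sketch below (dimensions and noise level invented), data are generated as x = As + noise from hidden causes s, and perception recovers the causes most likely to have produced x.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))                   # generative map: causes -> data
s_true = rng.standard_normal(3)                    # hidden causes of the scene
x = A @ s_true + 0.01 * rng.standard_normal(10)    # noisy sensory data

# Perception as inversion: infer the causes most likely to have produced x.
s_hat, *_ = np.linalg.lstsq(A, x, rcond=None)
assert np.allclose(s_hat, s_true, atol=0.1)
```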
1. Hopfield, J. J. Proc. Natl. Acad. Sci. USA 79,
2554–2558 (1982).
2. Hinton, G. E. & Sejnowski, T. J. in Parallel
Distributed Processing: Explorations in the
Microstructure of Cognition. Vol. 1
Foundations (eds. Rumelhart, D. E. &
McClelland, J. L.) 282–317 (MIT Press,
Cambridge, Massachusetts, 1986).
3. Rumelhart, D. E., Hinton, G. E. & Williams,
R. J. Nature 323, 533–536 (1986).
4. Plaut, D. C. & Shallice, T. Cognit.
Neuropsychol. 10, 377–500 (1993).
5. Cowell, R. G., Dawid, A. P., Lauritzen, S. L.
& Spiegelhalter, D. J. Probabilistic Networks
and Expert Systems (Springer, New York,
1999).
6. Bell, A. J. & Sejnowski, T. J. Neural Comput.
7, 1129–1159 (1995).
7. Olshausen, B. A. & Field. D. J. Nature 381,
607–609 (1996).
8. Wolpert, D. M., Ghahramani Z. & Jordan,
M. I. Science 269, 1880–1882 (1995).
9. Wolpert, D. M. & Ghahramani, Z. Nat.
Neurosci. 3, 1212–1217 (2000).
10. Hinton, G. E., Dayan, P., Frey, B. J. & Neal,
R. Science 268, 1158–1161 (1995).
Viewpoint • Models are common; good theories are scarce
I like to draw a distinction between models and theories, and, although the dividing line can be fuzzy, I still think the difference is a real one.
Models describe a particular phenomenon or process, and theories deal with a larger range of issues and identify general organizing
principles. For example, one might make a model of some aspect of synaptic transmission and use this model to connect observations
(fluorescence intensity as a function of time in an imaging experiment, for example) to some mechanistic aspect of synaptic function (such as
vesicle recycling). A theory of synaptic transmission, by contrast, would have to account for many properties of synapse function, and relate
these properties to principles of information processing. Such a theory might unify models of various forms of short-term plasticity
(facilitation, depletion, augmentation and so on) and describe how dynamic filtering characteristics resulting from this plasticity optimize
some aspect of information transfer. Models have a long history in neurobiology, from cable theory through the Hodgkin-Huxley equations,
and at least some models are recognized as having been essential for the development of our subject. Theories, on the other hand, are
scarce, and I cannot think of one that has made a really significant contribution to neurobiology.
Even so, I still believe that theories will be important—indeed vital—for further advances in the field. The reason for this belief is my
observation that many areas of biology have progressed pretty much as far as they can by the current techniques of systematically changing one
variable at a time to determine what causes what. For example, we have a pretty good idea about what V1 and MT do (although not how the
neural circuits do it), but little notion about the function of the other three dozen visual areas. The approach that has been successful for
understanding V1 and MT—noticing that certain stimulus properties induce firing of cortical neurons and then systematically characterizing
those stimulus properties—may work for a few more visual areas, but I believe the parameter space that must be explored is too large for this
approach to be successful for all visual areas. The stimulus parameters needed to describe V1 receptive fields are simple, but we do not even
know how to characterize the complex receptive fields in inferotemporal cortex. We will need to develop theories of vision to guide experiments.
The development of theoretical neurobiology will come slowly, though, for at least two reasons. The first is that theory in biology is hard.
In physics, everyone knows the important questions (how do you explain high-temperature superconductivity?), and the trick is to get an
answer. In biology, however, one must simultaneously figure out the question to ask and how to answer it; this makes things both more
difficult and more interesting. A second, related problem is that neurobiology lacks general laws (like the second law of thermodynamics)
that can give traction in a problem; in biology, we must not only identify questions, but we need to formulate principles that can serve as the
basis for general statements.
In discussions with colleagues, I detect an easing of the hostility toward theory that was common among experimental neurobiologists
in the past, and I find a general acceptance of the notion that we must have theory in neurobiology. This atmosphere of acceptance is an
essential ingredient for a theoretical neurobiology. Now the theorists must actually produce something of use.
CHARLES F. STEVENS
Molecular Neurobiology Laboratory, The Salk Institute, 10010 N. Torrey Pines Road, La Jolla, California 92037, USA
e-mail: [email protected]
Viewpoint • In the brain, the model is the goal

Both computational and empirical studies use models of neural tissue to make inferences about the intact system. Their aims and scope are complementary, however, and their methods have different strengths and weaknesses. For example, much of our knowledge of synaptic integration comes from in vitro slices. These slices, which finish out their brief lives in man-made extracellular fluid, are crude models of the intact in vivo brain, with deeper resting potentials, lower background firing rates, higher input resistances, severed inputs, and so on. Test pulses delivered to a nerve or puffs of glutamate to a dendritic branch are crude models of synaptic stimulation. Recordings of one or two voltages within a spatially extended neuron provide a highly reduced model of the cell's electrical state. Similarly, long-term potentiation is a simplified model for learning, and high-contrast bars on a gray background are simplified models for visual stimulation. Yet many things have been learned from experiments on such simplified empirical models, the results of which — often called 'data' — underlie our current primitive understanding of brain function.

In contrast, computer studies use models whose elements and principles of operation are explicit, usually encoded in terms of differential equations or other kinds of laws. These models are extremely flexible, and subject only to the limitations of available computational power: any stimulus that can be conceptualized can be delivered, any measurement made, and any hypothesis tested. In a model of a single neuron, for example, it is simple to deliver separate impulse trains to 1,000 different synapses, controlling the rate, temporal pattern of spikes within each train (periodic, random, bursty), degree of correlation between trains, spatial distribution of activated synaptic contacts (clustered, distributed, apical or basal, branch tips, trunks), spatiotemporal mix of excitation and inhibition, and so on. Furthermore, every voltage, current, conductance, chemical concentration, phosphorylation state or other relevant variable can be recorded at every location within the cell simultaneously. And if necessary, the experiment can be exactly reproduced ten years later.

Nor are such experiments confined to reality: computers permit exploration of pure hypotheticals. Models can contrast a system's behavior in different states, some of which do not exist. For example, several spatial distributions of voltage-dependent channels could be compared within the same dendritic morphology to help an investigator dissect the dastardly complex interactions between channel properties and dendritic structure, and to tease apart their separate and combined contributions to synaptic integration. This sort of hands-on manipulation gives the computer experimentalist insight into general principles governing the surrounding class of neural systems, in addition to the particular system under study.

The need for modeling in neuroscience is particularly intense because what most neuroscientists ultimately want to know about the brain is the model — that is, the laws governing the brain's information processing functions. The brain as an electrical system, or a chemical system, is simply not the point. In general, the model as a research tool is more important when the system under study is more complex. In the extreme case of the brain, the most complicated machine known, the importance of gathering more facts about the brain through empirical studies must give way to efforts to relate brain facts to each other, which requires models matched to the complexity of the brain itself. There is no escaping this: imagine a neuroscientist assigned to fully describe the workings of a modern computer (which has only 10¹⁰ transistors to the brain's 10¹⁵ synapses). The investigator is allowed only to inject currents and measure voltages, even a million voltages at once, and then is told to simply think about what the data mean. The task is clearly impossible. Many levels of organization, from electron to web server, or from ion channel to consciousness — each governed by its own set of rules — lie between the end of the experimentalist's probe and a deep understanding of the abstract computing system at hand. A true understanding of the brain implies the capacity to build a working replica in any medium that can incorporate the same principles of operation — silicon wafers, strands of DNA, computer programs or even plumbing fixtures. This highly elevated 'practitioner's' form of understanding must be our ultimate goal, since it will not only allow us to explain the brain's current form and function, but will help us to fix broken brains, or build better brains, or adapt the brain to altogether different uses.

BARTLETT W. MEL
University of Southern California, Los Angeles, California 90089-1451, USA
e-mail: [email protected]
Viewpoint • Facilitating the science in computational neuroscience
‘Computational neuroscience’ means different things to different people, but to me, a defining feature of the computational approach is
that the two-way bridge between data and theory is emphasized from the beginning. All science, of course, depends on a symbiosis
between observation and interpretation, but achieving the right balance has been particularly challenging for neuroscience. Here I
discuss some of the difficulties facing the field, and suggest how they might be overcome.
The first problem is that quantitative experiments are generally difficult and time consuming, and it is simply not possible to do all the
experiments that one might think of. Nor is it possible to publish all the data that any given experiment generates. Given that so much
must be excluded, it is essential that the experiments should be guided by theory, if they are to yield more than an arbitrary collection of
unfocused facts. Conversely, theory needs to be informed by experimental data: too many theoretical papers present hypotheses that are
incompatible with known facts about biology, and this problem is exacerbated by the difficulty theorists face in keeping up with a large
and ever-expanding experimental literature.
How might the situation be improved? One step would be to ensure that theoretical papers are reviewed by experimentalists. This
would help theoreticians not only to keep current with the experimental literature, but also to develop a better appreciation of how data
are presented. Theoreticians are often tempted, for example, to extract quantitative information from representative examples of ‘raw’
data, failing to realize that ‘representative’ usually means ‘best’, not ‘typical’, thus compromising any practical utility.
Theoreticians also need to improve the presentation of their own models. It is taken for granted that experimental papers should
contain sufficient information for others to replicate the results, but unfortunately, much theoretical work neglects this basic principle.
Attempts to reproduce published computer models often fail, and it is difficult to know whether such failures reflect something profound,
or whether they arise simply because the documentation of models with many parameters is naturally prone to error.
Experimental neuroscientists are not likely to pay serious attention to theoretical models until this problem is resolved. One solution is to
develop a standard format for expressing model structure and parameters, and indeed this goal is evident in various neuroscience database
projects currently underway. Supplying model source code is usually not enough. The format should be efficient and concise, yet allow a
level of generic expression readable by humans and readily translatable for different simulation and evaluation tools. These requirements
suggest exploiting programming languages oriented toward symbolic as well as numeric relations. It will be encouraging if such a standard
is adopted at the publication level, because this will facilitate a more thorough review process as well as provide an accessible database for
the reader. Eventually, this approach can contribute to a seamless database covering the entire field of neuroscience.
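The idea can be sketched concretely. In the fragment below, a model's structure and parameters live in plain data rather than in simulator-specific code, and a generic tool can check the description for consistency before any simulation is run. Every field name is invented for illustration; this is a sketch of the kind of format meant, not a proposed standard.

```python
# A minimal sketch of a declarative model description. All field names
# are invented for illustration; no real standard is implied.
model = {
    "name": "two_compartment_cell",
    "compartments": {
        "soma":     {"length_um": 20.0,  "diam_um": 20.0},
        "dendrite": {"length_um": 400.0, "diam_um": 2.0, "parent": "soma"},
    },
    "channels": [
        {"type": "HH_Na", "where": "soma", "gbar_S_per_cm2": 0.12},
        {"type": "HH_K",  "where": "soma", "gbar_S_per_cm2": 0.036},
        {"type": "leak",  "where": "*",    "g_S_per_cm2": 3e-4, "e_rev_mV": -65.0},
    ],
}

def check(model):
    """Consistency checks a reviewer or database could run automatically."""
    comps = model["compartments"]
    for name, c in comps.items():
        parent = c.get("parent")
        # every non-root compartment must attach to an existing one
        assert parent is None or parent in comps, f"unknown parent in {name}"
    for ch in model["channels"]:
        # channels must sit on a declared compartment ("*" = everywhere)
        assert ch["where"] == "*" or ch["where"] in comps, "channel on unknown compartment"
    return True
```

Such a description can be serialized, diffed, reviewed and databased far more readily than raw source code, and translated mechanically into input for different simulation tools.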
Finally, it is vital for this young field that the scientific and funding environment allow many interdisciplinary flowers to bloom.
Support is needed for the marriage of theory and experiment at all levels of neuroscience, ranging from the biophysical basis of neural
computation, to the neural coding of the organism’s external and internal worlds, all the way up to the mysterious but (we assume)
concrete link between brain and mind. Progress at the first level in particular will be essential if any rational medical therapeutics are to
emerge from all this work. Core neuroscience courses should include a theoretical component, demonstrating its fundamental relevance
to experimental neuroscience. At the same time, an ongoing critical examination of this marriage is necessary for the evolution of
computational neuroscience. Perhaps we could learn lessons from physics, in which there is a more mature liaison between theory and
application. As neuroscientists we may not avoid the occasional wild goose chase, but we can at least hope that a theory or two may be
falsified in the process, clearing the path a bit for the next go-around and making it all worthwhile.
LYLE BORG-GRAHAM
Unité de Neurosciences Intégratives et Computationnelles,
Institut Federatif de Neurobiologie Alfred Fessard, CNRS,
Avenue de la Terrasse, 91198 Gif-sur-Yvette, France
e-mail: [email protected]
Models identify hidden assumptions
It is not only theorists who make models. All biologists work with explicit or implicit ‘word models’ that describe their vision of how a system
works. One of the most important functions of theoretical and computational neuroscience is to translate these word models into more
rigorous statements that can be checked for consistency, robustness and generalization through calculations and/or numerical simulations.
The process of turning a word model into a formal mathematical model invariably forces the experimentalist to confront his or her hidden
assumptions. I have often found that I have ‘skipped steps’ in my thinking that were only revealed when we sat down to construct a formal
model. It is easy to tell ‘just so stories’ about cells, circuits and behavior, and discussion sections of journal articles are filled with them, but the
exercise of trying to instantiate the assertions in those stories makes the missing links in all of our data and understanding pop into view.
Models offer a solution to one of the hardest problems in experimental biology: how far to generalize from the data one has collected.
Neuroscientists work on an array of cells and circuits in lobsters, flies, fish, birds, rats, mice, monkeys and humans. Many of the ‘mistakes’ in
neuroscience come from inappropriate generalizations from observations made in one system, or under a given set of conditions.
Experimental work I did with Scott Hooper showed that when an oscillatory neuron was electrically coupled to a non-oscillatory cell, the two-cell network had a lower frequency than the isolated oscillator. We initially assumed that this was a general statement, but later learned from
theoretical work that, depending on the properties of the oscillator, either an increase or decrease in frequency could be obtained. We had
correctly understood our data, but we were unaware that the other case was possible because it did not occur in the particular system we were
studying. This is at the core of the usefulness of theory for an experimentalist: it helps us know when we have found only a piece of the answer,
and when we have understood the full set of possible outcomes from a given set of conditions.
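The two-cell observation lends itself to a toy simulation. In the sketch below, a FitzHugh–Nagumo oscillator stands in for the pacemaker neuron and a passive leaky cell for its non-oscillatory partner; all parameters are illustrative and bear no relation to the actual biological circuit.

```python
import numpy as np

def mean_period(g, E_leak=-1.5, tau=5.0, T=1500.0, dt=0.01):
    """FitzHugh-Nagumo oscillator electrically coupled (conductance g)
    to a passive leaky cell; returns the oscillator's mean period,
    measured between upward zero crossings of v."""
    a, b, eps, I = 0.7, 0.8, 0.08, 0.5   # standard FHN parameters
    v, w, u = -1.0, 1.0, E_leak          # oscillator (v, w); passive cell (u)
    crossings = []
    for k in range(int(T / dt)):
        v_new = v + dt * (v - v**3 / 3.0 - w + I + g * (u - v))  # gap-junction term
        w_new = w + dt * (eps * (v + a - b * w))
        u_new = u + dt * ((E_leak - u) / tau + g * (v - u))
        if v < 0.0 <= v_new:             # one upward crossing = one cycle
            crossings.append(k * dt)
        v, w, u = v_new, w_new, u_new
    return float(np.diff(crossings[2:]).mean())  # drop the initial transient

T_alone = mean_period(g=0.0)   # isolated oscillator
T_pair = mean_period(g=0.1)    # oscillator + passive cell, gap-junction coupled
```

With these particular values the coupled pair settles at a measurably different period from the isolated oscillator; as the theoretical work showed, other parameter choices can shift the frequency in either direction.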
Finally, theory is legitimized dreaming. We all became neuroscientists out of a deep desire to explore the mysteries of how the brain works.
Most of us who do experimental work spend our days preoccupied with the myriad and mundane details that are so crucial to doing
experiments and analyzing data. I came of age as a scientist believing that my career would be over if I were ever wrong. For me, participating
in the development of highly speculative models was akin to learning to drive as a teenager. In both cases, I remember the thrill of the freedom
of the open road (and some of the trepidation of getting lost or getting a flat tire). Speculative models suggest possibilities beyond those found
in one’s laboratory, and can produce just that altered outlook that can send one on a new and exciting path.
EVE MARDER
Volen Center, MS 013, Brandeis University, Waltham, Massachusetts 02454, USA
e-mail: [email protected]
On theorists and data in computational neuroscience
A diversity of activities in neuroscience are labeled ‘theory’. Developing Bayesian spike-sorting algorithms, making a theory of consciousness, analyzing attractor neural network dynamics, constructing multi-compartment simulations of neurons: these and many other activities have a theoretical component. So of course there is a role for theory in neuroscience.
A question about the future of computational neuroscience can be bluntly put. Is understanding how the brain works going to be an
enterprise in which pure theorists, scientists without experimental laboratories and not mere subsidiary parts of an experimentalist’s
laboratory, make essential contributions? Are independent theorists important to neuroscience? Important enough, say, to merit
independent faculty positions in universities? Or will researchers doing experiments (or at least controlling experimental laboratories)
make all the significant contributions, and be the only appropriate occupants of professorial positions in neuroscience?
The history of chemistry is the closest parallel. It is a subject in which both qualitative theory (the periodic table, the chemical bond)
and quantitative theory (statistical mechanics, quantum mechanics) have been important. Modern quantitative theory and its impact on
chemistry was brought forward by people who did not themselves do experiments, such as chemistry Nobelists Onsager and Kohn,
whose ability in mathematics was key to understanding how to make new predictions and how to ground in understanding concepts
that came qualitatively from experiments (in the areas of chemical bonding and irreversible thermodynamics).
Physics, geology, chemistry and astronomy have developed independent theorists when the breadth of these subjects exceeded the
span of talents of a single individual. Within neuroscience I know no one who is both outstandingly able to perform inventive rat brain
surgery and able to cogently describe modern artificial intelligence theories of learning and learnability. These are such different
dimensions of expertise! Having both the talent and the time to span such a range is now impossible. Computational neuroscience is
therefore in the process of bifurcating into theorists and experimentalists.
Sensible theory in science is rooted in facts, be they general or specific, so theory and experiment must interact. In physical science
the development of a theoretical branch was at the time made easier because the relatively small number of essential experimental facts
were all available in scientific journals. Now, in the more complex parts of these subjects, large data sets are only summarized in
publications, and sharing of the extensive data sets themselves has become commonplace. Two forces have pushed this accessibility. One
is the genuine wish to advance science rapidly. The other is pragmatic: doing experimental science is expensive. Science is chiefly paid for
from the public purse, either directly by government or indirectly by the tax-free subsidization of charitable foundations. In appealing for
publicly based support for a science, it is important that resources are seen to be used effectively.
Good experimentalists excel in the art of knowing which parts of their own unpublished data should be ignored, so not all data ought to be shared. But certain sharing should become common practice. For example, neuroscientists understand that the (partial) publication of
data only through summaries such as post-stimulus time histograms can conceal what is actually happening. In these days of web sites,
it would be trivial to make available all spike rasters from which summaries are published.
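The point about histograms is easy to demonstrate with synthetic data (all numbers below are invented): two rasters whose PSTHs are nearly indistinguishable, flat at about 3 spikes per second, while the underlying trial-by-trial firing could hardly be more different.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, T, bin_w = 100, 1.0, 0.05            # 100 trials of 1 s, 50-ms bins
edges = np.arange(0.0, T + bin_w / 2, bin_w)

# Raster A: exactly three spikes per trial, in one tight burst whose
# time is random from trial to trial.
raster_a = [t0 + np.array([0.0, 0.004, 0.008])
            for t0 in rng.uniform(0.0, T - 0.01, n_trials)]

# Raster B: homogeneous Poisson spiking at the same mean rate (3 Hz),
# with no burst structure at all.
raster_b = [np.sort(rng.uniform(0.0, T, rng.poisson(3.0)))
            for _ in range(n_trials)]

def psth(raster):
    """Trial-averaged rate (spikes/s) per bin -- the usual summary figure."""
    counts = sum(np.histogram(trial, edges)[0] for trial in raster)
    return counts / (n_trials * bin_w)

# Both PSTHs hover around 3 spikes/s, yet the per-trial spike counts
# reveal completely different firing modes.
counts_a = np.array([len(t) for t in raster_a])   # always exactly 3
counts_b = np.array([len(t) for t in raster_b])   # Poisson-variable
```

Only the rasters, not the averaged histograms, distinguish a cell that fires a single tight burst on every trial from one that spikes at random throughout.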
Some of my friends lament “we will fail to get credit for our work.” But most scientists know that it was the careful measurements of
Tycho Brahe that led Kepler to his three laws of planetary motion. Reputations of experimentalists are only enhanced by having their data
cited as significant by others in the motivation or testing of ideas.
J. J. HOPFIELD
Princeton University, Princeton, New Jersey 08544, USA
e-mail: [email protected]
What does ‘understanding’ mean?
When Ed Lewis in my department won a Nobel Prize a few years ago, our chair organized a party. On my way there, I overheard an
illustrious chemist offer, “Hey, at least one smart biologist,” making his colleagues chuckle. Nothing new in academia, the land of the high-minded yet curiously parochial primate.

Why bring this up? Because science starts with human interactions: if we want theory and
experimental neuroscience to strengthen each other, we must hope for people with different cultures, expertise, perspectives and footwear
to leave their prejudices at the door and learn to better appreciate each other’s strengths. This is not easy to achieve when human nature
makes us shun the unfamiliar, when the structure of academic institutions imposes borders between disciplines, and when reductionist
approaches alone undeniably produce so much concrete knowledge. So, if reductionism works so well—as it has in the history of
neuroscience—why should we care about bringing theory (and theorists) into the kitchen? It all boils down, it seems to me, to a classical
philosophical question: what does ‘understanding’ mean?

Upon reflection, it is depressing, if not scandalous, to realize how rarely I ask
myself this. As an experimentalist, I would consider most of what my lab does as descriptive; at best, we try to tie one observation to
another through some causal link. Most of what we try to explain has a mechanistic underpinning; if not, a manuscript reviewer, editor or
grant manager usually reminds us that that is what this game is about. And we all go our merry way filling in the blanks.

This is, in my view,
where theorists most enrich what we do. Theorists, through their training, bring a different view of explanatory power. Causal links
established by conventional, reductionist neurobiology are usually pretty short and linear, even when experiments to establish those links
are horrendously complex: molecule M phosphorylates molecule N, which causes O; neuron A inhibits neuron B, ‘sharpening’ its response
characteristics. This beautiful simplicity is the strength of reductionism and its weakness. To understand the brain, we will, in the end, have
to understand a system of interacting elements of befuddling size and combinatorial complexity. Describing these elements alone, or even
these elements and all the links between them, is obviously necessary but, many would say, not satisfyingly explanatory. More precisely,
this kind of approach can only explain those phenomena that reductionism is designed to get at. It is the classical case of the lost key and
the street lamp; we often forget that the answers to many fundamental questions lie outside of the cone of light shed by pure analysis (in
its etymological sense).

I am interested in neuronal systems. In most cases, a system’s collective behavior is very difficult to deduce from
knowledge of its components. Experience with many systems of neurons under varied regimes could, in theory, eventually give me a good
intuitive knowledge of their behavior: I could predict how system S should behave under certain conditions. Yet my understanding of it
would be minimal, in the sense that I could not convey it to someone else, except by conveying all my past experience. This is one of the
many places where theorists can help me. Much of what we need to provide a deeper understanding of these distributed phenomena may
already exist in some corner of the theory of dynamical systems, developed by mathematicians, physicists or chemists to understand or
describe other features of nature. If it does not, maybe it can be derived. But the first step is to map my biological system onto the existing
theoretical landscape. This is where the challenge (and fun) lies—and where sociological forces must be tamed.

In brief, neuroscience is, to
me, a science of systems in which first-order and local explanatory schemata are needed but not sufficient. Reductionism, by its nature,
takes away the distributed interactions that underlie the global properties of systems. Theoretical approaches provide different means to
simplify. We must thus learn to understand, rather than avoid complexity: simplicity and complexity often characterize less the object of
study than our understanding of it. Maybe one day, neuroscience textbooks will finally start slimming down….
GILLES LAURENT
Division of Biology, California Institute of Technology, Pasadena, California 91125, USA
e-mail: [email protected]