Blue Brain

The beginning of a biologically accurate, neuron-by-neuron, silicon-based brain?

Comments

  1. I would love to know what is meant by the following:

    “There are lots of models out there, but this is the only one that is totally biologically accurate. We began with the most basic facts about the brain and just worked from there.”

    What do they consider the most basic facts about the brain? They certainly can’t be simulating individual ion channels or receptors. I think projects like this are great, but “totally biologically accurate” is a bit much.

  2. kenneth aizawa

    This is such a bizarre story, it is hard to believe. For example: “Markram estimates that simulating the brain on a supercomputer with existing microchips would generate an annual electrical bill of about $3 billion. But if computing speeds continue to develop at their current exponential pace, and energy efficiency improves, Markram believes that he’ll be able to model a complete human brain on a single machine in ten years or less.” Doesn’t the calculation of the energy costs already factor in the increase in computing speed and energy efficiency? This is weird. But even in ten years, wouldn’t the project still cost tens of millions? Can they really have pockets that deep?

  3. Eric Thomson

    Hype aside, this kind of project is the future of computational neuroscience. The models are biologically realistic (relative to, say, connectionist networks) in that they build in the geometry of individual neurons, channel densities (channels per unit of membrane surface area as a function of location on the neuron), receptor densities, data-driven learning rules (i.e., spike-timing-dependent plasticity), and all that. This is what neuroscientists mean by ‘biologically realistic modelling.’ By ‘most basic facts about the brain’ they likely mean the facts that we know to be relevant for determining the electrical activity of a neuron, which ultimately determines the behavior of the brain (once you add in synapses, which are also in the model).
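
    To give a concrete sense of what “conductance-based” means, here is a bare-bones single-compartment sketch in Python (a toy illustration of my own, not anything from the Blue Brain codebase): the classic Hodgkin-Huxley sodium, potassium, and leak conductances integrated with forward Euler. The real models layer reconstructed morphology, many more channel types, synaptic receptor kinetics, and plasticity rules on top of exactly this kind of core.

    # Minimal single-compartment Hodgkin-Huxley sketch (illustration only).
    import numpy as np

    # Classic squid-axon parameters: conductances in mS/cm^2, potentials in mV.
    C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
    E_Na, E_K, E_L = 50.0, -77.0, -54.4

    def rates(V):
        """Voltage-dependent opening/closing rates for the gates m, h, n."""
        am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
        bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
        ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
        bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
        an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
        bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
        return am, bm, ah, bh, an, bn

    def simulate(I_ext=10.0, T=50.0, dt=0.01):
        """Forward-Euler integration of one compartment driven by a constant current."""
        V, m, h, n = -65.0, 0.05, 0.6, 0.32
        trace = np.empty(int(T / dt))
        for i in range(trace.size):
            am, bm, ah, bh, an, bn = rates(V)
            m += dt * (am * (1.0 - m) - bm * m)   # gating kinetics: dx/dt = a(1-x) - b*x
            h += dt * (ah * (1.0 - h) - bh * h)
            n += dt * (an * (1.0 - n) - bn * n)
            I_ion = (g_Na * m**3 * h * (V - E_Na)
                     + g_K * n**4 * (V - E_K)
                     + g_L * (V - E_L))
            V += dt * (I_ext - I_ion) / C_m       # membrane equation: C dV/dt = I_ext - I_ion
            trace[i] = V
        return trace

    v = simulate()
    print("peak membrane potential (mV):", round(float(v.max()), 1))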


    There is a contrasting super-large-scale approach, which I discussed here, that starts with a somewhat less detailed model of the entire cortex and works from there, adding realism where needed. It, too, is where things are headed; ultimately the two approaches should meet, the large-scale models should meet up with psychology and behavior, and we will live in a naturalistic wonderland.

    Of course there is a lot of hype in that article, but what they are doing is where theoretical/computational neuroscience is headed. To really get the science, I find that skipping the intro, discussion, and press releases associated with a work makes for a much better feel for what is happening qua science. Unfortunately, those are the bits that philosophers tend to focus on, picking apart (justifiably to a degree) the sloppy language. The thing is, the scientists look at their real contributions as happening in the methods and results section. The rest is an afterthought usually written last and quickly just to make it ready for publication as they realize few people in the field pay attention to such padding. Most of us just read the figure legends and check out the figures. There are so many interesting neuro papers every month that it is hard to do much else, so the papers are written with that in mind.
  4. Eric,

    You’re right, I think these are all good points to keep in mind. However, although it may be true that other scientists will only look at the results and methods, I think the attitude expressed (partially by the quotation I gave above) is worth paying attention to. I mean, it seems important to attend to what scientists do and what they believe as well as the published results and methods.

    My concern with efforts like these–particularly if they constitute the future of computational neuroscience–is that they seem to be biting a particular bullet about how to do neuroscience. It would be nice if there were more macroscopic theories about how neural systems work, where we can abstract away from some of the biological details. Efforts like this, however, seem to suggest that that’s not the way to go: to understand brains, we have to keep on increasing the resolution of the simulations. For some things, that’s absolutely right: to understand nuclear explosions, or the effects of supersonic flight on an airplane design, the best approach seems to rely on simulating as many particles in as much detail as possible–there is no “macro” theory that will do the work. But to understand how flight or nuclear fission works generally, you don’t need to resort to such a low level. It’s not clear to me that we’re at the point in neuroscience where we should think that increasingly detailed simulations are the best way to understand things. Granted, for many purposes it’s going to be much better to have detailed simulations: it’s certainly easier to do experimental manipulations on a simulation than the real thing. But sometimes the attitude is sort of reminiscent of early AI researchers, who thought they pretty much knew how minds worked, and all that was required to complete the project was the addition of more computing power. It’s just not clear to me that throwing more processors at more detailed simulations will, by itself, increase our understanding.

    So this is not meant as a criticism; I don’t think this kind of work necessarily excludes the concerns I have. I just think it’s worth mentioning here.

  5. kenneth aizawa

    What information are they using to set up the model of connectivity? I can believe there’s an abundance of data on channel types, numbers, and their distributions, but from the review or two I’ve read about the organization of cells in V1, things sound a bit sketchy. It sounds as though neuroscientists know roughly which layers project where, but the precise details are not yet in. So I worry about there being a huge fudge factor here. By that I mean something that is just filled in to get the job done, something you know has to be there even though you do not have direct connectivity data to drive it. Maybe back-prop from PDP is the same kind of fudge factor.

    And, in truth, this is just the kind of thing that one might expect to vary a lot across brain regions. Aren’t variations of this sort exactly what one finds, or expects to find, across the different Brodmann areas?

  6. Eric Thomson

    Kenneth:
    There is a lot more detail available than just which layers project where! There are tons of data on this from dual-cell recordings (example), continued anterograde/retrograde tracing studies (example below), imaging of individual synapses (example), filling two cells with different colors to see the area of overlap (example), and diffusion tensor MRI (example). For some more cutting-edge methods, this chapter is good. Plus, the new photon-gated sodium channels promise to be extremely helpful (example). That last one is among the most exciting recent developments in systems neuroscience: we can control neuronal firing patterns using light. It will explode in the next decade.

    So even though we have a lot to learn, we have already learned a lot, enough to make educated guesses to put into large-scale models. If you want an idea of the level of detail this has reached, one particularly good overall example is this paper.

    That said, the gold standard is still electron microscopy, which was used to reconstruct the wiring of C. elegans. I wrote a bit about one such project now underway here (which includes a gorgeous movie of a retinal reconstruction).

    Corey:
    They aren’t biting any bullets; they are working within the most successful, fruitful, and well-established perspective that exists in computational neuroscience. Conductance-based models form the backbone of neurophysiology, and there is really little question that they are the right type of model to use. There are details to be worked out, but they are what we mean by ‘biologically realistic’ models.

    The more interesting question is whether we need all that detail. The nice thing is, they probably don’t, but it is much easier to go from a too-detailed model to a less detailed one in a rational way than the other way around. For instance, once we have the detailed model, we can simplify it by finding mathematical equivalences between complicated structures and simpler ones (as is already done in modelling dendritic morphology, as shown by Rall and discussed here). In that case, if certain assumptions are met, you can safely replace a complicated dendritic tree with a single cylinder, as sketched below.
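
    To make Rall’s trick concrete, here is a toy Python check of his “3/2 power rule” at a branch point (my own illustration; the full equivalent-cylinder reduction also requires equal electrotonic length of the daughter branches and uniform membrane properties):

    def rall_ratio(parent_diam, daughter_diams):
        """sum(d_child^(3/2)) / d_parent^(3/2); a value near 1 means the branch
        point satisfies Rall's impedance-matching condition."""
        return sum(d ** 1.5 for d in daughter_diams) / parent_diam ** 1.5

    def can_collapse(parent_diam, daughter_diams, tol=0.05):
        """True if the daughters can reasonably be merged into one cylinder."""
        return abs(rall_ratio(parent_diam, daughter_diams) - 1.0) < tol

    # A 4 um parent splitting into two 2.52 um daughters: 2 * 2.52**1.5 is
    # approximately 4**1.5, so the branch collapses; two 1 um daughters do not.
    print(can_collapse(4.0, [2.52, 2.52]))   # True
    print(can_collapse(4.0, [1.0, 1.0]))     # False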

    In addition to such elegant procedures, you can also vary parameters to see how much they influence the behavior, thereby “empirically” determining how important it is to be precise about various features of the model.
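
    Here is the flavor of that kind of “empirical” sensitivity check, using a deliberately simple leaky integrate-and-fire neuron rather than a full conductance-based model (again just a sketch of my own to illustrate the idea):

    import numpy as np

    def lif_spike_count(g_leak, I_ext=1.5, T=1000.0, dt=0.1,
                        C=1.0, E_leak=0.0, V_thresh=1.0, V_reset=0.0):
        """Count spikes of a leaky integrate-and-fire neuron over T ms."""
        V, spikes = E_leak, 0
        for _ in range(int(T / dt)):
            V += dt * (-g_leak * (V - E_leak) + I_ext) / C
            if V >= V_thresh:
                V, spikes = V_reset, spikes + 1
        return spikes

    # Sweep the leak conductance and see how strongly the output depends on it.
    for g in np.linspace(0.5, 2.0, 7):
        print(f"g_leak = {g:.2f}  ->  {lif_spike_count(g)} spikes in 1 s")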

    Also, if different variables change on very different time scales, you can sometimes safely ignore the dynamics of the fast variable and treat its change as instantaneous (this is somewhat technical, but computationally it really helps; it is discussed on page 137 of the wonderful book Spikes, Decisions, and Actions by Hugh Wilson).
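
    A stripped-down example of that fast/slow reduction (a generic illustration of my own, not taken from Wilson’s book): when the fast variable’s time constant is much smaller than the slow one’s, you can slave the fast variable to its steady-state value and barely change the answer. In Hodgkin-Huxley-type models this is how the fast sodium activation gate m gets replaced by m_inf(V).

    import math

    TAU_FAST, TAU_SLOW = 0.05, 10.0          # widely separated time scales

    def x_inf(y):
        """Steady state of the fast variable as a function of the slow one."""
        return math.tanh(y)

    def simulate(reduce_fast, T=50.0, dt=0.01):
        x, y = 0.0, 2.0
        for _ in range(int(T / dt)):
            if reduce_fast:
                x = x_inf(y)                              # treat x as instantaneous
            else:
                x += dt * (x_inf(y) - x) / TAU_FAST       # full fast dynamics
            y += dt * (-y + 3.0 * x) / TAU_SLOW           # slow dynamics
        return y

    print("full model:    y(T) =", round(simulate(False), 4))
    print("reduced model: y(T) =", round(simulate(True), 4))   # nearly identical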

    The types of methods I’m talking about have even been shown to connect connectionist and conductance-based models when you make certain assumptions about the conductance-based models (Ermentrout, B., “Reduction of conductance-based models with slow synapses to neural nets,” Neural Computation (1994) 6:679-695). Now, most neuroscientists look at PDP models as a fad that contributed little to neuroscience (especially given that we have had much better models for over 50 years), but it could turn out that for a small subset of neuronal populations, PDP models actually work.
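
    To show the target such a reduction aims at, here is the generic firing-rate (“neural net”) form those derivations land on, tau * da/dt = -a + f(W a + I) with a sigmoidal f. The two-unit network and parameters below are my own illustration, not Ermentrout’s:

    import numpy as np

    def f(x, gain=4.0):
        """Sigmoidal firing-rate function."""
        return 1.0 / (1.0 + np.exp(-gain * x))

    def simulate_rate_net(W, I, tau=10.0, T=200.0, dt=0.1):
        """Forward-Euler integration of tau * da/dt = -a + f(W @ a + I)."""
        a = np.zeros(W.shape[0])
        for _ in range(int(T / dt)):
            a += dt * (-a + f(W @ a + I)) / tau
        return a

    # Two units with self-excitation and mutual inhibition: a winner-take-all
    # circuit in which the unit with slightly stronger input ends up high.
    W = np.array([[ 0.8, -1.2],
                  [-1.2,  0.8]])
    I = np.array([0.30, 0.25])
    print(simulate_rate_net(W, I))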

    For some reason, philosophers haven’t really caught on to how central the conductance-based (i.e., Hodgkin-Huxley-derived) models are in neuroscience, especially systems and computational neuroscience. I think it may be partly because of the connectionism fad in philosophy, which gave people the feeling that they were studying neuroscience when they were really studying speculative psychological models with units that look just enough like neurons to make one feel warm, fuzzy, and biological. Also, it is only recently (roughly the past 10 years) that large-scale conductance-based models have been used, and only recently could you run relatively large-scale simulations in a lab with average funding.

  7. Eric Thomson

    I left out the link to visualizing individual synapses (here). I also didn’t mention a bunch of techniques that have been filling in details over the past 20 years; there are just too many. I mentioned the ones best known to me and those that will be the most important in the future.

  8. General Information, extracted from “Man by Nature: The Hidden Programming Controlling Human Behavior”:

    … it has been found that the “newest” portion of our brain (the neocortex) has six layers (distinguished by the type of neurons predominant in each layer) and that vertical columns of these neurons, crossing all six layers, are tightly interconnected to form microscopic circuits. These “microcircuits” are now viewed as the fundamental processing units in the neocortex. Each microcircuit acts like a “parallel processor” (in computer terminology) that can handle multiple programming tasks simultaneously, so the neocortex acts like a massive array of general-purpose parallel processors, groups of which can be customized for particular functions and – most remarkably – can be rewired on the fly (within limits) to adapt to changing requirements. Adjacent columns in a given area of the neocortex work in parallel – literally as well as functionally – on different aspects of complex tasks.

    Clearly, the complexity of the brain is mind-boggling, and so is the brain’s computational size. Neuroscientists are fond of quoting facts to impress the latter upon us: a cubic millimeter of cortex may contain three to four kilometers of axons, fifty thousand neurons, and three hundred million interconnections; the entire cortex may contain ten to fifteen billion neurons, with sixty trillion connections; and so on. I think we get the point. As I said, human computer designers can only salivate with envy. Emulating the brain’s methods of programming may prove more than daunting; it may prove impossible. Nonetheless, challenges are what humans like best, and the work is afoot.

  9. gualtiero

    Eric,

    Thank you so much for this little tutorial. I think you are dead on about philosophers getting captivated by PDP connectionism, thereby missing the genuine computational neuroscience that had been developing since way before the 1980s.

