That’s the title of Larry Abbott’s recent perspective piece in Neuron (you can find the full article here).

It starts with an excellent discussion of the role of theory in neuroscience, and proceeds with a selective overview of the main insights gained from modeling over the last 20 years.

Note that by ‘theoretical neuroscience’ he isn’t talking about neurophilosophy, but about mathematical modeling. He discusses the by-now-common distinction between word models and mathematical models:

What has a theoretical component brought to the field of neuroscience? Neuroscience has always had models (how would it be possible to contemplate experimental results in such complex systems without a model in one’s head?), but prior to the invasion of the theorists, these were often word models. There are several advantages of expressing a model in equations rather than words. Equations force a model to be precise, complete, and self-consistent, and they allow its full implications to be worked out. It is not difficult to find word models in the conclusions sections of older neuroscience papers that sound reasonable but, when expressed as mathematical models, turn out to be inconsistent and unworkable. Mathematical formulation of a model forces it to be self-consistent and, although self-consistency is not necessarily truth, self-inconsistency is certainly falsehood.

It isn’t clear to me that equations by themselves force you to be consistent (a > 0 and a < 0 are both perfectly well-formed bits of mathematics, after all). But inconsistencies in your equations will typically be discovered quite fast, especially if you try to use them in simulations. It would be interesting to see examples of the word models Abbott mentions that turned out to be inconsistent once formalized.

I think what makes mathematical models so useful is not our ability to spot inconsistencies (it isn’t trivial to find inconsistencies in a complicated mathematical model, after all!). Rather, it is their ability to make precise predictions that can be tested. It is also that they make all assumptions and relevant variables explicit. It is harder to hide bullshit in a set of differential equations than in a natural language treatise.
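To make that concrete, here is a deliberately toy illustration (the equation, parameter names, and numbers are mine, not anything from Abbott’s paper): the word model “firing rate grows with input but saturates” becomes a specific, falsifiable curve the moment you write it as an equation.

```python
# Toy saturating rate model: r(I) = r_max * I / (I + I_half), for I >= 0.
# r_max and I_half are illustrative parameters, not values from the paper.
def firing_rate(I, r_max=100.0, I_half=0.5):
    """Predicted firing rate (Hz) for input current I (nA)."""
    return r_max * I / (I + I_half)

# Once written down, the model commits us to precise, testable numbers:
for I in [0.1, 0.5, 2.0]:
    print(f"I = {I:.1f} nA -> predicted rate = {firing_rate(I):.1f} Hz")
```

Every assumption (the functional form, the two parameters) is now out in the open where an experiment can attack it.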

Note that this topic, the difference between natural-language analysis and mathematical analysis, came up before over at Brain Hammer in the post Math vs Natural Language.


Eric, would you consider the circuit diagram of a radio receiver to be a mathematical model?

Arnold: no, but it has a natural and obvious (for the engineer) conversion into a mathematical model because of V=IR and such.

The problem is that you need more than V = IR, and more than all the other equations relating the components of the radio receiver as a functioning system, in order to understand how the thing works. The circuit diagram constrains the functional organization in a formal way. As formal descriptions, circuit diagrams can be objectively tested for their implications (by construction or computer simulation). It seems to me that minimal connectivity diagrams (circuit diagrams) are extremely important in theoretical systems neuroscience. They are better than verbal descriptions and are a necessary adjunct to the equations that describe the activity of individual neurons.
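As a minimal sketch of that conversion (my own toy example, not anything Arnold or Abbott discusses): a one-resistor, one-capacitor circuit diagram translates, via V = IR and conservation of charge, into a single differential equation that can be simulated directly, so its implications can be tested by computation rather than by argument.

```python
# Toy RC circuit read straight off a circuit diagram: C dV/dt = (V_in - V) / R.
# Component values and the step input are arbitrary choices for illustration.
import math

R, C = 1e3, 1e-6            # 1 kOhm, 1 uF -> time constant RC = 1 ms
V_in = 5.0                  # step input (volts)
dt, T = 1e-5, 5e-3          # 10 us steps, 5 ms of simulated time

V = 0.0
for _ in range(int(T / dt)):
    dVdt = (V_in - V) / (R * C)   # Ohm's law plus charge conservation
    V += dt * dVdt                # forward Euler integration

analytic = V_in * (1 - math.exp(-T / (R * C)))
print(f"simulated V = {V:.3f} V, analytic V = {analytic:.3f} V")
```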

Speaking as a math modeler for neuroscience applications: the equations do force a certain kind of self-consistency (though that is the weaker point), and more broadly they facilitate consistency checking against data and other assumptions. I think Larry could have been clearer there. It’s surprisingly easy to hide bullshit in differential equations, as many of my colleagues will probably agree. Consistency against data, and against other logical or qualitative constraints, is one tool to purge the BS.

Simulation and analysis of explicit math models provide a great way to test model assumptions as computational hypotheses, as you state. Increasingly, computers are needed to keep track of the large number of interconnected constraints and to ensure that self-consistent models are developed, especially with reference to experimental data. This is unfeasible without equations.
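A minimal sketch of the kind of consistency check against data I have in mind (the “data” and the model here are invented purely for illustration):

```python
import numpy as np

# Hypothetical measured firing rates (Hz) at several input currents (nA).
I_data = np.array([0.1, 0.3, 0.5, 1.0, 2.0])
r_data = np.array([18.0, 36.0, 49.0, 68.0, 81.0])

# Candidate model: a saturating rate curve with fixed, assumed parameters.
def model(I, r_max=100.0, I_half=0.5):
    return r_max * I / (I + I_half)

# Consistency check: do residuals stay within an assumed noise level (+/- 5 Hz)?
residuals = r_data - model(I_data)
print("residuals (Hz):", np.round(residuals, 1))
print("consistent with the data:", bool(np.all(np.abs(residuals) < 5.0)))
```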

Arnold: sure. Abbott discusses recent work on modelling connectivity in brains. Connectivity is necessary but not sufficient for understanding neuronal function.

Good point about hiding BS in equations. When I am confronted with having to set 100 parameters in a conductance-based model, there is a lot of wiggle room for me to do crappy work (e.g., not looking at how sensitive the model is to parameter selection). Right now, for instance, there is no consensus on how to set channel densities in neurites. We all want to be “biologically realistic” (a term Abbott wonderfully notes picks out a sociological, not a natural, category), but as realism in network models increases, fudging tends to increase, and steps must be taken to make sure your model isn’t brittle or worse.
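As a sketch of the kind of sensitivity check I mean (a made-up two-parameter function standing in for a real 100-parameter conductance-based simulation):

```python
# Stand-in "model": output as a function of two channel-density parameters.
# In a real conductance-based model this would be a full simulation run;
# the functional form below is arbitrary and purely illustrative.
def model_output(g_na, g_k):
    return g_na / (g_k + 1.0)

baseline = {"g_na": 120.0, "g_k": 36.0}   # HH-style densities (mS/cm^2)
base_out = model_output(**baseline)

# One-at-a-time sensitivity: bump each parameter by +10% and record the
# relative change in output. Brittle models show disproportionate changes.
for name in baseline:
    perturbed = dict(baseline, **{name: baseline[name] * 1.10})
    rel_change = (model_output(**perturbed) - base_out) / base_out
    print(f"{name}: +10% parameter -> {100 * rel_change:+.1f}% output change")
```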

I didn’t discuss it much here, but your point about simulations is very important, and we discussed it some at the linked page of Pete’s (Brain Hammer). This, for me, has been the biggest practical benefit of modelling: I get to check intuitions against the model, and vice versa.

Eric, thanks for the nice post. As I’ve pointed out before, I’m a big fan of theoretical neuroscience a la Abbott. I think philosophers have not paid nearly enough attention to it (preferring to pay attention to connectionism a la Rumelhart and McClelland 1986 or to ignore the brain completely).

Mathematical modeling has many advantages, and there is a large philosophical literature about them. One important advantage, hinted at by Abbott, is that mathematical models allow the application of various mathematical techniques for deriving the consequences of a model. Another is that they allow the use of results from other sciences (e.g., physics). And a particularly huge advantage is that they can serve as the basis for the construction of computational simulations (not to be confused with the theoretical models themselves).

Abbott has a great quote about physicists pouring into neuroscience:

Abbott, of course, entered neuroscience through theoretical physics.

I think the Society for Neuroscience is paying tribute to Wilfrid Rall this year because of his contributions to cable theory (the physics of electrical current flow in dendrites) and to the simulation of how dendrites work.
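For reference, the passive cable equation at the heart of Rall’s work (this is the standard textbook form, not anything specific to the tribute), where λ is the length constant and τ_m the membrane time constant:

```latex
% Passive cable equation for the membrane potential V(x, t):
%   lambda = sqrt(r_m / r_a)  (length constant)
%   tau_m  = r_m * c_m        (membrane time constant)
\lambda^{2}\,\frac{\partial^{2} V}{\partial x^{2}}
  = \tau_{m}\,\frac{\partial V}{\partial t} + V
```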

But mathematical modelling of the brain is not becoming redundant.

I mean, this is what Giuseppe Tenti, Siv Sivaloganathan, and James Drake say in “Mathematical Modeling of the Brain: Principles and Challenges,” Neurosurgery 62(5), May 2008, pp. 1146–1157:

“There is a great deal of confusion as to the precise meaning of the phrase “mathematical model.” The qualifier “mathematical” imparts an aura of extreme accuracy and precision to the word “model,” leading many people to accept conclusions drawn from the model as the truth, especially if those conclusions were arrived at with the help of a digital computer. The reality is quite different. A mathematical model of a phenomenon or a system is simply a mathematical description of one aspect of nature, and, as such, it extracts and exaggerates some facets of the truth. It cannot possibly be the whole truth, because the mathematical model is not a duplicate of nature with all the minute details taken into account. If it were, then the model would be isomorphic to nature itself and hence useless, a mere repetition of all the complexity which nature presents to us, that very complexity we frame theories to penetrate and set aside. If a theory were not simpler than the phenomenon it was designed to model, it would serve no purpose. Like a portrait, it can represent only a part of the subject it pictures. This part it exaggerates, if only because it leaves out the rest. Its simplicity is its virtue, provided the aspect it portrays be that which we wish to study. (Truesdell C and Muncaster RG, 1980)”

That’s a great quote. Abbott has something in a similar vein:

The interplay between conceptual models, word models, and mathematical models is an interesting one. They tend to develop in that order (idea, words, then math) at first, but then things get really cool, because the math lets you explore possibility space quickly and rigorously, which perturbs your conceptual intuitions about the world and helps refine the word and concept models.

On the other hand, I think it is far better to err on the side of too much detail in a model than not enough. It is much easier to go from a complicated model to a simplified one in a principled way, than from a simple one to a more complex one. The latter takes much more work!

On the third hand, I think the most potent engine for true conceptual innovation is novel data. That drives pretty much everything.