Neural Computation and the Computational Theory of Cognition

This paper (co-authored with theoretical and experimental neuroscientist Sonya Bahar) is what I’ve been aiming at all these years. It is why I made such a big fuss over developing an adequate non-semantic account of computation.

I think the paper is finally ready to submit, but I’d love to get some feedback if anyone has enough interest and time.

 

Abstract:
We argue that neural processes are neither analog nor digital computations; they constitute a third kind of computation.  Analog computation is the processing of continuous signals; digital computation is the processing of strings of digits.  But current neuroscientific evidence indicates that typical neural signals, such as spike rates, are graded like continuous signals but are constituted by discrete elements (spikes); thus typical neural signals are neither continuous signals nor strings of digits.  It follows that neural computation is sui generis.  This has three important consequences.  First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation.  Second, several popular views about neural computation turn out to be incorrect.  Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation.
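
To make the contrast in the abstract concrete, here is a minimal illustrative sketch (not from the paper; the signal, the digit string, the spike times, and the time window are all invented for illustration) of the three kinds of vehicle at issue: a continuous signal, a string of digits, and a spike train whose rate is graded even though the train itself consists of discrete, all-or-none events.

```python
# Illustrative sketch only: toy examples of the three vehicle types.
# All numbers are made up for illustration.
import numpy as np

# 1. Analog computation: a continuous signal, defined at every instant
#    (here sampled finely for processing purposes).
t = np.linspace(0.0, 1.0, 1000)            # time in seconds
analog_signal = np.sin(2 * np.pi * 5 * t)  # continuously graded value

# 2. Digital computation: a string of digits drawn from a finite alphabet.
digit_string = "010011010"

# 3. A neural signal: a spike train -- discrete, all-or-none events in time.
spike_times = np.array([0.012, 0.071, 0.105, 0.230, 0.310, 0.480, 0.770])

# The spike rate over a window is graded (it can take many intermediate
# values), yet the vehicle that carries it is constituted by discrete
# spikes rather than by a continuous waveform or a digit string.
window = (0.0, 0.5)  # seconds
n_spikes = int(np.sum((spike_times >= window[0]) & (spike_times < window[1])))
spike_rate = n_spikes / (window[1] - window[0])
print(f"{n_spikes} spikes in {window[1] - window[0]:.1f} s -> {spike_rate:.1f} Hz")
```

The only point of the sketch is that the rate is a graded quantity computed from a finite set of discrete events, which is the feature the abstract uses to argue that typical neural signals are neither continuous signals nor strings of digits.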

Comments

  1. Glenn Carruthers

    Hi Gualtiero,

    Here are some thoughts on your paper. Overall very interesting; just a couple of issues of scope.

    You define generic computation in this way: “Computation in the generic sense is the processing of vehicles according to rules that are sensitive to certain vehicle properties and, specifically, to differences between different portions (i.e., spatiotemporal parts) of the vehicles.” Now, I think the ‘according to rules’ clause of the definition limits the scope of your paper more than is ideal for your argument. I take it that you want an examination of the kind of computation that could be done by the brain, but the ‘according to rules’ clause rules out some possibilities a priori. Specifically, it excludes a class of computers which compute over representing vehicles whose content is grounded in resemblance (cf. Horgan and Tienson 1989; Copeland 2006; O’Brien and Opie 2006), that is, computers which are analogue in the more traditional, or perhaps you would prefer ‘old-fashioned’, sense. (This notion of analogue representation is mentioned briefly on p. 14, but put to one side as it is not relevant to your notion of analogue computers.)

    Now, I recognise that you want a non-semantic notion of computation (such that some computers compute over representations while others compute over meaningless vehicles), so you probably are not overly fond of definitions of (any particular kind of) computation involving reference to representing vehicles; but I worry that by making this move you limit your analysis to too small a set of kinds of computation and thereby consider too few options as to what kind of computer the brain may be. It might be that the hypothesis that the brain is this sort of computer is simply not of interest here, which would be fine: you can only talk about so much, and you are perfectly clear on what you are talking about. But then I think you are limiting the impact your argument can have; i.e., if I’m already convinced that the brain is an analogue computer in this sense (not yours), why do I care about this argument?

    In arguing that the brain is a computer in the generic sense you say that “Current evidence suggests that the primary vehicles manipulated by neural processes are neuronal spikes (action potentials) and that the functionally relevant aspects of neural processes depend on dynamical aspects of the spikes such as spike rates and spike timing.” As it stands, I think this needs a bit more argument, or at least some clarification. The way this is written could be read as claiming that it is action potentials from individual neurons, and not patterns of neural firing across networks, that are computed over. If this is the claim then it is controversial and needs to be argued for. If, on the other hand, you mean that patterns of firing are the vehicles of the computation, but that these are made up of action potentials from individual neurons (thus making the latter primary), then it would be helpful to clarify this. (cont. next comment)

  2. Glenn Carruthers

    (I suspect you mean the latter, as it is more consistent with some of the claims on p. 20.)

    Toward the end you stress that computers are a subset of mechanisms and that the two ought not to be conflated. This is true. However, on a ‘non-semantic’ notion of computation, what exactly is the distinguishing feature of computers? This should be made explicit.

    Finally, I suspect that the claim that work on digital computation was begun by Turing (p. 7) is a little bit harsh on Babbage and Lovelace.

    Copeland, J. (2006). “The Modern History of Computing.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy.
    Horgan, T. and J. Tienson (1989). “Representation Without Rules.” Philosophical Topics 17(1).
    O’Brien, G. and J. Opie (2006). “How Do Connectionist Networks Compute?” Cognitive Processing 7.

    Also, we might want to talk about what computationalism in psychology is and what justifies it (e.g., whether it is committed to digital computation as you describe). But this will involve some broader issues.

  3. gualtiero

    Glenn,

    Thanks so much for these helpful comments, which will be very useful in revising the paper.

    Just to clarify a few issues:

    I have no problem with the claim that the brain processes “analog models”; we even say something to this effect in the paper.

    I probably use “rule” in a broader sense than Horgan and Tienson, O’Brien and Opie, etc., so that what they call computers (or at least the non-semantic aspect of what they call computers) is in fact covered by our argument. I need to check and clarify that in the paper.

    As to what distinguishes computers from other mechanisms, I have written at length about that elsewhere (e.g., in my paper “Computing Mechanisms”), although it might be good to insert a brief reminder here.

    As to the vehicles of neural processes, I do mean “the latter”, as you correctly guessed.
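
    To illustrate what I mean by “the latter”, here is a minimal sketch (purely illustrative, not from the paper; the neuron count, spike times, and time window are made up): the computational vehicle is a firing pattern across a population, but that pattern is constituted by the discrete action potentials of individual neurons.

    ```python
    # Illustrative sketch only: a population firing pattern built out of the
    # spikes of individual neurons. All numbers are invented for illustration.
    import numpy as np

    # Spike times (in seconds) for a few hypothetical neurons.
    spike_trains = {
        "neuron_1": np.array([0.01, 0.04, 0.09, 0.15, 0.22]),
        "neuron_2": np.array([0.03, 0.18]),
        "neuron_3": np.array([0.02, 0.05, 0.07, 0.11, 0.14, 0.19, 0.24]),
    }

    window = 0.25  # seconds over which rates are estimated

    # The population-level vehicle: a vector of firing rates, one per neuron.
    # Each entry is graded, yet each is constituted by discrete spikes.
    population_pattern = np.array(
        [len(times) / window for times in spike_trains.values()]
    )
    print(population_pattern)  # -> [20.  8. 28.] spikes per second
    ```

    The point is only that the pattern (the rate vector) is the vehicle, while the individual action potentials are what constitute it.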

  4. Joshua Stern

    Well, I must disagree with the third point in your summary, that “computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation.” I don’t see how this can be established as necessary: even if every word you write about neural computation is true, you cannot prove the negative hypothesis that some other form of computation would not have equivalent results. Of course, most theories of computation claim the opposite, namely that any Turing-complete system is of exactly equal power.

    Along those lines, I won’t doubt for a moment that you can find aspects of neural computing which are sui generis, just as computing with electronics is one form with its own peculiarities, and a computer made of brass and ivory could have all sorts of unique behaviors. The question comes back, rather, to what they all have in common.

    I’ve speculated about computers made of plutonium: in certain configurations they explode, a sui generis behavior not seen in Intel products to date. But I don’t see that this unique behavior has much to do with computation; it has rather more to do with explosions.

  5. gualtiero

    JS, thanks for your comments. Two things. First, our thesis is precisely that neural computation is a sui generis type of computation, different from the digital computation that gives rise to the notion of Turing-completeness. Second, we are talking about theories of natural cognition, not theories of how cognition could take place. Because of this, your point about Turing-completeness does not apply.

  6. Joshua Stern

    I think you will find that you can run but you cannot hide, Turing-completeness will find you anywhere you go, and you will be more tired for the effort.

  7. Joshua Stern

    The point is whether things are as complete as Turing seems to think.

    I believe Turing saw right through the sui generis arguments right from the beginning. Now, maybe this is not obvious to a modern reader. We tend to associate Turing with digital computers and wonder about other types. Well, even Turing did that: his PhD thesis includes an attempt at greater generality and considers oracle machines that can make arbitrary single-step jumps, just in case there exist such things that would somehow escape the deterministic machines he wrote about in OCN.

    But Turing, for me, is just the two papers, On Computable Numbers and Computing Machinery and Intelligence. I suggest this is for reasons that even Turing did not fully realize, though (and this is an offer at a major digression) perhaps Wittgenstein did. Or perhaps Turing realized it himself, partially, in CMAI in 1950. And that is that computation, besides being conveniently performed digitally, has other essential characteristics, and some degree of determinism is one of them. This has several aspects: the determinism of the computational process and the determinism of the ontological world. There is simply no place to put an oracle without leaving a crack in the universe for events that would simply fail to correspond to any systemic computation; the same power that separates the oracle from deterministic computation defeats the oracle at the same time. And such, I suggest, will be the outcome of any sui generis theory of computation. Either it reduces to Turing computation after all, or it simply assumes its own success in an unfalsifiable, which is to say an unsupportable, manner.

    Now, this is not to say that you might not find a sui generis explanation that is extremely powerful and tractable. But in that case it will reduce to Turing computation.

    This is good news, actually, that it’s not something you can break. At least that’s how I see it.

  8. gualtiero

    JS, you are raising a lot of important issues. I’ve written extensively about them, and unfortunately I don’t have enough time to repeat all the relevant points here. I invite you to study some of the recent literature more carefully.

  9. Joshua Stern

    My focus is on the one issue of what computation is in the first place. The arguments are very strong, it seems to me, that there is simply no place there for sui generis arguments. I hold the same complaint against Jerry Fodor, by the way, who asserts certain arguments one might see as sui generis with regard to the burdens he places on “concepts” before he engages his computational engine to see where it goes. This just plain misses the point about Turing, and a fortiori about Hilbert and the Entscheidungsproblem.

    The problem with answers to date is that nobody seems able to bridge from that level of generality to specific issues, e.g., brain function. Searle is of course the poster boy for arguing that the general case is not conclusive (at least it was not for him). And Searle manages to make some valid points about what a general computationalist theory cannot claim or, more positively, about what it must be. And so might other sui generis theories, even if ultimately they must be rewritten back down in terms of basic Turing computation.

    Which is not to say that Turing (or anyone, including me) has told the story completely, if even someone like Fodor doesn’t see it working. I hope to get something written and submitted along these lines one of these days … For the moment I just want to “represent” here that you might want to consider this as a possibility, and that someone (me), for better or worse, really does hold this position.
