In a previous thread, Ken Aizawa suggested that I’m insufficiently pluralistic about computation in cognitive science; to substantiate his criticism, he points to his forthcoming article “Computation in Cognitive Systems: It’s Not All About Turing-Equivalent Computation” (available on his website).

Having read Ken’s nice paper, I only have time for a few quick comments.

1. Ken correctly points out that there are several notions of computation. (I make the same point in a paper that he refers to.)

2. Ken correctly points out that many people, including myself, think there is something special and theoretically deep about what he calls Turing-equivalent computation, by which he seems to mean the kind of computation that can be performed by Turing machines (computation of Turing-computable functions). They’re right: this is in fact the core theoretical notion of computation, with lots of deep results about it.

3. Ken correctly points out that the notion of Turing-computable functions can be generalized to study functions of natural numbers (or, equivalently, of strings of letters from a finite alphabet) that are not computable by Turing machines. This enterprise was started by Turing himself and is a large branch of computability theory. (Anyone who takes a nontrivial course in computability theory knows this.) But contrary to what Ken seems to suggest, the study of functions that are uncomputable by Turing machines is not based on a different notion of computation from that of Turing machines; it’s the very same notion. In fact, the whole subject matter is defined in terms of functions that are like those computable by Turing machines but cannot be computed by Turing machines.

4. Ken persuasively argues that Turing machines and the related notion of computability probably played only a minor role in McCulloch’s thinking at the time he wrote his 1943 paper with Pitts. But Ken seems to underestimate the theoretical significance of computation-theoretic results in characterizing the power of McCulloch-Pitts nets and other neural networks. (The latter, obviously, is not discussed in the 1943 paper.) Ken also seems to underestimate the important role that the connection between McCulloch-Pitts nets and Turing-computability played in the history of cybernetics and cognitive science. For the beginning of an account of that history, based on extensive archival research, see Chapters 5 and 6 of my Ph.D. dissertation.

5. Ken asserts that the notion of “computed vs. uncomputed cortical maps” deployed by some neuroscientists “is not a Turing-equivalent form of computation” (p. 17). But I didn’t notice anything in Ken’s paper that determines what relationship there is or isn’t between the notion of computation deployed in this area of neuroscience and Turing-computability, so I don’t know why Ken is so confident in his assertion.

6. Ken also points out that cognitive scientists often talk about computation as symbol manipulation or digital symbol manipulation, without mentioning the kind of “finitude constraints” that are important to Turing-computability. This is true, but it doesn’t mean that the finitude constraints are not implicitly assumed to be in place (except by people like Jack Copeland); after all, the brain has only a finite number of neurons, etc.

7. Ken’s pluralism seems to be based on something like the following argument: if scientist A uses “computation” in pursuit of goal X and scientist B uses “computation” in pursuit of goal Y, and X is different from Y, then scientists A and B use two different notions of computation. This is a fallacy. Maybe there are two different notions of computation; maybe there aren’t. It takes a lot more than this to show that two notions of computation are the same or different. More generally, it takes a lot of theoretical work to compare and contrast different notions of computation and see how they relate to one another. That’s why, contrary to what Ken suggests, it’s very helpful to have an umbrella notion of computation, of which other notions (including all those mentioned by Ken) are species.

8. In conclusion, reading Ken’s paper convinced me that I have just

I didn’t take Ken to be claiming that if A uses “computation” in pursuit of goal X and B uses “computation” in pursuit of goal Y (and X and Y are different), then A and B are using two different notions of computation. Rather, I took his point to be that if the development of A’s use of “computation” is largely independent of the development of B’s use of “computation,” then there is no reason to think that there will be–or need be–anything scientifically important that is shared by the two uses.

On this point, I take it that there is some disagreement between Gualtiero and Ken about the history of the development of these uses of “computation.”

But one could, of course, acknowledge that the developmental histories were largely independent of each other and still argue that there is something important in common between the two uses of “computation.” If, on the surface, the two uses seem rather different, and if there are good reasons for thinking that their respective developments were largely independent from each other, then it may be plausible to suppose that nothing common and important will be found. I don’t see Gualtiero as denying this, but rather as saying that, in point of fact, there is something common and important.

What do you think?

Martin, thanks for your insightful comment.

On one hand, there is definitely something in common between the various uses of “computation” in cognitive science. Otherwise, they wouldn’t use the same word. (Well, ok, people can use the same word for different things, but hopefully no one thinks that’s the case here; in any case, there is no evidence of that and plenty of contrary evidence.) So it would be helpful to know what these uses have in common, and I don’t know why Ken or anyone else would disagree with that.

On the other hand, there are substantive debates about whether cognition is some kind of computation and what kind of computation it is. Are we just supposed to resolve these debates by saying that everyone can use “computation” however they want? I don’t think so. So we need a principled and theory-neutral framework within which different claims about computation can be compared and contrasted to figure out who’s right about what. That’s my project, and I don’t see that Ken has found any flaw in what I’m doing.

Gualtiero,

Thanks for your comments. I have in the back of my mind the idea of writing more on this computational pluralism, which will likely discuss some of your views, as well as work by B.C. Smith and by Gerard O’Brien and Jon Opie.

But, here is the kind of thing that begs for clarification.

“2. Ken correctly points out that many people including myself think there is something special and theoretically deep about what he calls Turing-equivalent computation, by which he seems to mean the kind of computation that can be performed by Turing machines (computation of Turing-computable functions). They’re right, because in fact this is the core theoretical notion of computation, with lots of deep results about it.”

We agree that the class of Turing-computable functions is a set about which there are deep theoretical results. But, I think that claiming that the notion is special or that it is “the core notion” is misleading. These descriptions are far from precise, but they suggest to me a unique significance and pride of place for one notion of computation, over others. It is probably an accident of history that the Turing-equivalent forms of computation were developed before, say, real-valued computation.

3. Ken correctly points out that the notion of Turing-computable functions can be generalized to study functions of natural numbers (or equivalently of strings of letters from a finite alphabet) that are not computable by Turing machines. This enterprise was started by Turing himself and is a large branch of computability theory. (Anyone who takes a nontrivial course in computability theory knows this.) But contrary to what Ken seems to suggest, the study of functions that are uncomputable by Turing machines is not based on a different notion of computation from that of Turing machines–it’s the very same notion; in fact, the whole subject matter is defined in terms of functions that are like those computable by Turing machines but cannot be computed by Turing machines.

I think the difference here reflects a difference in what Gualtiero and I see as “Turing-equivalent computation”. If you see it as digital symbol manipulation, as I think Gualtiero does, then Turing machines and Turing’s o-machines will seem to be exactly the same. If, however, you see “Turing-equivalent computation” as bound up with certain “finitude constraints” or “resource constraints”, then insofar as Turing machines and Turing’s o-machines have different finitude or resource constraints, you will see them as different. The o-machines have that oracle resource that regular Turing machines do not.
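The point about the oracle as an extra resource can be made concrete with a toy sketch in Python. This is an illustration of my own, not Turing’s formalism: the two interpreters below are step-for-step identical, except that the second has one additional primitive, a call to an oracle.

```python
def run_tm(program, tape):
    """Ordinary machine: each step depends only on the current state and
    the scanned symbol; `program` maps (state, symbol) to
    (new_state, symbol_to_write, head_move)."""
    state, head = "start", 0
    while state != "halt":
        symbol = tape.get(head, "_")
        state, write, move = program[(state, symbol)]
        tape[head] = write
        head += move
    return tape

def run_o_machine(program, tape, oracle):
    """o-machine: the same step mechanics, plus one extra resource --
    in the state "query", the machine receives an answer from `oracle`,
    a function it does not itself compute."""
    state, head = "start", 0
    while state != "halt":
        if state == "query":            # the extra primitive
            state = "yes" if oracle(tape) else "no"
            continue
        symbol = tape.get(head, "_")
        state, write, move = program[(state, symbol)]
        tape[head] = write
        head += move
    return tape
```

Described purely as digital symbol manipulation, the two devices look alike; what distinguishes them is the oracle resource, which is why, on the resource-centered reading, one sees them as different kinds of machine.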

4. … Ken seems to underestimate the theoretical significance of computation-theoretic results in characterizing the power of McCulloch-Pitts nets and other neural networks. (The latter obviously is not discussed in the 1943 paper.) Ken also seems to underestimate the important role that the connection between McCulloch-Pitts nets and Turing-computability played in the history of cybernetics and cognitive science. For the beginning of an account of that history, based on extensive archival research, see Chapters 5 and 6 of my Ph.D. dissertation.

I don’t have any estimates of these things. My discussion in my paper was pretty narrowly limited to what McCulloch wrote in his papers and lectures from the period of about 1943–1953.

See Wittgenstein on “games”. But I’m not really going for as strong a position as W’s. Instead, the idea is that you want an account of computation that is theoretically interesting. Going for a kind of “least common denominator” approach threatens to distort what those working on the specific theories had in mind. As a case in point, I think you get Turing’s work wrong by viewing it through a weak-tea theory of computation as digital symbol manipulation.

There are substantive debates about whether cognition is some kind of computation and what kind of computation it is. But how does insisting on a special, core notion of computation help here? There could be eight completely unrelated conceptions of computation, and what one would apparently have to do is wade through each of them to see whether cognition is computation according to any one of those eight conceptions.

And, I’m not saying that people should be able to use “computation” however they want. I’m saying that people actually use “computation” in a variety of ways, other than as just a term to describe the body of theory Turing came up with, and they use it for a variety of purposes other than those Turing adopted.

Re: paragraph 1. I think that’s pretty much spot on.

Re: paragraph 2. I think there are some differences between G and me.

Re: paragraph 3. I think that Gualtiero is saying that there is something common and important. I think he is wrong here, in large part because of the neuroscience case. But, maybe more importantly, I think that Gualtiero’s approach of trying to find a “least common denominator” for what is computation obscures the important differences among the species. The species are where the scientific action is.

“But, how does insisting on a special, core notion of computation help here? There could be eight completely unrelated conceptions of computation and what one would apparently have to do is wade through each of them to see whether cognition is computation according to any one of those eight conceptions.”

Eight? Why not eight million? … That’s rather why it’s worthwhile to try for one. The eight million is, after all, somewhat unlikely to be useful.

I rather think that, to make any progress, a single definition of computation is required. And lacking. If anyone here, or BCS, or me, can come up with one, I believe that would be something extremely useful. Note that any single definition does not have to be applied directly, but mutatis mutandis, with potentially a wide spectrum of intertranslatable versions.

On further reflection, this, I think, marks a big difference between how I proceed and how Gualtiero and O’Brien and Opie proceed. I draw attention to the goals X and Y, where Gualtiero and O’Brien and Opie (it seems) do not. It seems to me that attention to these goals is an important part of understanding the history and philosophy of computation.

So, Martin’s explication is really helpful here. I can see that this reference to goals may not have been as explicit in my paper as it could have been.

Ohhhhh. Burn that cranial-chauvinist good!

If you want to adjudicate whether cognition is computation, you need a minimal notion of computation that every computationalist is committed to and then investigate whether cognition is like that. So you need a general notion of computation.

If you have different notions of computation and you want to adjudicate which one is right for explaining cognition (or one aspect of cognition), you need to establish what is similar and different between the several notions of computation and then see which way cognition is such that it satisfies one notion of computation but not the others. Perhaps here you can get away without a general notion of computation (just do it in terms of family resemblances), but you still need a lot more than just a list of notions of computation based on the goals pursued by each researcher. (Not to mention the question, at what fineness of grain are you individuating goals?) You need an account of each notion phrased in a common vocabulary such that you can tell what each notion is committed to sufficiently precisely that you can then decide which way cognition is.

It’s most definitely not an accident of history. If you tell me what you mean by “real-valued computation,” maybe I can tell you why.

Whether hypercomputers (i.e., machines that can compute functions that are not computable by Turing machines) are physically possible, and what finitude constraints they satisfy, are open empirical questions. But the type of function being computed is the same: functions from strings of symbols to strings of symbols. There are similarities and differences, but the notion of computation involved is very similar. In fact, all these functions are studied within the same mathematical theory.

Some very cool stuff by Lenore Blum, Michael Shub, and Steve Smale:

On a Theory of Computation and Complexity over the Real Numbers: NP-Completeness, Recursive Functions and Universal Machines

and

Complexity and Real Computation.

But, here’s what I think happens. This goal and Turing’s goal are different, right? During the 1930s, Turing wasn’t trying to adjudicate whether cognition is computation. He was working on questions of decidability in logic. Now, by going for a minimal notion of computation for the particular goal you have, you come up with something different from what Turing did. Turing implicitly cared a lot about various “finitude constraints”. Such constraints are at best muted in contemporary discussions of cognition as computation. (Does Fodor, for example, ever mention such things?) The finitude constraints were essential to the computation-theoretic results. Change them and you get different computability results. There is a clear sense in which Turing was interested in digital symbol manipulation, but to leave it at that is misleading. He wasn’t interested in just any old digital symbol manipulation; he was interested in it alongside certain other resource constraints.

Yes, articulating the various implications of various sorts of resource constraints is an important theoretical matter. It is something that has to be figured out. (Maybe a priori if computation theory is a bit of logic, maybe a posteriori if computation theory is an empirical science.)

Yes, you can see similarities between o-machines, Turing machines, finite state automata, and push-down automata by thinking of them in terms of symbol manipulation. But, what divides them into what one might call “natural kinds” in computation theory and the theory of formal languages is the differences in “resources.”
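The division-by-resources point can be sketched in Python with a toy example of my own: both recognizers below “manipulate symbols” in exactly the same vocabulary, but only the one with a stack recognizes the language { aⁿbⁿ : n ≥ 0 }.

```python
def dfa_accepts_ab(s):
    """Finite-state recognizer: no memory beyond the current state, so it
    can check the shape a...ab...b but not that the counts match.
    It accepts the strictly larger language a*b*."""
    state = "A"
    for c in s:
        if state == "A" and c == "a":
            state = "A"
        elif state in ("A", "B") and c == "b":
            state = "B"
        else:
            return False
    return True

def pda_accepts_anbn(s):
    """Pushdown recognizer: one extra resource, an unbounded stack, lets
    it match the counts and recognize exactly { a^n b^n : n >= 0 }."""
    stack = []
    seen_b = False
    for c in s:
        if c == "a" and not seen_b:
            stack.append("a")   # push one marker per a
        elif c == "b" and stack:
            seen_b = True
            stack.pop()         # pop one marker per b
        else:
            return False
    return not stack
```

Described only as symbol manipulation, the two look alike; adding the pushdown stack is what moves the device into a different class, which is the kind of resource-based division at issue.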

By contrast, attention only to symbol manipulation or information processing in cognitive science does get at a major issue in the history of 20th-century studies of the mind. It’s what divides (some) behaviorists and (some) Gibsonians from cognitivists like Fodor and Pylyshyn.

So, again, attention to the goals of cognitive science seems to me to lead you to project back onto Turing a misleading, weak-tea conception of computation.

Yes, but the “finitude” constraints are not arbitrary. They are those of a computing machine whose instructions you can write down. The results that follow turn out to be very general; they apply to any computing machine that has ever been built. Furthermore, Turing himself figured you could build a machine as intelligent as a human being within those constraints. Why? Because presumably the human mind operates within those very same “finitude” constraints. Every serious theorist of cognition grapples with these issues, including Fodor. That’s why he insists the cognitive architecture must be classical: because it must be able to produce an indefinite number of thoughts/sentences using finite resources, and Fodor doesn’t see any other way of doing so than with a classical computational architecture.
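The productivity point (finitely many rules, unboundedly many distinct sentences) can be illustrated with a toy recursive rule in Python; the grammar is a stand-in of my own, not Fodor’s example.

```python
def sentence(depth):
    """Two finite rules -- one base sentence plus one recursive embedding
    rule -- generate arbitrarily many distinct sentences."""
    if depth == 0:
        return "it is raining"
    return "John thinks that " + sentence(depth - 1)
```

Finite resources (here, two rules) suffice for an unbounded output set; the question Fodor presses is whether anything but a classical architecture can implement this.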

Ok, so: the Blum et al program is an extension of classical computability theory, developed by defining primitive computational operations (addition, subtraction, etc.) over real numbers. Blum et al take computability-theory formalisms and extend them in certain ways. Blum et al themselves cite a remark of von Neumann as motivation for their program. Von Neumann, of course, was one of the early designers of digital computers and was directly influenced by Turing’s work. But von Neumann thought that classical computability theory was limited for certain theoretical purposes, and Blum et al decided to try to overcome those limits by developing a kind of computability theory based on real numbers. Regardless of what you think of the value of Blum et al’s approach, it’s important to note that you can’t build concrete machines that do what Blum et al’s “machines” do. So our computers are still all governed by the principles of classical computability theory. A lot more could be said here, but the bottom line is that it’s definitely not an accident of history that Turing’s work came before Blum et al’s.
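To make the contrast concrete, here is a sketch of my own, using Python’s exact rationals as a stand-in for the idealized reals: in the Blum et al model, each arithmetic step below would count as a single primitive operation on genuine real numbers, which no buildable machine performs exactly.

```python
from fractions import Fraction

def bss_style_newton_sqrt(c, steps=8):
    """Newton iteration written in BSS style: each loop body is meant as
    one primitive machine operation over idealized real numbers. Run here
    over Python's exact rationals, it only ever approximates sqrt(c)."""
    x = Fraction(c)  # assumes c > 0
    for _ in range(steps):
        x = (x + Fraction(c) / x) / 2   # exact arithmetic, one "step"
    return x
```

Over the rationals (or over any concrete hardware) the result squares to something close to, but never exactly, c; the BSS model idealizes away exactly this gap by taking exact real-number operations as primitives, which is why concrete computers remain within classical computability theory.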

Ken, leaving aside your last statement (which I don’t fully understand), I think we are in complete agreement here. Resource constraints are super-important (and sometimes underappreciated) to the theory of computation as well as the theory of cognition.