As announced a few weeks ago, I wrote a paper in which I survey the state of the art on computationalism. Thanks a lot to those who suggested topics I should cover. I touched on most of the suggested topics.

The paper is due at *Philosophy Compass* by the end of the month. If anyone is interested in commenting on it, here it is. Needless to say, your comments would be greatly appreciated.

Update [9/17/08]: And here is the abstract:

Computationalism has been the mainstream view of cognition for decades. There are periodic reports of its demise, but they are greatly exaggerated. This essay surveys some recent literature on computationalism. It concludes that computationalism is a family of theories about the mechanisms of cognition. The main relevant evidence for testing it comes from neuroscience, though psychology and AI are relevant too. Computationalism comes in many versions, which continue to guide competing research programs in philosophy of mind as well as psychology and neuroscience. Although our understanding of computationalism has deepened in recent years, much work in this area remains to be done.

I like it; it does a nice review of the problems. I’d probably say that the mapping account (especially Scheutz’s) can be incorporated into the mechanistic account of computation to make explicit how components of the system play roles in implementing the computation (as specified by a formal algorithm, machine, etc.). These two accounts don’t have to be opposed to each other, but this is just a side issue.

What about the conditional: if humans are super-Turing computers, what follows? Just saying there isn’t evidence that we are, so we only need to worry about Turing computation, seems to avoid the (interesting?) conceptual issues that a philosopher would be helpful with.

You said:

But at any rate, there is no hard evidence that human beings can hypercompute: no one has ever shown how to solve a genuinely Turing-uncomputable problem, such as the halting problem, by using human cognitive faculties.

I can solve equations defined over real numbers. Turing machines are not defined over reals. Voila! Did I just commit some basic fallacy?

You also say:

In a sense, this is still a version of computationalism—it just says that the computations performed by minds are more powerful than those of Turing machines.

This is interesting, but it would suggest that a whole class of symbolic models may be wrong-headed, and that perhaps analog models (such as ANNs) are better at capturing the operations of minds. To what extent do the traditional problems (e.g., symbol grounding, the frame problem) exist in non-Turing computational modelling, and if they do exist, can alternative (e.g., ANN) frameworks better solve them?

In practice, we model continuous systems with discrete approximations anyway, so ultimately it isn’t all that big a deal for modellers, but in terms of thinking about how the mind works, it makes a difference.

Eric,

Good questions; here are a couple of considerations.

First, any sense in which you can solve equations defined over real numbers is a sense in which a TM can solve the same equations. Mathematica tells me that the integral of cos(x) from 0 to Pi/4 is Sqrt(2)/2. In a sense, that function is defined over the real numbers, but the algorithm for computing such a function is TM-computable. So that doesn’t count as a genuinely Turing-uncomputable problem. Gualtiero’s point still holds.
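To make the point concrete, here is a minimal sketch of the same computation using Python’s SymPy rather than the Mathematica session mentioned above. The function is defined over the reals, but the symbolic algorithm that computes the answer is a finite, discrete procedure of exactly the kind a Turing machine can carry out:

```python
# A Turing-computable symbolic algorithm for a "real-valued" problem:
# the definite integral of cos(x) from 0 to pi/4.
import sympy as sp

x = sp.Symbol('x')
result = sp.integrate(sp.cos(x), (x, 0, sp.pi / 4))

print(result)  # sqrt(2)/2
# The answer is an exact symbolic expression, produced by finite,
# discrete manipulation of symbols -- nothing here exceeds what a
# Turing machine can do, even though the function ranges over the reals.
```

The point is that "solving an equation over the reals" in this sense means executing a discrete symbolic procedure, which is squarely within Turing-computable territory.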

As for your second question, it seems that all of these problems (symbol grounding and the halting problem) exist for continuous computations (depending on how computing is defined for continuous processes), and for non-Turing computations generally. I don’t know to what extent theoretical computer scientists have worked on this, but I know that there is something of a hierarchy: if I’m not mistaken, Turing himself showed that oracle machines (which can compute the answers to problems that regular TMs cannot) have their own version of the halting problem, but at a “higher level”.

However, there is a question about what to do with processes that don’t seem to be solved algorithmically at all. The best example I know of is protein folding in molecular biology: it has been shown that folding a one-dimensional protein chain into a 3D structure is NP-hard, yet proteins fold very rapidly. This might suggest that the actual protein-folding process is non-algorithmic, and therefore that modeling the process algorithmically tells us nothing about how it’s actually done in biological systems. If there are similar examples in the mind/brain, then one would need to specify in what sense such processes still count as computations (I think this is what you’re getting at in your last paragraph).
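The diagonal argument behind the halting problem can be sketched in a few lines. This is a hypothetical sketch, not working code: the decider `halts` is an assumed (impossible) function, named here only for illustration. Relativizing the same argument to oracle machines is what yields their own “higher-level” halting problem:

```python
# Sketch of the halting-problem diagonalization. `halts` is hypothetical:
# no total, correct implementation can exist -- that is the point.

def halts(f, arg):
    """Hypothetical decider: would return True iff f(arg) halts."""
    raise NotImplementedError("no such decider exists")

def diagonal(f):
    # Do the opposite of whatever `halts` predicts about f run on itself.
    if halts(f, f):
        while True:   # halts says "f(f) halts", so loop forever
            pass
    else:
        return        # halts says "f(f) loops", so halt immediately

# Contradiction: consider diagonal(diagonal). If halts(diagonal, diagonal)
# returned True, diagonal(diagonal) would loop forever; if False, it would
# halt. Either way the decider is wrong, so no such decider exists. The
# same construction, run on oracle machines, gives the oracle version of
# the halting problem "one level up" the hierarchy.
```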

Cory,

Good point that my proofs about real-valued functions are things a TM could reproduce. And since that is sort of built into the notion of proof, it is hard to see a way around it, at least for things I can prove.

As for the two problems I mentioned (symbol grounding and the frame problem):

I agree that there is some kind of “symbol” grounding problem within an ANN framework, though the term may be anachronistic. That is, unless someone wants to be an eliminativist about representational content. But that would be kooky.

The frame problem seems much more amenable to solution within ANNs than within symbolic models, though I am just parroting Churchland here. Fodor acts like it is ridiculously hard in traditional models, while for some time Churchland has been saying how great ANNs are at abductive inference.

My last paragraph was more vanilla than suggesting that some things may be non-algorithmic. I was making another, more Churchlandian point: whether we look at mental states as following propositional versus real-valued vectorial dynamics makes a big difference in how we look at the mind. So I was working within a broadly computational framework there. The two frameworks conjure different pictures of the mind (though they don’t have to).

But getting back to Cory’s last point: assume that, when falling from a cliff, a baseball’s height is a real value. Does this have implications for whether the process of the ball’s falling is ‘algorithmic’?

Guys, thanks for the comments.

Eric, here are some points in addition to what Cory said.

I draw a distinction between being a hypercomputational system and being a non-computational system. A hypercomputational system is a computing system more powerful than Turing machines. A non-computational system is a system whose activity is not correctly described, strictly speaking, as any kind of computation. I have no reason to believe that Siegelmann’s hypercomputational networks have anything to do with the brain. But it doesn’t follow from this that the brain computes. In fact, if you are suggesting that our best models of neural activity have little in common with computing systems and there is no obvious or useful way to describe such systems as computational (in a strict sense), then I agree.

I don’t think Churchland’s point about the frame problem being easily solvable with neural networks is sound. The frame problem is only a well-defined problem when we have a classical computational system. When we have a neural network (of the kind that Churchland considers), there simply is no frame problem to solve. But that doesn’t mean that we have no problems to solve. First, it remains to be seen whether and how such neural networks can do the same tasks as classical computational systems. Second, there is at least one problem about neural networks whose difficulty compares to that of the frame problem: the scaling problem. That is, how do you build and train a network of realistic size using the principles that work for small networks? No one seems to know.

The important question in my mind is:

<i>If humans (or human brains) are super-Turing computers, what follows?</i>

Certainly not nothing. It would be nice to see a positive answer to the question, perhaps in addition to problems with others’ perspectives.