Is the mind a Turing machine? How could we tell?

I have just finished writing a draft of a paper on Turing machines being equivalent (or not) to human minds. This is an expanded (but still quite brief in many respects) version of my talk from last year’s Studia Logica conference on Church’s Thesis. I defend the mechanist account of the implementation of computation, and show that it can be used to make sense of Fodor’s 1968 distinction between the weak and strong equivalence of computer simulations to their explanatory targets. I think some of the cross-talk in discussions concerning multiple realization is due to conflating these two kinds of equivalence.

The paper is available here, and all comments are welcome. It is one of the papers that I started writing when working on my book Explaining the Computational Mind (the draft is available here).

1. Joshua Stern

Marcin, I like your draft paper. Regarding “whether the mind is a Turing Machine”, there is the question whether my PC “is a Turing Machine”, and if not, why not, and why it should matter.

Let’s say the issue is one of finiteness. What if we simply choose a large finite number and talk about that as a limit? I just read William Goldbloom Bloch’s book, “The Unimaginable Mathematics of Borges’ Library of Babel”. The Library is of finite size, on the order of 10^10^20 books, a “finite” number far too large to fit into the known universe. The consideration tends to deflate arguments between finite and infinite, though to just what extent is debatable.
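For a sense of how unimaginably large that “finite” number is, here is a back-of-the-envelope sketch in Python. It assumes the standard Borges book format that Bloch works with (25 orthographic symbols; 410 pages × 40 lines × 80 characters per book) and counts the decimal digits of the resulting number of distinct books, since the number itself is far too large to write out:

```python
import math

# Bloch's assumed book format for Borges' Library of Babel:
# 25 orthographic symbols, 410 pages x 40 lines x 80 characters per book.
SYMBOLS = 25
CHARS_PER_BOOK = 410 * 40 * 80  # 1,312,000 characters

# Number of distinct books = 25 ** 1_312_000. Rather than construct it,
# count its decimal digits: floor(log10(n)) + 1.
digits = math.floor(CHARS_PER_BOOK * math.log10(SYMBOLS)) + 1
print(f"The Library holds 25^{CHARS_PER_BOOK:,} distinct books,")
print(f"a number with {digits:,} decimal digits.")

# For scale: the observable universe contains roughly 10^80 atoms,
# a number with a mere 81 digits.
```

The point of the comparison stands however one fixes the exact format: any such bound is finite, yet vastly exceeds anything physically realizable.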

This is yet another old chestnut of an argument about finite and infinite computations, but I wonder if it would fit somewhere into your arguments.

2. Joshua, yes, the finiteness/infinity distinction is misleading. It is quite obvious that the tractability of mental computation narrows down the set of plausible computations far more than finiteness does. But then again, there are people who sincerely believe in hypercomputation that relies on some form of actual infinity. If it is real, then our traditional notions of tractability are wrong.

Thanks for the comment! I don’t think the distinction is so important myself, so I will need to add something to that effect.

3. Joshua Stern

But here’s the thing: if the distinction is not important, then are you (or I) free to say, well then, throw out hypercomputation? And if we’re talking finite, then let’s talk about just how finite we can go, and how one decides how finite is too finite.

That’s a rhetorical question, actually; it’s a path I *do* choose to take, for my part!

4. Well, infinity is just one way to achieve hypercomputation (randomness being the other important option). You might also go the Bringsjord way – namely, say that people are able to decide which story is interesting and which isn’t, while there isn’t any algorithm for doing this (this is, imho, a fallacy ad ignorantiam, but it does not rely on infinity). Or you could say that accelerating TMs (the ones that do compute uncomputable functions; see the recent paper by Copeland and Shagrir in Minds and Machines, 2010) are physically possible in Malament-Hogarth spacetimes. All such options make hypercomputation not so easy to discard as absurd, though they cannot establish that the mind is hypercomputational either.

5. Joshua Stern

It’s not that I seek to discard hypercomputation, so much as to see where we can get without it.

When the problem is explaining the content of “the cat is on the mat”, I’m willing to have a go at it with limited resources!

6. Right. I also think Occam’s Razor is in the air for hypercomputational explanations of such mundane cognitive processes.

There aren’t really hypercomputational explanations (at least not intended as hypercomputational!) in cogsci. Some traditional explanations are rather too expensive computationally (as van Rooij shows repeatedly), but this was not planned as a feature. It is a bug.