Theoretical Computer Science has a broader import than its name suggests. To appreciate it, remember what Turing proved: that a certain hypothetical machine would be able to compute every recursively definable function in a finite amount of time. If we supplement that theorem with a plausible assumption about physics, we arrive at a remarkable result: One which is widely known, although not so widely celebrated as it should be.
Whatever the laws of physics are, it seems plausible that the function taking us from how things are at one time to how they are at the next will be describable in mathematically well-behaved language. That leads to a version of the Church–Turing Thesis (the version sometimes called the Physical Church–Turing Thesis), saying that the behaviour of any system whatsoever can be modelled by one of Turing’s machines, to an arbitrary degree of precision, provided only that the system to be modelled is governed by the laws of physics.
To believe this thesis — which we should — is to believe that the study of Turing Machines provides a route by which to establish some quite general limits on physical explicability. The thesis entails that, if something cannot possibly be modelled by a Turing Machine in a finite amount of time, then there is no physically explicable system that could possibly do that thing. When Turing discovered that there are mathematical questions that it is logically impossible for one of his Machines to solve, he thereby discovered that such questions could not possibly be answered by any physically well-behaved system.
The above results are familiar. What may be slightly less familiar is that Church and Turing’s claim about logical impossibility can be extended to a claim about certain sources of physical impossibility. This extension is known as the Cobham–Edmonds Thesis.
To see the force of the Cobham–Edmonds Thesis (the foundations of which are only slightly more contentious than those of the Church–Turing Thesis), notice that there are two ways in which one might show that a Turing Machine will fail to do something: One can show that it is logically impossible for a Turing Machine to do it; or one can show that, although the thing is not logically impossible for a Turing Machine to do, it is nonetheless physically impossible, since the universe does not provide sufficient time or space in which to get the thing done.
Turing himself gave an example of the first sort, by showing that a contradiction follows from the supposition that a Turing Machine could solve the halting problem. For an example of the second sort, consider the task of printing out every possible way of colouring in a twenty-by-twenty grid, using one colour per square, with black and white as one’s only colours.
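Turing’s argument can be sketched in a few lines of code. The sketch below is purely illustrative: `halts` is a hypothetical oracle that (impossibly) decides whether a program halts, and the wrapper shows why no such oracle can exist.

```python
def diagonalize(halts):
    """Build a program that defeats a purported halting oracle.

    `halts(program, arg)` is a hypothetical function claimed to
    return True exactly when program(arg) would eventually halt.
    """
    def contrary(program):
        # Do the opposite of whatever the oracle predicts about
        # running `program` on its own source.
        if halts(program, program):
            while True:      # oracle says "halts", so loop forever
                pass
        return None          # oracle says "loops", so halt at once

    # Asking halts(contrary, contrary) now forces a contradiction:
    # whichever answer the oracle gives, contrary(contrary) does
    # the opposite, so the oracle must be wrong somewhere.
    return contrary
```

No genuine `halts` can ever be supplied, of course; that is the point of the argument.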
Each square in the grid doubles the number of possible colourings, and so there are 2^400 ways in which a twenty-by-twenty grid can be coloured. There is no contradiction entailed by the supposition that some machine prints out all of these grids. We can nonetheless be certain that it is physically impossible for any actual machine to do it, since 2^400 is too large a number. The crucial point here is that it is far too large: It is many times greater than the number of nanoseconds since the beginning of the universe.
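The arithmetic here is easy to check for oneself. A few lines of Python (using its arbitrary-precision integers, and assuming a rough figure of 13.8 billion years for the age of the universe) confirm the comparison:

```python
# One factor of two per square: 2^400 colourings of a 20x20 grid.
colourings = 2 ** (20 * 20)

# Rough nanosecond age of the universe (~13.8 billion years, assumed).
seconds_per_year = 365.25 * 24 * 3600
ns_since_big_bang = int(13.8e9 * seconds_per_year * 1e9)

print(len(str(colourings)))             # 121 digits
print(len(str(ns_since_big_bang)))      # 27 digits
print(colourings > ns_since_big_bang)   # True
```

The count of colourings is not merely larger than the nanosecond count; it is larger by nearly a hundred orders of magnitude.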
The Cobham–Edmonds Thesis tells us that, when considerations of this latter sort make it impossible for a Turing Machine to model some process, no other physical system could complete that process. If the completion of the process by a Turing Machine requires an amount of time or space that grows exponentially, and so outstrips the time or space available in our universe, then the amount of time or space required by any other physically well-behaved system will grow almost as fast, and so will become just as impossible almost as quickly.
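To see why a merely polynomial simulation overhead cannot rescue an exponential-time task, one can compute where the exponential curve crosses a fixed time budget. The budget below, a rough nanosecond count since the Big Bang, is an assumption for illustration:

```python
BUDGET = 4 * 10 ** 26  # rough nanoseconds since the Big Bang (assumed)

def first_infeasible(cost):
    """Smallest input size n at which cost(n) exceeds the budget."""
    n = 0
    while cost(n) <= BUDGET:
        n += 1
    return n

# An exponential cost exhausts the budget at tiny input sizes...
print(first_infeasible(lambda n: 2 ** n))         # 89

# ...and squaring every step's cost, a crude stand-in for a polynomial
# simulation overhead, only halves the feasible input size:
print(first_infeasible(lambda n: (2 ** n) ** 2))  # 45
```

However the overhead of translating between machines is counted, so long as it is polynomial, the threshold of infeasibility moves only slightly.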
What these theses show is that computer science can prove things to be inexplicable: If getting a Turing Machine to do something requires the truth of a contradiction, or if it requires vastly too much time, then the doing of that thing by any physical means would be inexplicable. This is true even if those physical means incorporate sources of randomness. The only possible exceptions are systems that make use of effects allowed by quantum mechanics. Setting these quantum mechanical cases aside, we can say that, if something cannot be done in a feasible amount of time by a Turing Machine, then it would be miraculous if we found that thing being done.
Various complications are incurred when we apply the above considerations to the explanation of intelligence (especially because human intelligence is an imperfectly achieved thing), but the remarks above already reveal something that philosophers of mind should note. This is that the theory of computability has the potential to tell us something about the explicability of intelligence. It does not tell us only about the viability of one particular explanatory approach.
If we found that we had been conceiving of intelligence in such a way that intelligence could not be modelled by a Turing Machine, our response should not be to conclude that some alternative must be found to a ‘Classically Computational Theory of the Mind’. To think only that would be to underestimate the scope of the theory of computability. We should instead conclude that, on the conception in question, intelligence would be absolutely inexplicable. This need to avoid making intelligence inexplicable places constraints on our conception of what intelligence is.
In the next post, I’ll try to indicate where those constraints come from, before saying more about how they can be negotiated.
(Readers should note that I’ll be getting on a long flight this afternoon — and will be en route when this post appears — so might not be speedy in replying to comments. Apologies for this.)