Theoretical Computer Science has a broader import than its name suggests. To appreciate it, remember what Turing proved: that a certain hypothetical machine would be able to compute every recursively definable function in a finite amount of time. If we supplement that theorem with a plausible assumption about physics, we arrive at a remarkable result: One which is widely known, although not so widely celebrated as it should be.

Whatever the laws of physics are, it seems plausible that the function taking us from how things are at one time to how they are the next will be describable in mathematically well-behaved language. That leads to a version of *the Church-Turing Thesis* (the version sometimes called the *Physical* Church-Turing Thesis), which says that the behaviour of any system whatsoever can be modelled by one of Turing’s machines (to an arbitrary degree of precision), provided only that the system to be modelled is governed by the laws of physics.

To believe this thesis — which we should — is to believe that the study of Turing Machines provides a route by which to establish some quite general limits on physical explicability. The thesis entails that, if something cannot possibly be modelled by a Turing Machine in a finite amount of time, then there is *no physically explicable system that could possibly do that thing*. When Turing discovered that there are mathematical questions that it is logically impossible for one of his Machines to solve, he thereby discovered that such questions could not possibly be answered by any physically well-behaved system.

The above results are familiar. What may be slightly less familiar is that Church and Turing’s claim about *logical* impossibility can be extended to a claim about certain sources of *physical* impossibility. This extension is known as the Cobham Edmonds thesis.

To see the force of the Cobham Edmonds thesis (the foundations of which are only slightly more contentious than those of the Church Turing Thesis) notice that there are two ways in which one might show that a Turing Machine will fail to do something: One can show that it is *logically* impossible for a Turing Machine to do it; or one can show that, although the thing isn’t logically impossible for a Turing Machine to do, it is nonetheless *physically* impossible, since the universe does not provide sufficient time or space in which to get the thing done.

Turing himself gave an example of the first sort, by showing that a contradiction follows from the supposition that a Turing Machine could answer the halting problem. For an example of the second sort, consider the task of printing out every possible way of colouring in a twenty by twenty grid, using one colour per square, with black and white as one’s only colours.

Each square in the grid doubles the number of possible colourings, and so there are 2^400 ways in which a twenty-by-twenty grid can be coloured. There is no contradiction entailed by the supposition that some machine prints out all of these grids. We can nonetheless be certain that it is physically impossible for any actual machine to do so, since 2^400 is too large a number. The crucial point here is that it is *far* too large: It is many times greater than the number of nanoseconds since the beginning of the universe.
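The arithmetic here can be checked in a few lines (a minimal sketch; the nanosecond figure assumes the standard estimate of a roughly 13.8-billion-year-old universe, which is not a figure from the post):

```python
from itertools import product

# A tiny version of the task: every black/white colouring of a 2x2 grid.
# Each square doubles the count, so a grid of k squares has 2^k colourings.
small_colourings = list(product(["black", "white"], repeat=4))
assert len(small_colourings) == 2 ** 4  # 16 colourings for 4 squares

# For the full twenty-by-twenty grid the count is 2^400, a 121-digit number.
colourings = 2 ** 400

# Nanoseconds since the beginning of the universe, taking the ~13.8-billion-
# year age estimate (an assumption made for this sketch).
seconds_per_year = 365.25 * 24 * 3600
age_in_ns = 13.8e9 * seconds_per_year * 1e9  # roughly 4.4e26

assert colourings > age_in_ns  # not even close: ~10^120 vs ~10^26
```

Even printing one grid per nanosecond since the Big Bang would leave essentially all of the 2^400 colourings unprinted.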

The Cobham Edmonds Thesis tells us that, when considerations of this latter sort make it impossible for a Turing Machine to model some process, no other physical system could complete that process. If the completion of the process by a Turing Machine requires an amount of time or space that grows exponentially, and so outstrips the time or space available in our universe, then the amount of time or space that would be required by any other physically well-behaved system will grow almost as fast, and so will get to be just as impossible almost as quickly.
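A back-of-envelope calculation shows why exponential growth defeats any constant-factor advantage (a sketch; the billion-fold speedup is an arbitrary illustrative assumption):

```python
import math

# Suppose a task takes 2^n basic steps. A rival machine a billion times
# faster only shifts the feasible input size n by log2(10^9), about 30.
speedup = 1e9
extra = math.log2(speedup)
print(round(extra))  # 30

# Concretely: if n = 100 is already hopeless for the slow machine, the
# fast one hits the same wall by around n = 130.
steps_equivalent = 2 ** 100 / speedup
assert steps_equivalent > 2 ** 65  # still an astronomical number of steps
```

This is the sense in which the wall "moves, barely": any fixed hardware advantage buys only an additive shift in problem size against an exponential cost curve.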

What these theses show is that *computer science can prove things to be inexplicable:* If getting a Turing Machine to do something requires the truth of a contradiction, or if it requires vastly too much time, then the doing of that thing by any physical means would be inexplicable. This is true, even if those physical means incorporate sources of randomness. The only possible exceptions are systems that make use of effects allowed by quantum mechanics. Setting these quantum mechanical cases aside we can say that, if something cannot be done in a feasible amount of time by a Turing Machine, then it would be miraculous if we found that thing being done.
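To see why adding a randomness source does not, by itself, take a system beyond what a Turing Machine can model, consider a toy deterministic generator (a sketch of my own; the linear congruential rule and its constants are illustrative choices, not anything from the post):

```python
# A linear congruential generator: a deterministic, Turing-computable rule
# whose output is nonetheless "random-looking" enough to stand in for a
# source of chance in a simulation. The constants are the classic
# Numerical Recipes ones; any standard choice would do.
def lcg(seed, n):
    state = seed
    values = []
    for _ in range(n):
        state = (1664525 * state + 1013904223) % 2 ** 32
        values.append(state / 2 ** 32)  # scale into [0, 1)
    return values

# Fully deterministic: the same seed always reproduces the same stream,
# which is exactly why a Turing Machine can run it.
assert lcg(42, 5) == lcg(42, 5)
```

Wherever randomness merely feeds some explicable behaviour, a deterministic stream of this kind can be substituted without changing what the system can compute.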

Various complications are incurred when we apply the above considerations to the explanation of intelligence (especially because human intelligence is an imperfectly achieved thing), but the remarks above already reveal something that philosophers of mind should note. This is that the theory of computability has the potential to tell us something about the explicability of intelligence. *It does not tell us only about the viability of one particular explanatory approach.*

If we found that we had been conceiving of intelligence in such a way that intelligence could not be modelled by a Turing Machine, our response should not be to conclude that some alternative must be found to a ‘Classically Computational Theory of the Mind’. To think only that would be to underestimate the scope of the theory of computability. We should instead conclude that, on the conception in question, intelligence would be *absolutely* inexplicable. This need to avoid making intelligence inexplicable places constraints on our conception of what intelligence is.

In the next post, I’ll try to indicate where those constraints come from, before saying more about how they can be negotiated.

(Readers should note that I’ll be getting on a long flight this afternoon — and will be en route when this post appears — so might not be speedy in replying to comments. Apologies for this.)

I remember in Penrose’s Shadows of the Mind that he sketched the possibility of a deterministic but non-computable universe in which (perhaps as a matter of brute law) successive states of the universe are determined by the outcomes of non-computable equations. E.g. a particle passing between two magnets might swerve left or right depending on whether a random system of Diophantine equations has a common solution, or on whether a random Turing machine halts on a random input. Does the truth of the Physical Church Turing Thesis depend on the universe *not* being like this?

That’s exactly right. The physical Church Turing Thesis requires the assumption that there are no ‘oracles’ built into the laws of nature (as the Computer Scientists would say). (For my purposes, I don’t need to suppose that Penrose’s universe isn’t in the space of possible worlds, but I do suppose that it must be some way off in that space.)

Hi Chris,

You say that we should believe the Physical Church-Turing thesis—“that the behaviour of any system whatsoever can be modelled by one of Turing’s machines (to an arbitrary degree of precision), provided only that the system to be modelled is governed by the laws of physics.”

I wonder why you think this. Do you suppose that we should be confident in thinking that the fundamental laws of physics—whatever they turn out to be—entail that physical systems are Turing computable? Wouldn’t a more cautious claim be that the study of Turing machines provides us with powerful tools for modeling certain aspects of physical systems, but that it could turn out that these tools are insufficient to model certain physical phenomena or certain aspects of physical systems?

The issue for me arises in the context of thinking about living systems. Are you familiar with the work of mathematician and theoretical biologist Robert Rosen? He argued that organisms are not Turing computable and hence not machines. His argument was based on claiming to prove mathematically that a certain class of systems (M,R Systems/metabolism-repair systems), which provide an abstract representation of cell metabolic activity, cannot be modelled by a Turing machine.

I discuss this argument briefly in Mind in Life: https://books.google.ca/books?id=OVGna4ZEpWwC&pg=PA143&lpg=PA143&dq=autopoietic+and+m,r+systems&source=bl&ots=4oacuc63lk&sig=Tvit7_mUi1u2NjKqT6lidLRvN5Y&hl=en&sa=X&ved=0ahUKEwjRvaGkz4nNAhUDMGMKHd0eDqEQ6AEIVDAH#v=onepage&q=autopoietic%20and%20m%2Cr%20systems&f=false

Rosen’s views are controversial. There’s a small but growing literature on them that has developed over the past decade in theoretical biology. See this 2009 paper for an overview: http://users.dcc.uchile.cl/~cgutierr/papers/computabilityLife.pdf And see this one from 2013 for a more recent discussion: http://bmcsystbiol.biomedcentral.com/articles/10.1186/1752-0509-7-128

I’m not competent to judge the mathematical arguments. But if Rosen and others building on his work are right, then your claim that “if something cannot possibly be modelled by a Turing Machine in a finite amount of time, then there is no physically explicable system that could possibly do that thing” is false. M,R Systems and organisms more generally would still be physically and mathematically explicable, even if it should turn out that they cannot be modelled by a Turing machine in a finite amount of time.

As I say, I’m not competent to judge the issue, but my reading of science leads me to believe it’s very much open and unresolved.

What do you think?

Cheers,

Evan

Hi Evan,

I’m not familiar enough with Robert Rosen’s work to hazard any informed opinion of it. But let me say some inexpert things that might be helpful.

Three substantial assumptions are needed for the main claims of this post to be on a secure footing: The first is (a version of) The Physical Church Turing Thesis; the second is what I’m calling the Cobham Edmonds Thesis (and which is also sometimes called the Extended Physical Church Turing Thesis), and the third is the P≠NP Conjecture (about which I’ll say no more here, except to note that even the people who don’t believe it believe something that is, for present purposes, relevantly similar to it).

None of these three assumptions is currently provable (although the reasons for this are different in the third case from the first two). The truth or falsity of the first two is an empirical matter, depending on what the laws of nature happen to be. I had been supposing that the second would be the one that would feel contentious, although it is true that the first can also be given an air of mystery.

Let me say something about each of them in turn:

The version of the Church Turing thesis on which I’m depending says that, if the behaviour of some thing is explicable by the laws of physics, then that behaviour could be modelled (in a finite amount of time, and to an arbitrary degree of precision) by a Turing Machine. Since it is provable that a Turing Machine can compute all and only the recursively definable functions, this amounts to the assumption that those laws of physics by which behaviour can be explained are representable with the resources of recursively definable mathematics. It’s hard to find a priori reasons to think that this must be true, but it’s equally hard to think of ways in which it could possibly be false. Believing it is a bit like believing in what Wigner called the ‘unreasonable effectiveness of mathematics in the natural sciences’. Based on the briefest of looks at a couple of websites, I don’t think that the theory of Rosen’s which you mention could falsify the idea that mathematics is adequate for the formulation of physics. He seems to have been operating with a notion of ‘modelling’ that is stronger than the one I’m using here. (The notion I’m using requires only that the function describing the behaviour of the system that is modeled be computed by the system that serves as its model.)

If the above sounds like a bit of a round-about way of putting things, that’s partly because I’m trying to avoid a problem raised by the possibility that the laws of physics might allow for absolute randomness. A system obeying laws of that sort would be doing something that a Turing Machine can’t do (since no Turing Machine ever does anything that is genuinely random), but such a system would not be displaying an explicable behaviour that cannot be modelled on a Turing Machine. Insofar as randomness contributes to explicable behaviour, it can be replaced by pseudorandomness without loss of computational power; and pseudorandomness is within the realm of what a Turing Machine can do. I don’t think that the caveats needed to accommodate the possibility of randomness change anything substantially, but they do make things harder to explicate.

The second thesis on which I depend feels much more controvertible: so much so that one version of it was more or less falsified by the discovery of Shor’s algorithm (in 1994).

Shor’s discovery means that a qualification needs to be added to the formulation of the Cobham Edmonds thesis. It may be that some physical systems can compute functions that any Turing machine would take an impracticable amount of time to compute, but the only counterexamples allowed by this qualification are systems that must reside in parts of the world that are small enough and cold enough to allow for uncollapsed quantum states. I don’t think that my application of the Cobham Edmonds thesis to intelligent organisms is affected by making these qualifications: organisms are too warm, and too large.

As ever, I’m aware that some of the above is probably a bit of a fog. Do please let me know if there’s anything in particular that you’d like to see clarified, and I’ll do my best to clarify it.

I’ve never heard of Rosen’s work before but there are a lot of other people who have defined systems, more or less realistic, that are more powerful than Turing machines. It’s not difficult to do so mathematically (Turing was the first to do this with his oracle machines). The hard part is to show that there are plausible physical implementations. This does show that the truth value of the Physical Church-Turing thesis is more difficult to establish than Chris seems to suggest. That being said, I discuss the literature I was aware of in the last two chapters of my book (Physical Computation: A Mechanistic Account, OUP 2015) and more briefly in my SEP article on Computation in Physical Systems. My conclusion is that the Physical Church-Turing thesis, properly understood, is plausible. I would be astonished if Rosen had found a counterexample that is actually implemented in biological systems. For starters, a genuine counterexample would presumably have access to an unbounded memory resource (like a Turing machine), and I’d like to hear how cells could have such a thing.

These issues are related to some of those that get aired in a recent response to Penrose on the (ever excellent) blog of Scott Aaronson.