One often finds computation described as having originated with Alan Turing and as symbol manipulation. Yet Turing imposed certain sorts of finiteness conditions on computations, such as that a program must be finite in length. These finiteness conditions are not captured in the description of computation as symbol manipulation. So it looks like you cannot both define computation as symbol manipulation and say that it is Turing’s definition. This looks to be taking a mere *feature* of Turing’s formulation of a computational formalism to be his definition.

I think that Gualtiero, for one, at least flirts with this problem in some of his papers on computation.

Turing had a very good reason for imposing finiteness conditions. He was analyzing the notion of algorithmic computation employed in mathematics. An infinite algorithm (or an infinite computation) is not very useful (for its usual purpose: guiding calculations), because no mathematician can execute it.

Now for some special purposes (e.g., discussion of the physical limits of computation), it may be useful to relax some of the finiteness conditions and see where that leads us. But that doesn’t mean we are using a notion of computation different from Turing’s. It just means we are relaxing one or more of his conditions for certain specialized purposes. Or you might want to say that we are generalizing his notion of computation.

In cognitive science, generally there is no reason to relax any finiteness conditions. Although some people have speculated that cognition may be hypercomputational (meaning a computation that is more powerful than what can be done by Turing machines), they haven’t provided any solid evidence that I’m aware of.

Hi Ken,

The ‘finiteness’ conditions that Turing suggested are a matter of a great deal of controversy. Although it is widely accepted that computations should be, in some sense, finite, exactly how this requirement should be fleshed out is far from clear.

A recent instance is the controversy over whether Alex Smith’s (2,3) Turing machine—claimed to be the simplest universal Turing machine yet discovered, and winner of the Wolfram prize—is really a universal Turing machine. The objection is that it violates finiteness constraints by requiring infinite and non-trivial initial inputs.

For the announcement: https://www.wolframscience.com/prizes/tm23/solution_news.html

For the objection and response: https://forum.wolframscience.com/showthread.php?s=&threadid=1472

One could imagine incompatible, but nevertheless intuitively compelling, notions of computation developed by fleshing out the content of the ‘finiteness’ condition in different ways. Indeed, this seems to be what has happened in the mathematical computation theory community. So in some ways, it is an advantage of symbol manipulation accounts that they are silent about the finiteness condition—one could imagine plugging into them any one of the resulting notions of computation with different finessing on ‘finite’. The question of what Turing originally intended I guess is largely moot, since on this point, his original work is not regarded as the last word.

All the best,

Mark

I forgot to mention that (as you know) Turing himself relaxed his finiteness conditions in his 1939 Ph.D. dissertation. There, he let Turing machines use “oracles”, which are entities that provide solutions to problems in computability theory (problems defined in terms of strings of symbols) but are not bound by finiteness conditions. He then proved theorems about what can be done by Turing machines that have access to certain oracles. So he was still proving things about computation as symbol manipulation, but in this case it was symbol manipulation that was not bound by the usual finiteness conditions.

I beg your pardon for my lack of deep scholarship on Turing, but what implications does the finiteness condition on “valid” definitions of computation have for contemporary uses of the notion of computation, and particularly for the use of computation in philosophy of mind (or neuroscience)?

Well, symbol manipulation is an extremely vague notion compared to a Turing machine. Note that a Turing machine takes an infinite tape of symbols, and in this respect it has infinite capacities that can never be instantiated in a finite world. I don’t think that Hintikka/Markov tricks with finite models can really give anything as powerful as an infinite tape, they boil down to a finite machine. Yet, this is a matter of controversy.

So there are at least two senses of symbol manipulation: tape-symbol manipulation, and Turing machine itself as defined and operating as a physical symbol system (so transition functions are taken to be implemented symbolically). Intuitively, the first notion seems to have more in common with symbol manipulation as described in folk cogsci ;).
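For concreteness, here is a toy sketch of the first notion (a minimal interpreter of my own, not any canonical formulation): the “program” is a finite transition table over a finite alphabet, and the only unbounded resource is the tape it manipulates.

```python
from collections import defaultdict

def run_tm(program, tape_input, blank="_", start="q0", halt="qH", max_steps=10_000):
    """Run a Turing machine given as a finite transition table.

    program maps (state, symbol) -> (new_state, write_symbol, move),
    with move in {-1, +1}.  The table is finite and mentions only
    finitely many symbol types; the tape alone is unbounded, modeled
    as a dict that defaults to the blank symbol.
    """
    tape = defaultdict(lambda: blank, enumerate(tape_input))
    state, head = start, 0
    for _ in range(max_steps):
        if state == halt:
            break
        state, tape[head], move = program[(state, tape[head])]
        head += move
    cells = (tape[i] for i in range(min(tape), max(tape) + 1))
    return "".join(cells).strip(blank)

# A one-state machine over the alphabet {0, 1, _} that flips every bit:
flipper = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("qH", "_", +1),
}
print(run_tm(flipper, "0110"))  # -> 1001
```

On this toy picture, “tape-symbol manipulation” is what happens inside `tape`; the second notion would further require the transition table itself to be encoded in symbols, as on a universal machine’s program tape.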

So, let me sharpen the point.

1. One says that a nec & suff condition of a process being a computation is that it be symbol manipulation.

2. One says that a nec & suff condition of a process being a computation of the sort Turing described in his 1936 paper is that it be symbol manipulation.

The problem is in saying that symbol manipulation is both the definition of computation tout court and Turing’s definition in his 1936 paper. The first one can keep just as a stipulation, but the second seems to be an historically inaccurate claim. Another necessary condition on a computation in the Turing 1936 paper is finitude of program. And, in fact, one might well have the sense that it is necessary in a way that using four-tuple versus five-tuple versions of Turing machine instructions is not.

Mark,

I agree with you entirely, I think. One thing that is dissatisfying with defining computation as mere symbol manipulation is that one might think that this defines what computation is in computation theory, while it overlooks or underemphasizes the issues having to do with finitude. Go ahead and simply stipulate, if you will, that computation is symbol manipulation. The problem is that this threatens to mislead people about what is going on in theoretical discussions of computation.

The question of what Turing originally intended matters if one is concerned to make an historical claim, namely, that for Turing in 1936, computation was just symbol manipulation.

And it might matter as well for folks like Piccinini and Scarantino, who seem to want to say that we should use terms like “computation” and “information processing” in the ways they were originally used by, say, Turing and Shannon. I don’t buy that line, but if I were, I think it would matter how those terms were originally used.

I don’t see any reason to reject computation as symbol manipulation for the purposes of cognitive science or philosophy of mind. That might capture the idea one wants: that there is an alternative to behaviorism’s rejection of ideas or mental representation.

The rub comes when one makes the historical claim about Turing 1936 per the original post, and indeed, following up on Sprevak’s point, when one slights the role of considerations of finitude in computation theory.

Yes, symbol manipulation is much less specific than what is investigated in computation theory. Finitude in a number of respects (memory, the alphabet over which computations are performed, program size, and the number of computational steps) is a factor that has to be examined, not in defining computation, but in developing a theory of computation. If you want to know what can be accomplished with what kinds of resources, one needs some sensitivity to considerations of the finitude of various sorts of resources.

Ok, Ken and Mark,

so the real issue concerns those who don’t know the concepts properly and, moreover, use them without knowing their historical roots.

Just one thing to add: don’t many layers in the tradition of modern philosophy of mind stand on, or stem from, misconceptions of the “godfather” (Turing)?

Anibal,

That’s not the point I mean to drive at. It seems to me that one can define computation as symbol manipulation for purposes of doing, say, cognitive psychology or philosophy of mind. Only don’t think that this captures Turing’s 1936 account, or very much of what is thought of as computation in the philosophy of computation.

Hm, in practical respects it could have mattered, but it doesn’t matter in principle: after all, nothing beats a UTM in respect of the things it can compute. Adding an infinite alphabet changes nothing. Infinite length of the computation itself, on the other hand, seems to be incompatible with the notion of effective computation, so most theories of computation simply say the number of steps must be finite.

If you don’t find a smart way of mapping infinite sets onto the finite world (like Hintikka’s use of the world as a finite model to simulate near-infinity), it doesn’t matter anyway. What we can do in practice is only finite-state machines, or physics lies 😉

I would agree, provided one’s program is finite. A finite program can only respond to finitely many distinct letters.

But, Gualtiero seems to disagree. In his recent, or forthcoming paper, “Computers” in the PPQ, he writes,

Well, this definition suffers a little from vagueness. So the length of output strings must be >= 0 (it can be zero in case the partial recursive function the string manipulator computes has no defined value for the input string), and the length of input string is >= 0. Both things seem trivial, so this vagueness is not really important.

You can think of both strings as Turing machine tapes, so they can be infinite easily and they will only compute partial recursive functions.

I don’t think you can say what a program is for non-programmable computers (like Turing machines), so I don’t really know what you mean by an infinite program here. Anyway, my guess was that Gualtiero’s definition is more in the tradition of Markovian way of defining computation as string manipulation. Markov wanted everything to be finite but probably to make his strings UTM-equivalent he had to make them infinite. I’d need to look up his papers to check it out, as my memory fails me.

I’ve just googled Markov’s seminal paper. He speaks of “abstraction of potential realizability” (which is what an intuitionist logician uses instead of “infinity”). He writes: “This consists in departing from real limits of our constructive possibilities and beginning to discuss arbitrarily long abstract words as if they were constructible. Their realizability is *potential*: their representatives could be practically realized if we had at our disposal sufficient time, space, and materials.” See: https://urlcut.com/Markov

Insofar as Markov algorithms are equivalent to the classical UTM, Gualtiero made no mistake if he meant infinite strings.
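To make Markov’s “computation as string manipulation” concrete, here is a toy interpreter (the rule encoding is my own illustrative choice): an ordered, finite list of rewrite rules, applied to the leftmost occurrence, repeated until a terminal rule fires or no rule matches.

```python
def run_markov(rules, word, max_steps=10_000):
    """Apply a Markov algorithm: an ordered, finite list of rewrite rules.

    rules: list of (pattern, replacement, is_terminal) triples.  At each
    step the first rule whose pattern occurs in the word rewrites the
    leftmost occurrence; a terminal rule ends the run.  The rule list is
    finite even though the words it rewrites may be arbitrarily long.
    """
    for _ in range(max_steps):
        for pattern, replacement, terminal in rules:
            if pattern in word:
                word = word.replace(pattern, replacement, 1)
                if terminal:
                    return word
                break
        else:
            return word  # no rule applies: the algorithm halts
    return word

# Unary addition: a single terminal rule erasing the '+' sign.
print(run_markov([("+", "", True)], "111+11"))   # -> 11111
# Repeated (non-terminal) rewriting: delete every 'a'.
print(run_markov([("a", "", False)], "banana"))  # -> bnn
```

The rules themselves are finite; only the words are “potentially realizable” in Markov’s sense, growing as long as the rewriting demands.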

By a finite alphabet, I think Gualtiero meant there are only finitely many distinct types of letters. This is consistent, however, with allowing an arbitrarily long, or infinite, sequence of those letters. So, an alphabet of {0, 1} would be finite, but there need be no limit on the number of, say, 1’s that could be written on the tape. I think that alphabet size and word length are distinct issues.
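A trivial illustration of that distinction (just a sanity check, nothing more): the alphabet stays fixed at two letter types while the words built from it grow without bound.

```python
ALPHABET = ("0", "1")  # finite: exactly two letter *types*

def ones(n):
    """A word of length n over the fixed alphabet: word length and
    alphabet size are independent parameters."""
    return "1" * n

w = ones(100_000)
print(len(w))                   # -> 100000
print(set(w) <= set(ALPHABET))  # -> True
```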

Moi (“Hi” in Finnish) Ken,

first I’d like to say I was really glad when I (finally) met you, Carl G. and company in Cambridge. It was a wonderful seminar – and I haven’t yet had the opportunity to thank Mark for organising it. I really enjoyed every second.

Anyway; just a brief question about this:

” 1. One say that a nec& suff condition of a process being a computation is that it be symbol manipulation.

2. One say that a nec&suff condition of a process being a computation of the sort Turing described in his 1936 paper is that it be symbol manipulation”

I may be walking in complete darkness here, but isn’t one part of this problem that being “symbol manipulation” is not a sufficient condition for computation – even if it is a necessary condition? Or one of the necessary conditions?

Let me emphasize, I am not saying this is so in Turing’s 1936 paper – this is just an innocent question.

Yes, Gualtiero meant finite alphabet and potentially infinite strings, as far as I understand. So where’s the problem? It’s the same – in terms of the class of functions it can compute – as the classical Church-Turing computation. I can’t see any problem here, as adding another infinity doesn’t give you a bigger class of functions being computed.

Ken, what problem exactly do you have with Gualtiero’s definition? Infinite length of the string gives a classical definition of computation, so what’s wrong with this? It’s the same as Turing computation, after all.

BTW, oracle machines are a different story: they can do more, but they’re not algorithmic (and not practical).

Gualtiero can stipulate a meaning of “computation” however he wishes. But, if he wants to say that Turing accepted a definition of computation as symbol manipulation or as this formulation in PPQ, then that seems to me historically inaccurate. Why? Because both these definitions do not do justice to Turing’s concern with finite programs.

Now, in Gualtiero’s first comment, he writes that Turing had reasons for imposing finiteness conditions, which seems like a concession that defining computation simply as symbol manipulation really doesn’t capture what Turing was up to in 1936. But, to me at least, the tone of the reply didn’t really seem all that concessive.

Well, one can worry about whether 1 is true. I suppose it depends on whether you treat 1 as a stipulative definition or whether you think it explicates a target in some field or another. It probably does not do a bad job for the very thin notion of computation invoked in many areas of cognitive science, but if one is taking Turing (1936), and a lot of other computation theory as well, as the target, then computation = symbol manipulation does not include some fairly standard finitude conditions.

Thanks for the nice comments, Anna-Mari. Echoing Anna-Mari’s comment, one might argue that it is a virtue of traditional symbol manipulation accounts that they don’t attempt to say much about finiteness constraints. Given how controversial the exact content of the finiteness constraint is (and the possibility of different and equally plausible ways of fleshing it out), it is to the credit of those accounts that they don’t attempt to beg the question on this point. Provided symbol manipulation accounts acknowledge the need for some appropriate finiteness constraints to be added to the core of their account (and granted that there may be different ways of fleshing out the details of the exact nature of those constraints), would you be happy with their descriptive accuracy Ken?

Mark, Anna-Mari,

It seems to me that in trying to define computation, folks have been looking for a feature that everything that has been called “computation” has in common. So, they come up with a broad characterization, such as computation is symbol manipulation, and drop all conditions of finitude. (This could be because conditions of finitude are controversial or because folks just don’t see why they are important.) This gives us an account that is something like “the least common denominator” in computation, but I think the finitude conditions are important (as I argue in a separate post).

So, here is what I would push for: if you want an account of what computation is, and it is to do justice to what Turing did, then you probably need to at least mention one or another condition of finitude, such as that programs be finite. But, of course, you could decide that you don’t care about giving an account that is true to Turing. That’s fine too. But, if you want an account of computation that is true to Turing, you should probably have some mention of conditions of finitude, perhaps with some caveat about it being a matter of controversy itself.

Symbol manipulation is definitely vague, as Marcin pointed out. And as Anna-Mari suspects, it’s insufficient for computation. At the very least, the symbolic structures have to be strings of symbols from a finite alphabet or something else that is effectively manipulable, and the manipulation needs to be governed by a rule defined over the symbolic vehicles (as opposed to their content). (So random symbol manipulations are ruled out, as well as manipulations based on the “meaning” of the symbols.)

When fleshed out appropriately, this notion of symbol manipulation does come close to what Turing used and what provides the foundation for computability theory. It’s not just a matter of historical accuracy: It’s a matter of understanding what computability theory and computer science are about. And insofar as cognitive scientists appeal to the notion of computation developed by Turing and applied by computer scientists, it’s about that too.

But, this still seems to me to miss the point of my original post. If you want to understand what Turing did, you have to attend to more than just symbol manipulation. You need to say at least something about finitude. But, the point I’m driving at does extend beyond Turing, even though I’ve not tried to push this. But, consider some things from some texts on computability theory. Take the definition of TMs from Machtey and Young’s *An Introduction to the General Theory of Algorithms*. Or this on Markov algorithms. Or look at section 1.1 of Hartley Rogers’ *Theory of Recursive Functions and Effective Computability*. (You can read the relevant pages on Amazon’s preview reader.) Or look at Cutland’s *Computability* account of unlimited register machines (p. 9). Not all of these mention finite programs, but this is why I initially used the vaguer notion of “finiteness conditions.”

So is your point that you need to say something about finitude? Of course you do. Who ever said anything different? That’s precisely why, e.g., I always talk about finite alphabets (following standard computability theory). But you need to be careful about which finiteness conditions you impose (and for which purposes). When you explicate the notion of Turing machine, you impose the constraint that the program be finite (or other constraints that entail that the program is finite). But when you explicate the notion of (digital) computation in its most general form, you shouldn’t impose the constraint that the program be finite. Otherwise you rule out other machines that fall within the boundaries of classical computability theory.


Yes, if you want to enable folks to understand what Turing was up to, then you have to say something about finitude. That’s the opening move.

The problem is not one of saying, “and all there is to computation is symbol manipulation and it has nothing to do with finitude at all.” The problem is one of conversational implicature. It’s like saying, “Epistemologists are concerned with belief. If you want to understand the foundations of epistemology you have to understand belief.” That’s correct, but there is also an important sense in which this is misleading.

Pete Mandik’s treatment of computation is kind of like this: Pete on Computation.

Yes, finite alphabets do typically appear, but it is unclear that that is necessary, rather than a bit of elegance. If you have a finite TM program, then it can only call finitely many different letters, so the condition is technically unnecessary. By contrast, if you have a finite alphabet, you might still need to impose the finite-TM-program condition. I don’t know. But the worry would be this. Let each program and a piece of data for that program be given a Gödel number. Now let your infinite-instruction TM simply scan across the input with the read-write head, converting the number of 1’s on the input string into a read-write head state number. Once it finally encounters a 0, it can proceed to write “yes” or “no”. The worry is that this might be a way to solve the halting problem. But, I dunno. I don’t know if you can diagonalize against this.
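The worry can be put in toy form (my own illustrative sketch, not anyone’s actual construction): if a “program” may be an unbounded lookup table with one entry per input, then any 0/1-valued function at all is “computed” by table lookup (including the characteristic function of the halting set), since the right answers can simply be written into the table with no finite rule generating them. That is precisely what a finite-program condition rules out.

```python
def table_machine(answers):
    """A 'program' that is nothing but a lookup table: one stored answer
    per input.  If the table may be infinite, any 0/1-valued function,
    however non-computable, is 'decided' this way, trivially and without
    any finite rule generating the answers."""
    return lambda n: answers[n]

# Finite stand-in for an (in principle unbounded) table: pretend these
# were the first few values of some arbitrary, possibly non-computable
# characteristic function.
decide = table_machine({0: 1, 1: 0, 2: 1})
print(decide(2))  # -> 1
```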

But, while I can’t prove this, I do have to wonder if all those computation theorists were just throwing in those finite program conditions for no reason.

Fair enough, but this just goes back to my point about implicature. If you say that Alan Turing was interested in digital computation, that’s true. Just as it’s true that epistemologists care about belief. The problem is that Turing was not interested in just any old digital computation; he was interested in a more restrictive notion.

Now, I don’t know what finitude conditions one needs to explicate the notion of Turing-equivalent computation, but I’m willing to draw attention to the need for them, since I’m not going to write about it myself. The first step is to see the need.

I read with great care Arbib’s book in 1967 (1st edition of *Brains, Machines and Mathematics*). If I remember correctly, something I remember from the 1st edition is not in the second edition.

I am only guessing, but it was some argument about some kind of computation one could do with 2^2^n elements such as nand-gates.

Since 2^2^8 is Eddington’s estimate of the number of electrons in the universe, it could mean that the algorithm given is not of immediately practical use.

UQ library has the 1st edition in the warehouse, so I should follow this up.

Curiously, I notice that great books written in the 1960s or earlier often find their way to the warehouse, e.g. Hartree’s numerical analysis.

This must mean that a lot of later work is reinventing the wheel because people do not have access to the earlier work.