Suppose one is allowed denumerably many letters in an alphabet and denumerably many instructions (i.e., denumerably many four-tuples of the form “Q1 0 1 Q2”; quintuples would do as well) in a TM program. Then one can treat each of the standard numerals in decimal notation as a single letter of the alphabet.

Then one can get a trivial program for every computable function. So, take the squaring function. Here is a trivial “TM” program for it:

S0 1 1 S1

S0 2 4 S1

S0 3 9 S1

S0 4 16 S1

…
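The trivialization can be sketched in a few lines of Python (an illustrative sketch only: the function name and the quadruple representation are my own, not part of any standard formalism):

```python
# Hypothetical sketch: with one letter per natural number, the infinite
# "program" above is just a lookup table. Each instruction S0 n n*n S1
# maps the input letter n to the output letter n*n in a single step,
# so no computation worth the name ever occurs.

def trivial_squaring_program(n):
    """Simulate the one-step 'TM': look up the (n, n*n) instruction."""
    instruction = ("S0", n, n * n, "S1")  # the quadruple Q1 a b Q2
    state, read, write, next_state = instruction
    return write  # the machine writes n*n and halts in S1

assert trivial_squaring_program(3) == 9
```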

Now, one might think this trivialization is enough to make one want to impose a finite alphabet condition and, particularly, a finite program condition. But, I think one can do better than merely call this “counterintuitive.”

Consider standard kinds of results from computational complexity theory, e.g., how many steps it takes the “fastest” program to compute a given function as the input size increases. Well, now all the inputs have the same size, namely 1, so what happens to the standard computational complexity results?
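The collapse can be made concrete with a small Python sketch (the function names here are my own, invented for illustration):

```python
# Illustration: with one letter per natural number, every input has
# length 1, so the usual asymptotic measures collapse. Any function
# computed by a lookup-table program runs in a constant number of steps
# on inputs of constant size.

def input_size_standard(n):
    """Size of n under the usual binary encoding."""
    return max(1, n.bit_length())

def input_size_denumerable_alphabet(n):
    """Size of n when each natural number is a single letter."""
    return 1

assert input_size_standard(1_000_000) == 20
assert all(input_size_denumerable_alphabet(n) == 1 for n in range(100))
```

So "running time as a function of input size" becomes vacuous: the input size never increases.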

So, stipulate a definition of computation as you will, or pick whatever target notion of computation to explicate you will, but if you want an account of computation that is descriptive of what folks working in computational complexity theory do, it looks as though your account should impose a finite program condition.

Speaking of a finite program is a bit confusing. Some machines execute programs while others don’t. Of those that don’t, some are governed by programs and others aren’t (such as many types of connectionist networks).

For the first two kinds of machines, you can impose the conditions that the alphabet and number of internal states be finite. (That’s equivalent to your finite program condition.) For the third type of machine, even this is not going to work (because the number of internal states may not even be denumerable).

But, I focused on one computational formalism, a version of TMs, and defined it for that formalism, so I could make things clearer. I am trying to drive the basic point home for Turing machines. Then I can maybe work on extending the basic point to other computational formalisms.

Here are a couple of other reasons why this isn’t going to work to capture what’s done in theoretical computer science:

– The halting problem (and other related results) would seem to be trivial.

– The class of TMs you’ve described (call it C) outstrips the class of computable/recursive functions: the machines in C seem to compute all functions from natural numbers to natural numbers, and that class is nondenumerable.

Those thoughts had crossed my mind, but I would have liked to have had a proof. But, I didn’t get around to figuring one out, if there is one.

On the halting problem, I got as far as supposing that you let each program-and-data pair be Gödel numbered and represented as a single input; then for each such number you have an instruction that maps it to either one or zero. But I’m not sure you can’t diagonalize against that. Just dunno.

To prove the fact that this set of TMs (C again) is bigger than the class of finite-program TMs, I think it’s sufficient to note that every infinite list of natural numbers can be identified with the list of all of the “outputs” for a single TM in C. In the example, the infinite list {1, 4, 9, …} is identified with a TM. The set of all infinite lists of natural numbers is uncountably infinite, and there’s a correspondence between that set and C, so C is uncountably infinite.
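The uncountability step is just Cantor’s diagonal argument. Here is a toy Python sketch (my illustration; `enum` is an arbitrary example enumeration, not anything canonical):

```python
# Cantor's diagonal construction: given ANY countable enumeration of
# infinite sequences of naturals, build a sequence not in the enumeration.
# Since each infinite sequence of "outputs" corresponds to a machine in C,
# no countable list can exhaust C, so C is uncountable.

def diagonal(enumeration, k):
    """k-th term of a sequence differing from every enumerated sequence."""
    return enumeration(k)(k) + 1  # differs from the k-th sequence at index k

# Toy enumeration: the k-th sequence is n -> k * n.
enum = lambda k: (lambda n: k * n)

# The diagonal sequence differs from sequence k at position k:
for k in range(10):
    assert diagonal(enum, k) != enum(k)(k)
```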

For the Halting Problem, I should have been more specific. If the only TMs in C have programs that “look like” the one in your example, then it’s trivial, because there is always only one state transition. Otherwise, I don’t know, but there would be a TM in C that would solve the Halting Problem for finite-program TMs, which is what I think you’re getting at. I’m not certain, but I don’t see how diagonalization would work, given that C is nondenumerable.
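On the “solve it by lookup” point: a machine in C could in principle decide halting for finite-program TMs the same way the squaring program works, by hard-coding one answer per Gödel number. A hypothetical Python fragment (the `HALTS` table is invented for illustration; no actual halting answers are computed here):

```python
# Hypothetical illustration: a C-machine's "solution" to the halting
# problem for finite-program TMs is just an infinite lookup table,
# one instruction per Gödel number g of a (program, input) pair.
# Nothing is computed; the answers are simply written into the program.
HALTS = {0: 1, 1: 0, 2: 1}  # invented finite fragment of the infinite table

def c_halting_oracle(godel_number):
    """One-step lookup: read g, write HALTS[g], halt."""
    return HALTS[godel_number]

# Diagonalizing against C would require enumerating C's machines,
# but C is nondenumerable, so the classical construction does not apply.
assert c_halting_oracle(1) == 0
```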

Corey,

Thanks for these comments.

The first paragraph looks pretty good to me.

I take C to be a proper superset of the standard TM programs. The machines in C are essentially standard TM programs with the finite alphabet and finite program conditions removed. So, not all programs “look like” the one I put in the first post.

I was originally thinking that C programs would solve the standard halting problem and that this would be an objection to it as providing an understanding of “what Turing was doing” in his 1936. These unrestricted programs do a lot more than standard TMs.

But, here is another move. One might say that C programs are fine. They are just another kind of computational formalism. That’s fine, but they appear not to be Turing’s idea.

As an additional question, I wonder if the halting problem for C programs is solvable by C programs. This is different from the question of whether the halting problem for standard TMs is solvable by C programs.

Ken, by relaxing finitude conditions you’re probably arriving at something equivalent to ‘hypercomputation’. See arxiv.org/pdf/math/0209332v1 – especially references to non-finite computation.

Thanks, Marcin. I’ll check it out.

In truth, this supports the gist of where I’ve been going with finitude conditions. If dropping them leads to hypercomputation, and Turing in 1936 was not interested in hypercomputation, then there is something misleading about not noting them.

But, in truth, even in generalized recursion theory, one drops some finitude conditions and keeps others. So, there is an ongoing concern with finitude conditions throughout the logico-mathematical work in recursion theory/computability theory.


Ken, the only way to remain neutral wrt Church-Turing computation is to adopt something along the lines of Ron Chrisley’s transparent computationalism – basically, the computational equivalent of ideal physics; safer philosophically…

Anyway, there are profound problems with hypercomputation, as (1) it might be physically impossible; (2) it might be logically impossible (contradictory). I’ve heard a very convincing talk by Vincent Mueller who showed that hypercomputation is not as attractive as many people think (it’s not yet published in English, I guess, but there is a… Greek version on his website http://www.typos.de/index.htm).