Suppose one is allowed denumerably many letters in an alphabet and denumerably many instructions (i.e., denumerably many
four-tuples of the form "Q1 0 1 Q2"; quintuples would do as well) in a TM program.
Then one can treat each of the standard numerals in decimal
notation as a single letter of the alphabet, and so obtain a trivial program for every computable function. Take, for example, the squaring function. Here is a trivial "TM" program for it:
S0 1 1 S1
S0 2 4 S1
S0 3 9 S1
S0 4 16 S1
...
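The trivialization can be sketched in code. In this Python sketch (the names and the dict-style lookup are my own illustration, not part of any standard TM formalism), the denumerable instruction set is modeled as a lazily generated table with one four-tuple per numeral-letter, so "computing" the square is a single table lookup:

```python
# Sketch of the "trivial TM" for squaring under a denumerable alphabet.
# Each natural number n counts as a single letter, so the (infinite)
# program is just one four-tuple S0 n n^2 S1 per letter; a generator
# models that denumerable instruction table.

def trivial_squaring_program():
    """Lazily yield the four-tuples (state, read, write, next_state)."""
    n = 0
    while True:
        yield ("S0", n, n * n, "S1")
        n += 1

def run(tape_letter):
    # Conceptually one TM step: find the matching four-tuple and halt.
    for state, read, write, next_state in trivial_squaring_program():
        if read == tape_letter:
            return write

print(run(4))   # 16
print(run(13))  # 169
```

The same construction works for any computable (indeed, any total) function on the naturals: replace `n * n` with an arbitrary lookup and the "program" still computes it in one step.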
Now, one might think this trivialization is by itself enough to make
one want to impose a finite-alphabet and, in particular, a finite-program
condition. But I think one can do
better than merely call the result "counterintuitive."
Consider standard results from
computational complexity theory, e.g., how many steps it takes the "fastest"
program to compute a given function as the input size increases. But now all the inputs have the same size,
namely 1, so what happens to the standard computational complexity results?
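The collapse of input size can be made concrete. In this hedged sketch (function names are mine), input size under the usual finite decimal alphabet is the number of digits, while under the denumerable alphabet every numeral is a single letter, so every input has size 1 and any bound stated as a function of input size becomes trivially constant:

```python
# Contrast the two measures of input size for a numeral n.

def size_finite_alphabet(n: int) -> int:
    # Finite (decimal) alphabet: size is the number of digit-letters.
    return len(str(n))

def size_denumerable_alphabet(n: int) -> int:
    # Denumerable alphabet: every numeral is one letter.
    return 1

for n in (7, 4096, 10**12):
    print(n, size_finite_alphabet(n), size_denumerable_alphabet(n))
```

Since every input has size 1, every function computed by a lookup-table program runs in a constant number of steps, and asymptotic distinctions such as polynomial versus exponential time have nothing left to measure.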
So, stipulate a definition of computation as you will,
or pick whatever target notion of computation to explicate you will; but if you
want an account of computation that is descriptive of what folks working in
computational complexity theory actually do, it looks as though your account should impose
a finite program condition.