Vincent Mueller has posted a note on digital states at Interdisciplines. Below is my brief commentary…
Mueller’s main thesis is that “a state is digital if and only if it is a token of a type that serves a particular function”. I think Mueller is on the right track, but his view can use some refinement.
First, are we talking about digital states in general or states of digital computers (or other computing mechanisms) in particular? Mueller seems to be talking about both, even though he motivates his inquiry by an analysis of computational theories of mind, which rest on an analogy between minds and computers (or other computing mechanisms). As I have argued at length (Piccinini 2004a, forthcoming), the analysis of computers and computing mechanisms poses special challenges, which are best addressed by investigating their specific properties. Incidentally, the same point applies to states of analog computers (Piccinini 2004a).
In other words, the notion of digital state is vague and perhaps ambiguous. If we wish to make progress on our understanding of digital computational states, we should focus on digital computational states alone, leaving aside other notions of digital state.
Second, Mueller seems to analyze digital states regardless of whether they represent anything. If this is his intent, I commend him. I have argued in several places (2004b, 2007a) that computational states are not individuated semantically and should be analyzed independently of representation, without presupposing semantic properties. However, in his note Mueller mixes considerations that support his account with considerations that pertain to representations. He even follows the old-fashioned practice (which I criticized in my 2004b) of formulating computationalism as presupposing representationalism. I find this confusing. If the goal is to leave representations and representationalism to one side, it would be better to avoid discussing representations altogether and to formulate computationalism independently of representationalism.
Third, I agree with Mueller that the notion of function has an important role to play in understanding digital computational states. In fact, function is one cornerstone of my mechanistic account of computation (2004a, b, c, 2007a, b, forthcoming).
But fourth, function is insufficient to account for what’s digital about digital (computational) states. Much more needs to be said. What are the special characteristics of functional states that are digital, as opposed to analog or whatever else? Mueller appeals to tokens and types, but the notions of token and type, without further constraints, are too general to separate digital states from other kinds of states. In some of my papers, I have made a serious attempt to specify in some detail what it takes for a state to be digital. The most detailed and sophisticated account that I have given is in Piccinini forthcoming. Currently the paper is conditionally accepted at Philosophy of Science and is available on my website. Comments on it are most welcome.
References (all available at https://www.umsl.edu/~piccininig/my%20works.html):
Piccinini, G. (2004a). “Computers.” https://philsci-archive.pitt.edu/archive/00002016/
Piccinini, G. (2004b). “Functionalism, Computationalism, and Mental Contents.” Canadian Journal of Philosophy 34(3): 375-410.
Piccinini, G. (2004c). “Functionalism, Computationalism, and Mental States.” Studies in History and Philosophy of Science 35(4): 811-833.
Piccinini, G. (2007a). “Computation without Representation.” Philosophical Studies.
Piccinini, G. (2007b). “Computational Modeling vs. Computational Explanation: Is Everything a Turing Machine, and Does It Matter to the Philosophy of Mind?” Australasian Journal of Philosophy 85(1): 93-115.
Piccinini, G. (forthcoming). “Computing Mechanisms.” Conditionally accepted at Philosophy of Science.
Gualtiero,
As you already know, we have differing views on this subject. I like the project of disentangling computation and representation, but I find the notion of a non-representing digital state to be confusing: I would replace “digital” with “discrete”.
Briefly, it seems to me that what makes “digital” states interesting is their place in digital computing mechanisms, which, by definition and design, operate on numbers that are represented by their digits. Very closely related, but separable, is the discreteness of those states, and I think it’s worth making the distinction between digital and discrete in these cases. I don’t see what using the term “digital” gets you above and beyond “discrete”.
I’ve had more to say about this in this paper: https://dcl.wustl.edu/~cmaley/AnalogDigital.pdf
I’d be curious to hear what others think about this.
Corey,
I don’t think we need to get too hung up on what we call what. If you want to call them discrete states, so be it. But it’s important to realize that the states of digital computers are not just discrete in the ordinary sense; much more needs to be said to give an account of them (which I tried to do in some of my papers).
That said, it’s also useful to have an account of digital representations, discrete representations, and all kinds of other representations. I read your paper as contributing to that project.
Finally, I think it’s misleading to say that digital computers “operate on numbers”. I know that computer scientists say this all the time. But they are not philosophers and they are not trying to do the foundations of computer science. What does it even mean to operate on numbers? At best, digital computers are designed to operate on numerals. Numerals are digits in my sense; i.e., states that may or may not represent something, and can be (and ought to be) understood independently of what they represent.
Gualtiero,
I think you’re right to point out that the states of digital computers are not just discrete in the ordinary sense; what concerns me is the use of the term “digital state” outside of the context of a digital computer (or at least some digital system). However, there are other computers that, to me, are not digital, simply because they do not operate on digits, which I take to be the parts of representations of numbers (although most actual computers are digital). The reason I think there is some utility in making distinctions this fine is just that there are many kinds of computers.
Digital computers operate on numbers because their operation on bits is set up to correspond to understanding those bits as representing numbers. The rules governing the manipulations of strings of bits are not arbitrary, and they are only a small subset of all of the possible rules that one could use. The rules that *are* used are those that correspond to the manipulation of the number that the string represents, when that string is treated as the digital representation of a number. This fact allows one to understand the computer as operating on numbers. So there are rules that manipulate the bits so that the represented numbers are multiplied, added, or whatever. But there are other computers that operate on bits (or other symbols), where strings of those bits need not correspond to numbers in any interesting sense. I think it would be extraordinarily difficult to understand such a system if it were large enough, which is precisely why computer designers use digital computers: it’s easier to understand a system that operates on numbers (or operates on strings of digits that can be said to represent numbers) than one that operates on symbols in some arbitrary way.
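[To make the correspondence concrete, here is a minimal sketch in Python; it is my illustration, not part of Corey’s post. The ripple-carry rule for manipulating bit strings is exactly the rule that, when the strings are read as binary numerals, adds the numbers they represent.]

```python
# A minimal sketch (illustration only): a purely symbolic rule on bit
# strings that, under the "bits represent binary numerals" interpretation,
# adds the represented numbers.

def add_bitstrings(a: str, b: str) -> str:
    """Combine two bit strings symbol by symbol, rightmost bit first."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # pad so the digits line up
    result, carry = [], 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        total = int(bit_a) + int(bit_b) + carry
        result.append(str(total % 2))
        carry = total // 2
    if carry:
        result.append("1")
    return "".join(reversed(result))

# Under the numeric interpretation, the symbol manipulation tracks addition:
assert add_bitstrings("1110101", "1110") == bin(117 + 14)[2:]
```

[Viewed purely as a rule on symbols, nothing singles this manipulation out; it is only under the numeric reading that its selection from all the possible rules makes sense.]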
As for “numeral” versus “digit”: this is another very fine distinction, and I’ve tried to avoid the confusion surrounding “numeral”. Some people take “212” to consist of three numerals, others say it’s just one, others say it’s two. So, you can think of “212” as the name of a number, or something else, and then you have to worry about whether you’re talking about types versus tokens. There seems to be less confusion about digits; mathematicians and computer scientists have a fairly precise idea of what these are, and there are algorithms for converting numbers from one digital base to another, and theorems about properties of numbers given their digital representation (e.g. if you add the individual digits of a number n in base b, and they sum to an integer multiple of (b-1), then n itself is divisible by (b-1)). So “212” has exactly three decimal digits: a “2” in the hundreds place, a “1” in the tens place, and a “2” in the ones place.
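[Corey’s divisibility example can be checked mechanically. Here is a small sketch, my addition rather than part of the post, that verifies the digit-sum theorem he cites for a few bases, and that “212” has exactly the three decimal digits he lists.]

```python
# A quick check (my addition) of the digit-sum theorem: in base b, n is
# divisible by (b - 1) exactly when the sum of n's base-b digits is
# divisible by (b - 1).

def base_digits(n, base):
    """Return the base-`base` digits of n, most significant first."""
    ds = []
    while n:
        n, d = divmod(n, base)
        ds.append(d)
    return list(reversed(ds)) or [0]

for base in (2, 8, 10, 16):
    for n in range(1, 5000):
        digit_sum_divides = sum(base_digits(n, base)) % (base - 1) == 0
        assert digit_sum_divides == (n % (base - 1) == 0)

assert base_digits(212, 10) == [2, 1, 2]  # three decimal digits: 2, 1, 2
```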
I hope that clarifies some things; thanks for bearing with the long post!
Corey
Corey,
Just two brief questions about this:
“Digital computers operate on numbers because their operation on bits is set up to correspond to understanding those bits as representing numbers.”
1. So, is your point that “operating on numbers” is dependent on the _interpretation_?
2. Why would the “set up” you mention work only with numbers, and not with numerals, as Gualtiero thinks? I mean, the plausibility of your argument depends on whether or not you are able to draw a genuine distinction with that claim… and I am not sure yet whether you are (at least without a further argument).
But it may well be that I just do not completely understand your argument.
a
Hi, thanks for the questions.
I hope I can explain this sufficiently, because I believe this to be quite complicated.
At a basic level, one can formulate computation as the manipulation of strings of symbols. I think Gualtiero has explicated this quite well in some of his papers (particularly with respect to how difficult it might be to reconcile this with neuroscientific ideas of computation). Understanding much of computation doesn’t seem to require interpreting the symbols, with the exception of digital computation. This is a particular kind of computation in which the strings of symbols are interpreted as numbers, and the operations on those strings as operations on numbers; in particular, as numbers represented by their digits.
For example, one could have a computer that turns the symbol-string “ABC” into “D”. Given a bunch of other symbols and rules, we might want to say that “A” represents 117, “B” represents an addition operation, “C” represents 14, and “D” represents 131. We would be interpreting symbols as numbers, but this is not digital. If we represent 117 as “117”, or as “1110101” in base 2, then we are representing the number by representing its digits, which is what digital computers do.
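[A toy contrast, which is my sketch rather than Corey’s, may make the difference vivid: in the first case the numeric reading is imposed on the symbols by an arbitrary lookup table, while in the second the number is represented by its digits, so the string itself encodes the number’s structure.]

```python
# A toy contrast (illustration only, with made-up rules).

# Numeric but not digital: "ABC" -> "D" by brute table lookup. That "A"
# means 117, "B" means addition, "C" means 14, and "D" means 131 is fiat.
RULES = {"ABC": "D"}
assert RULES["ABC"] == "D"

# Digital: the number is represented by its base-2 digits, so the string
# itself carries the number's structure.
def from_binary(s):
    """Read a bit string as the number its digits represent."""
    return sum(int(bit) << i for i, bit in enumerate(reversed(s)))

assert from_binary("1110101") == 117
assert from_binary("1110") == 14
```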
One can still understand a digital computer as operating on individual numerals, but then one loses sight of why the manipulations have been set up as they are. There are very good reasons in terms of computational design and abstraction for using digital representations of numbers: manipulating (representations of) numbers by manipulating their digits generalizes very well to different kinds and sizes of numbers, as well as allowing some (representations of) numbers to be treated as instructions, and others as data. This is a very complicated problem (at least as I see it), but one that appears in other areas of computational theory. For example, many proofs in the theory of computation require that a string of symbols serve as something like a description of a particular abstract machine, which in turn functions as the input to another abstract machine. One can understand these proofs as simply manipulating symbols, but then one loses the generality of the idea of what the strings are supposed to represent.
So, my primary point is that digital computation is a special kind of computation, and what makes it digital is worth understanding in its own right. At the same time, not all computation is digital, and assuming that it is glosses over important facts about digital computers that do not hold for non-digital computers, and vice-versa. This, at least to me, is an unfortunate consequence of people using “digital” as synonymous with “discrete”.
I’m running out of space, so I have to stop there. I hope that helps!
Corey
Hi Corey,
and thanks for your brilliant answer(s). However, since it is midnight here in Helsinki and the sun just went down, I am only publishing this post now. I’d better get some sleep before I start to think about your fascinating reply…
Corey,
I am sorry for the delay… However, I have read your post several times and have been thinking about it.
The following question is not intended to be an argument or a critical comment. It is just something I have been thinking about while reading your post.
So, if I understand you correctly, you argue that the generality of the idea of what the strings are supposed to represent will be lost if the interpretation of an abstract machine is not fixed as an interpretation of computations as manipulations of numbers.
Please let me expand. Here in Helsinki we sometimes call the “rules” (or the task-level descriptions of the tasks the computational system is supposed to fulfill) that dictate which interpretation will be fixed “abstract mechanisms”. So, is your idea that somehow the task-level descriptions of “the abstract machine”, i.e. a system-level description of the abstract mechanisms, will dictate when a computation should be understood as the manipulation of numbers? But… I still don’t understand why the _generality_ would be lost by making that choice. I guess I am missing something.
So lost, but hopefully to be found, as we say here in Helsinki,
a
… just publishing this…
I don’t think I’m making a claim as general as the one you’re suggesting.
Basically, I just want to argue that “digital” does not (or should not) equal “discrete”, with respect to both representation and computation. I think it is very useful to make a precise distinction between these two terms, with the crucial assumption that to be digital is to be a representation of a number via a representation of its digits (this was exactly the point David Lewis made in his 1971 Noûs paper “Analog and Digital”). And while this is an interesting point when we’re only worried about representations, I think it’s even more important when we’re talking about computers, because of the central role that this type of representation plays in a digital computer.
That said, I don’t think I want to make a more general point about computation. I don’t think one needs to understand computation in general as operating on numbers. But extant digital computers operate in such a way that, at some levels, it’s quite important to understand the operands of the system as digits (in the sense I’m pushing for here). At higher levels (such as high-level programming languages, or just running a word processing program), understanding the system rarely requires knowledge of these lower levels.
The reason to be interested in this (or the reason I’m interested anyway) is that it may be quite useful to take “levels” of computation seriously. In my mind, the digital computer is a great example of a system that can be understood at numerous levels, where each level does not nicely reduce to the one below it. Even in the few cases when there is a strict reduction (say, a high-level programming language to assembly language), understanding what’s going on at the lower level in terms of the higher-level algorithm can be practically impossible. To me, understanding all of the issues here would be a great case-study for multilevel explanation (that might be useful for understanding other, non-engineered systems such as the mind/brain), while at the same time shedding some light on some issues in computational explanation. It seems, for this kind of project, it would be useful to be clear about what issues are specific to digital computation, and which apply to computation generally. This digital/discrete distinction is just one starting point.
Corey
Hi,
And sorry for the delay… Yes, I agree that it is (a) interesting and (b) useful to take “levels” seriously, insofar as we are strict and clear about what we mean by “levels” in the first place. It is crucial to distinguish levels of scale, organization, explanation, and so on, but I guess we both agree on that.
One question… You say: “even in the few cases when there is a strict reduction… understanding what’s going on at the lower level in terms of the higher-level algorithm can be practically impossible…”.
OK, I see your point, if the “what’s going on” part is understood as a claim concerning the causal constitution… but could you please define exactly what you mean by “the higher-level algorithm” here? If you mean what I guess you may mean, there will be many, many, many interesting questions to be posed.
See you,
a
Guys,
Thanks for this illuminating discussion. Just a few more points (and sorry for the delay in responding).
I agree with virtually everything that Corey says, and especially with the overarching point that it’s important to get clear on these issues if we are going to make progress in understanding computation, computational explanation, and related issues.
I am still uncomfortable with the expression “computers operate on numbers”. It sounds like a category mistake. “Operating” is a causal concept, whereas “numbers” are not the kind of thing that can be causally acted upon (under most views about numbers).
I understand Corey’s concern that my use of “digit” for something broader than symbols that can represent numbers in a digital way blurs the important distinction between what he calls digital and symbolic computation. For better or worse, at this point I’ll have to keep using it, because it appears in many of my published papers.
When I originally wrote my papers in this area, I used to write “symbol” instead of “digit”. But then I encountered referees who would get stuck on the semantic connotation of “symbol” and reject my papers simply because I was using that word while arguing that computing mechanisms are individuated non-semantically. They just couldn’t accept that I was talking about symbols regardless of whether they meant anything. That’s the only reason why I switched to the term “digit”. After that, I stopped getting that objection from referees, even though the only change was terminological.