What Exactly Is a Computer?

Anyone who is familiar with computational theories of mind (or brain), according to which the mind (or brain) is (analogous to) a computer, should agree that this is a good question (cf. John Searle, The Rediscovery of Mind, 1992, p. 205).  For computational theories of mind to be formulated in a precise and testable manner, it must be made clear what it takes for something to be a computer.  Yet there is little consensus on what a computer is.

Searle, for one, thinks that everything is a computer, because computational descriptions are very much like unconstrained series of labels, which may be freely applied to anything we wish.  Because of this, Searle argues that the computational theory of mind is empty:  It doesn’t tell us anything substantive about the mind.

Searle’s computational nihilism aside, a number of other issues in the foundations of computational theories of mind require an adequate account of what counts as a computer, and more specifically, what counts as a certain kind of computer.  For instance, is the brain a digital or analog computer?  Is the mind a serial or parallel computer?  These questions have been debated ad nauseam.  But without some precise criteria for what counts as a computer of a certain kind (digital vs. analog, serial vs. parallel, etc.), these debates remain unresolved.

In one of my papers, entitled “Computers,” I investigate the notion of computer systematically, grounding my account in the practices of computability theorists and computer scientists.  I begin by rejecting the contention that everything is a computer.  Then, I explain what distinguishes computers from calculators in terms of their functional properties and consequent computing power.  I also offer a systematic taxonomy of kinds of computer, including general purpose vs. special purpose, analog vs. digital, and serial vs. parallel, with relatively explicit criteria for each.  My account is mechanistic:  which class a system belongs in, and which functions are computable by which system, depends on the system’s mechanistic properties.  Finally, I briefly discuss some implications of my account for how to understand computational theories of mind.  There are some surprises; for instance, the view that the brain is not a serial computer because it’s parallel turns out to be confused.  First, there are different notions of parallel computation; second, the same computer can be both serial and parallel, in several senses.

As far as I know, my paper contains the most general, comprehensive, and systematic discussion of computers and their properties to date.  I’m hoping it will help clarify some of the above debates about computational theories of mind and their testability.  (I’m also hoping it will help researchers in other areas—e.g., historians and philosophers of computing—but that is less germane to this blog.)

My paper will appear in Pacific Philosophical Quarterly.  As I prepare my final revision, I would greatly appreciate any comments.  (You can access the paper from my webpage by clicking on “works” and then “Computers”.)


  1. Eric Thomson

    A very interesting topic.

    I’ve read a little way in and am a little uncomfortable with the following (when you argue against Searle’s claims that computers aren’t objectively real):
    First, it would be too strong to assume that observer-relative descriptions are generally unscientific. Many bona fide physical properties, such as position and velocity, are observer-relative in the sense that observers within different frames of reference will obtain different measurements of such properties. And according to the popular Copenhagen interpretation of quantum mechanics, measurement by an observer is one of two fundamental processes that determine the state of a physical system—the other being the unitary evolution according to Schrödinger’s equation. So, the notion of observer-relativity that is relevant to Searle’s argument needs to be specified further.

    This comes off sort of new-agey to me, especially the reference to the Copenhagen interpretation, which really isn’t part of QM at all but a controversial overlay that gained a cult following. Physics texts will typically talk about measurement and smartly say nothing about observers.

    Also, velocity differences are derivable without any reference to an observer. As you allude to, such facts can be derived just using the notion of a frame of reference (and besides, the physicists I know tend to consider as objective only those laws that are formulated to be true independently of reference frame). Observers don’t need to be brought in.

    You then bring up properties like being ‘sharp enough to kill a human.’ This is nice, but do you really need any of this stuff? Does your notion of computation leave it open that computation is intention-dependent in any of these ways? You make general claims about why mind- or human-dependent properties can be objectively real, but it isn’t clear this does any work: unless you think computation is like this, why bring it up? And if you do think computation is like this, bring it up a bit earlier. Otherwise the discussion comes off as a series of dangling red herrings.

    <i>This may have some plausibility in the case of artifact functions, but it seems false for many functional properties of artifacts and for all biological functions.</i>

    At first this seemed to say that artifact functions may be intentionality-relative, but some artifact functions are not. It took me a bit to understand what you were actually saying.

    Again, I haven’t gotten to the real theory yet.

  2. gualtiero

    Eric, thanks a lot for the comments. I’ll try to clarify the role of this discussion in my final draft.

    In my initial submission, this stuff about observer-dependence wasn’t there. I assumed that my positive account would be more than enough to refute those who wonder whether being a computer is an objective property/kind. But various referees would not give my view a chance until I addressed Searle’s view head on, so I did. The main point is, when push comes to shove, the claim that computational explanations are empty or trivial because they are observer-dependent doesn’t hold water.

    As to QM, you’ve convinced me that I should delete the reference to the Copenhagen interpretation. But wouldn’t you agree that given QM, some physical properties (e.g., position, momentum) are defined only relative to a measurement operation, and yet they remain objective?

    As to position and velocity, my point is simply that being observer-dependent is not enough for being unscientific or lacking objectivity (as Searle seems to imply). I don’t think I need to get into a detailed discussion of philosophy of physics to make this point. But I will try to be clearer in the final draft.

  3. Eric Thomson

    I would put off that discussion of Searle until the end. I think it is a mistake to have it at the beginning, before we’ve had a chance to see your positive story. Just give us your story, and then consider possible objections, this being one of them. That would help the flow, as then we’ll be in a better position to judge whether your theory fits the bill as giving an objective theory or not.

    Put caveman-like: why respond to objections to a theory you haven’t even presented yet?

    <i>wouldn’t you agree that given QM, some physical properties (e.g., position, momentum) are defined only relative to a measurement operation, and yet they remain objective?</i>

    They have knowable determinate values after measurement, after state vector reduction. I’m afraid to say much more than that!

    <i>As to position and velocity, my point is simply that being observer-dependent is not enough for being unscientific or lacking objectivity (as Searle seems to imply).</i>

    Yes, I understood. I just didn’t like the example. Most laws of physics can be formulated independently of reference frame, and more than once I have seen this used as the hallmark of the “objective” laws by physicists. I know nothing really about this, so you should ask a physicist.

    Also, why not mention natural selection first, instead of launching into a seemingly strange defense of views that include observers? Or why get into it at all? Just say: hearts have the objective function of pumping blood, and while I can’t give an analysis of that, I will use it to justify what I’m saying.

    In general, put this stuff much later in the paper, and dismiss Searle a lot faster by reference to what others have done, rather than attempting to be original with a kind of strange justification that seems half worked out. You don’t want this section to be the focus of all the grad students reading this paper, and it will be. They are vultures, and they will end up in a nit-picking discussion of whether you are right that velocity is objective and observer-dependent (even if you don’t mean them to). So, make them work to get to that part (if you must leave it in), as you want them to understand your actual theory.

  4. Eric Thomson

    I just finished the paper. I still largely agree with my previous comment, but I address that topic more below.

    General Impression: I like the paper. The best part, and I hope this was your goal, was the summary of the different types of computational architectures, with details that are usually glossed over (and probably not known) by philosophers. I learned a lot, and thought it was cool to turn what was essentially a review of different types of computational architectures into a first attempt at forming a helpfully specific hierarchy of potential computational theses about the brain.

    Question: is mRNA translation a computation? You go from mRNA strings to protein strings, with ribosomes as the intermediary. Using your nonsemantic definition, it seems to be a computation. This isn’t a criticism, just checking for clarification.

    Medium thing:
    It’s not clear that anything you said is inconsistent with Searle and Churchland’s claim that whether X is a computer depends on how X is interpreted. Indeed, you seem to say that it wouldn’t be all that bad anyway, given Copenhagen and classical reference frames. I’m not sure why any of that is necessary, as you relegate the discussion of such things to other papers. And based on reading this, I would have no idea why I should say that whether X is a computer is not just a matter of interpretation. I again would relegate this to the end of the paper (where, indeed, you revisit it), and make it less of an emphasis. The emphasis should be on what the paper actually emphasizes: the neo-computational hierarchy.

    What you really seem to be saying is this: fine, if the fact that X is a computer depends on an interpreter, it is still an answerable question what type of computer it is in your architecture hierarchy, so there are still interesting questions to ask. Or, just say you think there are some computers for which they are right (e.g., my laptop), and some for which they may be wrong (the brain, using an analogy with the heart), and that it isn’t the point of this paper to address such things (and cite other stuff on this).

    I don’t know, I am sure my suggestion isn’t the best approach to this problem, but it is clearly a problem that the discussion of objectivity seems half-formed.

    Smaller things: I have a few picky things to mention…and note I read it somewhat quickly so please don’t be offended if I asked a question that you addressed.

    Analog computers do have the limitations you mention, in theory, but in practice they are very helpful: typically you know beforehand the order of magnitude of noise that is acceptable, and this becomes part of the computer’s design. Plus, if you are modeling an analog system, you will have this same problem if you simulate it with a digital computer. (In practice, though, modern stored-program computers are just easier to use to simulate an analog system.) Reading the paper gives the impression that analog computers are just not helpful. (Also, perhaps mention VLSI in your paper as an example of analog, parallel, man-made, but not programmable hardware?)

    Connectionist nets don’t execute instructions explicitly, but doesn’t a trained neural network implicitly embody the instructions fed to modify its weights during training?

    You didn’t explicitly address how your computing-architecture hierarchy relates to Chomsky’s hierarchy: that will be in many minds, and since the paper is already largely review, you should probably include a quick review of the Chomsky hierarchy. I didn’t find a single mention of Chomsky in the paper. People who don’t know anything about the C-hierarchy might think, ‘well, didn’t Chomsky already do this?’ They’d be wrong, but after reading your paper they should know they are wrong.

    I think it’s no big deal to call a neuron a processor. There is simply a wider notion of processor out there. Computational neuroscientists, with few exceptions, are not familiar with the theory of computation. For them, processing is some sort of ‘information processing,’ such as the mechanism by which signals from cones are combined to get a center-surround receptive field in a retinal ganglion cell. They aren’t confused, just using a different sense of the term.

  5. Eric Thomson

    I said:
    Connectionist nets don’t execute instructions explicitly, but doesn’t a trained neural network implicitly embody the instructions fed to modify its weights during training?

    This is ambiguous. I don’t mean that the neural network implicitly contains the ‘instructions’ used to modify it, but that once it is trained, it is implicitly executing the instruction to classify X or Y. It is just doing linear discriminant analysis (if it is a perceptron, anyway), which can be seen as carrying out instructions. Why isn’t this implicit in the trained network?

    Second, for similar reasons, isn’t training a neural network at least analogous to loading a program into the network?
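    The perceptron point can be made concrete with a few lines of Python. This is a minimal sketch of my own, with purely illustrative weights (nothing here comes from the paper or from an actual trained network): a trained perceptron classifies by computing a linear discriminant, w · x + b > 0.

```python
def perceptron_classify(weights, bias, x):
    """Return 1 if the input lies on the positive side of the learned
    hyperplane, else 0 -- i.e., a linear discriminant."""
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation > 0 else 0

# Suppose training produced these weights (values are illustrative only):
trained_weights = [0.8, -0.5]
trained_bias = -0.1

print(perceptron_classify(trained_weights, trained_bias, [1.0, 0.2]))  # 1
print(perceptron_classify(trained_weights, trained_bias, [0.1, 1.0]))  # 0
```

    On this way of putting it, the learned hyperplane plays the role of the implicit ‘instruction’: change the weights and you change which classification rule the network carries out.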

  6. gualtiero piccinini


    Thanks even more for these very helpful comments.  I will definitely take them into account.  A few quick points:

    1. I’ll try to follow your nice suggestion to move the part on observer-relativity to the end of the paper.

    2. My view is inconsistent with the view that whether a system is a computer is a matter of interpretation (in the intended sense).  In my view, whether something is a computer (of a certain kind) depends on its mechanistic properties, which are an objective matter (i.e., not a matter of interpretation).

    3. I don’t know enough about mRNA translation to tell you whether it’s a computation.  It certainly has some properties in common with computations, in that the inputs and outputs can be easily characterized as strings of digits.  And of course, when the process is appropriately regimented in the lab, there is such a thing as DNA computation.  It’s one area of non-conventional computer science.

    4. On connectionist computation, I would say that there is an analogy between training a network and loading a program into a program-controlled computer, but the disanalogy is more important.  The disanalogy is this: in a computer, you can replace the current program with another, store separate programs in memory and switch between them, etc., without changing the architecture.  In a network, you don’t have the same flexibility.  For any new computation, you need a new network (or in some cases you might be able to do it with a newly trained network, which is almost the same thing).  At any rate, I have written a whole paper on connectionist computation, called “Some Neural Networks Compute, Others Don’t,” in which I talk about this stuff in more detail.  A shorter version, called “Connectionist Computation,” is available on my website.  But if you are interested, I’d be happy to send you the longer version, which is more interesting and is currently under submission (thus, if it gets accepted, I would greatly benefit from your feedback while working on the final version).
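    The disanalogy in point 4 can be illustrated with a toy sketch (my own construction, not from the paper): a stored-program machine changes what it computes simply by loading a different program, with no structural change, whereas a fixed network changes what it computes only by acquiring new weights.

```python
def run_program(program, x):
    """A trivial 'stored-program computer': execute whichever list of
    (operation, argument) instructions is currently loaded, without
    altering the machine itself."""
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
    return x

# Same machine, two different loaded programs:
double_then_inc = [("mul", 2), ("add", 1)]
triple = [("mul", 3)]

print(run_program(double_then_inc, 5))  # 11
print(run_program(triple, 5))           # 15
```

    Swapping `double_then_inc` for `triple` is what program loading buys you; the network analogue would require retraining, which on Gualtiero’s view amounts to building an (almost) new network.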

  7. Eric Thomson

    <i>My view is inconsistent with the view that whether a system is a computer is a matter of interpretation (in the intended sense).  In my view, whether something is a computer (of a certain kind) depends on its mechanistic properties, which are an objective matter (i.e., not a matter of interpretation).</i>

    Yes, I understood that, but my point was that based on this paper, it isn’t clear at all. This paper is much broader than that, consistent with many different interpretations of the ontology of computation. It seems out of place to address the topic at any length in this paper; it would probably be better to just mention it as an issue, state your opinion, note that it is beyond the scope of the paper, and cite the relevant literature. Any reviewer who doesn’t see that is just being silly.

Comments are closed.