Does Computation Require Representation?

Most of the philosophers who discuss computation are interested in computation because they are interested in the computational theory of cognition. Cognitive systems are typically assumed to represent things, and computation is supposed to help explain how they represent. So, many philosophers conclude that computation is the manipulation of representations. Or perhaps computation is a specific kind of manipulation of a specific kind of representation. (People disagree about which representations and manipulations are needed for computation.)

This semantic view of computation, popular as it may be, doesn’t hold up under scrutiny. The main problem is that just about any paradigmatic example of a computing system can be defined without positing representations. A digital computer can be programmed to alphabetize meaningless strings of letters. A Turing machine can be defined so as to manipulate meaningless symbols in meaningless ways. And so on.
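
Here is a minimal sketch of the alphabetizing example (the particular strings are arbitrary placeholders): the sorting rule operates entirely on the characters themselves, so nothing in the process has to represent anything.

```python
# A minimal sketch: alphabetizing meaningless strings. The comparison is
# purely syntactic (character ordering), so no representation is involved.
strings = ["qzv", "blk", "aax", "mrt"]  # arbitrary, meaningless placeholders

def alphabetize(items):
    # sorted() compares the symbols themselves, not anything they stand for
    return sorted(items)

print(alphabetize(strings))  # ['aax', 'blk', 'mrt', 'qzv']
```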

The traditional alternative to the semantic view is the mapping view, according to which all there is to physical computation is a mapping between a physical and a computational description of a system. According to the mapping view, a physical system computes just in case there is a computational description that maps onto it. The main problem with the mapping view is that it leads to pancomputationalism–the view that everything computes. For just about any physical system is such that a computational description can map onto it. Even sophisticated mapping views, which are augmented by counterfactual, causal, or dispositional bells and whistles, still lead to pancomputationalism. But it doesn’t seem that everything computes–at least, not everything computes in the same sense. (I plan to discuss pancomputationalism in a later post.)

The mechanistic account of computation attempts to provide an adequate account of physical computation, one that does justice to the practices of the computational sciences without requiring that computations manipulate representations and without falling into pancomputationalism.

The way I formulate the mechanistic account proceeds in steps. First, I give an account of mechanisms, which are just systems of organized components performing functions. Second, I give an account of the teleological functions of biological systems and artifacts–an account according to which teleological functions are causal contributions to the goals of organisms. Third, I offer a specific account of computational (teleological) functions.

Roughly, a physical system is a computing system just in case it is a mechanism, it performs teleological functions, and its teleological functions include manipulating vehicles in accordance with a rule that is sensitive to differences between different portions of the vehicles. (Of course, the devil is in the detail, which I cannot go into here.) The important points here are that no representations are required for computation (although most computations surely do manipulate representations, which is perfectly consistent with the mechanistic account) and that most systems are excluded from the class of computing systems, so pancomputationalism is avoided.
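
To make the idea of a rule sensitive to differences between portions of the vehicles a bit more concrete, here is a toy sketch (the digit strings and the particular rule are merely illustrative): the vehicles are strings of digits, and the rule responds only to how the two halves of each string differ, not to anything the digits might stand for.

```python
# A toy sketch (illustrative only): vehicles are digit strings, and the rule
# is sensitive only to differences between their two portions (halves).
def rule(vehicle: str) -> str:
    mid = len(vehicle) // 2
    first, second = vehicle[:mid], vehicle[mid:]
    # Swap the portions whenever the first is lexicographically greater;
    # nothing here depends on what the digits represent.
    return second + first if first > second else vehicle

print(rule("7301"))  # '0173' -- portions swapped
print(rule("0173"))  # '0173' -- left unchanged
```

Of course, on the mechanistic account, implementing such a rule is not yet sufficient for computing: the manipulation must also be among the system’s teleological functions.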

Before concluding this post, I’d like to add that since I started working on the mechanistic account, others have also offered mechanistic accounts of computation (Marcin Milkowski, David M. Kaplan) or accounts that are closely related (Nir Fresco). So the mechanistic account is catching on 🙂

26 Comments

  1. Eric Thomson

    Are there any bioteleological systems that do not manipulate “vehicles in accordance with a rule that is sensitive to differences between different portions of the vehicles”? Intuitively, I could see things like simple enzymes or phototaxis implementing computations under that definition. What of the concern that this is still too inclusive a notion of computation?

    • Gualtiero Piccinini

      Thanks, Eric. Great point. Some people argue that bacteria compute. I don’t know enough about bacteria to form an opinion. According to the mechanistic account, that is an empirical question. There is probably room for some good papers about this topic.

  2. Telos is in the eye of the beholder. I get suspicious of invocations of teleology. And what is a “vehicle”? I am also unconvinced that pancomputationalism is a problem. If the answer is “everything computes”, maybe we should rethink the question, but not discard the answer for being counterintuitive.

    • Gualtiero Piccinini

      Thanks, John. Telos is a complicated matter; some of it may even be in the eye of the beholder. That’s not necessarily a bad thing, depending on what that means. Anyway, in the book I give an account of teleological functions in terms of current causal powers, so that the truthmakers for teleological claims are nonteleological facts about the world. A vehicle is a physical variable whose values can enter a system, be changed by the system, and exit the system. On whether everything computes, see my next post.

  3. Ken

    Here is a simple doubt I have about a mechanistic account of computation. So, a familiar idea is that a mechanistic explanation is an explanation of a process in terms of lower level processes (entities and activities). But, why would a process’s being a computation (or not) depend on the lower level subprocesses at all? That seems to me irrelevant.

    Or, put the matter another way. It looks as though, on a mechanistic view, the fundamental processes in the world just could not, as a kind of conceptual truth, instantiate a computational process.

    • Gualtiero Piccinini

      Thanks, Ken. According to the mechanistic account, physical computation (in the most interesting sense) requires the performance of (computational) teleological functions. Fundamental physical processes do not perform any teleological functions. Therefore, fundamental physical processes are not computational. But surely fundamental physical processes may be computational in broader, weaker senses than the one explicated by the mechanistic account. I am writing a long paper with Neal Anderson (U Mass, Electrical and Computer Engineering) that discusses this in more detail.

      • Ken

        Hi, Gualtiero,
        I don’t think your reply addresses my concern. I proposed that if you are an MDC sort of mechanist, then one consequence would be that the fundamental processes in the world could not be computational. That, I assumed, would be a bad consequence for your view.

        Your first reply, however, is to give a second reason why the fundamental processes in the world could not be computational. They are not teleological. But, that is merely to give a second reason why your view leads to a bad conclusion, right?

        So, here is what might be another way to make the point regarding the fundamental level. On your view, quantum computation is not computation, right?

        • L.No

          Ken, a quantum computer is not a fundamental phenomenon but a complex system. So your point is confused. One may grant your initial point and also accept that q-computation is possible.

          • Ken

            My point might be confused, but let me check. So, the notion of fundamental that I think is in play is that there are no “lower level” entities and activities that explain it. Are you saying that there are lower level entities and activities that explain what is going on in a quantum computer?

  4. Gualtiero Piccinini

    Ken, thanks for your reply. Two things.

    First, the mechanistic account does not preclude a non-decomposable entity from performing computations, although it is a sort of “degenerate” case. In the general case, computing mechanisms are made out of other (smaller) computing mechanisms, which are made of (even smaller) computing mechanisms, and so forth, until we reach primitive computing components. A primitive computing component may or may not be decomposable, for all the mechanistic account cares, but it won’t be decomposable into anything that performs computations.
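
    Here is a toy illustration of that kind of decomposition (the gate-level example is merely illustrative): a full adder is built out of half adders, which are in turn built out of gates, treated here as the primitive computing components.

    ```python
    # Illustrative sketch: a computing mechanism (a full adder) decomposed into
    # smaller computing mechanisms (half adders), which bottom out in primitive
    # computing components (gates).
    def AND(a, b): return a & b
    def OR(a, b): return a | b
    def XOR(a, b): return a ^ b

    def half_adder(a, b):
        return XOR(a, b), AND(a, b)        # (sum, carry)

    def full_adder(a, b, carry_in):
        s1, c1 = half_adder(a, b)          # composed of half adders
        s2, c2 = half_adder(s1, carry_in)
        return s2, OR(c1, c2)              # (sum, carry_out)

    print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = binary 11
    ```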

    Second, nothing in the mechanistic account prevents “quantum computing systems” from computing. It’s just that they use non-classical (i.e., quantum) vehicles rather than classical vehicles.

    • Ken

      So, let me pursue this point:
      “First, the mechanistic account does not preclude a non-decomposable entity from performing computations, although it is a sort of “degenerate” case. ”

      So, this looks like you are saying that computation does not require “decomposability” into entities and activities. Is that right?

      • Gualtiero Piccinini

        Right. But that’s the special and rather trivial case of primitive computing components. In more interesting cases, computing systems are decomposable into simpler computing systems.

  5. My worry is also that you’re simply swapping out dogmatic mysteries here, and that different versions of all the old problems will spring up as a result. Intentionality is always the fly in the ointment.

    As a tourist, anyway, I see the empirical question as one of how computation, as a species of intentional cognition, participates in the (reverse) engineering of physical information processing systems. This is the question that I think cognitive neuroscience will progressively inform by giving us a picture of what it means to be a biomechanism attempting to cognize the way it (reverse) engineers mechanisms.

    In the meantime, we can make numerous guesses, particularly about the kind of information that biomechanism (in this case, you or me) would *not* be able to intuitively access. In a great many cases, this turns out to be *mechanical* information. The complexities overwhelm us, and we cognize via *correlational* heuristics. We simply neglect mechanical information, causes, and rely on adaptive cues and stable backgrounds to explain/predict/manipulate.

    Heuristics, adaptive cues and stable backgrounds (ecologies) give you the selectivity you need to discriminate between the systems we are prone to call ‘computational’ and those we are not. The *correlational* status of the heuristic systems involved (including metacognitive mechanisms) provides a parsimonious way to explain away the apparent peculiarities of computation (and most importantly, the difference between what researchers do and what we think they do). And you have an account that is mechanistic from head to toe, understanding computation as traditionally conceived as the product of a superordinate system taking both cognizer and cognized as ‘components.’

    This is one possible way to go ‘full mechanical.’

    • Gualtiero Piccinini

      Thanks. Luckily, I’m not trying to explain intentionality, at least in this book. I think it’s an additional advantage of the mechanistic account that it doesn’t require an account of intentionality/representation, because it doesn’t posit that computation requires representation. Of course, typical computations do manipulate representations, and giving a full account of those does require an account of representation/intentionality. But that is a separate project.

        • Gualtiero Piccinini

          Thanks, Scott. I don’t think I agree. Physical systems must possess very specific properties in order to perform computations in the most interesting sense (the sense in which our laptops and iphones compute). Of course, there are mappings from computational descriptions to the microphysical states of such systems. But it takes a lot more than mappings from computational descriptions to the microphysical states of a physical system for that system to perform computations (in the most interesting sense).

          • I look forward to your book, Gualtiero–very much. This whole dimension of the debate is something I personally need to learn much more about. My stance is that problems involving intentionality are systematic, that the reason you find so many of the same patterns arising in different domains (such as the parallel dualism in philosophy of mind and computer science) is a good indicator of heuristic malfunction along the lines Wimsatt describes. You, Carl, and your fellow ‘new mechanists’ are on the sharpest cutting edge, I reckon.

    • Gualtiero Piccinini

      Thanks for the query, ihtio. No, performing computations is not even close to being sufficient for being a mind. Even if performing some sorts of computation were sufficient for being a mind (a huge if), it would take very special computations–not just any computation.

  6. Mike Tintner

    “no representations are required for computation”… are you suggesting computation can take place without a mind attached, one that depends on representation and converts its computations into representations?

    The human mind cannot think – compute – without continually attempting to “make sense of”/”realise” what is being thought about. It is continually reality checking – and this is essential for real world survival.

    Note: “A digital computer can be programmed to alphabetize meaningless strings of letters.” You – being a human mind – couldn’t help assessing whether the computation here had “meaning” – and actually that ultimately involves “sense”/”reality” – being about some thing in the world.

    Computers and computation are an extension of the human mind. There is no program that functions without a human minder.

    So yes, you can be subject to the illusion that computation can take place without representation, because symbols can be physically separated from their referents in a computer or on a page. But they can’t be separated in a functioning human mind, which will automatically assess symbols without referents as “meaningless.”

    • Gualtiero Piccinini

      Mike, thanks for your comments, to which I am very sympathetic. The main thing I’m suggesting is that a physical system can perform computations without the vehicles of the computation being representations–that is, without the vehicles representing anything at all. You don’t seem to disagree on this point. You raise the question of whether a physical process can be a computation (in the most interesting sense) without there being a mind that interprets it as such. You think there can’t be. I think there can be. What’s sufficient IMHO is the manipulation of vehicles according to a rule defined over the vehicles in such a way that the manipulation contributes to the goals of organisms. So organisms have to be there, but there is no need for a mind external to the computation. Otherwise minds could not be computational on pain of infinite regress!

      • Mike Tintner

        Well, as we seem to agree, the most “interesting” point is that all (no?) existing computational processes are implicitly if not explicitly representational – and, more interestingly still, (pace me), consist implicitly of “embodied information” about the real world. (Even fantasy/makebelieve info/worlds *have* to be classified as not-real).

        And this is of extreme importance because we have a very large percentage of people and *concerned professionals* who are still working on the basis that thought/computation can be purely logical/symbolic and non-representational. The entire fantasy of a Singularity is based on this. A great many actual strong AI projects are based on this. An awful lot of people are wasting an awful lot of time and energy, based on this.

        And all this is because there is no truly good philosophy that explains how thought/computation is and has to be basically representational. That’s where the real challenge and action lie – to explain how thought/computation are not just embodied in processing (as embodied cog sci validly maintains) but embodied in information.

        You can argue that a monkey pounding away on a typewriter is engaged in non-representational computation and purposefully producing meaningless groups of letters. So what? Why waste time that way, when there are so vastly more important and urgent dimensions of computation to study?

        • Mike Tintner

          Out of extremely-framed debates like this often come interesting ideas.

          Here I think off the top of my head is one. We’ve effectively agreed that you can have non-representational computation (a la monkey).

          What I suggest you can’t have is non-representational CONCEPTUAL computation. Note: there is no such thing at the moment. Computers do not currently handle concepts. Being logicomathematical, they handle variables, which are not the same thing at all.

          A computer that could handle any concept in our language, like say BOX or LINE as you can, would have to be representational and embodied in its computations (a robot using its body for computations) – as you are.
