Competence, Computation, and Mechanistic Levels


I apologize for the long post.  It’s inspired by an email exchange I’ve had with Anna-Mari Rusanen.

One recurring theme in philosophy of cognitive science is David Marr’s distinction between the computational, algorithmic, and implementational levels (from his book Vision, 1982).  Sometimes people assimilate Marr’s computational level to Chomsky’s notion of competence.  (Marr himself said his distinction was inspired by Chomsky’s competence/performance distinction.)

I think Marr’s distinction is a mess, and it’s not the same as Chomsky’s distinction.  First, some general comments on Marr.

I think Marr was under the spell of a view that was popular at MIT and other places when he worked there, namely, that the only way to explain the mind mechanistically was by appealing to computations.  Computational = mechanistic.  But I think this is a confusion.  One version of this view is what I call pancomputationalism—the view that everything is computational.  There are many more types of mechanism than computing mechanisms.

In addition, Marr, again along with others at MIT and surrounding communities, was not clear on the structure of computational explanation.  He subscribed to what Bill Lycan (in his book Consciousness, 1987) calls “two-levelism”, i.e., the view that explanations of mind come in two levels: the cognitive/algorithmic level and the neuroscientific/implementational level.  Marr also included the “computational” level, but that does not eliminate his two-levelism.  The truth of the matter is that if you understand either brains or computing mechanisms correctly, you understand that there are many, many mechanistic levels (to use Craver’s phrase).

In brains, there are systems, made out of areas, made out of networks, made out of neurons, etc.  At any of these levels, one may ask what the system does and how it is implemented.  (Is this the point made by Churchland and Sejnowski in The Computational Brain?  Yes and no.  Yes, because Churchland and Sejnowski also say there are many levels; no, because Churchland and Sejnowski also subscribe to pancomputationalism, which leads them to conclude that all these levels have computational and algorithmic descriptions.)

In computers, there are processors, made out of datapaths and control units, made out of arithmetic-logic units, memory registers, etc., made out of logic circuits, made out of logic gates, etc.  At any of these levels, one may ask what the system computes, what the algorithm is, and how the system is implemented.  (If Churchland and Sejnowski had been talking about computers rather than brains, they would have been right.)
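
To make the point concrete, here is a toy sketch in Python of the same operation described at two of these levels: a bare input-output specification of 8-bit addition, and a gate-level mechanism (a ripple-carry adder) that realizes it.  All function names here are my own, for illustration only.

```python
def add_functional(x, y):
    """'What the system computes': the input-output mapping, no mechanism."""
    return (x + y) % 256  # 8-bit unsigned addition wraps around at 256

# Several levels down: the same mapping realized by an organization of gates.
def xor(a, b): return a ^ b
def and_(a, b): return a & b
def or_(a, b): return a | b

def full_adder(a, b, carry_in):
    """A 1-bit full adder built out of logic gates."""
    s = xor(xor(a, b), carry_in)
    carry_out = or_(and_(a, b), and_(carry_in, xor(a, b)))
    return s, carry_out

def add_gate_level(x, y):
    """An 8-bit ripple-carry adder: eight full adders chained by their carries."""
    carry, result = 0, 0
    for i in range(8):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result
```

The upper description says only what is computed; the lower one says how a particular organization of components computes it, and the same questions can be asked again one level further down, about the gates themselves.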

The question of whether the brain computes is independent of whether the brain can be understood in levels, and how those levels relate to one another.  Marr did not seem to understand very well how mechanistic levels relate to one another, and on top of that, he confused the question of whether the brain is a computer with the question of how levels are related.

Now, how does Marr’s computational/algorithmic/implementational distinction relate to Chomsky’s competence/performance distinction, and how do both of these distinctions relate to levels of mechanisms?

As I remember, for Chomsky competence is something like a specification of the idealized capacities of a language system, with the vagaries of performance and related constraints (memory size, processing time, distractions, tiredness, etc.) “abstracted away”.  (At least in this limited sense, competence is certainly “abstract”; it ignores many details of performance.)  In other words, it’s the set of sentences that the (idealized) system is capable of recognizing, producing, transforming into one another, etc., including a set of rules for doing the recognition/transformation/production.  Presumably, this idea can be generalized beyond the domain of language.

For Marr, the computational level is simply a specification of the input-output mapping (e.g., addition), without any specification of rules for generating the outputs.  Hence, Marr’s computational level is even more “abstract” than Chomsky’s competence, in the sense that it contains even fewer details.
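
The contrast can be illustrated with a small Python sketch (hypothetical names, mine): the computational-level description fixes only the mapping from inputs to sorted outputs, and two quite different algorithms satisfy that one description equally well.

```python
def insertion_sort(xs):
    """One algorithm: build the output by inserting each element in place."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    """A different algorithm: split, sort the halves, and merge them."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right
```

At Marr’s computational level the two are indistinguishable (same input-output mapping); they differ only at the algorithmic level.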

As I see it, Marr’s computational level is a kind of functional specification.  By functional specification, I mean a specification of the function of a system.  E.g., the stomach processes food so that it acquires certain properties, the lungs exchange oxygen for carbon dioxide, the liver filters certain substances from the blood, etc.  For computing mechanisms, the functional specification is computational in a generic sense of the term. E.g., an adding machine performs addition, etc.

These computational descriptions may be formulated either in terms of the objects denoted by the inputs and outputs of the computation (e.g., numbers) or in terms of the inputs and outputs themselves (numerals).  Marr blurred this distinction, referring freely both to the inputs and outputs and to what they represent, as suited his needs.
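
A minimal Python sketch of the difference (function names are mine): the first description is in terms of the numbers denoted; the second operates on the numerals themselves, as strings of decimal digits.

```python
def add_numbers(m, n):
    """Description in terms of the objects denoted: numbers."""
    return m + n

def add_numerals(a, b):
    """Description in terms of the vehicles themselves: decimal digit strings."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # pad to equal length with leading zeros
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):  # rightmost digits first
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))
```

The two descriptions pick out the same computation, but one quantifies over numbers and the other over strings of digit-symbols.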

When it comes to neural systems, it’s important to know what the system is trying to do in terms of the objects and properties “denoted” by its inputs and outputs.  The resulting descriptions may be called semantic functional descriptions.  What “denoting” amounts to is a complicated question, which I will leave aside.  (As a first approximation, in neuroscience “denoting X” means driving the behavior of the organism based on correlating with X.  See also our recent discussion.)  This kind of semantic functional description of a neural system corresponds to what Marr calls the computational level.  But there is nothing especially computational about this!  On a correct understanding of computation, computation does not presuppose representation, and semantic functional descriptions can apply to non-computational systems.  So you would give the same semantic functional description of a neural system, in terms of the objects and properties denoted by its inputs and outputs, regardless of whether the system turns its inputs into its outputs by performing computations.  (This is one reason I think Marr is misleading.)

Once you have the semantic functional description (a cleaner version of Marr’s description at the “computational level”), you need to start worrying about how the system fulfills its function. This is where mechanistic explanation comes in.  You need to start worrying about the properties of the inputs and outputs and the intermediate steps that transform the inputs into the outputs.  Now you can still give more or fewer details about the internal processes.

If you simply give an idealized description of rules for carrying out the transformations, without necessarily being concerned about whether the system actually follows those rules, you get what Chomsky called a theory of competence.

If you add a detailed description of the intermediate steps actually carried out by the system, you get something analogous to what Marr called the algorithmic level.  (I say analogous because the theory need not be computational in the strict sense of the term, whereas an algorithmic theory is by definition computational in the strict sense of the term.)  If you add enough mechanistic details and constraints, you get what Chomsky calls a theory of performance.

But in order to give these more detailed mechanistic descriptions (algorithms, intermediate steps, etc.), you need to know what the “architecture” of the system is, so you need to get into the so-called “implementing” mechanism (what the components are, what capacities they have, how they are organized).  For not all processes/algorithms run on all machines (unless you are talking about computations in the strict sense AND your machine is computationally universal; but then you need to know that fact, and you need to know a lot of details about the machine before you can draw that conclusion; so either way, you need to know how the machine works).  So there is no neat separation between the algorithmic level and the implementation level.  Instead, there are descriptions of intermediate processes that include varying amounts of detail, at many mechanistic levels.

2 Comments

  1. Corey Maley

    This is an informative post. I have often wondered how Marr’s three-levels view is supposed to fit with any but the most simplistic conception of a computational system, much less a cognitive system. It may fit at some particular level of a computational system (e.g., an assembly-language routine, or a digital logic specification), but not an entire system of any real complexity. Similarly for cognitive systems, which is how I understand what Chomsky had in mind: his performance/competence distinction was not meant to extend beyond a very narrow cognitive phenomenon. Two-levelism (or even three-levelism) just doesn’t seem to be the right way to understand cognitive or computational systems as whole systems.

  2. Basileios Kroustallis

    The post is thought-provoking. I think that Marr’s point in describing different levels of explanation is not that the algorithmic level represents a higher, more determinate level of explanation than the implementational level. He is happy to state that the neurophysiological story is self-sufficient in its own terms, and he also distinguishes between lower and higher stages in this story (from neurons to system areas). Therefore, I don’t think that he wrongly presupposes an inadequate neuronal account that has to be supplemented by a different level of explanation. He only thinks that an independent computational explanation is essential, no matter what the complete neurophysiological story might be. This comes down to the point of representation as computation, and that has to be faced by rival accounts. But it is not related to the question of different (lower or higher) stages in cognitive processes.

    Regarding the computational level, I think Marr wants to describe more than just a functional specification of the system in question. He introduces external, physical constraints into the computational process (e.g., spatial continuity in vision), and furthermore, he attempts to prove that those constraints are necessary and sufficient for the process in question. It is an open question whether that project succeeds, but surely it is more than just stipulating that inputs and outputs are to be thought of in a certain semantic way. He is not satisfied with a simple correlation of denoted elements with inputs and outputs, but also promotes a certain kind of dependence of computational processes upon that semantic content. Again, that raises the problem of computation, but it seems he invests even more in that computational theory than may be supposed.
