Decomposing the hierarchy of thought

Like other social animals, humans are status-conscious creatures, obsessed with hierarchy and rankings. This is obvious in the realm of finance, the entertainment industry, and academic reputation-chasing, but it also turns up in the more staid realms of theory. Psychology and ethology make frequent reference to the distinction between the “higher” and “lower” cognitive capacities of creatures, but these terms are usually allowed to remain metaphorical or tacitly normative—merely a way of praising the specialness of humans.

However, I think we can productively move past treating these terms as metaphors or honorifics. There is real empirical content to distinguishing between higher and lower cognitive capacities. But the hierarchy of thought cannot be understood in terms of any single, simple property. Rather, there are three separate ways in which the higher/lower distinction can be drawn, and once we separate these out we will also be in a position to see what is special about conceptual thought.

Higher cognitive systems can be characterized by representational abstraction, causal autonomy, and free recombination. Abstraction refers to the ability to represent objects and categories in a way that transcends the distinctions that can be made using perception alone. There are several routes to such transcendence. One involves making classifications that cross-cut those made by perception. For example, while water and “heavy water” (deuterium oxide) differ in their chemical composition, they are more or less identical under superficial examination. A creature restricted to making perceptual discriminations might be unable to separate the two, while one that can represent chemical structures could do so. The reverse situation also occurs: caterpillar and butterfly belong to the same category despite being perceptually dissimilar, a fact that takes some biological insight to appreciate. Finally, some categories do not manifest themselves perceptually at all, because they are too large, too small, too distant, or simply have no physical qualities whatsoever. (For example, what is the perceptual “look” of something’s being a Hilbert space, or an allusion to Dante’s Inferno?) Grasping these sorts of things requires taking a large step beyond the representational powers of the senses.

Causal autonomy refers to the ability to represent and reason about something without its immediate presence. In purely stimulus-driven cognition, the impingements of the environment on a creature’s senses are totally responsible for guiding its actions. We are more than bags of reflex arcs in two important ways. First, we can represent things that are not currently present to the senses, even including things that do not now and possibly couldn’t ever exist. Second, the course of our cognition can be detached from our current perceptual circumstances and allowed to proceed endogenously. In its simplest form this occurs during “mind-wandering,” but when it is under intentional control it manifests in the ability to produce sophisticated long-term plans and intentions spanning years and even decades. Complex planning engages the ability to reason hypothetically and in explicitly counterfactual modes, a type of thinking that is also active in imagination and pretense. Causal and modal detachment are two ways in which cognition liberates itself from the present and the actual.

Finally, free recombination is another name for the ability that Gareth Evans picked out under the label of the Generality Constraint. Obeying the constraint requires the ability to combine potentially any piece of information with any other. A little more formally, if you can think that a particular thing has a certain quality, then you can at least entertain the idea that it has any other quality that you can also think about. When this is present in a completely unbounded form, there are no limits on the types of new structures that can be created (aside from whatever purely formal constraints determine whether a set of representations can be put together). In my view, the import of free recombination lies not just in the fact that it lets us build new structures, but also in the fact that it enables complex chains of inference involving them. Without these kinds of freely re-shuffled components, it would be impossible to carry out cross-domain inferences that integrate separate bodies of information. Free recombination underwrites informational integration.
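As a toy illustration (my own sketch, not a formalism from the post), free recombination can be modeled as closure of a repertoire under subject-predicate pairing: if the system can entertain one combination, it can entertain every other well-formed one. The particular subjects and predicates below are invented for the example.

```python
# Toy model of Evans's Generality Constraint: a system that can think "Fa"
# for some subject a and predicate F can entertain "Gb" for any subject b
# and predicate G in its repertoire.
from itertools import product

subjects = ["John", "the caterpillar", "deuterium oxide"]
predicates = ["is happy", "is a butterfly-to-be", "is chemically distinct from water"]

# Free recombination: every subject pairs with every predicate, limited only
# by formal (syntactic) well-formedness.
thoughts = {f"{s} {p}" for s, p in product(subjects, predicates)}

assert len(thoughts) == len(subjects) * len(predicates)  # 3 x 3 = 9 thinkable contents
```

The point of the sketch is that the space of entertainable contents grows multiplicatively with the repertoire, which is what makes unbounded structure-building and cross-domain inference possible.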

My conjecture is that when we talk about higher cognition we are speaking ambiguously, intending to pick out sometimes one, sometimes another of these properties. At times, higher cognitive abilities appear as those that ascend beyond the realm of the senses; at other times, as those that underwrite autonomous, self-guided thought and action, or those that are responsible for unbounded creativity and inferential potential. Most other proposals in the literature pick out one or two of these as essential, but in so doing they have hold of only part of the puzzle (for examples, see Allen & Hauser, 1991; Amati & Shallice, 2007; Camp, 2009; Christensen & Hooker, 2000; Corballis, 2011; Sterelny, 2003).

These properties are logically and theoretically independent: they do not build on each other to form a single hierarchy, and any given creature may possess a patchwork mix of higher and lower faculties. Combining them, however, brings clearly into view the contours of conceptual thought: a conceptual system is one that displays all three of these properties. It occupies the space where all three characteristics converge.
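A minimal bookkeeping sketch (again my own illustration; the capacities and the boolean property assignments are hypothetical) of the claim that conceptuality sits at the intersection of the three properties, while a creature may hold a patchwork of higher and lower capacities:

```python
# Toy classification: a capacity counts as conceptual only where abstraction,
# causal autonomy, and free recombination all converge.
from dataclasses import dataclass

@dataclass
class Capacity:
    name: str
    abstraction: bool
    causal_autonomy: bool
    free_recombination: bool

    def is_conceptual(self) -> bool:
        # Conceptual thought occupies the intersection of all three properties.
        return self.abstraction and self.causal_autonomy and self.free_recombination

# Hypothetical assignments: a stimulus-driven "visual parser" that recombines
# perceptual constituents is higher in one respect but not conceptual, while
# long-term planning (assumed here to display all three properties) is.
visual_parser = Capacity("visual parser", False, False, True)
planning = Capacity("long-term planning", True, True, True)

assert not visual_parser.is_conceptual()
assert planning.is_conceptual()
```

Because the three properties are independent flags rather than rungs on a single ladder, any of the eight combinations is in principle possible for a given capacity.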

This model has a number of virtues, not least of which is that it offers several means to empirically draw the conceptual/nonconceptual distinction both in developmental psychology and in cognitive ethology. Given the haphazard way that experimentalists often toss such language around, this is all to the good.

Two final points should be made. First, I haven’t made any mention of consciousness as a condition on concept possession or higher cognition in general; in this I depart from Stich’s (1978) way of separating doxastic from subdoxastic states. Nor have I made any mention of normativity, or of requirements that concept possessors be able to rationally justify the judgments they make. Concept possession is demanding, but it should be distinguished from these much more stringently intellectualized sorts of concept mastery.

Of course, the best test of an empirical taxonomy is whether it productively organizes the field of phenomena and suggests further theoretical and empirical explorations. Any thoughts on how well this model performs on those criteria would be welcome.

 

References

Allen, C., & Hauser, M. (1991). Concept attribution in nonhuman animals: Theoretical and methodological problems in ascribing complex mental processes. Philosophy of Science, 58, 221-240.

Amati, D., & Shallice, T. (2007). On the emergence of modern humans. Cognition, 103, 358-385.

Camp, E. (2009). Putting thoughts to work: Concepts, systematicity, and stimulus-independence. Philosophy and Phenomenological Research, 78, 275-311.

Christensen, W. D., & Hooker, C. A. (2000). An interactivist-constructivist approach to intelligence: Self-directed anticipative learning. Philosophical Psychology, 13, 7-45.

Corballis, M. C. (2011). The Recursive Mind: The Origins of Human Language, Thought, and Civilization. Princeton: Princeton University Press.

Sterelny, K. (2003). Thought in a Hostile World. Malden, MA: Blackwell.

Stich, S. (1978). Beliefs and subdoxastic states. Philosophy of Science, 45, 499-518.

 

11 Comments

  1. jptxs

    What relationship, if any, might this have to the notions of chunking or concatenation in mental processes? The question may actually be more coherent in Stich’s terms. One can model a belief as being chunked into a subdoxastic notion through repetition or familiarity. But it seems applicable here, too, where one comes to see something as conceptual or subconceptual. The best chunking example in my mind is chess moves compressed into a single notion. Surely any chess opening or other chess scheme begins as conceptual. But after years of playing, some moves are judged by chunked notions that are not even conscious at the time the judgement is made. What is the status of such chunked-up concepts in this framework? Is it valid to think of chunking in this way given this idea of hierarchy?

    • Dan Weiskopf

      I think that the generic notion of chunking is one that applies to all sorts of mental functions, within both higher and lower capacities. So perceptual and motor representations may be chunked through frequent use, just as fluent higher cognitive performances may be. For some great discussion of how this might happen in the perceptual domain, see:

      Schyns, P. G., Goldstone, R. L., & Thibaut, J. P. (1998). The development of features in object concepts. The Behavioral and Brain Sciences, 21(1), 1–54.

  2. Hi Dan – good to see you blogging here! I don’t have anything profound to ask, but I’m wondering where the imagination fits in your scheme. It seems to exhibit causal autonomy and a significant degree of free recombination, but it wouldn’t normally be considered conceptual. This suggests that feature #1, abstraction, is maybe the leading feature of a conceptual system?

    • Dan Weiskopf

      Hi Dan, thanks for the question. First, imagination isn’t a single capacity: Currie and Ravenscroft (2002) distinguish between its perceptual and propositional forms. The former kind is perceptually grounded and perhaps not fully general, though it may display significant causal autonomy. So it wouldn’t count as a form of conceptual cognition. Propositional imagining, on the other hand, is as fully general as propositional representations can be, and also may be abstract. I don’t have any problem considering this sort of imagination to be a conceptual capacity.

      Note that this raises all sorts of interesting questions about the unity of imagination: for instance, whether propositional forms represent an add-on to perceptual forms, or whether they make use of different (potentially dissociable) systems and processes, and whether one form is a precondition for the other. The fact that apparently singular capacities like imagination can occur in these different forms is something that this framework for thinking about higher and lower gives us natural tools to explore.

      • I take it that you think the conceptual/non-conceptual distinction has at least a good chance of being a distinction in kind. Then it would make sense to look especially carefully there for a fault line within the imagination. Maybe Currie and Ravenscroft find one! (I haven’t read Chapter 5 of their book.)

        By contrast, I’m attracted to a view (which seems to appeal to us cortical neurobiology types) according to which the conceptual/non-conceptual distinction is not a distinction in kind. (At least not within human and maybe mammalian cognition more generally.) I guess that was behind my question.

        Suppose we start by supposing the perceptual imagination isn’t conceptual. It does seem to exhibit pretty full forms of two of your factors, though, namely generality and causal autonomy. (You hesitate on generality. Let me say a bit in defense. Insofar as coherent representational combinations within its repertoire are concerned, the perceptual imagination seems pretty general. You can’t imagine red and green all over, or a red sound etc., but a wide range of coherent combinations are OK. Also, maybe you can’t perceptually imagine a loud democracy, but that’s plausibly because there’s no perceptual-imaginative representation of democracy available to be used. Hypothesis: if perceptual-imaginative representations of X and Y are available, and the XY combination is coherent, the XY combination is imaginable. That’s fully general, no?)

        If that’s right, i.e. if the perceptual imagination has two signs of conceptuality, it raises the question of whether the “conceptual imagination” is just the same thing as the perceptual imagination. (Which may be the same thing as conceptual thought more generally, plus judgement the same as perception, etc. Radical!) But that’s blocked by your third factor: abstractness. My reaction is that abstractness comes in a continuum from very concrete to highly abstract. What we call “perceptual imagination” comes at the concrete end, and what we call autonomous “conceptual thought” comes at the abstract end. No distinction in kind. (And when our investigations move from function to mechanism, we shouldn’t expect to find a fault line. As in fact it seems we don’t, in cortex.)

        This isn’t to raise a problem for your framework, I don’t think, but maybe to demonstrate a way of using it. But maybe you’re uncomfortable with a use that makes the conceptual not come out as a kind? Or maybe you just think the smooth transition story has some other independent problems? In that case, maybe the territory in which those problems are found should be brought in as further factors, to serve in distinguishing the conceptual from the non-conceptual?

        Or maybe this line of thinking departs from your target concern of regimenting higher/lower talk. It does at least seem to put pressure on the idea that your model “offers several means to empirically draw the conceptual/nonconceptual distinction”, though.

        • Dan Weiskopf

          These are good examples. For starters, it’s certainly true that these factors are themselves not unitary and need to be explored in a more fine-grained way. Causal autonomy in particular comes in a lot of forms, as the rather loose characterization that I gave it indicates. I don’t think there is necessarily a single way to order these subforms either. But degrees and types of autonomy are clearly part of what is meant by higher cognition. I also think that conceptual thought tends paradigmatically to possess most of the subforms that cluster under causal autonomy. So insofar as it occupies an endpoint of this factor, it has a special or distinctive status.

          On recombination, here again there are degrees to which systems can fall short of the full combinatorial ability. I’m thinking in particular of views on which central cognition is modular or otherwise disunified. Similarly for abstraction; this is clearly something that also comes in different subforms and degrees. So I don’t think the view that I’m sketching is necessarily that far from the transition-based view that you have in mind.

          Final point, though: I don’t quite see why degree differences that are sufficiently wide aren’t sometimes kind differences (given, of course, my background skepticism about anything beyond the notion of a relevant kind). Having the ability to represent extraordinarily abstract categories may be something that is achieved only by many small incremental steps away from perception, but that doesn’t show that the end state thus achieved isn’t different in kind from the perceptual point of origin.

    • Dan Weiskopf

      I wish I had an answer to this! A small clarificatory point: the conditions are meant to apply primarily to capacities, not to whole creatures. Derivatively, a creature has higher cognition if it has at least one higher cognitive capacity.

      So the question is whether any capacities display free recombination but not autonomy or abstraction. Here the answer might be yes. A slightly fictionalized example: there might be a “visual parser” that operates automatically, in reflex-like fashion, using only a set of perceptual representations to decompose seen objects into their salient parts. This might involve freely recombining the constituents of percepts, but not doing so in an autonomous way; that is, the operations are all highly constrained in processing terms, and they are always stimulus-driven, not endogenously activated. It doesn’t seem too far-fetched to envision the existence of such devices.

  3. Kristina Musholt

    Hi Dan,

    Thanks for your excellent posts, and sorry for being a bit late to this discussion!

    I very much like the model you sketch here. I don’t have any well worked out questions at this point, but I was wondering whether, insofar as you say that the factors you outline come in degrees, you would also want to say that there are degrees of concept possession? Also, do you have any views on the possible progression from lower (i.e. nonconceptual) to higher (i.e. conceptual) cognitive capacities? In particular, do you have any thoughts on what role (if any) language might play here?

    I am also wondering how this model fits with the dual-system accounts of cognition that seem to be becoming increasingly popular. Is a model such as yours, which admits of degrees (or a transition-based model, in the way proposed by Dan Ryder) compatible with a dual-system view?

  4. Dan Weiskopf

    Hi Kristina, thanks for these questions. I don’t treat concept possession as such as being graded. That’s because once you have a system that satisfies the conditions on being a conceptual system, having a concept is just a matter of having a representation in that system. But this can be a pretty minimal affair — there is nothing in particular that one needs to represent something as in order to have a concept.

    As for the progression from lower to higher, I don’t have any evolutionary speculations on the subject, but in terms of psychological processing it’s clear that language plays all sorts of roles in selecting and organizing information, including some prominent roles in concept acquisition. It’s a great way to get concepts for very nearly free, for example (I talk a bit about this in “The Origins of Concepts”).

    The dual-systems question is excellent, and I need to think about it some more. For one thing, I’m more inclined to think in terms of there being two processing modes rather than two systems, since talk of systems connotes distinct parts of the cognitive architecture. In this sense, System 1/2 cognition crosscuts the lower/higher cognition distinction. But I need to work these relations out in greater detail.

  5. Gilad

    Hi Dan,

    I’m interested in what you called “Abstraction” as the ability of a conceptual system to represent according to distinctions unavailable in perceptual content. As I myself try to work out a better understanding of what abstraction as a cognitive performance is, I have a few questions regarding this issue.

    1. Doesn’t the ability to make distinctions that are decoupled from perception already assume a set of capabilities, such as a conceptual repertoire separated to some extent from perception? If so, isn’t abstraction the result of having higher cognitive capacities like free recombination and causal autonomy, rather than a separate cognitive skill that explains them?

    2. Do you see abstraction as exclusively related to perception, so that it transcends the perceptual ground, rather than as a more general ability to make new distinctions, conceptual or otherwise, that ignore lower-level delineations?

    3. Do you see a place in your model for Peirce’s hypostatic abstraction, the conversion of a predicate into a relation, as in moving from “The honey is sweet” to “Honey possesses sweetness”, which is purely conceptual and linguistic?

    4. Do you think that abstraction can be seen as the general ability to ignore certain distinctions in order to establish new ones (in accordance with one of the dictionary definitions of abstraction), and that it therefore cross-cuts the perceptual/non-perceptual distinction?

    5. Are you familiar with philosophical/psychological literature that deals with abstraction as a central issue and explanatory tool?
