Conceptual pluralism is the view that, for any category we can think about, we typically possess many different concepts of it—that is, many different ways of representing that category in higher cognition. Different kinds of concepts encode their own specific perspective on the target category, and each one can be deployed in a host of interconnected cognitive tasks.
A challenge for pluralism is explaining why, given that concepts display this kind of polymorphism, we should think that there is anything interesting to say about them in general, from the standpoint of psychological theorizing. Doesn’t the term “concept” become just a label for this disunified and unsystematic grab-bag of representations?
I’d like to approach this topic by tracing out some connections between theories of concepts and some related ideas in the psychology of memory. The links between these two fields are worth exploring insofar as they hint at some wider generalizations about the adaptive nature of cognition.
In his 1979 article “Four Points to Remember: A Tetrahedral Model of Memory Experiments,” James Jenkins proposed that we think of experimental paradigms as having four independently manipulable components: subject population, to-be-remembered materials, orienting or encoding task, and criterial or retrieval task. As the paper’s title suggests, when diagrammed these form the vertices of a tetrahedron, which gives us an abstract space of possible experiments to explore.
Relationships between particular values of these component variables are often stated without qualification, as if they reflected universal facts about the operation of memory processes. Jenkins’ model suggests, however, that such generalizations should always be implicitly qualified or hedged by reference to all the other components. Claims about how this particular subject population performs when manipulating these materials in this task context may not hold if other features of the situation are perturbed.
As I read Jenkins, then, he is advancing a form of contextualism: memory generalizations are highly dependent on the total character of the experimental setup. They are situationally circumscribed and always potentially fragile.
To illustrate this, consider the emergence of the transfer-appropriate processing (TAP) framework (Morris, Bransford, & Franks, 1977; Roediger, Buckner, & McDermott, 1999), which occurred during the heyday of the levels-of-processing model. That model posited that whether an item is remembered is determined primarily by the depth to which it is processed, or the degree of elaboration it undergoes, so that basic perceptual analysis yields poorer recall than more complex conceptual or semantic encoding.
Against this, the TAP framework claims that whether a particular item is remembered depends not on depth of processing, but instead on whether there is a fit between the encoding and retrieval tasks. So while in some conditions phonological properties of verbal materials are poorly retained relative to their semantic properties, this relationship can be reversed by modifying these conditions.
This nicely illustrates Jenkins’ point. Memory performance is enmeshed in a complex web of factors that need to be carefully teased apart and manipulated. Conceptual pluralism holds that the same is true of human categorization. The perspective taken on a category and how information about it is retrieved and processed are sensitive to a range of contextual parameters, and this gives rise to the diversity of experimental results that show the effects of structures such as prototypes, exemplars, causal models, ideals and category norms, and so on.
An upshot of contextualism is that laws of categorization and memory that hold with complete generality may be thin on the ground (see Roediger, 2008, where this is explored in depth). And this is the eliminativist’s cue to pounce.
Standard eliminativist arguments run as follows: a class of things forms a kind (or a natural kind) when there is a sufficiently large and interesting body of inductively confirmable generalizations about its members. But there is no such body of generalizations covering all concepts whatsoever, irrespective of their form. So concepts are not a kind, and they will have no place in our best psychological science (Machery, 2009).
Versions of this argument often appeal to specific notions of natural kinds such as Richard Boyd’s homeostatic property cluster theory—Griffiths (1997) seems to be the point of origin for this maneuver—but these substitutions don’t affect its basic logic. So why not embrace eliminativism and split concepts into subkinds?
For one thing, I’ve increasingly come to doubt whether the notion of a natural kind deployed here is actually that useful in understanding scientific theories, models, and practices, for reasons that Ian Hacking (2007) lucidly explains. I’m especially suspicious of the idea that the sciences only deal in natural kinds, or that their vocabulary ought to be purged of terms that are not kind-referring. This wildly strong claim has often been stipulated but never, as far as I can tell, shown to hold at any level of detail with respect to the practices of any specific science.
What does seem correct is that there are what Hacking, following Nelson Goodman, calls “relevant kinds”—classifications that hold greater or lesser value relative to some established ends and practices within a field. My own notion of functional kinds is meant to be a way of spelling out one way in which a category can be relevant, namely by playing a role in a number of well-confirmed models. So the question is whether the conceptual/nonconceptual distinction is a relevant one in psychology.
Suppose we followed the eliminativist and agreed to keep talk of representations such as prototypes, causal models, and the rest, but declined to use any general term grouping them together by their common function in higher cognition. A few distinctions that we might reasonably want to make immediately become inaccessible.
First, it seems contingent that humans have these particular representations in their higher cognitive toolkit. We might have (or develop) others, and nonhuman cognizers might have their own proprietary ways of representing categories. The list of ways that creatures might conceptualize the world seems to be quite open-ended. It’s a familiar point that if animals have concepts, they certainly need not resemble human concepts in their form and content. For that matter, it’s perfectly coherent that there could be nonhuman creatures that reason and represent the world using an utterly foreign sort of inner code. Without reference to their common functional role in higher cognition, however, we lack an architectural framework for even stating the similarities and differences between human and nonhuman concepts.
The same taxonomic problems arise within human cognition itself. Prototypes and exemplars have been invoked to explain both visual object recognition and categorization in higher cognition. That is, they play roles both in perceptual processing and in conceptual thought. Reference only to types of representations cannot capture this functional distinction. Adding the qualifier “perceptual” or “conceptual” would only reintroduce the distinction, and also invite more general questions about what separates those representations labeled conceptual from those labeled non-conceptual.
This seems more than sufficient to establish relevance. Of course, a contextualist theory of concepts also has its own explanatory burden to discharge. It needs to give an account of why in one sort of context this particular kind of process is invoked, and why variations in parameters of the context promote the use of alternate modes of processing. If these variations are systematic, there must be some cognitive apparatus that pairs circumstances with types of representations for the purposes of certain higher cognitive tasks. The existence of this web of contextually parameterized generalizations is itself a fact that stands in need of explanation.
What this means is that a pluralist, contextualist theory of concepts takes as its subject matter the properties of the architecture of higher cognition that enable it to select and deploy information for a particular kind of processing given the right inner and outer circumstances. Such a theory aims to capture the adaptive structure of the conceptual system. In my next post, I’ll sketch some proposals about how to understand this structure.
Griffiths, P. E. (1997). What Emotions Really Are. Chicago, IL: University of Chicago Press.
Hacking, I. (2007). Natural kinds: Rosy dawn, scholastic twilight. Royal Institute of Philosophy Supplement, 61, 203–239.
Jenkins, J. (1979). Four points to remember: A tetrahedral model of memory experiments. In L. S. Cermak & F. I. M. Craik (Eds.), Levels of Processing in Human Memory (pp. 429–446). Hillsdale, NJ: Erlbaum.
Machery, E. (2009). Doing Without Concepts. Oxford, UK: Oxford University Press.
Morris, C. D., Bransford, J. D., & Franks, J. J. (1977). Levels of processing versus transfer appropriate processing. Journal of Verbal Learning and Verbal Behavior, 16(5), 519–533.
Roediger, H. L. (2008). Relativity of remembering: Why the laws of memory vanished. Annual Review of Psychology, 59, 225–254.
Roediger, H. L., Buckner, R. L., & McDermott, K. B. (1999). Components of processing. In J. K. Foster & M. Jelicic (Eds.), Memory: Systems, Process or Function? (pp. 31–65). Oxford, UK: Oxford University Press.