Some Roles for Language in Concept-Driven Thinking: Comments on Nicholas Shea’s Concepts at the Interface
Eric Margolis
University of British Columbia

There are many different questions that philosophers should be asking about concepts. One of the things that makes Concepts at the Interface such a valuable book is that it recognizes that much recent work on concepts has been too narrowly focused on questions about categorization—particularly how concepts are applied in perception—and that we need to think about concepts more broadly, including how they facilitate powerful forms of cognition by drawing upon and orchestrating a variety of specialized processes in different parts of the mind.

Roughly, the picture Shea offers is that concepts lead a dual life as plug-and-play devices / labels. On the one hand, they are unstructured units in working memory that can be freely combined with one another, thereby supporting logical and content-general reasoning. On the other hand, they function as labels for potentially large and diverse bodies of information, only some of which is activated on any given occasion. In conscious deliberate thinking, the arrangement of concepts in working memory operates at both levels simultaneously: it initiates simulations and related processes in the informational structures that the concepts, as labels, activate, and new concepts and new combinations of concepts are subsequently formed in response to this activity and as a consequence of relevant content-general operations.

Suppose you are wondering whether you will be able to get that stuffed chair home from the store all by yourself. For Shea, the concepts involved in this thought drive various simulations (e.g., a visuomotor simulation of trying to place the chair in the backseat of your car), which cause the suppositional scenario in working memory to fill out. Shea’s insight here is that the changes that ensue in working memory are effectively inferences that can lead to productive conclusions (e.g., the decision that your car is too small and that you are better off asking your friend if you can borrow her minivan).

Shea covers a lot of ground in setting out this picture, leaving us with a realistic high-level synthesis in which concepts can be seen as multifaceted tools that are essential to human intelligence and creativity. But one gap in his discussion is that we aren’t told very much about the role of natural language in concept-driven deliberate thought. It’s noteworthy that people often find themselves talking through a problem, engaging in imagined conversations, or producing snippets of inner speech when leisurely considering a possible course of action—all of which suggests that language is often involved in conscious deliberate thinking. So a fully developed account of the plug-and-play / label model will eventually have to be more specific about what language is and isn’t doing here. To help advance this project, I’d like to mention a few general proposals for Shea to consider.

Proposal 1. Language is essential to forming certain conceptual combinations. On this view, the key feature of language is that it is domain general (in that it isn’t confined to representing any particular type of content) and has a compositional semantics for generating endlessly many novel combinations of words, phrases, and sentences. Given that concepts as plug-and-play devices are supposed to combine freely with one another, perhaps they do so precisely when realized as words. Shea notes one reason for not pursuing this possibility as a general account of concepts: there is evidence that individuals with severe linguistic impairments can nonetheless solve arithmetic and logical problems that would seem to involve conscious deliberate thought. At the same time, some have argued that humans share with other animals a basic cognitive architecture in which the contents of specialized cognitive systems cannot be combined with the contents of other specialized cognitive systems, and that this limitation is overcome in the human mind as we learn to map the contents of these different systems onto language; in which case, language would be the medium for domain-general cognition (Carruthers 2002; Spelke 2003, 2022), or perhaps for some types of domain-general cognition.

Proposal 2. Language increases conceptual combinations. Another possibility is that language per se isn’t necessary for combining contents that derive from different specialized systems, but that it is particularly efficient at forming these sorts of combinations and, as a consequence, helps the conceptual system approximate the ideal of the generality constraint. For example, many combinations may be unlikely to form, or downright difficult to form, without assistance from the linguistic system. Sticking to purely non-linguistic forms of thought, it might never occur to you to put colourless, green, and ideas together, or there might be some cognitive resistance to combining them. But forming the verbal construction “colourless green ideas” is effortless in that the linguistic system generates this combination as easily as any other with the same combinatorial structure (e.g., the phrase “priceless golden artifacts”). If this is right, then language production that is relatively unconstrained in the combinations it can perform might be an important part of the explanation of how concepts come to be combined with one another in a comparably unconstrained manner.
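To make the combinatorial point concrete, here is a minimal sketch (Python, with made-up word lists; none of this comes from Shea) of a generator that is blind to semantic plausibility:

```python
# Toy illustration (hypothetical word lists): a compositional generator
# produces "colourless green ideas" by the same rule, and with the same
# ease, as "priceless golden artifacts".
import itertools

first_adjectives = ["colourless", "priceless"]
second_adjectives = ["green", "golden"]
nouns = ["ideas", "artifacts"]

# One rule, Phrase -> Adj1 Adj2 N: every combination is equally available.
for a1, a2, n in itertools.product(first_adjectives, second_adjectives, nouns):
    print(a1, a2, n)
```

The anomalous phrase and the sensible one fall out of the same rule with the same ease; any resistance to the former would have to come from elsewhere in the mind, not from the combinatorial machinery itself.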

Proposal 3. Inner speech inserts linguistic representations into the suppositional scenario. This third possibility emphasizes the downstream psychological impact of inner speech. In some cases, this could be as simple as an individual word (e.g., “Paris”) directly initiating activity in specialized systems that fall outside of working memory, activity that results in new content being placed inside working memory (e.g., the word initiating visual imagery pertaining to Paris, which leads to concepts associated with outdoor cafés entering the suppositional scenario).

Proposal 4. Inner speech cycles. This last possibility is an extension of the previous one, but this time the idea is that inner speech interacts with the suppositional scenario in a more dynamic way: changes in the suppositional scenario lead to further inner speech, which leads to further changes in the suppositional scenario. In other words, as one hears what one is saying to oneself, this initiates simulations and related processes that affect what appears in working memory, which affects what one says next, which triggers changes in the simulations or new simulations, and so on (see Dennett 1991 and Carruthers 2006 for suggestions along these lines).
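As a purely illustrative sketch (the contents and the lookup standing in for real simulations are made up, and the gloss is mine rather than Shea’s), the cycle can be pictured as a simple feedback loop: verbalize the latest content of the scenario, let that utterance trigger a simulation, and fold the results back into the scenario before the next utterance.

```python
# Toy feedback loop (hypothetical contents): inner speech initiates a
# simulation, the simulation's output enters the suppositional scenario,
# and the enlarged scenario shapes the next bout of inner speech.

def run_simulation(utterance: str) -> list:
    """Stand-in for the specialized systems outside working memory."""
    associations = {
        "what about the chair in the backseat?": ["the car is too small"],
        "what about the car is too small?": ["borrow the minivan"],
    }
    return associations.get(utterance, [])

def inner_speech_cycle(scenario: list, steps: int = 3) -> list:
    for _ in range(steps):
        utterance = "what about " + scenario[-1] + "?"  # verbalize the latest content
        for content in run_simulation(utterance):       # "hear" it; run a simulation
            if content not in scenario:
                scenario.append(content)                # results enter the scenario
    return scenario

print(inner_speech_cycle(["the chair in the backseat"]))
# -> ['the chair in the backseat', 'the car is too small', 'borrow the minivan']
```

Each pass through the loop plays the role of one bout of inner speech: what is “said” depends on the current scenario, and what the simulation returns changes what can be said next.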

Of course, these aren’t the only possibilities. But to the extent that inner speech seems to play a role in conscious deliberate thought, an important continuation of Shea’s project is to say exactly what inner speech does when we engage in this sort of cognition and what the implications might be for the view that concepts are plug-and-play devices / labels.

References

Carruthers, P. (2002). The cognitive functions of language. Behavioral and Brain Sciences, 25(6), 657–674.

Carruthers, P. (2006). The Architecture of the Mind: Massive Modularity and the Flexibility of Thought. Oxford: Oxford University Press.

Dennett, D. (1991). Consciousness Explained. Boston: Little, Brown and Company.

Spelke, E. S. (2003). What makes us smart? Core knowledge and natural language. In D. Gentner & S. Goldin-Meadow (eds.), Language in Mind: Advances in the Study of Language and Thought, pp. 277–311. Cambridge, MA: MIT Press.

Spelke, E. S. (2022). What Babies Know: Core Knowledge and Composition, Vol. 1. Oxford: Oxford University Press.
