Distinctively Human

Sarah Fisher

In Concepts at the Interface, Shea offers us a compelling account of what concepts are and why they matter. His nuanced and careful treatment captures the function and feel of offline deliberation, ranging from abstract logical reasoning to building concrete suppositional scenarios.

The puzzle motivating Shea’s analysis is how concepts can fulfil the distinct roles attributed to them. Take an ordinary concept like lavender. On the one hand, it can appear as an element in relatively abstract reasoning. For example, someone might infer from the premise that lavender is a purple plant the conclusion that some plants are purple.

On the other hand, the concept of lavender plumbs deep wells of knowledge accumulated through concrete experience. For instance, an individual might store under that label visual representations of the plant, olfactory representations of its scent, motor representations of picking the flowers and filling scented bags with their petals, and facts about lavender’s cultural associations with remembrance, perhaps reactivating affective states of happiness or nostalgia.

Shea seeks to explain how a single mental representation can perform both functions—one abstract and general, the other concrete and specific—by interfacing between different psychological processes. On the abstract side, concepts are constituents of thoughts that have syntactic and semantic properties, similar to words in a language. These are flexibly combined and recombined to produce an open-ended number of complete thoughts with systematically related meanings (akin to sentences in a language). Thinkers manipulate these language-like conceptual representations when performing logical reasoning tasks of the kind described above.

Crucially, though, the very same conceptual representations function as ‘plug and play devices’ (p.15, pp.120-122)—they can be ‘plugged into’ our sensory, motor, affective, and other special-purpose cognitive systems, to ‘play’ rich situational simulations. For example, in deliberating over whether to plant lavender by my front door, I might envisage how it would look in that location, imagine the pleasant sound of bees humming as I leave the house each morning, recall that lavender grows well in south-facing locations, and so on. The information flows directly through the conceptual interfaces to deliver a conclusion (I will plant the lavender there)—I need not first compress it into conceptually represented premises, over which to reason in more general, abstract ways.

For concepts to perform the proposed interfacing function, they must have ‘plug-and-playability’ alongside syntax and semantics. They must organise and orchestrate disparate information from distinct psychological systems, to support imaginative deliberation. As Shea sums it up on p.147:

[Concepts] serve to connect together, combine and rearrange information from special-purpose informational models into a coherent suppositional scenario—an interconnected representation of a situation. A concept is a working memory label that can be combined and manipulated independently of the body of information to which it is connected. But a concept also acts as an interface to that information.

In arriving at his analysis, Shea weaves together theoretical and empirical insights from a range of disciplines. In this way, the book itself sits at the interfaces of cognitive science, conducting and coordinating distinct bodies of knowledge. It not only promises to illuminate human cognition but enables us to think anew about the capabilities of non-human entities.

Of particular current interest is the question of whether large language models (LLMs) possess concepts. These enormously impressive systems—like GPT-4, Claude, Gemini, and DeepSeek—produce apparently meaningful and relevant outputs in response to ordinary conversational prompts. For example, asking ChatGPT about planting lavender yields the following:

Planting lavender by your front door can be a great idea—depending on your goals and conditions. Here are some things to consider…

There ensues a helpful list of pros and cons, along with an offer to recommend specific varieties or companion plants.

In light of such performance, it is often hard to avoid characterising LLMs as understanding the meaning of words like “lavender”; as knowing and reasoning about things like lavender, and so on. But are such characterisations literal attributions or merely loose or figurative talk? The answer will surely depend on the systems’ underlying capacities, i.e. what representations they are processing and how. Yet the internal workings of LLMs remain for now a black box, opaque even to those developing them.

It is here that Shea’s framework can be helpfully deployed—after all, we have some idea of what can’t be going on inside the black box. The main reason to think that LLMs do not possess concepts is that they lack any kind of sensorimotor apparatus; they can’t, for instance, see, smell, or pick lavender. That means they can’t simulate doing so either.

Nor do LLMs seem to have anything equivalent to long-term memory for storing knowledge about lavender. Rather, they respond to each new prompt by generating afresh a probable continuation of linguistic tokens (that continuation being a complex function of the tokens’ statistical properties, as extracted from vast corpora of text during the models’ training period). So, LLMs seem not to be recalling the facts expressed by their outputs but outputting sequences that simply happen to express such facts (presumably due to how human knowledge and experience is reflected in our speech patterns).
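The idea of generating a probable continuation token by token can be made vivid with a deliberately tiny sketch. A real LLM learns a vastly more complex function over token statistics than the bigram counts used here, and the toy corpus below is invented purely for illustration; but the basic loop—predict a likely next token given what came before, append it, repeat—is the same in spirit.

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for the vast text corpora used in real training.
corpus = ("lavender is a purple plant . lavender smells sweet . "
          "lavender grows well in sunny spots .").split()

# Count how often each token follows each preceding token (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt_token, length=5, seed=0):
    """Generate a probable continuation, one token at a time."""
    rng = random.Random(seed)
    out = [prompt_token]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no observed continuation: stop
            break
        tokens, counts = zip(*options.items())
        out.append(rng.choices(tokens, weights=counts)[0])
    return " ".join(out)

print(continue_text("lavender"))
```

Nothing in this little generator stores or recalls facts about lavender; it only reproduces statistical regularities of the text it was exposed to—which is precisely the worry about treating an LLM’s fluent outputs as expressions of knowledge.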

In a nutshell, LLMs remain representationally impoverished (at least for now). They lack the experiential systems with which human concepts essentially interface, only processing linguistic form, not our rich multitudes of information types. As such, they fail the ‘plug-and-playability’ test for genuine concept possession. Accordingly, Shea notes on p.224 that “[c]onceptual thinking is the engine of distinctively human cognition.”

In closing, we might hypothesise that LLMs do at least share our ability to reason abstractly. However, it is not yet clear whether LLMs will be able to pass the requisite syntactic and semantic tests—whether they are processing linguistic tokens in the right kinds of ways. It remains an open question, then, whether we should treat them as a new breed of general reasoners, or rather as special-purpose systems that further support our own deliberation. With that question moving rapidly up the agenda, Shea’s analysis will continue to be an important touchstone.

One comment

  1. Navneet Chopra

    Interesting post! And I agree with most of this content. Further, following the thesis of embodiment, I would like to add that concepts are not the same for everyone: the same concept can be “thin” for one person and “thick” for another. Take the concept of SEA. Someone raised in the plains, far away from the coastline, who has never been exposed to the real sea, will have only a ‘thin’ concept of SEA, while someone living near the sea and actively engaged with it, say as a fisherman, has a much ‘thicker’ concept of SEA: the way he sees it, hears it, and interacts with it in his daily life by boating, surfing, fishing, etc. makes his concept of SEA much richer than the former’s. And the former feels the vast difference experientially on actually facing the sea for the first time in his life: the vastness, depth, ferocity, etc. of the real sea overwhelm such a person on encountering it in reality!

    The distinction is important, since many kinds of thoughts will not be possible for someone with only a ‘thin’ notion of SEA.

    LLMs work on statistical patterns of entities occurring together in human experience and linguistic performance. There is no experiencer, nor any of the sensory, motor, and affective experiences, direct or indirect, that a human subject goes through while dealing with the relevant concepts. Associating tokens and presenting them as sentences, as “thoughts”, is just a trick, like the Clever Hans phenomenon. There is no one there to understand, nor is any understanding happening to anyone. But it mimics so well that the photocopy looks better than the original!

