Brains Blog Symposium on Concepts at the Interface

Author’s Reply to Commentaries

Nicholas Shea
Institute of Philosophy, School of Advanced Study, University of London
nicholas.shea@sas.ac.uk

Many thanks to Eric, Johan and Gualtiero, and Sarah for their thoughtful commentaries. Their kind words about the book are also much appreciated. It’s great that they have selected different topics to focus on: language, predication and structural representation, and AI. I will say something about each in turn.

Reply to Eric Margolis

Eric argues that a fuller account of concept-driven thinking would need to be more specific about the role of natural language. I couldn’t agree more. There are actually two issues here, one internal and one external. Eric points to the potential role of the language faculty in thought processes. In addition, there is the role of language as a cultural phenomenon. Language-based cultural transmission of information is surely a major force for shaping our conceptual thinking (whether or not a strong hypothesis about cultural selection is true).

To keep the project manageable the book doesn’t get into the substantial literature on either of these topics. On the internal issue, I tried to formulate a view that would be neutral between the various positions Eric lays out. Which one is correct is an empirical issue, which I don’t take to be settled, but I completely agree that surveying the state of the art and integrating it into the plug-and-play account would be a valuable project. (And, in the way of these things, doing so would doubtless force some changes to the account.)

His proposal 1 is that it is the cognitive mechanism for processing natural language that is responsible for the general-purpose way that concepts can be combined in thought. That is a key issue. There is a host of relevant evidence. It will be important here to look, not just at combinatorial capacities, but at the sort of computations that are supported. I argue that conceptual representations in working memory play a special role in supporting content-general inference. Language may be involved in these broadly logical computational processes. However, work like the geometrical and sequential sound tasks reviewed in Dehaene et al. (2022, TICS) may show evidence of relatively content-general computations that do not draw on natural language processing. One task involved remembering sequences of spatial locations presented around an octagon; another involved detecting deviant sounds in a sequence of sounds. These experiments also arguably involve a capacity for general-purpose combination of representational constituents. Eric’s proposals 2 and 3 look plausible to me, and proposal 4 seems like a nice hypothesis about what it is to think ‘in’ language.

Reply to Johan Heemskerk and Gualtiero Piccinini

Johan and Gualtiero ask about ‘how concepts mediate between predication and simulation’. My idea is that concepts enter into predication relations (a relation on representations) and the resulting complex representations in working memory serve to activate and organise information from memory (into a ‘suppositional scenario’). Not all special-purpose resources use structural representations. The cognitive map of space is a structural representation, but representations in a high-dimensional state space (e.g. colours, faces) need not be. Of course, in almost every case where there is a system of representations there will be a relation-preserving mapping between the representations and the entities represented. However, in some cases something stronger is true: a relation on two (or more) representations itself carries representational content, representing a relation on the entities represented by those two representations. It is because this stronger case has its own psychological limitations and affordances that it is worth reserving a special term for it (my ‘structural representation’).

Johan and Gualtiero argue that predication is a form of structural representation. I agree with them, of course, about the existence of a mapping between representations and things represented. I think their first proposal is equivalent to my position. We can hold that predicative concatenation has the effect that the representation is true iff the particular represented has the property represented (iff the orange is round) without incurring any commitments about the metaphysics of instantiation. But I disagree when they go on to say, ‘Provided there is a relation between the target of a subject and a target of its predicate, and regardless of the metaphysics of that relation …’. Being neutral about the metaphysics is precisely to allow that there may be no relation between ‘the target of a subject and a target of its predicate’.

Their second proposal is interesting. I very much agree with Coelho Mollo and Vernazzani’s point about computational constraints (echoing my remark in response to Eric above). Lossy compression of information is surely also important. Eliasmith’s semantic pointer architecture is one leading model of how that can work. When Johan and Gualtiero apply this idea to structural representations, it is important to distinguish between compression in two places: in the relata and in the represented relation. My point about limitations is that once computations with a complex representation are so configured that the device of concatenation carries a specific representational content, that in itself strongly constrains what can be represented by that system, irrespective of constraints on the relata. A cognitive map that uses co-activation to represent spatial proximity does not have a device of concatenation that would allow it to represent other relations on the represented relata, whether those relata are locations, as in the actual case, or not. This constraint is not a matter of the representations of the relata being compressed.

However, once we turn to compression with respect to the relation, an interesting possibility surfaces. Sensorimotor processes might track a bunch of quite specific relations that are detectable visually: bumping, hitting, throwing, pushing, and so on. A lossy compression system might generalise over these to represent a slightly more abstract category of physical interactions. As theorists we might call it cause-by-visible-contact. A further compression might represent a fully general relation, cause. A structural representation of x causes y could allow for some reasonably general computations, for example those involved in drawing causal inferences from making or observing interventions.

Now it seems that it would be possible, in principle, to go on, further loosening the restriction on what is represented by the device of concatenation in play in this representing system. Liz Camp suggested to me that the rather general-purpose concatenation of predication could have developed by progressively loosening the restrictions on a mode of combination that evolved to represent cause (pers. comm., see fn. 10 on p. 50 of the book). And of course predication itself is not a fully general-purpose mode of combining representational constituents. For that we need something more akin to Merge (more precisely, something akin to a representational device whose semantic significance is given by Merge).
