Inference in (neuro)cognitive systems

By Urte Laukaityte and Matteo Colombo

Psychologists speak of perceiving as inferring. Neuroscientists maintain that the brain solves inference problems. Biologists say that individual cells infer the structure of their environment. Computer scientists suggest artificial systems can at times draw better inferences than humans.

For many philosophers, these ways of talking are nonsensical because only human reasoners can make inferences. In their view, an inference consists in the reflective activity of drawing a conclusion from other statements, which the reasoner takes to be premises supporting said conclusion. Whenever the premises warrant the conclusion via something like a formal rule of logic, the inference is a good one—otherwise, it is bad. Under this conception, then, inference relies on complex capacities, such as language and consciousness.

In our chapter, we take issue with this traditional picture. We argue that existing philosophical treatments of inference are out of touch with successful scientific practice. Inspired by Susanna Siegel’s work on the rationality of perception, we put forward a less restrictive account of inference in (neuro)cognitive systems, which can capture and do justice to many of the ways psychologists, neuroscientists, biologists, computer scientists, and other researchers use ‘inference’ in their domains.

Our proposal is to construe inference as a rationally evaluable transition from input (and current) representations to some output representation in the context of organism-relevant information. In this, we adopt a biogenic approach towards both rationality (following ecological rationality) and representation (in line with the basal cognition paradigm). According to our account, human reasoners can perform inferences, but so can sub-personal systems, non-human organisms, and potentially certain artificial systems. This is because the kind of transition involved in inference does not require conscious accessibility, linguistic abilities, a language-like neural code, or a formal system of rules in a mental logic, as some philosophers stipulate.

One objection to our view is that it strays too far from the original (at least post-1500s) meaning of inference as a rule-governed conscious transition in thought, which has been foundational in logic and epistemology. Hermann von Helmholtz himself acknowledged this as the original sense when he extended the notion of inference to unconscious perceptual processes in the 1800s. In particular, he wrote: ‘The psychic activities that lead us to infer that there in front of us at a certain place there is a certain object of a certain character, are generally not conscious activities, but unconscious ones’ and so we may ‘speak of the psychic acts of ordinary perception as unconscious conclusions, thereby making a distinction of some sort between them and the common so-called conscious conclusions’. While we do not suggest that logicians employ a different concept when drawing valid inferences, the more permissive reading has been around for more than a century now. We submit there is value in philosophical accounts that can better reflect, and thus potentially feed back into, the usages underpinning productive scientific practices.

Another objection is that our definition confuses inferential and a-rational, merely causal, associative processes. The problem with this objection is that it assumes there is a sharp dichotomy between associative and inferential transitions. Instead, there is plausibly a continuum going from merely causal a-rational transitions to person-level, conscious, linguistic inferences. We can individuate a given transition as further away from or closer to one of these two poles, thereby making it possible to draw pertinent distinctions relevant in a specific context. The character of a particular transition can vary based on the kind of system involved, the type of pre-existing informational or representational states it recruits, and the tasks in which the outputs of such transitions are deployed, among other factors.

A final objection is that our notion of inference, with its reliance on ecological norms of rationality, collapses epistemic and practical inferences. But again, rather than being a bug in our proposal, this should count as a feature. After all, outside of idealized domains in formal logic, inferences are pragmatically encroached, contextual, and oriented towards some adaptive goal in the world. In line with this observation, the degree to which an inferential transition is (ecologically) rational is modulated by how it exploits the material and social regularities in the environment, given the inferrer’s motivational or energetic state, relevant goals, and informational outlook. And so, we contend that the rationality of an inference depends on various considerations, including adaptivity, frugality, speed, and its role in guiding context-sensitive organism-relevant perception-action loops.

Although our conception of inferences as rationally evaluable transitions from some inputs and current representations to some subsequent or output representation is minimalistic, it is not uninformative. In addition, it accommodates fruitful scientific practice, which we take to include promoting links between disparate fields, offering novel hypotheses, extending overlooked results, developing research paradigms, and so on. Crucially, our formulation still allows us to make empirically productive distinctions between different forms of inferential and non-inferential processes in a wide range of (neuro)cognitive systems.

We hope that philosophers will find our account interestingly wrong and cognitive scientists will deem it helpful.

One comment

  1. Paul Bello

    How does this proposal sort out the differences in character between reflective/deliberative reasoning and “inferences” drawn by subpersonal systems? The psychology certainly suggests that the former are more flexible than the latter in general and possibly deal with information that is represented more abstractly. If the respective characters of these two forms of inference are irreconcilably different, it seems to me that you still need to maintain sharp conceptual boundaries around each. Sort of related to this problem and inherent in the “thicker” definition of inference as “premise-taking” etc., there are considerations of agency which, at least for humans, seem to sort out differences between inferences that are subpersonal and generate content that “comes to mind” and conscious manipulation of premises with respect to rules, insofar as we are capable of doing that. I’m not sure how that can be accommodated on a spectrum. It seems like this is much more an issue of what kinds of capacities or powers a system has, and this would seem to be very discontinuous between species. Maybe I’m just being an inferential chauvinist ;). Very interesting post. Thanks!
