The Unexplained Intellect: The Importance of Rapport

An analogy may, at this point, be useful, even if the one that I propose — between satisfiable beliefs and colourable maps — can be grasped only by stretching.

If four different colours are available, then any map can be coloured so that no two territories sharing a border have the same colour: given a particular map, there is no need to check whether four colours will suffice to colour it. If only two colours are available, then a colouring that avoids same-coloured borders may be impossible, but any given map is easy to check, just by trying to colour it. If three colours are available, then things are tricky. It can be hard to tell, on being given a map, whether three colours will be enough.
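To make the contrast concrete, here is a minimal Python sketch of the easy case. The representation is my own choice for illustration (nothing in the argument depends on it): a map is given as an adjacency graph whose nodes are territories and whose edges are shared borders. Checking two-colourability is then just a matter of trying to colour the map, one territory at a time:

```python
from collections import deque

def two_colourable(adjacency):
    """Check two-colourability by just trying to colour the map.

    `adjacency` maps each territory to the set of territories that
    border it. Each territory's colour is forced by its neighbours',
    so a single pass over the map settles the question in time
    linear in the size of the map.
    """
    colour = {}
    for start in adjacency:
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            territory = queue.popleft()
            for neighbour in adjacency[territory]:
                if neighbour not in colour:
                    colour[neighbour] = 1 - colour[territory]
                    queue.append(neighbour)
                elif colour[neighbour] == colour[territory]:
                    return False  # two bordering territories forced to share a colour
    return True

# Three mutually bordering territories cannot be two-coloured.
print(two_colourable({"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}))  # False
```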

The task of working out whether some particular map is three-colourable is not merely tricky: it is one of the things that looks to be inexplicable in the sense that my previous posts indicated. If somebody is having success in identifying whether or not maps are three-colourable, then they must be having some good luck (we don’t really know how much) in avoiding the hard-to-identify cases.
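For contrast with the two-colour case, here is the obvious way of checking three-colourability, sketched under the same illustrative representation: try every assignment of three colours until one works. A map of n territories admits 3^n assignments, so the worst case grows exponentially with the size of the map, and since the problem is NP-complete no essentially faster method is known:

```python
from itertools import product

def three_colourable(adjacency):
    """Decide three-colourability by trying every assignment.

    With n territories there are 3**n assignments of three colours
    to try, so the worst-case running time grows exponentially with
    the size of the map; no essentially faster method is known.
    """
    territories = list(adjacency)
    for assignment in product(range(3), repeat=len(territories)):
        colour = dict(zip(territories, assignment))
        if all(colour[t] != colour[n]
               for t in adjacency for n in adjacency[t]):
            return colour  # a witness that the map is three-colourable
    return None

# Four mutually bordering territories need four colours.
k4 = {t: {u for u in "ABCD" if u != t} for t in "ABCD"}
print(three_colourable(k4))  # None
```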

Checking for three-colourability is therefore tricky, but producing it is quite straightforward. There would be nothing inexplicably lucky about someone who could be reliably depended upon to produce three-colourable maps. They could do it in the following way: take three colours; apply blobs of these colours to a page until it is covered; stipulate that the borders on this map occur in all and only the places where two different colours meet.
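That blob procedure can be made literal in the same illustrative setting: paint a grid of cells with three colours, treat each contiguous patch of same-coloured cells as one territory, and read the borders off from the places where different colours meet. The resulting map is three-colourable by construction, because the paint job itself is the witness:

```python
import random

def generate_three_colourable_map(width, height, seed=None):
    """Build a map that is three-colourable by construction.

    Paint every cell of a grid with one of three colours, merge
    contiguous same-coloured cells into territories, and stipulate
    that borders occur in all and only the places where two
    different colours meet. No checking is ever needed: the paint
    job itself guarantees three-colourability.
    """
    rng = random.Random(seed)
    grid = [[rng.randrange(3) for _ in range(width)] for _ in range(height)]

    # Union-find, used to merge adjacent same-coloured cells into blobs.
    parent = {(x, y): (x, y) for y in range(height) for x in range(width)}

    def find(cell):
        while parent[cell] != cell:
            parent[cell] = parent[parent[cell]]  # path compression
            cell = parent[cell]
        return cell

    for y in range(height):
        for x in range(width):
            for nx, ny in ((x + 1, y), (x, y + 1)):
                if nx < width and ny < height and grid[y][x] == grid[ny][nx]:
                    parent[find((x, y))] = find((nx, ny))

    # Borders appear in all and only the places where two colours meet.
    adjacency = {}
    for y in range(height):
        for x in range(width):
            for nx, ny in ((x + 1, y), (x, y + 1)):
                if nx < width and ny < height and grid[y][x] != grid[ny][nx]:
                    a, b = find((x, y)), find((nx, ny))
                    adjacency.setdefault(a, set()).add(b)
                    adjacency.setdefault(b, set()).add(a)
    return adjacency

# Number of territories that have at least one border; any map produced
# this way would pass the brute-force check sketched above.
print(len(generate_three_colourable_map(5, 5, seed=42)), "territories")
```

Producing, unlike checking, involves no search at all: the colouring comes first, and the borders are defined around it.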

Now for the analogy:

I have suggested that there would be something inexplicably lucky about the successful maintenance of satisfiable beliefs, if this were done by taking representations of our belief sets and checking to see whether they are satisfiable.  This does not entail that there would be anything inexplicably lucky about creatures who can be reliably depended upon to have satisfiable beliefs.  They could do it in the following way: Start out by encountering the world; derive beliefs from this encounter in such a way that those beliefs are guaranteed to be satisfiable because the world provides a model for them; add further beliefs to this set, ad lib, in accordance with some sound rules of logic.

The crucial point to notice is that the satisfiability of such a creature’s beliefs would be explained by the rapport between those beliefs and their model-providing world.  It would not be explained by any process internal to the creature.

I suggest that something roughly similar is true of us.  We are not guaranteed to have satisfiable beliefs, and sometimes we are rather bad at avoiding unsatisfiability, but such intelligence as we have is to be explained by reference to the rapport between our minds and the world.

Rather than starting from a set of belief states, and then supposing that there is some internal process operating on these states that enables us to update our beliefs rationally, we should start out by accounting for the dynamic processes through which the world is epistemically encountered. Much as the map-maker reliably produces three-colourable maps because it is essential to the map-making procedure that borders appear only where they will allow for three-colourability, so it is essential to what it is for a state to be a belief that beliefs will appear only if there is some rapport between the believer and the world. And this rapport, rather than any internal processing considered in isolation from it, can explain the tendency for our beliefs to respect the demands of intelligence.

I’m aware that these remarks are no better than suggestive, but hope that The Unexplained Intellect goes some way towards making good on their suggestion. In my final post, tomorrow, I’ll try to say something about the fact that there is more to life than the maintenance of satisfiable beliefs, and something about the character of ‘epistemic encountering’.

2 Comments

  1. Chris,

    Thanks for this series of posts.

    As I read your previous posts, at first I thought you were arguing that at least some human beings can solve an NP-complete problem such as Boolean satisfiability (SAT) in polynomial time. That would have been rather bold (because that conclusion is quite implausible). I’m glad to realize that you are not defending that.

    If I understand correctly, you are arguing that people’s beliefs are largely mutually consistent not because they can solve SAT efficiently but simply because people form and update their beliefs in response to states of the world and changes in such states, so the world is the model that satisfies enough of our beliefs to explain why they are mutually consistent and satisfiable. This is an insightful observation.

    For some reason you also seem to think that in order to see your point we have to change our ontology of the mind in some important way, taking mental events as fundamental and beliefs as derivative, or something like that. I’m still not sure why you think this. As I put your point in my previous paragraph, it seems to me that it can be made within any ontology of the mind that one likes, whether mental states are more fundamental, or mental events are more fundamental, or the two are ontologically on a par.

    What am I missing?

    • You’re quite right to pull me up on this, Gualtiero. I should be clearer about the fact that I am not saying that humans can solve or approximate solutions to NP-hard problems.

      The point of my argument is that we need a conception of intelligence which doesn’t require them to be doing so.

      If we adopt the events-first metaphysics that I indicate, then I think we can arrive at such a conception, and this is supposed to be a reason that speaks in favour of that metaphysical picture. But the need for a computationally tractable conception of intelligence does not entail that the events-first metaphysics must be the right one, and so this reason isn’t conclusive.

      In the book, the events-first metaphysical picture is arrived at through considering the way in which a states-first approach would fail to explain how inference differs from the merely sequential occurrence of attitudes, and through considering the closely related way in which the states-first approach would fail to explain how action differs from a merely sequential occurrence of movements and thoughts. (I also say some things, in the same vein, about the need to account for the difference between a succession of experiences and an experience of succession.) In each of these cases the problem faced by a states-first approach seems to indicate that the facts about dynamic mental entities are not reducible to facts about the temporal and causal arrangement of their static parts. It is this that suggests the move of taking dynamic entities to be at least as basic as the static ones. The treatment of intelligence that comes with such a move is one payoff from making it, and it is intended to give some corroboration of the move’s rightness. But I don’t mean to claim that this shows the move to be compulsory.

      In general, I don’t have any very good theory of how it can be proven that one sort of entity is more fundamental than another. That looks like a difficult question of philosophical methodology. I suppose that an entity’s claim to being relatively fundamental (within a circumscribed domain) depends on the strength of its explanatory credentials. Showing that the intelligence of a dynamically-founded mind is more readily explicable than the intelligence of a statically-founded one is intended as a way to show that the credentials of dynamic mental entities are good.

