The Unexplained Intellect: Consequences of Imperfection

The previous post argued that Theoretical Computer Science can show things to be naturalistically inexplicable (where this is much stronger than showing them to be inexplicable with a Classically Computational Theory) by showing those things to require more time than the universe allows.  I’ve not yet said anything about which things might be inexplicable for this reason, nor about why they might be relevant to our understanding of intelligence.  On these points it is hard to come by mathematical guarantees: Despite a great deal of work (and despite the Clay Mathematics Institute’s offer of a million dollars), we do not yet know how to prove, of anything in particular, that it requires more time than the universe allows, in the sense of requiring an amount of time exponentially larger than the time required to check whether that thing had been done successfully.  We nonetheless have some reason to believe that certain things must have this status, since we can prove that if they don’t then nothing does: the tasks in question are NP-complete, and so at least as hard as any task whose successful completion can be quickly checked.
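To make the asymmetry between checking and doing concrete, here is a minimal sketch in Python, using the satisfiability task that comes up below.  The clause encoding and the function names are inventions of mine, chosen only for brevity; the sketch is an illustration under those assumptions, not anybody’s canonical implementation.

```python
from itertools import product

# Toy encoding (invented for this sketch): a formula is a list of
# clauses; a clause is a list of literals; a literal is a pair
# (variable_index, polarity), so (0, True) means x0 and (1, False)
# means NOT x1.

def check(formula, assignment):
    """Checking is fast: time roughly linear in the size of the formula."""
    return all(
        any(assignment[var] == polarity for var, polarity in clause)
        for clause in formula
    )

def solve(formula, num_vars):
    """Doing is slow: brute force tries up to 2**num_vars assignments,
    and no known general method avoids exponential time in the worst case."""
    for bits in product([False, True], repeat=num_vars):
        if check(formula, list(bits)):
            return list(bits)
    return None  # no satisfying assignment exists

# (x0 OR x1) AND (NOT x0 OR x1) AND (NOT x0 OR NOT x1)
formula = [[(0, True), (1, True)], [(0, False), (1, True)], [(0, False), (1, False)]]
print(solve(formula, 2))  # [False, True]
```

If finding a solution could always be done about as quickly as checking one, that would amount to P = NP, and the million-dollar question just mentioned would be settled.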

The things in question form a heterogeneous bunch.  They include playing invariably-better-than-chance Tetris; picking out the smallest number of people from a group in such a way that everyone in the total group would be guaranteed to have at least one friend amongst those picked; and parsing the sentences generated by certain artificial grammars.  These are not things that intelligent humans reliably do.  But at least two of the things that appear to be inexplicable do seem somewhat related to the accomplishments of normal human rationality: these are the task of working out whether a set of sentences is satisfiable, and the task of finding a model that will satisfy as many sentences in a set as possible.  Of the several accomplishments that we believe would be inexplicable, these two are amongst the most studied.
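A brute-force sketch of the friend-picking task may help to fix ideas.  Read literally, the task is what graph theorists call total domination, a problem known to be NP-hard; the function name and the five-person friendship network below are invented for illustration.

```python
from itertools import combinations

def smallest_friend_set(people, friends):
    """Brute force for the friend-picking task: find a smallest set of
    people such that everyone in the group has at least one friend
    among those picked.  For a group of n people this examines up to
    2**n candidate sets, and no known general method avoids that
    exponential worst case."""
    for size in range(1, len(people) + 1):
        for picked in combinations(people, size):
            chosen = set(picked)
            if all(friends[person] & chosen for person in people):
                return chosen
    return None  # impossible, e.g. if someone has no friends at all

# An invented five-person friendship network.
friends = {
    "Ann": {"Bob", "Cat"}, "Bob": {"Ann"}, "Cat": {"Ann", "Dan"},
    "Dan": {"Cat", "Eve"}, "Eve": {"Dan"},
}
print(smallest_friend_set(list(friends), friends))
# e.g. {'Ann', 'Cat', 'Dan'} (set printing order may vary)
```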

To say that these accomplishments would be inexplicable is not to say that they can never be approached.  It means only that, were we to find that the tasks in question had been reliably accomplished, we would know that they must have been accomplished imperfectly, or else that the trickiest cases had been avoided.  That being so, one might think that the inexplicabilities discovered by Theoretical Computer Science can reveal, at most, that human intelligence must be imperfect, and must be restricted in its applicability.  That much we already knew: The intelligent accomplishments of real humans are, of course, imperfect, and we avoid a swathe of hard cases merely because these never come up in our environments.

It would once have been reasonable to stop with that last thought, concluding that considerations of computational complexity show only that any naturalistically explicable intelligence must be ‘bounded’.  Some remarks of Herbert Simon suggest such a view, especially when he writes that it is ‘one of the most important laws of qualitative structure’ that:

Because of the limits on their computing speeds and power, intelligent systems must use approximate methods to handle most tasks. Their rationality is bounded.
(emphasis Simon’s)

This is no longer a satisfactory stopping point: In the light of results that have been proven in the last twenty years or so, our capacity for dealing intelligently with unfamiliar predicaments raises a puzzle, even once the imperfection of that capacity has been taken into account.  I do not claim that this explanatory puzzle is a Hard Problem.  It goes away if our intelligence is viewed from the correct perspective, but it takes some metaphysical work to reach a vantage point from which that perspective is available.

In 1978 Simon remarked:

One interesting and important direction of research in computational complexity lies in showing how the complexity of problems might be decreased by weakening the requirements for solution – by requiring solutions only to approximate the optimum, or by replacing an optimality criterion with a satisficing criterion.  Results are still fragmentary, but it is already known that there are some cases where such modifications reduce exponential or NP-complete problem classes to polynomial-classes. (Simon, 1978, p. 12)

When he made this remark it was perfectly accurate.  But the subsequent decades have seen theorems proven that show these ‘fragmentary results’ not to generalize: Some problems remain too complex to solve in a feasible amount of time, even if we are willing to tolerate sub-optimal solutions (this is the upshot of the PCP theorem and of the hardness-of-approximation results built upon it).  For some, there is no better-than-chance method for avoiding the hard cases, and we do not know how much luck it would take to avoid those cases by luck alone.  None of these problems gets any easier if we allow ourselves to make use of randomness.
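To illustrate with the second of the two tasks mentioned above: here is a sketch of finding a model that satisfies as many sentences as possible (MAX-SAT), in the same toy encoding as the earlier sketch, with an invented function name.  The force of the recent theorems is that the exhaustive search below cannot feasibly be replaced even by a method that merely guarantees coming close: for clauses of three literals, guaranteeing more than 7/8 of the optimal number of satisfied clauses is already NP-hard.

```python
from itertools import product

# Same toy encoding as before: a clause is a list of (variable, polarity)
# literals, and a formula is a list of clauses.

def max_sat(formula, num_vars):
    """Find an assignment satisfying as many clauses as possible,
    by exhaustive (exponential-time) search over all 2**num_vars
    assignments."""
    def score(assignment):
        return sum(
            any(assignment[var] == polarity for var, polarity in clause)
            for clause in formula
        )
    best = max(product([False, True], repeat=num_vars), key=score)
    return list(best), score(best)

# An unsatisfiable set: x0, NOT x0, and (x0 OR x1).  The best any
# assignment can do is satisfy two of the three clauses.
formula = [[(0, True)], [(0, False)], [(0, True), (1, True)]]
print(max_sat(formula, 2))  # ([False, True], 2)
```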

The lesson to be drawn is that, if we think of intelligence as involving the maintenance of satisfiable beliefs, and if we think of our beliefs as corresponding to a set of representational states, then our intelligence would have to depend on a run of good luck whose chances are unknown.

My suggestion is that we can reach a more explanatorily satisfactory conception of intelligence if we adopt a dynamic picture of the mind’s metaphysical foundations.  To avoid going on at too much length, I’ll use a new post, later today, to say something about why that should help here.
