One theme of this week’s posts has been the claim that dynamic entities are among the most metaphysically basic of the things in the mental domain. I’ve made only the vaguest gestures towards saying what I mean by this (in response to Gualtiero’s earlier comment).

By *dynamic* entities, I mean those that essentially involve some particular relationship among their temporal parts. Deciding is, in this sense, dynamic, whereas believing is static: To have been believing something throughout an interval, one does not need to first have been doing one thing, and then later doing something else. To have been *deciding* during that interval one does. Ryle found such a distinction in the work of Aristotle:

We can say that Socrates knew, believed or detested something from, say, his twentieth birthday to the end of his days; but we could not say that at any particular moment he was occupied in knowing, believing or detesting. As Aristotle realized, *knowing*, *believing* and *detesting* have to be listed not as acts or processes but as ‘hexeis’. […] *Knowing* and *believing* are not incidents in a person’s mental life, though they make an important difference, of quite another sort, to his mental life. […] I *find out* or *become convinced* of something at a particular moment; but being in possession of something is remaining and not attaining; having and not getting.

(emphasis Ryle’s)

If we thought that we knew how to account for mental states, we might then imagine that all of the facts about decisions or inferences could be accounted for by starting out with our theory of these states, and by supplementing it with a theory of the temporal and causal relations among them: Our explanatory tactic would be to *first* explain what beliefs are, and then later to explain what inferences and decisions are, by describing the way in which beliefs can be arranged into some particular temporal and causal configuration, so that inferences or decisions are constituted by them.

To say that dynamic entities are metaphysically basic is to imply that this explanatory approach would be a mistake, of much the same sort as would be made if we attempted to first explain word meanings, before later explaining the meanings of whole speech acts. The mistake, in both cases, would be that of putting a cart before a horse. Word meanings must be understood as contributions to the meanings of whole speech acts. Mental states must be understood as contributors to the dynamic entities in which they participate. It is these dynamic entities that an account of the mind must first explain. Inferences are not reducible to temporally organized sets of belief states, nor even to sets of belief states with causal relations among them (nor are melody-perceptions reducible to temporally or causally organized sets of note-perceptions).

In my response to Gualtiero, I used a less philosophical analogy to make the same point: An attempt to explain how there can be mental states without first explaining how there can be mental events would be as mistaken as an attempt to explain how there can be husbands without first explaining how there can be relations of marriage. Among the facts about marriage, facts about that *relation* are most basic. Among the facts about minds, it is the facts about the *events* of epistemic encountering that are most basic (although these may not be the only facts to have such a status).

My central argument for this claim is given by the considerations of computational complexity that I outlined in previous posts. I take those considerations to show that any explanatory approach which attempted to build its account of the mental domain on purely static foundations would make our capacity for intelligence puzzling. I also think that it would fail to account for the psychological significance of time (and it is on this point that the second part of *The Unexplained Intellect* focuses). That failure might take several forms, one of which arises in connection with the temporal orientation that comes from memory. Since memory has been a recent topic of discussion on this blog, I’ll conclude by indicating some of the things that the book has to say about it.

Memory plays some role in enabling us to orient ourselves relative to the passage of time. This cannot be because the propositional attitude of memory situates its subjects as being temporally downstream from its objects. States of propositional memory have no such temporal orientation. If we consider memory as a relation to *propositions*, we find that it can be directed at the future, just as it can be directed at the past: One may remember *that one leaves tomorrow*, just as one may remember *that one arrived yesterday*. Temporal asymmetry appears only if we consider memory as a relation to *events*: I can remember the *event* of yesterday’s arrival, but cannot yet remember leaving tomorrow. (Something similar is true about the forward-facing orientation of expectation. If P is a proposition, and ε an event, then we can *expect that P* for future, past and present tenses of P, but can *expect ε* only if we take ε to lie in the future.)

The explanation for this (which the second part of *The Unexplained Intellect* relates to the psychology of episodic and semantic memory) is that memory is essentially a form of epistemic retentiveness: One’s present knowledge counts as an instance of memory when and only when it was attained on the basis of an epistemic encounter that lies in one’s past. One can epistemically encounter a *proposition* as the conclusion of an argument, and so can encounter it before the occurrence of any event to which it pertains, but one cannot encounter an *event* in that way. In the resulting explanation of memory’s temporal asymmetry, it is the dynamic events of epistemic encountering to which we must make reference. These encounters, and not the knowledge states to which they lead, do the lion’s share of the explanatory work. I conclude the book with some discussion of the different forms that such epistemic encounters can take, and of the role that our interactions with the physical and social environment can play in them.

[Acknowledgment: The medieval owls that have illustrated this week’s posts were taken from the excellent collection at Discarding Images. Thanks also to the editor of this blog, for hosting this discussion.]

Dear Christopher,

Thanks for a stimulating series of invited posts.

A. Simplifying your perspective

For the moment, let me simplify your thought-provoking perspective by making an arbitrary distinction between:

(i) The mind of an applied scientist, whose primary concern is our sensory observations of a ‘common’ external world;

(ii) The mind of a philosopher, whose primary concern is abstracting a coherent perspective of the external world from our sensory observations; and

(iii) The mind of a mathematician, whose primary concern is adequately expressing such abstractions in a formal language of unambiguous communication.

My understanding of your thesis, then, is that:

(a) although a mathematician’s mind may be capable of defining the ‘truth’ value of some logical and mathematical propositions without reference to the external world,

(b) the ‘truth’ value of any logical or mathematical proposition that purports to represent any aspect of the real world must be capable of being evidenced objectively to the mind of an applied scientist; and that,

(c) of the latter ‘truths’, what should interest the mind of a philosopher is whether there are some that are ‘knowable’ completely independently of the passage of time, and some that are ‘knowable’ only partially, or incrementally, with the passage of time.

B. Support for your thesis

It also seems to me that your thesis implicitly subsumes, or at the very least echoes, the belief expressed by Chetan R. Murthy (‘An Evaluation Semantics for Classical Proofs’, Proceedings of Sixth IEEE Symposium on Logic in Computer Science, pp. 96-109, 1991; also Cornell TR 91-1213):

“It is by now folklore … that one can view the values of a simple functional language as specifying evidence for propositions in a constructive logic …”

If so, the thesis seems significantly supported by the following paper that is due to appear in the December 2016 issue of ‘Cognitive Systems Research’:

‘The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The Evidence-Based Argument for Lucas’ Goedelian Thesis’

The CSR paper implicitly suggests that there are, indeed, (only?) two ways of assigning ‘true’ or ‘false’ values to any mathematical description of real-world events.

C. Algorithmic computability

A number-theoretical relation F(x) is algorithmically computable if, and only if, there is an algorithm AL(F) that can provide objective evidence (cf. Murthy 1991, cited above) for deciding the truth/falsity of each proposition in the denumerable sequence {F(1), F(2), …}.

(We note that the concept of ‘algorithmic computability’ is essentially an expression of the more rigorously defined concept of ‘realizability’ on p.503 of Stephen Cole Kleene’s ‘Introduction to Metamathematics’, North Holland Publishing Company, Amsterdam, 1952.)
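To make the definition concrete, here is a minimal sketch (my own illustration, not an example from the CSR paper): the relation F(x) := ‘x is even’ is algorithmically computable, because a single, uniform algorithm AL(F) settles the truth of every proposition in the denumerable sequence {F(1), F(2), …}.

```python
# A single, uniform decider AL(F) for the (hypothetical, illustrative)
# relation F(x) := "x is even". One algorithm settles every F(n).

def AL_F(n: int) -> bool:
    """Return the truth value of F(n), for any natural number n."""
    return n % 2 == 0

# The same algorithm provides evidence for the whole sequence F(1), F(2), ...
evidence = [AL_F(n) for n in range(1, 7)]
print(evidence)  # [False, True, False, True, False, True]
```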

D. Algorithmic verifiability

A number-theoretical relation F(x) is algorithmically verifiable if, and only if, for any given natural number n, there is an algorithm AL(F, n) which can provide objective evidence for deciding the truth/falsity of each proposition in the finite sequence {F(1), F(2), …, F(n)}.
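The contrast with the previous definition can be sketched as follows (again my own illustration, not the CSR paper’s): verifiability demands only that, for each fixed n, *some* algorithm AL(F, n) decides the finite sequence F(1), …, F(n), and a hard-coded finite lookup table always qualifies, even when no uniform decider is known.

```python
# Build AL(F, n) as a finite lookup table: any finite list of truth values
# can be hard-coded, so each initial segment F(1), ..., F(n) is decidable
# by SOME algorithm even if no single algorithm works for every n.

def make_AL_F_n(truth_values):
    """Construct the algorithm AL(F, n) from the n known truth values."""
    table = {k: v for k, v in enumerate(truth_values, start=1)}

    def AL_F_n(k: int) -> bool:
        return table[k]  # decided by finite lookup, not by a uniform rule

    return AL_F_n

# Suppose the first four truth values of some relation F happen to be:
AL_F_4 = make_AL_F_n([True, False, False, True])
print([AL_F_4(k) for k in range(1, 5)])  # [True, False, False, True]
```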

We note that algorithmic computability implies the existence of a single algorithm that can finitarily decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions, whereas algorithmic verifiability does not imply the existence of any such algorithm.

The following theorem (Theorem 2.1, p.37 of the CSR paper) shows that although every algorithmically computable relation is algorithmically verifiable, the converse is not true:

Theorem: There are number theoretic functions that are algorithmically verifiable but not algorithmically computable.
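A standard witness in this style (my gloss, sketched for illustration; the CSR paper’s own construction differs) is the halting function H, where H(i) is true just in case program i halts: every finite prefix H(1), …, H(n) is a fixed list of truth values, so some lookup table decides it, making H algorithmically verifiable; yet Turing’s diagonal argument rules out any single algorithm deciding H everywhere, so H is not algorithmically computable. The diagonal step can itself be sketched:

```python
# Turing's diagonal argument, sketched: any purported uniform decider
# halts(f) -> bool for "does calling f() terminate?" is defeated by a
# program built to do the opposite of whatever the decider predicts.

def diagonalise(halts):
    """Return a program g on which the given decider must be wrong."""
    def g():
        if halts(g):
            while True:   # loop forever exactly when `halts` predicts halting
                pass
        return None       # halt exactly when `halts` predicts looping

    return g

# A decider that claims nothing halts is refuted constructively:
h = diagonalise(lambda f: False)
result = h()  # h() returns at once, so the verdict "False" was wrong
print("the decider was wrong about its own diagonal program:", result is None)
```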

E. The significance of algorithmic ‘truth’ assignments for your theses

The significance of such algorithmic ‘truth’ assignments for your theses is that:

Algorithmic computability—reflecting the ambit of classical Newtonian mechanics—characterises natural phenomena that are determinate and predictable.

Such phenomena are describable by mathematical propositions that can be termed as ‘knowable completely’, since at any point of time they are algorithmically computable as ‘true’ or ‘false’.

Hence both their past and future behaviour is completely computable, and their ‘truth’ values are therefore ‘knowable’ independent of the passage of time.

Algorithmic verifiability—reflecting the ambit of quantum mechanics—characterises natural phenomena that are determinate but unpredictable.

Such phenomena are describable by mathematical propositions that can only be termed as ‘knowable incompletely’, since at any point of time they are only algorithmically verifiable, but not algorithmically computable, as ‘true’ or ‘false’.

Hence, although their past behaviour is completely computable, their future behaviour is not completely predictable, and their ‘truth’ values are not independent of the passage of time.

F. Where your implicit faith in the adequacy of set theoretical representations of natural phenomena may be misplaced

It also seems to me that, although your analysis justifiably holds that the:

“… importance of the rapport between an organism and its environment”

has been underacknowledged, or even overlooked, by existing theories of the mind and intelligence, it does not seem to mistrust the mathematical and epistemic foundations of the formal language in which almost all descriptions of real-world events are currently expressed (the language of the set theory ZF), and so does not ascribe such underacknowledgement to any lacuna in those foundations.

G. Any claim to a physically manifestable ‘truth’ must be objectively accountable

Now, so far as applied science is concerned, history teaches us that the ‘truth’ of any mathematical proposition that purports to represent any aspect of the external world must be capable of being evidenced objectively; and that such ‘truths’ must not be merely of a subjective and/or revelatory nature, which may require truth-certification by evolutionarily selected prophets.

(Not necessarily religious—see, for instance, Melvyn B. Nathanson’s remarks, “Desperately Seeking Mathematical Truth”, in the Opinion piece in the August 2008 Notices of the American Mathematical Society, Vol. 55, Issue 7.)

The broader significance of seeking objective accountability is that it admits the following (admittedly iconoclastic) distinction between the two fundamental mathematical languages:

1. The first-order Peano Arithmetic PA as the language of science; and

2. The first-order Set Theory ZF as the language of science fiction.

It is a distinction that is faintly reflected in Stephen G. Simpson’s more conservative perspective in his paper ‘Partial Realizations of Hilbert’s Program’ (#6.4, p.15):

“Finitistic reasoning (my read: ‘First-order Peano Arithmetic PA’) is unique because of its clear real-world meaning and its indispensability for all scientific thought. Nonfinitistic reasoning (my read: ‘First-order Set Theory ZF’) can be accused of referring not to anything in reality but only to arbitrary mental constructions. Hence nonfinitistic mathematics can be accused of being not science but merely a mental game played for the amusement of mathematicians.”

The distinction is supported by the formal argument (detailed in the above-cited CSR paper) that:

(i) PA has two, hitherto unsuspected, evidence-based interpretations, the first of which can be treated as circumscribing the ambit of human reasoning about ‘true’ arithmetical propositions, while the second can be treated as circumscribing the ambit of mechanistic reasoning about such propositions.

What this means is that the language of arithmetic—formally expressed as PA—can provide all the foundational needs for all practical applications of mathematics in the physical sciences. This was the point that I sought to make—in a limited way, with respect to quantum phenomena—in the following paper presented at UNILOG 2015, Istanbul, last year:

‘Algorithmically Verifiable Logic vis-à-vis Algorithmically Computable Logic: Could resolving EPR need two complementary Logics?’

(Presented on 26th June at the workshop on ‘Emergent Computational Logics’ at UNILOG’2015, 5th World Congress and School on Universal Logic, 20th June 2015 – 30th June 2015, Istanbul, Turkey.)

(ii) Since ZF axiomatically postulates the existence of an infinite set that cannot be evidenced (and which cannot be introduced as a constant into PA, or as an element into the domain of any interpretation of PA, without inviting inconsistency—see Theorem 1 in §4 of the link below), it can have no evidence-based interpretation that could be treated as circumscribing the ambit of either human reasoning about ‘true’ set-theoretical propositions, or that of mechanistic reasoning about ‘true’ set-theoretical propositions.

The language of set theory—formally expressed as ZF—thus provides the foundation for abstract structures that—although of possible interest to philosophers of science—are only mentally conceivable by mathematicians subjectively, and have no verifiable physical counterparts, or immediately practical applications, that can materially impact the study of physical phenomena.

The significance of this distinction can be expressed more vividly in Russell’s phraseology as:

(iii) In the first-order Peano Arithmetic PA we always know what we are talking about, even though we may not always know whether it is true or not;

(iv) In the first-order Set Theory ZF we never know what we are talking about, so the question of whether or not it is true is only of fictional interest.

H. The importance of your ‘rapport’

Accordingly, I see it as axiomatic that an evidence-based mathematical language must stand in what you term ‘rapport’ with the physical phenomena that it purports to describe, if we view mathematics as a set of linguistic tools that have evolved:

(a) to adequately abstract and precisely express through human reasoning our observations of physical phenomena in the world in which we live and work; and

(b) to unambiguously communicate such abstractions and their expression to others through objectively evidenced reasoning, in order to function to the maximum of our co-operative potential in achieving a better understanding of physical phenomena.

This is the perspective that I sought to present in the following paper at Epsilon 2015, Montpellier, last June, where I argue against the introduction of ‘unspecifiable’ elements (such as completed infinities) into either a formal language or any of its evidence-based interpretations (in support of the argument that since a completed infinity cannot be evidence-based, it must therefore be dispensable in any purported description of reality):

‘Why Hilbert’s and Brouwer’s interpretations of quantification are complementary and not contradictory.’

(Presented on 10th June at the Epsilon 2015 workshop on ‘Hilbert’s Epsilon and Tau in Logic, Informatics and Linguistics’, 10th June 2015 – 12th June 2015, University of Montpellier, France.)

I. Why mathematical reasoning must reflect an ‘agnostic’ perspective

Moreover, from a non-mathematician’s perspective, a Propertarian like Curt Doolittle would seem justified in his critique (comment of June 2, 2016, on the May 24, 2016, Quanta review ‘Bridging the Finite-Infinite Divide?’) of the seemingly ‘mystical’ and ‘irrelevant’ direction in which conventional interpretations of Hilbert’s ‘theistic’ and Brouwer’s ‘atheistic’ reasoning appear to have pointed mainstream mathematics. For, as I argue informally (link below), the ‘truths’ of any mathematical reasoning must reflect an ‘agnostic’ perspective.

Kind regards,

Bhup