This is part of an ongoing argument that logical truths (and rules of inference) can sit comfortably in a naturalistic worldview. It was inspired in part by reading the wonderful articles Is logic a theory of the obvious? and Logical consequence: an epistemic outlook by Gila Sher (UCSD). These articles push for the Quinean thesis that even those logical truths in the center of our web of belief are open to revision based on empirical and conceptual factors.
The topic doesn’t have a lot directly to do with brains, but is part of my ongoing attempt to fit logic within a naturalist framework. If we can establish some plausibility for the claim that logical truths are open to revision, then I will feel more free to explore how logic fits into a more general story of how brains (and the logics they endorse) help us get about in the world. It will also help me think more clearly about how advocates of informational semantics can handle logical truths, which is something that (to my knowledge) hasn’t been addressed by the Dretskians.
Logic involves empirically-selected inference patterns
The overriding norm in inference-rule making is truth preservation: we want rules that never carry us from true premises to a false conclusion. In a kind of natural selection of inference rules, we got rid of those that didn’t work, and have kept those that do. Over the past couple of thousand years, this has led to some pretty impressive results.
While this would be music to the naturalists’ ears, it would have some queer consequences (at least from the perspective of those who advocate the existence of abstract logical objects). Namely, logic could have turned out differently, and the universe could have been so weird that we just failed in our attempt to find any “universal” logical rules. It is something of a happy accident that we live in a world
stable enough, with cognitive systems reliable enough, to allow us to find lots of inference rules that work quite well.
Thankfully, this sentence contains the only mention of quantum mechanics in this entire story. Rather than go down that road, to give the contingency of logic some initial plausibility I will argue that it is possible that some of the more cherished logical truths, thought to be a priori knowable by anyone with a working brain, are false. I’m not saying they are false; I only need to establish that they could be false in order to raise the plausibility of my thesis (and therefore of Sher and Quine’s theses). [Note added: be sure to see the comments for some very good criticisms of my Godel argument, which I’m now pretty sure is wrong. I am less sure about the paraconsistent logic argument.]
Consider an axiomatic system S powerful enough to derive certain basic truths about arithmetic. Godel proved that if S is consistent, then there are statements in the language of S that are true but not provable within S. To add insult to injury, he also proved that S cannot prove its own consistency (that is, no proof within S can establish that S is consistent).
Now imagine a universe in which S is not consistent. Nobody can prove that it isn’t consistent, but it isn’t (suppose, say, that God hands down a commandment: “S is not consistent”). We can’t settle the question by proof either way, but in this thought-experiment world God has told us the answer, so we can rest assured that S is indeed inconsistent.
But if S is inconsistent, that means there exists some proposition P (in the language of S) such that both P and not-P can be proved (this follows from the definition of inconsistency). So, in this universe, wave bye-bye to one of the most cherished logical laws, the law of non-contradiction. Is anyone here sure we aren’t living in such a universe? Is there a good argument that we aren’t? You certainly can’t prove we aren’t.
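To make the classical stakes concrete, here is a minimal Python sketch (my own illustration, not part of the original argument) of why classical logicians dread inconsistency: in two-valued semantics, “explosion” holds, so a contradiction entails absolutely anything.

```python
from itertools import product

def implies(a, b):
    """Classical material conditional: a -> b."""
    return (not a) or b

# Check every two-valued assignment to P and Q: is
# (P and not-P) -> Q true under all of them? If so, a single
# contradiction in a system lets you "prove" any Q whatsoever.
explosion_is_tautology = all(
    implies(p and (not p), q)
    for p, q in product([True, False], repeat=2)
)
print(explosion_is_tautology)  # True
```

This is why an inconsistent S is a catastrophe classically: once both P and not-P are provable, every sentence of S becomes provable.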
Continuing this thought experiment: if we are living in such a universe, why do the rules of inference work so well? Because of the long and tortured selection process I described above. Over the past few thousand years, we have fine-tuned our argument practices and inference rules to match the universe in which we live. To some degree we got lucky, as it could have been different (after all, in this universe P and not-P can both hold, so things could have turned out really weird indeed!). Since we can’t just assume such rules are true, we have to treat them all as predictions, or hypotheses, about the truth-values of the sentences that come after ‘therefore’, predictions based on the truth-values of the sentences that come before it.
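The “rules as hypotheses” picture can be given a toy demonstration. The sketch below (my own illustration, nothing canonical) treats modus ponens as an empirical conjecture and searches exhaustively for a valuation that falsifies it, i.e., one where the premises are true and the conclusion false.

```python
from itertools import product

def implies(a, b):
    """Classical material conditional: a -> b."""
    return (not a) or b

# Hypothesis: whenever P and (P -> Q) are true, Q is true.
# Hunt for a counterexample among all two-valued assignments.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if p and implies(p, q) and not q
]
print(counterexamples)  # [] : no valuation falsifies the rule
```

Within classical two-valued semantics the search comes up empty, which is exactly the sort of track record that, on the selectionist story, earned modus ponens its keep.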
Someone might object, “But Boolean (propositional) logic is provably complete and consistent.” (The completeness proof is due to Post; Godel proved the analogous result for first-order logic.) But, I would reply, in the possible world I am describing, there is a sentence that is both true and not true. So in that universe, even though Boolean logic is complete and consistent, one of its theorems is false.
The idea that the law of non-contradiction is not universal is not just pie-in-the-sky fanciful dreaming. The liar paradox (and other Godel-inspired considerations) has spawned paraconsistent logics in which the law of non-contradiction is not universally valid.
Now you might be tempted to say, “Fine, but within that paraconsistent logic there are still inference rules. How do you know those are true?” I would say that a priori, I don’t know they are true, but that such a logic provides our best present model of how truth-values propagate through propositions. This paraconsistent model was constructed via trial and error, as described above, and it could have been different. However, until we discover reasons to doubt it, it is the best model of truth-propagation that we have, so there is no reason not to use it.
Note I’m not advocating paraconsistent logics, but only pointing out that their actual existence is sufficient for establishing the possibility argument I’ve been making.
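For readers who want to see that such logics really exist and behave as advertised, here is a minimal Python sketch of Priest’s three-valued Logic of Paradox (LP), one standard paraconsistent system; the encoding is my own illustrative simplification. A sentence can take the value ‘B’ (both true and false), contradictory premises no longer entail everything, and yet ‘P or not-P’ still comes out at least true.

```python
from itertools import product

# Priest's Logic of Paradox (LP): three values, ordered F < B < T.
# 'T' (true) and 'B' (both true and false) are the designated
# ("at least true") values; validity preserves designation.
ORDER = {'F': 0, 'B': 1, 'T': 2}
DESIGNATED = {'T', 'B'}

def neg(a):
    return {'T': 'F', 'B': 'B', 'F': 'T'}[a]

def disj(a, b):
    # Disjunction is the maximum in the F < B < T ordering.
    return a if ORDER[a] >= ORDER[b] else b

def conj(a, b):
    # Conjunction is the minimum in the F < B < T ordering.
    return a if ORDER[a] <= ORDER[b] else b

# Explosion fails: with P = 'B', both P and not-P are designated,
# yet an arbitrary Q = 'F' is not, so the contradiction does not
# drag every sentence along with it.
p, q = 'B', 'F'
print(p in DESIGNATED and neg(p) in DESIGNATED)  # True
print(q in DESIGNATED)                           # False

# Yet 'P or not-P' is designated under every valuation.
print(all(disj(v, neg(v)) in DESIGNATED for v in 'TBF'))  # True
```

Nothing here advocates LP; the point is only that a consistent, well-studied system in which contradictions are tolerated is on the books, which is all the possibility argument needs.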
Upon which horn will you skewer me?
I have the feeling, as I write this, that I am either saying something that is obviously false for some little technical reason in mathematical logic, or saying something that is just obvious to the naturalistic professional philosophers who might be reading it. So which is it?