Logic as contingent

Introduction
This is part of an ongoing argument that logical truths (and rules of inference) can sit comfortably in a naturalistic worldview. It was inspired in part by reading the wonderful articles Is logic a theory of the obvious? and Logical consequence: an epistemic outlook by Gila Sher (UCSD). These articles push for the Quinean thesis that even those logical truths in the center of our web of belief are open to revision based on empirical and conceptual factors.

The topic doesn’t have a lot directly to do with brains, but is part of my ongoing attempt to fit logic within a naturalist framework. If we can establish some plausibility for the claim that logical truths are open to revision, then I will feel more free to explore how logic fits into a more general story of how brains (and the logics they endorse) help us get about in the world. It will also help me think more clearly about how advocates of informational semantics can handle logical truths, which is something that (to my knowledge) hasn’t been addressed by the Dretskians.

Logic involves empirically-selected inference patterns
The overriding norm when it comes to inference-rule making is the goal of formulating truth-preserving inferences. In a kind of natural selection of inference rules, we got rid of those that didn’t work, and have kept those that do. Over the past couple of thousand years, this has led to some pretty impressive results.

While this would be music to the naturalists’ ears, it would have some queer consequences (at least from the perspective of those who advocate the existence of abstract logical objects). Namely, logic could have turned out differently, and the universe could have been so weird that we just failed in our attempt to find any “universal” logical rules. It is something of a happy accident that we live in a world stable enough, with cognitive systems reliable enough, to allow us to find lots of inference rules that work quite well.

The arguments
Thankfully, this sentence contains the only mention of quantum mechanics in this entire story. Rather than go down that road, to give the contingency of logic some initial plausibility I will argue that it is possible that some of the more cherished logical truths, thought to be a priori knowable by anyone with a working brain, are false. I’m not saying they are false; I only need to establish that their falsity is possible in order to raise the plausibility of my thesis (and thereby of Sher and Quine’s theses). [Note added–be sure to see the comments for some very good criticisms of my Godel argument, which I’m now pretty sure is wrong–I am less sure about the paraconsistent logic argument.]

Consider an axiomatic system S powerful enough to derive certain basic truths about arithmetic. Godel proved that if S is consistent, then there are statements in the language of S that are true but not provable within S. To add insult to injury, he also proved that it is impossible to prove that S is consistent.
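
(Stated a bit more carefully — and glossing over technical conditions such as S being recursively axiomatizable and extending a modest amount of arithmetic — the two theorems say roughly the following; note that both are claims about what S itself can prove.)

    (G1)  If S is consistent, then there is a sentence G in the language of S
          such that S proves neither G nor not-G.

    (G2)  If S is consistent, then S cannot prove Con(S), i.e., S cannot prove
          its own consistency.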

Now imagine a universe in which S is not consistent. Nobody can prove it isn’t consistent, but it is not (e.g., God issues a commandment to believe that S is not consistent). We can’t prove it either way, but God told us the correct answer, so we can rest assured (in this thought-experiment world) that S is indeed inconsistent.

But if S is inconsistent, that means there exists some proposition P (in the language of S) such that both P and not P can be proved (this follows from the definition of S being inconsistent). So, in this universe, wave bye-bye to one of the most cherished logical laws, the law of the excluded middle. Is anyone here sure we aren’t living in such a universe? Is there a good argument that we aren’t? You certainly can’t prove we aren’t.

So, continuing this thought experiment, if we are living in such a universe, why do the rules of inference work so well? Because of the long and tortured selection process I described above. Over the past few thousand years, we’ve fine-tuned our argument practices and inference rules to match the universe in which we live. To some degree we got lucky, as it could have been different (after all, P or not P isn’t always true in this universe, so things could have turned out really weird indeed!). Since we can’t just assume such rules are true, we have to treat them all as predictions or hypotheses about the truth values of sentences that come after ‘therefores’, predictions based on the truth values of sentences that come before the ‘therefores’.

Someone might object, “But Godel proved that Boolean logic is complete.” But, I would reply, in the possible world I am describing, there is a sentence that is both true and not true. So in that universe, even though Boolean logic is complete and consistent, one of its axioms is false.

The idea that the law of the excluded middle is not universal is not just pie-in-the-sky fanciful dreaming. The liar’s paradox (and other Godel-inspired considerations) have spawned paraconsistent logics in which the law of the excluded middle is not (always) true.

Now you might be tempted to say “Fine, but within that paraconsistent logic there are still inference rules. How do you know those are true?” I would say that a priori, I don’t know they are true, but that the logic provides our best present model of how truth values propagate through propositions. This paraconsistent model was constructed via trial and error as described above. But it could have been different. However, until we discover reasons to doubt its truth, it is the best model of truth-propagation that we have, so there is no reason not to use it.

Note I’m not advocating paraconsistent logics, but only pointing out that their actual existence is sufficient for establishing the possibility argument I’ve been making.

Upon which horn will you skewer me?
I have the feeling as I write this that I am either saying something that is obviously false for some technical little reason in mathematical logic, or that I’m saying what is just obvious to the naturalistic professional philosophers who might be reading it. So which is it?

Comments

  1. Well, you know, intuitionism doesn’t use the law of the excluded middle in inferences (and that is one of the reasons why Godel didn’t use the law either).

    Anyway, I think you should distinguish between logical calculi (which can be built in an infinite number of ways) and the way brains work. For starters, all Godelian arguments require that formal arithmetic is implemented, and this is literally impossible for any physical system which is finite (infinite recursion is impossible for a finite system). So actually, brains definitely have less computational power.

    There are different ways of accounting for our inferential competences, logical calculi being one of them. But as soon as you get into unbounded recursion etc., you’re making a gross idealization, which is literally false but can be fruitful for other reasons. So probably brains are in the area of finite state machines, computationally, but this makes the research into their abilities a little harder, with no clear-cut distinctions to be made. Depending on what you want to account for, you can use non-monotonic logic, paraconsistent logic, add some Gazdar to the equation, etc. None of this, however, bears on the validity of logical calculi, which are simply justified independently (via mathematical proof, of course).

  2. Eric Thomson

    Marcin–good point about intuitionists. They also don’t accept the law of noncontradiction right? I could have used them instead of paraconsistent logics I guess. In my excitement about paraconsistent logics I totally forgot about the intuitionists. Is there anything written about the relationship between intuitionism and paraconsistent logics? Are they two species of one genus?

    As for your other stuff, I’m not talking at all yet about how brains work, but only making the possibility argument that certain basic logical “laws” (e.g., inference rules, axioms) can possibly be false. Hence, I will for the moment resist the temptation to argue about whether brains fit at the bottom of the Chomsky hierarchy (or anywhere on the Chomsky hierarchy, for that matter) and any implications this might have.

  3. Mike

    If you have one inconsistency, can’t literally any proposition be proven? I seem to remember that by dividing by zero in an algebraic argument, one could prove that any number equals any other number.

    I don’t have any real familiarity with these different logics you are discussing, but it seems like you can’t avoid using our core (not to say ‘real’) logic as you try to reason about alternative possibilities.

    Does that mean you assume you/we have some kind of intellectual freedom to imagine whole new ways of thinking? If you did have such a freedom, would that argue AGAINST the evolutionary picture you are trying to develop? I.e., if our perceptual and inferential capacities were cobbled together to adapt us to this world, where would we get the ability to imagine ‘impossible’ worlds?

  5. Eric, I approach this topic with some trepidation and the feeling of profound philosophical naivety. But — if logical inference rules are false, by what formal criteria would you be able to demonstrate their falsehood?

    As for naturalizing the laws of inference (the intuition of a reasonable conclusion?) see *The Cognitive Brain* (TCB) Ch. 6 “Building a Semantic Network”. In particular see pp. 103 (bottom) – 115.

  6. Thanks for this very rich post. First, a minor point. You write: “Someone might object, “But Godel proved that Boolean logic is complete.” But, I would reply, in the possible world I am describing, there is a sentence that is both true and not true. So in that universe, even though Boolean logic is complete and consistent, one of its axioms is false.”

    I don’t think this is right. The funny sentence (which you characterize as ‘both true and not true,’ which I think is kind of misleading) is not a sentence of boolean logic. It has to make essential use of arithmetical language — because we can’t form a godel sentence using just boolean language.

    2. A bigger point: you write “Now imagine a universe in which S is not consistent. Nobody can prove it isn’t consistent.” I don’t think you are allowed that second step. What you can say is that nobody can prove WITHIN S that S is not consistent. But that’s not the same claim.

    3. Don’t those experiments like the Wason selection test suggest that whatever inferential patterns natural selection has given us are not the inferences licensed by classical logic? (Actually, you just need an undergraduate classroom to determine that.)

    4. Finally, with respect to the bigger project: one position that’s out there that you may want to take account of is that logical truths are both revisable and a priori. This has been championed most recently by Michael Friedman, following Carnap. I mention this because I got the feeling from your post that you think revisability automatically guarantees naturalistic acceptability; but maybe I misunderstood you.

  7. Eric Thomson

    Mike: in paraconsistent logics, it isn’t true that you can derive anything from a contradiction.
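
    (For the record, the classical route from a contradiction to an arbitrary Q — “explosion” — runs roughly as below; paraconsistent logics block it by rejecting one of the steps, often the disjunctive syllogism at the last line. This is just the textbook derivation, not anything specific to a particular paraconsistent system.)

        1. P              (assumption)
        2. not-P          (assumption)
        3. P or Q         (from 1, by disjunction introduction)
        4. Q              (from 2 and 3, by disjunctive syllogism)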

    I’m not sure what this implies about how we think. I look at logic as a kind of prescriptive enterprise (about transitions among sentence string types) rather than a description of thinking.

  8. Eric Thomson

    “Eric, I approach this topic with some trepidation and the feeling of profound philosophical naivety.”

    Join the club.

    I think you would have to use a different logical system to show another logical system has false inference rules.

  9. Eric Thomson

    Excellent points. I’ll try to respond in turn. Thank goodness when I stepped away after working on my response the blog software logged me out automatically without saving any changes.

    1. This may be the big deal killer for my argument. Of course I thought of it too but believed my response worked, but now I’m not so sure.

    Let me spell out what my thinking was. We live in this universe where S is inconsistent, so there exists a P such that P and ~P are both true (and P is expressed using terms from S). But does it matter that P is expressed using terms in S? P could be expressed using terms from anything. It doesn’t matter–if we know that there exists a P such that P and ~P are both true, that is sufficient to show what I need.

    This is what I was thinking at the time. Now I think it is probably a bad argument for the reason you give (and likely for other reasons–my point just won’t work for propositional logic). I will have to think about whether there are interesting ideas I can salvage–it’s still kind of cool that there is a possible world in which S yields a contradiction.

    2. Fair enough, and good point. Again you get me on the first horn of my dilemma!

    3. My argument has nothing to do with how our brains work or how we think, but with the development of public formal logic. While I am a neuroscientist, I have TAed logic and know first-hand how hard it is for many people to learn elementary logic! I am skeptical that logic has anything to do with how we naturally reason. I tend to think it applies to propositions (and I take propositional contents to be contents of public sentences–not language of thought symbol strings). I look at logic as a helpful tool that often goes against our natural inclinations.

    4. Thanks for the reference to Friedman.

    More generally, thanks for taking the time to set me straight on some of this stuff. I don’t think revisability implies naturalistic acceptability, but it raises the plausibility of naturalism by blocking a common objection to it.

  10. Eric Thomson

    “But — if logical inference rules are false, by what formal criteria would you be able to demonstrate their falsehood?”

    I should have said that the point I was making is that you don’t need prior formal logical rules to see that an inference rule is false–given the goal of finding truth-preserving transitions among sentences, we can “see” that some of them just don’t work. E.g., the rule ‘if A then B; B; therefore A’ often leads to incorrectly concluding A when A is false.
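
    (To make that concrete, here is a throwaway brute-force check — just a truth-table search in Python, nothing tied to any particular formal system — showing that modus ponens never takes you from true premises to a false conclusion, while affirming the consequent does.)

        from itertools import product

        def implies(a, b):
            # material conditional: false only when a is true and b is false
            return (not a) or b

        def valid(premises, conclusion):
            # valid = no assignment of truth values makes every premise true
            # while the conclusion comes out false
            return all(conclusion(a, b)
                       for a, b in product([True, False], repeat=2)
                       if all(p(a, b) for p in premises))

        # modus ponens: if A then B, A, therefore B
        print(valid([lambda a, b: implies(a, b), lambda a, b: a],
                    lambda a, b: b))   # True

        # affirming the consequent: if A then B, B, therefore A
        print(valid([lambda a, b: implies(a, b), lambda a, b: b],
                    lambda a, b: a))   # False: A false, B true is a counterexample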

  11. Eric Thomson

    I think this last point might be wrong, based on what Greg said.

    I obviously have to go back to the drawing board with some of this stuff. I still think I am in the right ballpark with what I am saying, but my Godelian argument is just a car crash. I need to read Peter Smith’s excellent book on Godel more closely; I gave it a very superficial reading, so it is time to actually work through it. I also think perhaps a more direct approach, simply arguing for my positive theory and letting other people attack it, would be more productive.

  12. Along the lines of logic as contingent, I have a very naive question about whether any logician has ever based logical implication on material implication.

    This preoccupies me because the logical relation established by any logical implication is usually symbolized by the conditional operator, and according to its truth-value table, if the antecedent is false and the consequent is true, then the resulting value of the simple conditional is true! On purely material grounds this is inconsistent, isn’t it?

  13. Eric Thomson

    Anibal: I think Brandom explores this fairly extensively in ‘Making It Explicit’.

    I’m not sure about the answer to your question. I don’t see why it would be a problem with material conditionals to use the standard rule with false antecedents, so perhaps I’m missing something.
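
    (For reference, the classical table Anibal is asking about looks like this; the last two rows, where the antecedent is false, are exactly the ones that feel odd yet are stipulated to come out true.)

        A      B      if A then B
        T      T          T
        T      F          F
        F      T          T
        F      F          T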

  14. Eric Thomson

    Greg–I’m still thinking about what bits of my post I can salvage and what bits are complete bullshit. Here are a few of my further ideas on the matter.

    1. I still like the idea about the development of logic partly being a sort of natural selection on transitions among propositions’ truth values (and by ‘proposition’, I should again stress I don’t mean brain states or mental states, but the content of public linguistic strings). Something just seems right about this, and about the idea that logic is a public scaffolding that acts kind of like cognitive prescription lenses, helping us steer clear of traps that our natural cognitive engine leads us toward.

    This would probably be fun to model with a kind of monte-carlo simulation with little agents running around stringing sentences together–we could see what kinds of logics emerge in different types of modelled environments, that sort of thing (and run the simulation in a world in which all the agents assign three truth values to propositions).
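
    (Just as a sketch of what ‘three truth values’ might look like in such a simulation, here is one standard option — the strong Kleene tables — in a few lines of Python. This is only one choice among several, and nothing in the simulation idea hangs on it.)

        # strong Kleene three-valued connectives:
        # 1.0 = true, 0.5 = undetermined, 0.0 = false
        def neg(a):
            return 1.0 - a

        def conj(a, b):
            return min(a, b)

        def disj(a, b):
            return max(a, b)

        # 'P or not-P' is no longer guaranteed to come out true:
        for p in (1.0, 0.5, 0.0):
            print(p, disj(p, neg(p)))   # prints 1.0 1.0, then 0.5 0.5, then 0.0 1.0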

    2. When I use the term ‘logic’ do I mean axiomatized logic? I think I was ambiguous about this. In one sense, yes, because I think we likely groomed our ‘therefore’ habits well before people axiomatized. Heck, people today who have no idea of axiomatic systems still groom such habits. (For that matter, you could teach an entire course in logic without ever giving axioms.) But then there are the fully developed axiomatized systems of logic as well. I need to be more clear about what I mean by ‘logic’. Most people don’t even have ‘material implication’ in the center of their web of belief–only a few highly educated people are afflicted with such ideas. Perhaps most westerners have the law of noncontradiction as a pretty central guiding ideal, though. Even Rush Limbaugh implicitly uses it.

    As far as how the two logics are related, it seems reasonable to claim that axiomatized logic grew out of the history of linguistic grooming. Indeed, logic seems to be a formal model of implicit logical practices. (Am I turning into Brandom?)

    3. But, as Greg asks, does the above (1) necessarily imply that logic is revisable? Well, if we are talking about an axiomatized system, you could just say “That is logic L1, and we will not revise it no matter how the world is. It can include ‘A->B, B, therefore A’ as a basic rule of inference and it might not work very well, but once L1 is fixed, that’s it.” I would be fine with that, but it would be sort of a strange bullet to bite.

    Does it imply that logic in the broader sense is revisable? In some sense, yes. If an ur-community of ‘therefore’ users employed the inference rule of affirming the consequent, treating it as valid, and then discovered that there were times when this didn’t work, they would revise their set of accepted inference rules by eliminating that rule. So the whole point is that the inference rules we have are the product of such a selection process. But what about the inference rules we have now? How could we ever imagine a case where ‘A implies A’ is not a valid inference? How do we know when we have hit upon the right rules? Is there any way to check a rule to see if it is immune to revision?

    Sure, we can incorporate it into some formal system, prove the completeness of this formal model of the folksy logic, and then sort of seal off the system from revision, resting comfortably in our knowledge that all and only true statements in the system are provable. But now we have once again moved away from logical truths as taken by the folk, by the web of belief of everyone that matters (i.e., nonacademics), moved away from modeling the development of logic, and gone back to doing logic ourselves, giving a sterilized model of a subset of ordinary logical language.

    I think there is not much else we can do. If I say we should just stick to studying folk logic, then it seems there is no way to prove that some rule can never be revised (since such a proof would require some sort of general abstract logic). Plus, why should I really complain, since formal logic likely started out simply as a model of the logic already being used by some people? And it’s not like logicians are insensitive to the folk. Predicates like ‘is greater than’ which add an ordering to numbers are really attempts to capture a concept (an ordering, which has certain transitivity properties) that was long in use by mathematicians.

    On the other hand, some of the models that actually model how people tend to do proofs intuitively, that don’t invoke things like theories of types, lead to serious problems (Russell’s paradox). So the logicians aren’t just modelling how people in fact reason, but adding elements to the model to guarantee that the reasoning doesn’t lead to things like contradictions. Nobody had a theory of types before Russell.

    But why not just say, fine. Isn’t Russell one of the folk? Isn’t he just one of these people working on weeding out crappy inference rules, and patching things up so that we don’t end up with systems in which truth doesn’t propagate as it should? Sure, he was no longer working on truth values of claims about ordinary objects, but truth values of mathematical arguments. This makes things a bit more complicated and far removed from my original story about the development of inference rules, but it still conforms to the ‘selectionist’ pattern I described. Russell revised logic because he noticed a crappy pattern allowed by Frege’s system.

    Who’s to say that someone won’t find a contradiction in Russell/Whitehead’s system? Well, Godel steps in and tells us how we can use the logic we have built up thus far to actually make proofs about such things, a kind of crowning achievement in this drive to find which kinds of ‘therefores’ are useful. So Boolean logic is complete and consistent.

    I guess if I wanted to attack that, I’d have to say, but can you prove that the axioms of Boolean logic are the right ones to use? Would it be possible to have a consistent and complete logic that was completely ridiculous in the real world? E.g., a complete and consistent propositional logic in which affirming the consequent was a basic inference rule? So the question becomes, how do we choose which logical system to use, and it seems to me (and please correct me if I’m wrong), we pick the one that satisfies pragmatic criteria that are extralogical. So now we have evolved to the point where the pragmatic selection process I described above has evolved so that it can operate on formalized logical systems. We don’t want the ‘affirming the consequent’ system because it blows. If logic were merely a game that didn’t care about whether truth values tracked something interesting in the real world, there would likely be teams of mathematicians examining such deviant logics.

    So, getting back to Greg’s question…

    Does (1) above really imply that logic is revisable? Yes, I think it does, and that the revision is sensitive to pragmatic concerns (i.e., does it lead to crappy logics that bear no resemblance to how truth-values propagate in the real world).

    4. Finally, could my Godelian argument be salvaged for mathematical truths rather than logical truths? That is, for some standard axiomatization of arithmetic, could my argument work? I think it does, but it is trivial, as everyone who knows about this stuff knows that ZFC can’t prove it is consistent, which means it is possible that it is inconsistent! So the argument is skewered on the second horn of the dilemma at the end of my original post. Unless the argument works for something like Boolean logic (and as we see from the comments above, it seems not to), it really isn’t novel at all.

  15. Eric Thomson

    To cap off my discussion in number 3 above (which I now think is the real crux of the matter): I think it is onto something interesting.

    I argued there that logic is an attempt to construct useful models of inference practices used by real people in real situations (and not just an abstract exercise–if that were the case, a complete and consistent logic would be interesting even if it included affirming the consequent as an inference rule). Hence, the choice of which logic to use is constrained by extralogical concerns about truth-value propagation in the real world. While Boolean logic is complete and consistent, is there a proof that it is not sneaking in an affirming-the-consequent-type premise? Is this something that can be decided within Boolean logic? It seems the answer is no, so again this leaves open the possibility that even Boolean logic can be flat-out wrong, in that it is a crappy model of how truth values should propagate when thinking about propositions.

    OK, it is 3am and I really need sleep in my life.

  16. From Eric’s interesting reflections:

    > So the question becomes, how do we choose which logical system to use, and it seems to me (and please correct me if I’m wrong), we pick the one that satisfies pragmatic criteria that are extralogical. So now we have evolved to the point where the pragmatic selection process I described above has evolved so that it can operate on formalized logical systems.

    Doesn’t this bring us right back to the biological limitations of the cognitive brain that I described in “The Pragmatics of Cognition” (TCB pp. 300-301)? Seen within this framework, all formal schemes are brain inventions constructed to satisfy some kinds of pragmatic criteria. All kinds of formalisms (including mathematics) are possible and may be objects of study, but only the useful ones are applied. Would you agree?

  17. Eric Thomson

    Arnold: I’m at a conference and don’t have access to your book right now. Since I am focusing on the contents of public linguistic strings, that complicates the relation between my view and a neuro-centric view.

    This model should work even in little simple agents that have very different brains from us, or even no brains. For instance, I would like to build a computer model of little agents that run around making perceptual judgments (e.g., the table top is square) that are followed by random claims about things in their virtual world (or other properties of the same thing) in the environment (e.g., the table top is spherical). When the ‘consequent’ (the thing uttered after the original perceptual judgment) Give these agents only a few abilities–the ability to recognize whether a given statement is true or not, and to eliminate strings that are false, given the antecedents. Perhaps they can interact by correcting each other.

    I bet after a while, they’d be doing some good inference practices (e.g., ‘That table is square…therefore that table has four sides’). So of course while in humans all this kind of stuff is mediated by our brains, I’m not sure neuronal facts are essential–this is essentially a hypothesis that can be examined at the behavioral level (well, behavior and world).

    It should only help that we have a lot more going on cognitively than these things: memories, the ability to experiment on things to really put the rules we are using to the test.

  18. Eric Thomson

    Oops I cut off a sentence in the second paragraph.

    I said, about the little model agents I mentioned:
    “When the ‘consequent’ (the thing uttered after the original perceptual judgment)”

    Urr, right.

    I should have said
    When the ‘consequent’ (the thing uttered after the original perceptual judgment) is false, the agent will weed out the inference rule implicit in the inter-string transition. When the consequent is true, the putative inference rule is installed in the agent’s memory and its likelihood of being used again increases.

    If I were to turn this into an actual simulation, I would have to work out a lot of details about the linguistic strings the agents use, the semantic rules they are given to evaluate the truth of a claim, and the form of the ‘inference rule’ that is preserved or eliminated. Also, it would be very interesting to put the agents in differently structured worlds to see if the world influences the type of logic developed. Also, would we allow the modelled social world to include things like reference to sentences (so it could generate claims like ‘This sentence is false’)? So far this is just an incipient idea, but I think it would be cool to model it (for that matter I am very surprised that nobody has built more formal models of Brandom’s theory of content).
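
    (Here is the kind of deliberately crude sketch I have in mind, in Python. Everything in it — the toy world, the two properties, the scoring scheme — is invented for illustration; a real model would need the linguistic and semantic machinery just mentioned.)

        import random

        # A toy world of objects with two observable properties.
        WORLD = [
            {"square": True,  "four_sided": True},
            {"square": False, "four_sided": True},   # a rectangle
            {"square": False, "four_sided": True},   # another rectangle
            {"square": False, "four_sided": False},  # a circle
        ]

        PROPERTIES = ["square", "four_sided"]

        class Agent:
            def __init__(self):
                # one candidate inference rule per ordered pair of properties:
                # "antecedent, therefore consequent"
                self.scores = {(a, c): 0
                               for a in PROPERTIES for c in PROPERTIES if a != c}

            def step(self, obj):
                antecedent, consequent = random.choice(list(self.scores))
                if obj[antecedent]:                # perceptual judgment comes out true
                    if obj[consequent]:            # the 'therefore' claim checks out
                        self.scores[(antecedent, consequent)] += 1   # install/reinforce
                    else:
                        self.scores[(antecedent, consequent)] -= 1   # weed out

        agent = Agent()
        for _ in range(10000):
            agent.step(random.choice(WORLD))

        print(agent.scores)
        # Tendency: ('square', 'four_sided') accumulates a positive score, while
        # ('four_sided', 'square') drifts negative -- the selection story in miniature.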

  19. Well, it depends on the flavor of intuitionism, but they do use the law of non-contradiction, if I remember well (I took a seminar on that some seven years ago, so…)

    Anyway, intuitionism was definitely earlier in denying the excluded middle principle. There is also a nice book about Aristotle and the excluded middle by the Polish logician Lukasiewicz (the same one who invented the reverse notation and three-valued logic). He wanted to show that the excluded middle is not necessary in logic, and introduced the third logical value for sentences about the future.

    Anyway, axioms cannot be false; they are simply never proven (if they are good axioms, i.e., independent from other axioms). Some say axioms are never true as well, due to this fact. There’s some literature on that in philosophy of logic (I think the discussion goes on).

    To the point. You can devise a perverse logic with perverse axioms and still be doing logic – that’s the lesson we’ve learned in the 20th century with the advent of non-classical logics. It could be useless for science but you never know – there are applications of paraconsistent logics in quantum mechanics (at least some claim their logic is applicable there, though not of much practical use, as I gather). It’s the same with all mathematical theories – you can try to have a very non-intuitive theory with perverse axioms, but if you stick to the rules of the game you still get something out of it. The theorems are true in that theory.

    The moment when you can say that logic is applicable to something is the moment when you go to empirical matters. And then you just see if your math tool fits the data or not. And in most cases, it can fit only approximately. So does classical logic for natural language. Non-monotonic logic does a better job for inferences in natural language. But it’s not so good for other purposes etc.

  20. Eric Thomson

    Marcin: thanks, that is very helpful. I am so out of my element here I am a little embarrassed, so I appreciate the patience.

    Gila Sher just recommended Penelope Maddy’s new-ish book ‘Second Philosophy’ to me: I read the summary over at Amazon and a review, and it sounds like an amazing work–her previous book was a bit above my head, but this one sounds a bit more approachable.

  21. Eric, it seems that Maddy claims that humans have an innate rudimentary logic (RL) embodied in their primitive cognitive mechanisms. Would your inference “agent” have such a mechanism?

    Look at the semantic network detailed in TCB, Fig. 6.3. In a computer simulation test, when this mechanism is told that “a cat is an animal” and that “a tiger is a cat”, it responds with the logical inference that “a tiger is an animal”, even though it has *never been told* that a tiger is an animal. What would be the difference in behavior between the logical agent that you envision and the behavior of this semantic network?
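
    (The bare inference pattern in that example is transitive chaining over ‘is a’ links; a minimal toy version — not the TCB network, just the logical skeleton — looks like this.)

        # Toy 'is a' chaining: told only that a tiger is a cat and a cat is an
        # animal, conclude that a tiger is an animal without being told so.
        ISA = {"tiger": "cat", "cat": "animal"}

        def is_a(x, y):
            while x in ISA:          # follow the chain of 'is a' links upward
                x = ISA[x]
                if x == y:
                    return True
            return False

        print(is_a("tiger", "animal"))   # True, by transitivity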

  22. Eric Thomson

    I have no opinion on this, but the mechanism I described was at the public linguistic level, not neuronal/cognitive. I wouldn’t be surprised either way. I don’t require an innate logic, but I also wouldn’t mind one. I tend to assume people are pretty bad at symbolic logic. If a basic logic course were just translating into words what their cognitive mechanisms already do, it wouldn’t be so hard for many students to learn the stuff.

    I would be interested in learning about evidence for logic in nonhuman animals, its development in prelinguistic children, that sort of thing. I’m sure it’s been done, will look up ‘exclusive or’.

  23. Nice post, Eric. As you probably know, I like this kind of argument very much. I have pushed something similar over at Philosophy Sucks!…the point that I think is important is undermining the appeal to a priori intuition. The only leg that these guys have to stand on is their claim that the Law of Non-contradiction is intuitively self-evident. But if our intuitions are the product of selection, then their leg is swept from under them.

    Of course, I think the real point here is that both sides are mistaken. We just have no idea if logic is contingent or not.

  24. Eric Thomson

    So you are an agnostic? I am certainly less confident now, as I think there are ways for the defender of the a priori view of logic to get out. But I am also more confident in a way, as I am starting to get a better handle on some of the issues here. This is a topic that would be fun to take up full time, and unfortunately I think that’s what it would take to really treat it well.
