I am happy to announce that our next Mind & Language symposium is on Aaron Norby’s “Uncertainty Without All the Doubt” from the journal’s February 2015 issue, with commentaries by Keith Frankish (Open University), Jennifer Nagel (Toronto), and Nicholas J.J. Smith (Sydney).
Philosophical discussions of rationality and decision making are frequently structured by the assumption that degreed beliefs are stable attitudes that can be used to predict and explain an agent’s behavior, judgments, and decision making. Aaron argues that this assumption, which he dubs the storage hypothesis, is challenged by “the existence of a systematic instability in our apparent degrees of belief, which is due to the way that processes governing recall from memory work.” The same possibility may be treated as having different degrees of likelihood when recalled for use in different contexts. In opposition to the storage hypothesis, Aaron defends a filter theory of decision making. According to the latter, automatic and largely unconscious processes select from among proto-credences — mental representations of “all those possibilities that an agent thinks might be actual” — and only after selection has occurred are likelihoods assigned to those represented possibilities. This ‘filtered out’, situation-bound sample of possibilities is then what is used for purposes of explicit decision making.
Below are links to a video introduction, the target article, the commentaries, and Aaron’s replies.
Comments on this post will be open for at least a couple of weeks. Many thanks to Aaron, Jennifer, Keith, and Nicholas for all their hard work. I’m also very grateful to Sam Guttenplan, the other Mind & Language editors, and the staff at Wiley-Blackwell (especially Imogen Clarke) for their continued support of these symposia.
You can learn more about Aaron and his work here.
***
Target Article: Aaron Norby, “Uncertainty Without All the Doubt”
Aaron’s video introduction to his paper:
Commentaries & Replies:
- Keith Frankish, “Double the Uncertainty”
- Jennifer Nagel, “Remarks on Norby’s ‘Uncertainty Without All the Doubt’”
- Nicholas J.J. Smith, “Doubts About ‘Uncertainty Without All the Doubt’”
- Aaron Norby, “Responses to Frankish, Nagel, and Smith”
[Cover image via Wikipedia.]
Very interesting theory. Impressive and enjoyable. I’m curious about your thoughts on a few different scenarios. Can I ask you a couple of questions regarding the various psychological components of your research?
Many thanks,
Please do! I’d be happy to talk about that.
I’m curious how individuals with various diagnosable mental health disorders would respond and/or be measured with regard to your theory. Would those under the influence of either psychiatric medications or illegal drugs be exempt?
Hi Brad,
Although there are certainly differences in the way that different people process risk and uncertainty, I would say that my default hypothesis is that the theory applies more or less to everyone. That said, I unfortunately don’t know enough about mental health disorders to say anything with confidence about what effects different ones might have on the sorts of phenomena that my paper talks about. It would certainly be an interesting question to explore.
Hi Aaron, thanks for this super interesting paper and discussion. One thing I am wondering is how your view would model the sorts of phenomena that get talked about frequently in work on cognitive dissonance, e.g. that (as I recall) exposing a person to evidence against their belief can lead them to become even more confident in it. One might expect that your view would make the opposite prediction: exposing someone to evidence against P makes the not-P situations salient, and so should lead these to get through the filter and thereby reduce the person’s confidence. I’m sure there is a way for you to accommodate this, though, and in any case it’s such a weird phenomenon that I wouldn’t hold it too much against you even if you can’t. (It’s the kind of thing that often gets raised as a counterexample to my own views.) But I’d be very interested to hear your thoughts on it.
Hi John!
I think this is a really interesting topic. I have to think about it more, but here’s what I’d say as far as how the sort of “backfire” effect you mention (where exposure to evidence that not-P leads to more confidence that P is true) might work in my model. I think a reasonable approximation of what happens in a backfire case is that the new, unwanted information is subject to a kind of reasoning involving selective processing, where the most troubling features of the evidence are ignored, in part by simply processing the evidence shallowly – not, for example, working carefully through exactly how the evidence creates problems for one’s own view.

I think this is an interesting case for my model, because the model predicts that, all else being equal, bringing to mind a possibility in which not-P obtains should, when one is not absolutely certain that P, lead one to treat P as being at best highly likely. But this will only happen when the possibility brought to mind is something like what Richard Holton calls a “live” possibility, that is, a possibility that one is disposed to acknowledge as genuine or worth acknowledging. It seems to me that what’s happening in backfire cases is that the subject with the entrenched view that P is true is using biased processing to do whatever can be done to keep the relevant not-P possibility from becoming a live possibility. A side-effect of that defensive processing would then be a strengthening and re-entrenchment of the already-held view, either because “surviving” the evidence counts in its favor or because one comes up with new reasons for believing one’s own view as a way of fending off the opposing evidence.

Thanks for this question – this is just a first stab at thinking about it.
Just to add a reference to my mention of Holton in my last comment: what I have in mind is what he says in, e.g., his paper “Intention as a Model for Belief” in the collection Rational and Social Agency.
Hi Aaron,
Sorry for being so late with this comment. As I hope my commentary made clear, I’m sympathetic to your project, and I think your reply highlighted a lot of common ground. We’re agreed on the existence of some kind of stable implicit credence, on the relative lability of explicit credence, and on the importance of studying the set-up processes that mediate between them. I’ve two quick points I’d like to raise here.
First, I suspect that the lability of explicit credence is just an instance of the lability of explicit propositional attitudes generally. Avowals, I would argue, are often the product of decision rather than straight recollection, and are sensitive to contextual factors similar to those you mention. We decide, for current purposes, to commit to a certain premise or goal, and our choice is influenced by many factors, including the other options we bring to mind. (Again, the role of set-up processes is crucial.) If that’s right, then it only strengthens the case for your view.
Second, a question for you. You say at the start of your paper that the storage hypothesis doesn’t assume that credences are ‘really’ stored in the mind — e.g., in a sentence-like format. This suggests that your argument against it should apply equally if we take an interpretivist view, on which an agent possesses just those degrees of subjective probability and utility that make best sense of their choices. But isn’t an assumption of stability essential to the interpretive process? The aim of interpretation is precisely to find the set of stable attitudes that best explain the behaviour observed.
Thanks for a great paper and discussion.
Keith