Imagination and delusion

A number of philosophers have suggested that delusional people do not believe their delusions; they only imagine them and then mistake their imagining for a belief (Greg Currie has a view along these lines; in his recent book, Phil Gerrans defends a related view). What follows are a few inchoate thoughts about views like this.

The main attraction of the imagination view is that it helps to explain the mismatch between patients’ professed beliefs and their behavior: the man who alleges that his wife has been replaced by an alien replica may not show much concern about sharing his home with a replicant, or worry about what has actually happened to his real wife. Because imagination plays a different functional role from belief, we can explain these apparent inconsistencies in behavior. Imaginings can be involved in inferences – hence fictional narratives, which may be faulted for inconsistency – but they don’t underwrite inferences outside their domain (an imagined scenario’s being inconsistent with states of affairs known to obtain doesn’t make it defective). But Currie’s account doesn’t seem able to explain why the imagining would continue to lack the capacity to underwrite inference once it is taken by the person to be a belief. Belief attributions seem self-fulfilling: when an agent takes herself to believe that p, she believes that p in virtue of so taking herself. Of course there are apparent exceptions, like self-deception and perhaps implicit attitudes. The point stands, though: we need an explanation of why taking oneself to believe that p doesn’t bring about the correlative belief.

In fact, I think things are worse than that suggests: not only is there theoretical reason to think that imagining oneself to believe that p (in a manner which doesn’t recognize that it is an imagining, at least) would tend to bring it about that one believes that p, there is evidence that imaginings do lead to beliefs. In the recent literature on imagination, there has been some debate about the continuum thesis, according to which imaginings are continuous with beliefs (Schellenberg 2013). There are multiple reasons to expect the continuum thesis to be true. In fact, I think we should expect something stronger than Schellenberg’s continuum thesis: imaginings are liable to become beliefs. Some reasons to believe this:

(1) Cognitive dissonance. Ordinary agents are apt to alter their beliefs to explain their behavior when that behavior is inconsistent with their antecedent beliefs – at least when they regard the behavior as voluntary. Agents often act in accordance with their imaginings (in play-acting, for instance). When an agent is immersed in play-acting, the fact that the scenario is imaginary is not salient to her, and the mechanisms of cognitive dissonance might kick in.

(2) Spinozist belief formation mechanisms. There is evidence that merely tokening the thought that p tends to lead to the belief that p. This occurs even when the person is given grounds that undercut any warrant for the belief prior to entertaining it. For instance, Wegner told his subjects that they would receive predetermined feedback about how they performed on a task, and that the feedback would therefore bear no relation to their actual performance. They formed beliefs that mirrored the feedback nevertheless.

(3) Finally, there is direct evidence of ordinary agents transitioning from knowingly imagining that p to believing that p, even when p has a bizarre content. People who take themselves to be able to ‘channel’ aliens or spirits may begin by pretending to channel these beings. At least, that’s an inference supported by the how-to literature that channels have themselves produced. They tell novices to stop asking themselves whether they are producing the voices and thoughts themselves: “let that thought go, and for now believe that you have indeed connected with a high-level guide”.

A second example of this same phenomenon is more tragic. Daniel Schacter recounts the story of Paul Ingram, a man accused of abusing his daughters after they ‘recovered’ memories of the events (after a member of their church told them that God had told her of the abuse). Ingram initially denied the accusations, but eventually ‘recalled’ the abuse and confessed. His daughters’ accusations became increasingly bizarre, including the claim that they had been raped and forced to bear children who were then sacrificed in Satanic ceremonies; Ingram confessed even to these bizarre crimes and was sentenced to 20 years’ imprisonment. There is little doubt that Ingram’s imaginings (in which his pastor advised him to engage in order to ‘recover’ the memories) played a crucial role in his coming to believe he had abused his daughters. Therapists who help patients ‘recover’ memories often ask their clients to imagine what might have happened, saying that many find this helpful for recovering the memories.

Of course we might say of these cases what Currie and Gerrans say about delusions: the person doesn’t really believe what they say; they only imagine that they do. But there is little reason to think that someone like Ingram (who confessed to abusing his daughters to police on the basis of recovered memories) didn’t believe what he professed. It’s difficult to identify a functional difference between the states he professed and bona fide beliefs.

Imagination – at least insofar as it is appropriate to identify imagination with the mechanisms of mental time travel – seems to have developed by coopting the mechanisms of episodic memory (Suddendorf and Corballis 2007). Further, generating imagery activates the same parts of the brain involved in actual perception in that modality (Van Leeuwen 2013). This may go some way toward explaining our vulnerability to confusing imagination with belief: because we utilize mechanisms that are designed for generating belief, we are disposed to endorse the content of our imaginings under the appropriate conditions. Currie thinks that our capacity to track our own agency plays an important role in allowing us to distinguish imagined thoughts (over which we possess some agential control) from beliefs. But even neurotypical subjects may slip into believing what they initially know themselves to be imagining; for those patients who have problems tracking agency, we ought to expect the mechanism to produce full-blown beliefs.

Perhaps the vulnerability of imagination to slippage is not merely a product of the fact that evolution coopted mechanisms for belief production. Perhaps it comes with the territory. If imagination is to serve its adaptive function, it had better be set up to simulate realistic scenarios. We want to know what would actually happen if we behaved in a particular way, or if some event were to occur. Our imagined scenarios must therefore unfold in a way that is as close as possible to how reality would be, given the imagined change. In fact, as Van Leeuwen argues, that’s how minimal imagination – imagination of realistic scenarios – actually works: we import into the world of the pretense background facts about the world, including contingent truths, to fill it out and to constrain our imaginings. To that extent, imagination is itself a mechanism for belief generation, which perhaps renders it less surprising that it may lead us to come to believe in the content imagined, and not just to believe that that’s how the world would be were that content the case.

So we have a problem. Imagined representations tend to become beliefs. But the attraction of the imagination account rests on its generating states that lack the functional role of beliefs. We need some explanation of why delusional subjects continue to imagine that p without ever recognizing that that’s what they’re doing and without coming to believe what they imagine to be the case.

Gerrans’ related account might fill the gap. Roughly, the story is this: delusions are default network thoughts, whereas genuine beliefs are tested for internal and external consistency by the dorsolateral prefrontal cortex (DLPFC). For one reason or another, the delusional person lacks the capacity to exercise DLPFC supervision over the thought. Why don’t delusional patients come to believe (or to reject) their delusion? Because processing resources are finite. When the delusion is salient, it captures the patient’s attention and monopolizes processing resources, so the thought is never properly tested for consistency with the patient’s other beliefs.

One problem with this account is that it seems to attribute more rationality to delusional patients than to ordinary people. The rest of us tend to come to believe our imaginings; patients don’t (and we can’t appeal to the bizarre nature of the delusions to explain why they don’t – we believe bizarre things too, as the channeling case shows). It is sometimes objected to two-factor accounts of delusion that they postulate reasoning deficits in delusional patients while the evidence for such deficits is underwhelming. Gerrans needs the evidence to run in the opposite direction, it seems: he needs it to be true that, in one respect, delusional patients are more rational than the rest of us. Perhaps this is something he can maintain: he might argue that what is in some respects a deficit is nevertheless in other respects protective against a defect of rationality.


  1. Alan White

    Neil – thanks so much for this. But something also occurred to me while reading: are you on to something about the putative Karl Rovian principle that repeating a lie – making it more richly imaginable through social reinforcement – increases its credence (though not in an objectively justified epistemic sense)? Maybe there is an inverse there – individual delusional imagination faces a social wall that prevents an all-in behavioral belief for some mentally ill people (who will believe my spouse is a pod-person?) – but remove the social wall by socializing the delusion à la Rove and Fox News, and the delusion becomes endorsed belief, both epistemically and behaviorally, through that social justification. Just thinking out loud.

    • Neil Levy

      Alan, I do think that Spinozist belief formation has political implications. I’ve argued that the standard liberal conception of free speech in the marketplace of ideas is psychologically unrealistic. Even if it is true (which I doubt) that truth tends to out through this kind of contention, exposure to false ideas has effects on everyone, even those who explicitly reject them. I have also argued that we need to be careful in tackling opponents: you should avoid reading or otherwise entertaining arguments for conclusions you have strong reason to think are false (e.g., climate denialism) unless you have the time and inclination to grapple genuinely deeply with them and make yourself a genuine expert (and none of us has time to be a genuine expert on more than a few topics). Essentially this is old advice in new garb: A little learning is a dangerous thing / Drink deep, or taste not the Pierian spring.


    A nice account, Neil. You home in on a difficulty for accounts like mine (and Kevin Mulligan’s, published in Italian in Rivista di Estetica, for all those who like their philosophy in the language of Dante). You say:

    But the attraction of the imagination account rests on its generating states that lack the functional role of beliefs.

    This is indeed its attraction when we think of the role of imagination in supposition, fantasy, speculation, hypothesis generation, etc. These are cases where we don’t act on thoughts or endorse conclusions from them. But imagination is a many-splendoured thing. On my view we can certainly act on the basis of imagination, and often do. Thus imagination can certainly have the behavioural/functional role of belief. Think of the anxious parent worried about a teenager who is late home: phoning friends, pacing the house, driving around the suburb. She doesn’t really believe the child has been kidnapped or is in hospital; she is just acting out an imaginatively triggered thought as a way of palliating anxiety triggered by imagination.

    I think this is a ubiquitous phenomenon, actually. There is a lot more imaginatively generated behaviour (e.g. religious, social) than we might think. It also explains a lot of so-called puzzles about self-deception. Administrators imagine they are competent and have the welfare of the university at heart. There is no puzzle here about whether they simultaneously believe and disbelieve a proposition: they are just acting out a fantasy.

    Of course, as you then say:

    We need some explanation of why delusional subjects continue to imagine that p without ever recognizing that that’s what they’re doing and without coming to believe what they imagine to be the case

    I think that the resource allocation account can deal with this. They can’t access, or sustain access to, the system they need to test a proposition for rational coherence. Under certain favourable therapeutic or pharmacologically induced conditions they can, but by and large they can’t. I’d go so far as to say that, because the default network is essentially a device of self-referential representation, most overlearned, personally significant behaviour and patterns of thinking are resistant to decontextualised evaluation. It’s too hard to inhibit the default network when it is fired up from below by affective associations.

    Of course, if you are a functionalist about belief then this account looks very thin. So much the worse for functionalism: if an accurate description of the cognitive processes involved in producing behaviour makes it unhappy, and it requires that they be redescribed to fit a philosopher’s account, then it is unhelpful in understanding the mind.

    That’s just one reason why I don’t like the Maher-style account of delusions, which equates them to scientific hypotheses. Scientists test and evaluate hypotheses in order to explain evidence. The rest of us just imagine something triggered by the senses and “act out”.

    Kevin and I have invented the term “incorporation” to capture this acting out without implicating a role for epistemic norms. Incredibly, it hasn’t memetically invaded the blogosphere in the same way as “that dress” or pictures of Jose Canseco’s daughter dancing at Coachella. Funny old world.

    Personally I’d be happy to live with a very restricted notion of belief: only representations capable of being tested by decontextualised processes count. That still leaves plenty to worry about, while allowing us to explain a lot of behaviour without reference to epistemic norms.

    • Neil Levy

      Phil, I’m not entirely unsympathetic to the story though I think it’s hopeless to try to map decontextualized processing onto belief. If we’re going to go that way, I think we do better to abandon the folk psychology altogether. I’m not ready to do that, for a couple of reasons. First, it seems that the story is parasitic – motivationally, at least – on the functionalist account. After all, why think that delusions involve failures to put a representation to the test of consistency with other representations and external stimuli in the first place? It’s because delusions don’t exhibit the dispositional stereotype we associate with beliefs – delusional patients may fail to experience appropriate affect, may not update their beliefs appropriately or show concern with logical consistency, and may fail to act consistently with their professed belief. Were it not for these functional differences, your story would lose much of its attraction.

      It also seems to me that, on your account, the dominance of default network thinking comes too cheap. Contrast OCD with and without insight (poor insight is not very rare, but it characterizes a minority of patients). If it’s true that even neurotypical subjects, like those who feature in your parent and university administrator cases, are absorbed in their fantasy (Gendler’s pretense?) to such an extent that they can’t engage in decontextualized processing, then it seems miraculous that the OCD patient, who is literally obsessed by thoughts of her rituals, should be able to have any insight at all. But most do. On the other hand, if they can have good insight, it’s hard to see how ordinary people can become absorbed in their fantasies.


        All good points, Neil. Re functionalism: you are right that the patient’s irrational behaviour and patterns of thought, measured against some notion of consistency, bring her to clinical attention. But what has that got to do with the truth or otherwise of functionalism as an account of psychology, rather than as a conceptual analysis of what we mean by belief? We could diagnose her using a functionalist notion, but it would only be a very rough and ready way of finding out what is actually wrong. It’s just an example of a high-level folk concept being too coarse-grained to do any explaining at the lower levels. A bit like Dennett’s Real Patterns: you could come up with an elaborate set of principles explaining the patterns seen on the screen, based on observation of the screen, but they wouldn’t track or explain the mechanisms which produce the patterns.

        Re OCD: an OCD patient is like someone who is addicted to a pattern of thought or behaviour. And, like addicts, they can have insight but are unable to stop when the triggering context arises. So they can decontextualise about their problem, but that doesn’t help them when the eliciting situation arises again.

        (Actually this is a bit like neuropsychological patients who can decontextualise when the problem is put to them in a third-person way, but not when responding directly to the experience. Tony Marcel has good examples of neglect patients like this, and it seems to hold for some deluded patients.)

        Ordinary people might have no insight because there is no need. The fantasy might work very well. Our vice chancellor thinks he is a marketing and administrative genius whose talent is unrecognised by his surly Bolshevik staff. Where is the incentive for him to submit that narrative to decontextualised evaluation?

        I say default dominance does come cheap! It’s the default setting of the human mind: to fit experience into autobiographical narratives which work, rather than to treat it as evidence for a proposition which might or might not be consistent with other propositions believed.

        Decontextualisation is expensive (cognitively), painful, and in general unrewarding. A nice story is installed by the mind effortlessly; a proof or hypothesis only painfully and precariously. And then you only have what Rawls called “the melancholy consolation of being right”.

        In general I’d say that there is a lot of life lived according to narrative and very little according to decontextualisation. You can call this irrational, as Stanovich does to highlight the problem, but it isn’t caused by the failure of some system of rational belief fixation, since for most people most of the time RBF isn’t engaged.

        Of course you can argue that RBF must necessarily be engaged, using the kind of examples where folk psychology and narrative don’t come apart. E.g. “the man believed there was a beer in the fridge, and he was thirsty, and he believed beer quenches thirst, and he wanted not to be thirsty” – so his getting a beer was an act of Aristotelian practical wisdom, and so on, multiplying platitudes to make mundane actions rational.

        But that wouldn’t help if he kept compulsively getting beers when he wasn’t thirsty, or became an alcoholic – unless you are Gary Becker and posit another rational belief–desire pair for every piece of behaviour, no matter how crazy-looking. Or you can say he is irrational, which is just to redescribe his behaviour measured against a canonical norm, not to explain it.

        • Neil Levy

          My point about functionalism isn’t about diagnosis. My claim is that your views get a lot of their attraction from the fact that they explain the ways in which delusions depart from the functional profiles of belief. J’accuse! You are attracted to these views, in part, because they explain these facts (okay, that’s just speculation on my part).

          Again, if decontextualised processing is so difficult and costly, we have a mystery: why do the majority of OCD patients display good insight? Saying that the fantasy doesn’t work well doesn’t much help, since this is a case in which insight doesn’t work well either. In fact, it’s plausible that, by reducing cognitive dissonance and distress, absorption in fantasy is by far the better option. At the very least, your account should predict that when symptoms are provoked, insight should disappear. While I don’t know the literature well, I haven’t seen any evidence for that claim.

  3. Hi Neil,

    This topic lies right at the center of my interest; I’d like to try to convey the way I have come to think about it.

    There is a fundamental principle that should guide our treatment of problems of this sort: malfunctions cannot be explained functionally. In Aristotelian terms, a functional account explains a phenomenon at the level of final cause. Malfunctions can’t be explained that way. They can only be explained at the level of proximal cause.

    To make things concrete, let’s consider a particular type of delusion that is a bit easier to grasp than schizophrenia. Suppose we have a man who has had a stroke and is paralyzed on the left side of his body, but asserts that he is completely healthy and is fully capable of getting out of bed and walking if he wants to. However, he doesn’t actually try to get up, and when asked why he doesn’t, he ducks the question. What is going on?

    The most common explanation, even among neurologists, is that this is a coping mechanism: the patient is emotionally incapable of accepting his paralysis, and generates a spurious belief as a way of hiding it from himself.

    Ultimately it is an empirical question, but I believe that sort of explanation is unlikely to be correct: it commits the error of trying to explain a malfunction as though it is a function. What would a proper explanation look like? Here is a sketch. When we ask the patient whether he has a problem, the question evokes a brain process that constructs an answer. Because the brain damage extends to high-level evaluative systems as well as low-level sensorimotor systems, the answer that is returned is incorrect. There is no way for the patient to tell that it is incorrect, because any attempt to check it for consistency brings the same damaged brain areas into play; thus the patient has a high level of confidence in the answer.

    This is probably not very different from the sort of answer Phil Gerrans would give. The main caution is that it is very dangerous to bring concepts such as belief, imagination, and knowledge into the picture. Those are functional concepts, and any account in terms of them is likely to be understood as a functional explanation. The only way to keep from going astray in dealing with malfunctions is to stick to mechanistic concepts such as behavior, verbal assertions, and brain activity.

    • Neil Levy

      Bill, I’m not sure I understand your claim, but if I’m getting it, it’s not a thesis I would want to sign up to. Some cases of pathology are likely the consequence of adaptive mechanisms functioning as they are designed to. This is the central claim of evolutionary psychiatry, of course: there is a mismatch between the environment of evolutionary adaptedness, to which brain mechanisms are adapted, and the current environment, such that we constantly encounter cues that in the EEA would have been rare. Some instances of depression might be explicable this way (mechanisms designed to navigate a social hierarchy in which everyone was known personally to everyone else might misfire in an environment of anonymous crowds). I’m not particularly sympathetic to most such explanations, but I certainly wouldn’t sign up to a program that entailed that these things were impossible in principle.

      By the way, there is plenty of evidence that anosognosia at least sometimes has a motivational component. I have defended that claim in my “Self-deception without thought experiments”. That doesn’t amount to a psychodynamic account; a lesion is a necessary condition of the motivation’s having the florid effect it does. But we shouldn’t be scared of psychodynamic explanations: desires, wishes, and so on, are physically realised and have their own, physically explicable, causal effects.


      It’s very like the kind of explanation I would give! Indeed, it is the explanation I would give, at least for cases where motivation can be ruled out. I’m sure there are some defensive cases.

      I’d put your general point a bit differently. Folk psychology and its attitudes operate at too high a level of abstraction from mechanistic functioning to explain most malfunctions. Folk psychology is a ceteris paribus heuristic which applies at the level of integrated systemic functioning. So it is very little help when subsystemic components and microcomponents malfunction.

      There may be some cognitive systems whose operations are analogous to FP rationality (sometimes we do do some explicit reasoning and planning, and even make mistakes), but it would be very surprising if they were the culprits in psychiatric disorder.

  4. I think *the contagion thesis* (that imagining p leads one to believe that p) is orthogonal to *the continuum thesis* (that there exists just one propositional attitude that is a continuum between imagination and belief).

    One way to see this is to see that views that deny the continuum thesis can perfectly accommodate the contagion thesis. Take Nichols and Stich’s cognitive architecture for example. On their view, there is a module that inferentially updates between belief and imagination (which they call “UpDater” in Nichols & Stich 2000). The contagion thesis can be accounted for via an over(?)active UpDater, such that whenever one imagines p, one thereby comes to believe p.

    Another way to see this is to see that just being of the same attitude is not itself likely to lead to a contagion effect. For example, all of my belief-y mental states are of the same attitude—belief. Yet this doesn’t make it any more likely that when I believe one thing, I’m thereby more likely to believe some other thing. So the fact that imagination and belief rest on a continuum at the attitudinal level does not, in itself, imply that there will be particular kinds of contagion relations between the mental states.

    For whatever it is worth, I don’t find the continuum thesis likely to be true. (Tyler Doggett and I give some reasons in our paper.) However, I am fairly agnostic about the contagion thesis. I think it’s probably true in some places but not others. I just want to emphasize, though, that the reasons adduced for the contagion thesis in this post don’t seem to me reasons for the continuum thesis.

    • Neil Levy

      Shen-yi, I agree that the contagion thesis is distinct from the continuum thesis, and I agree that the evidence I cite could in principle be accommodated by the contagion thesis. But it seems to be better accommodated by the continuum thesis. Two reasons why. First, if you are going to argue, as you do, that contagion occurs as a result of an overactive UpDater, or some story along those lines, you’re going to have to put some flesh on them bones. Flesh would be evidence that there is such a mechanism and that it is at work in these cases. If you could identify some subset of the population who were subject to contagion, for instance, that would be evidence for such a mechanism (even better if we could find a nice double dissociation, or even a single dissociation). But there’s no evidence along these lines at all, to my knowledge: instead, what we see is contagion occurring in everyone in ordinary circumstances. Postulating some kind of dysfunction now looks really strained. Better to think that this is just the representation mechanisms functioning as they were designed to. Second reason: if there is this phenomenon, contagion, whereby representations leap from the imagination box to the belief box, you would expect to see some kind of discontinuity in the functional role of the representations. But we don’t – we see a smooth continuum. That’s some evidence that the underlying processes are smoothly continuous too.

      • Re 1: I wasn’t thinking that the explanation would postulate only some members of the population have an overactive Updater, but that everyone does. Or, at least, insofar as everyone is supposed to have a continuum of attitudes.

        Re 2: I guess I don’t really see any evidence for (or against) there being a “smooth continuum”. In the paper with Tyler, we argue that what looks like evidence for such a “smooth continuum” in the case of immersion actually isn’t—that is, it cannot be accommodated by the continuum thesis itself. So I guess I’d just want to hear more about what such evidence amounts to.

        • Neil Levy

          Well it would be odd if everyone had an overactive updater. How would that evolve? Why would something dysfunctional go to fixation?

          The evidence for a smooth continuum from entertaining an attitude to believing it is extensive. It comes from social psychology: work by Dan Gilbert, the large literature on belief perseveration, and so on. See this paper by Eric Mandelbaum for a review.

          • I don’t know why it’s odd. If it is odd, then that’s a reason against the contagion thesis. All I am doing is describing how contagion might be accommodated at the functional level, and my point is that there are many possible implementations. (If you’re worried about the word “overactive”, feel free to substitute “active” such that contagion is still true.)

            I am familiar with the literature on automatic believing. I don’t see any of that as being clear evidence for there being a continuum between imagination and belief because, again, it can be accommodated in many different ways in cognitive architecture. For example, Gendler’s alief is one way to accommodate it without positing a continuum between imagination and belief. Notice that Mandelbaum’s article does not mention a continuum between imagination and belief, nor is that claim found anywhere in other Spinozan philosophers’ claims about automatic belief. On a model where imagination and belief are distinct attitudes, one way of accommodating the automatic belief literature is to say that beliefs are always causally antecedent to imaginings, but again, that is quite independent of the ontological status of the attitudes.

            Maybe this is a way of making my original point clearer. The contagion thesis is a nomological thesis about the causal relationship between different mental states (tokens). The continuum thesis is an ontological thesis about the metaphysical (conceptual???) distinction between attitudes (mental state types???). That’s why I think the two theses are orthogonal.

  5. Usman Amin Hotiana

    Hi all,
    We see such delusions of various types all the time in our psychiatric units. The neurotransmitter story of increased dopamine activity and hypofrontality is the most prevalent explanation to date. The drugs which reverse this process reverse the delusions too.

    From a story we need to philosophise, but then we need to delve into the psychology, neurology, neurochemistry, molecular changes, etc.

    For example, delusions are common with many tumours. The character of the delusions differs across illnesses. In some it is ingrained to the level of belief – for example, pathological jealousy.

    • Neil Levy

      Usman, I’m looking for a neuropsychological explanation. Given that, e.g., dopamine has lots and lots of different roles, pointing to increased dopamine (tonic? phasic?) is only the start of the story. We need a functional explanation: what role is dopamine playing in the system? That kind of story is going to be pitched at the psychological level.


        I disagree, Neil. We need to see exactly what dopamine is doing among the subsystemic components. Why would that be pitched at the (folk) psychological level rather than at the multiple levels of cognitive architecture?

        Are you saying we can only explain the results intuitively, at the level of folk psychology? That would be like explaining a case of nausea by saying that we vomit because we want to get rid of toxins and believe that contracting the stomach is a good way to do it. I don’t think brain disorders are in principle any different (except that the brain realises computational functions, and to explain them you need a computational theory, not a folk psychological one).

        • Neil Levy

          I don’t understand your point. We are in furious agreement that we need to see exactly what dopamine is doing. That’s precisely why I said that pointing to dopamine isn’t an explanation. As I said, I’m after a neuropsychological explanation. It’s an open question how well neuropsychology will map onto folk psychology. I think we can already see that the mapping won’t be perfect. Why do you think I’m committed to thinking differently?


          Maybe this is not a big deal. You said: “what role is dopamine playing in the system? That kind of story is going to be pitched at the psychological level.” I think that story has been told by Grace, Kapur, Schultz, Dayan, Berridge, etc. And the best explanation of what dopamine does is that it makes certain representations highly salient by changing/reinforcing patterns of activity in various circuits. When we know the computational role of those circuits, we have what we need.

          And I don’t think we need a psychological story (if that means belief-desire psychology, etc.) to make that intelligible (other than as a kind of folk shorthand for doctors to talk to patients, or something like that). In fact, as soon as we get the psychological story, things go wrong. When Berridge and Robinson tried to do this with the wanting/liking distinction, they inadvertently overexcited a lot of philosophers because they gave them a distinction to play with. Their ideas about addiction are actually easier to understand if you just look at the experiments and the neurocomputational theory. Not easier in the sense of “able to be assimilated to a preexisting folk theory”, but easier in the sense of accurately describing and explaining a phenomenon.
          Maybe we don’t disagree after all. It’s just that when I hear “psychology” I hear “belief-desire psychology”, and I’m not sure that straining the evidence through that intellectual sieve is very helpful.

          • Neil Levy

            I suspect the disagreement isn’t a big one. I certainly don’t hold that no explanation is successful if it can’t be cashed out in folk psychological terms. Very often we need to abandon FP in explaining something, even when what we are explaining is something that we identify folk psychologically. I do hold out more hope for folk psychology than you do – I think psychological phenomena can sometimes (and only sometimes) usefully be explained at levels well above subsystemic components, including the cultural level. I think it’s an open empirical question how useful (where useful implies roughly veridical) folk psychology turns out to be. My own views on the role of dopamine in addiction depart somewhat from Berridge’s and turn in part on the claim that personal-level representations play a significant role –

  6. Neil Levy

    You seem to be postulating a dysfunction, whereas I am postulating a system working as it is designed to. If that’s right, you need an explanation of why the system is dysfunctional, and so far we don’t have one.

    • Neil Levy

      Sorry – I can’t work out how to reply to you smoothly, Shen-yi (smoothly at the level of form; you can draw your own conclusions about smoothness at the level of content). WordPress was allowing me to reply to your replies, but now it’s not. It allowed me to comment on your first paragraph, but not on the others. Anyway…

      I think I oversold the extent to which the continuity point favours my view. I still think it favours it, but more weakly than I suggested. You are committed to the claim that attitudes are discontinuous. There ought to be evidence in the social psych literature of discontinuity. Granted, that literature is not set up to look for it, but given the massive amount of it (more than 50 years of good research on cognitive dissonance, for instance), it is odd that there isn’t any. Saying, as you do, that you’re not aware of evidence either for or against the thesis seems to miscast the burden of proof here. But burdens of proof are tricky things.

      I really don’t know what you mean when you say that continuum and contagion are orthogonal. Here’s what I understand by the claim that x and y are orthogonal: they are different questions such that an answer to one is compatible with a range of different answers to the other; better, an answer to one doesn’t entail or even strongly incline toward a particular answer to the other. That doesn’t seem to be the case here: if continuum is true then contagion is false.
