The most recent in my series of posts on consciousness presents some examples of ambiguous stimuli. Predictably, an anonymous commenter chimed in about how such illusions will never shed light on the ‘hard problem’ of consciousness. So, for posterity, here is my response to the hard problem reflex that is evoked whenever the word ‘consciousness’ is uttered.
When I first heard Chalmers speak on the “hard” versus “easy” problems of consciousness (back in 1994 in Tucson), I was quite impressed by his clarity in communicating what he took to be the crux of the problem of consciousness. For those unfamiliar with the ‘hard problem’ formulation of things, in his article “Facing Up to the Problem of Consciousness” Chalmers puts it this way:
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field.
Allow me to paint in broad strokes my problem with using such descriptions as a starting point for the study of consciousness.
As suggested in the second sentence of the quote above, Chalmers assumes that experience cannot be a matter of information processing. In his book, he goes much further. He explicitly assumes (in the Introduction) that experience also cannot be generated by neuronal activity or other garden-variety biological processes. Given that assumption, is it any surprise that he thinks experience is a really hard problem?
He sometimes suggests that his claim that experience is something over and above the biology isn’t an assumption, but definitional of experience. Indeed, he often writes as if this loaded notion of experience is pretheoretic and obvious (i.e., the ‘primary’ intension). As he says in the Introduction to the book, “I cannot prove that there is a further [hard] problem, any more than I can prove that consciousness exists. We know about consciousness more directly than we know about anything else, so “proof” is inappropriate.”
Speaking personally, such high-falutin’ notions (about causal and functional underpinnings of experience) were never part of my pretheoretic notion of experience. I’ll go with him as far as the claim that consciousness is synonymous with experience or awareness; that much seems harmlessly vacuous. However, adding the proviso that experience is something over and above neuronal or other mechanisms goes well beyond my pretheoretic notions, and probably beyond the intuitions of Fodor’s Grandma. Yet Chalmers has the stones to claim that those not working within this loaded conception of consciousness aren’t ‘taking consciousness seriously’ (this is a chorus in his book, from the Introduction onward).
So while I admire his clear expression of an idiosyncratic view of consciousness, I personally find it too tendentious to be useful.
Despite these seemingly obvious problems with his approach, I observed with dismay as the phrase “What about the hard problem?” spread like syphilis over the amateur philosophy of consciousness landscape. It became a kind of cognitive creativity sink, an easy knee-jerk response to any discussion of consciousness. Psychologists and neuroscientists are now required, by law, to address the “hard problem” in the first or final chapter of their books on consciousness. It’s a bit ridiculous.
By analogy, when I talk to Creationists about a cool biological phenomenon, they immediately seem compelled to explain its origin in terms of God’s amazing designing powers. It is really quite strange, as they are perfectly intelligent people, capable of having good discussions of other things. However, when it comes to the topic of phenotypes, their creativity, their scientific curiosity, and (most importantly) their obsession with evidential details and brainstorming about possible mechanisms are all shut off.
I see a similar cognitive short-circuit in many people when it comes to consciousness, especially if they have acquired the hard problem reflex. No matter what is being discussed about consciousness, the reflex kicks in and everything just stops (compare to, “Wow, I can’t imagine how this happened, God is a great designer.”). They contribute nothing beyond a bumper sticker to the discussion, and aren’t interested in any more details. I might as well be whistling Dixie.
Note I don’t mean to come down too hard on Chalmers. Overall I admire his clarity and synoptic vision, I just disagree with a fundamental assumption of his work. He may be Patient Zero, but clearly it wouldn’t have spread if it didn’t resonate with people at some level.
“Predictably, an anonymous commenter chimed in about how such illusions will never shed light on the ‘hard problem’ of consciousness.”
The comment didn’t say that; it said, “While I can see how these illusions provide data concerning the character of the processing in our visual system and what sorts of things we become visually aware of, and how these relate to the character of the input, I don’t see how they provide data that is especially relevant to how qualia (the hard part of consciousness) works.”
which is hardly the same thing.
Nothing we /currently/ know about information processing suggests that it could be used to explain qualia.
This /doesn’t/ mean that information processing couldn’t possibly end up explaining it, so it doesn’t mean that it’s pointless to try to investigate cognitive information processing. But as the original comment asked, why do these visual illusions provide data that’s any more relevant to understanding qualia than any other details about how perception works (because the original post seemed to imply that they did)?
And BTW, this claim about information processing is hardly just David Chalmers’s view or one held only by philosophers!
I think the people who tend to be dismissive of the “hard problem” are falling into the trap outlined in the following.
You can imagine an information processing system that uses information about its own processing to have a kind of awareness. No one would imagine that in really simple cases this would cause the system to have qualia-imbued awareness, but when they imagine sufficiently complex, multi-leveled awareness-of-awareness information processing, they imagine the qualia-imbued awareness just somehow arising from it. Because it’s just damn hard to think of such sophisticated cases without thinking of them as being like what our awareness is like.
The problem with this is that there is nothing in our current understanding of information processing to suggest that adding more complexity or sophistication, or feeding information about its processing back to itself, could possibly produce anything like qualia. Qualia /seem/ to be an entirely different kind of thing (and yes, that is always open to revision).
Hi Eric,
Thanks for this. But I think you’ve gotten the wrong end of the stick. I certainly don’t just assume that experience isn’t a matter of biology or of information-processing. Those theses are the conclusions of arguments, not assumptions (I certainly agree that if these were assumptions it would render the conclusions uninteresting). What I assume is that there is a problem of explaining experience which is distinct from the problem of explaining cognitive functions: that is, to explain experience, it doesn’t suffice to simply explain discrimination or integration or report and then declare victory. The catalog of things that need explaining includes these functions and it also includes experience. All that’s being ruled out by that assumption is the hard-line deflationism of someone like a Dennett. The assumption is certainly compatible with many mainstream materialist views on which consciousness is wholly neurobiological. Of course I think those views are incorrect but that work is done by the arguments, not by the assumption.
Anon: Yes, you are right, my dirty synopsis of your quote didn’t quite do it justice.
Also, note I am fine with people claiming they personally experience a conceptual gap between consciousness and X, where X is information processing, causal features of the world, functional features of the world, or even physico-chemical features of the world. Chalmers goes further than this.
My qualm is with Chalmers’s claim that this gap is so obvious as to not need proof, and especially with the further claim that present inconceivability implies the logical impossibility of ever closing the gap.
Eric, thanks for this nice post. I’m with you. I’ve heard people sneer at a speaker presenting interesting work and say something like, “yeah, but that’s not gonna do anything to explain qualia”.
To be fair to Chalmers, his thesis is not the kind of thesis that admits of proof.
But to be fair to Eric, with respect to Dave’s claim that his thesis is the result of arguments (as opposed to an assumption): true, but what kind of arguments? A priori arguments, based on a priori assumptions or intuitions that are as problematic as the thesis in question. Whether you call it arguments, an intuition, or an assumption, Eric’s underlying point — that Chalmers’s view is an unproven a priori stance — stands.
Eric, you put it nicely. On the one hand, there’s a sharp intuition that no amount of switching bits on and off could amount to consciousness. On the other, we don’t understand consciousness. So it’s not at all surprising that we don’t get how bit switching could amount to consciousness. I was, like you, struck the last time I looked at the beginning of Dave’s book by how much you needed to be on board already to agree about the hard problem.
Thanks for responding David. I realize you have lots of arguments, and my big worry with this post was that I was knocking down a straw man. Your main argument was that zombies are logically possible (and this implies physicalism is false). My claim is that this argument relies on contentious claims about the primary intension of ‘consciousness’ (for those following, the primary intension is roughly the pretheoretic, core meaning of a term–for instance, ‘colorless liquid that we drink’ is our primary intension for water, which turned out to be H2O). My problem, though, is with your claims about the primary intension, as they seem to guarantee that the zombie argument will work. Let me articulate this more.
You said:
What I assume is that there is a problem of explaining experience which is distinct from the problem of explaining cognitive functions: that is, to explain experience, it doesn’t suffice to simply explain discrimination or integration or report and then declare victory. The catalog of things that need explaining includes these functions and it also includes experience.
This seems almost reasonable, except for the ‘which is distinct from the problem of explaining cognitive functions.’ As I already suggested, I agree that consciousness is experience. But then when you add ‘which is distinct from the problem of explaining cognitive functions’ you go beyond the obvious to the tendentious.
Stepping back, let’s consider the Hard Problem Template (HPT) which you use as your primary intension.
HPT: Consciousness is experience, which is distinct from X.
Key question: what is the scope of X?
Is X just cognitive functions? If so, what cognitive functions? We don’t even know enough about the brain to delimit the scope of such functions and their implementation. You seem to assume that such details don’t matter.
From the Intro to your book, the quote I provided, and what I have seen at your talks, it is quite natural to interpret X as anything informationally, functionally, causally, or physico-chemically individuated. It may well be poor exegetical skills on my part, and on the part of the army of people that like to spit out the ‘hard problem’ bumper sticker in conversations.
To just pick one example, when you talk about acceptance/rejection of HPT as the great divide that can’t be crossed with argument, what are we to think? Clearly if you were just saying that consciousness is experience, there would be no great divide. Everyone believes that except for the most ideologically motivated person (e.g., Dennett). You even said that you can’t prove you are conscious, as if this is equivalent to proving HPT. It isn’t. Everyone reasonable thinks we are conscious (have experience). It is the X-factor within HPT that makes your definition tendentious.
Unless you think X is something uncontroversial such as ‘nonexperience.’ In that case I stand corrected.
Thanks, yes that strikes me every time too, how much is packed into the Introduction.
Note, just to be clear I really like David’s work. Hearing him talk in 1994 sent me to graduate school in philosophy to work on consciousness. My first quarter I took a class with Paul Churchland where we went in great detail over his book. I even volunteered to present his zombie argument just as an excuse to figure out all that primary/secondary intension stuff and the esoteric 2-D semantic theory.
When I said I admired his clarity and synoptic vision, his command of the literature, I wasn’t just blowing smoke. It is quite impressive.
The problem is with the core starting point, laid out in the introduction, which (unless I am a horrible exegete) is a candid laying out of the assumptions. Given his assumptions, he goes where logic leads (as he says, his book is full of conclusions in the strict sense). I think this is pretty much right. Given his assumptions you’ll end up with dualism of some sort (either of properties or substances), or perhaps panpsychism, wherever that falls. You will also likely end up with epiphenomenalism, which David takes head-on in a chapter of his book, quite courageously I think.
If epiphenomenalist dualism isn’t enough to be a reductio of the starting assumptions, then he obviously really believes in the starting assumptions. I don’t share this conviction.
Hi Eric,
You’ve again turned my “The problem of explaining experience is distinct from the problem of explaining X” into “consciousness is distinct from X”. Those are very different claims. The problem of explaining heat differs from the problem of explaining the motion of molecules even if heat is the motion of molecules. In effect, the assumption I’m making is not that consciousness isn’t X — it’s that it’s not a priori that consciousness is X (as well as the assumption that consciousness exists). That thereby excludes “type-A” materialist approaches where one explains X (discrimination or report, say) and then says, that’s all one ever had to do to explain consciousness. But it doesn’t exclude the “type-B” materialist approaches (those that think that consciousness = some brain process or function, analogous to water = H2O) that are much more popular in the field. My article “Consciousness and its Place in Nature” is clearer about those issues.
David:
This seems a bit of hair-splitting, as we can easily translate between the formal and material modes. We can reformulate HPT to HPT2, and I don’t see how that helps you:
HPT2: Explaining X won’t explain consciousness.
Depending on how you cash out X, HPT2 could also be tendentious if offered as a pretheoretic, intension-fixing claim. You say (in “Consciousness and its Place in Nature”):
This is now an instance of HPT2, where X is the bag of explananda you invoke with the ‘in addition’ category. What is X in HPT2? It is pretty clear from the rest of the paper that X is all biology, neuroscience, physics, and theories of cognitive processes. It’s no surprise, then, that you ultimately conclude zombies are logically possible.
Hence I don’t see how reformulating the Hard Problem Template in terms of explanations gets you any more traction.
(Incidentally one of the interesting and unique features of your book was the focus on supervenience rather than reduction, thereby getting us to think about properties rather than theories, so it seemed natural to focus on things rather than explanations. Hence, HPT seems as natural a formulation as HPT2.)
I assume, with Bergson responding to both Darwinism and Christianity, that there are two sources making us tick (he calls them The Two Sources of Morality and Religion). One is empiricism, the other is a special case of rationalism or idealism: mysticism or intuition. These two sources have each other as object and together produce Creative Evolution. I try to develop this further (link above) and am now at a point where I believe that in time (Duration), intuition and reality alternate, whereby intuition normally is most extended and reality only corrects the course intuition is taking. Intuition is sensing what you or I know (and discuss), while intervening reality is knowing what you or I sense (and force-feed to the other and finally ourselves). Bergson developed a whole range of methods to capture true intuition of duration: differentiation, immediacy, actuality, change, continuousness, equity (sort of), newness and simplicity (ref. Introduction to Metaphysics). Later, his thoughts seem to have led to autopoiesis (at least, he would have heartily approved of it, I suspect).
Eric,
Thanks for jumping into this boiling pot! I’m with you.
Here’s a question I asked on another forum:
Suppose someone actually proposed a *true* solution to the “hard problem”. By what criteria would we be able to recognize it?
Eric? David Chalmers? Anyone?
Interesting question.
I will interpret ‘hard problem’ as the problem of explaining or understanding experience. Ultimately I would be impressed if someone shows how experiential properties are produced in a system composed of things that don’t have such properties. It’s like showing how a thing with action potentials can emerge from a bunch of channels in a membrane.
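(To make that analogy concrete for readers who haven’t seen it: the standard worked example is the Hodgkin-Huxley membrane equation, which in its usual textbook form reads roughly

C_m \frac{dV}{dt} = -\bar{g}_{Na}\, m^3 h\, (V - E_{Na}) - \bar{g}_{K}\, n^4\, (V - E_{K}) - g_L\, (V - E_L) + I_{ext}

where V is the membrane voltage, the gating variables m, h, and n open and close the sodium and potassium conductances, and I_ext is injected current. No individual channel spikes, yet the system as a whole produces action potentials.)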
Note I’m leaving ‘experiential properties’ somewhat undefined and pretheoretic–part of the cool thing with this whole enterprise is that the target is continually being refined as we get more data, just as our concept of ‘life’ becomes refined as we do more biology (but we still don’t have a good definition of ‘life’). The problem is that philosophers want us to prematurely fix the meaning of the term in a precise way, or they prematurely provide an overly precise definition so they can do their work. Unfortunately the best philosophy usually comes after the science is nearly done so I think it is fairly safe to ignore philosophy of consciousness in the meantime.
Just my first thoughts, not sure what I’ll think tomorrow.
Suppose someone actually proposed a *true* solution to the “hard problem”. By what criteria would we be able to recognize it?
Strange as it may sound, I would expect that if we have the means to explain it, we’ll also have the means to test it out. I think this is probably the way it’s always worked whenever we’ve come to understand baffling phenomena. Take ‘life’ or ‘fire’ as things that were at one stage quite baffling… but developing the means to explain them went along with the kinds of means for checking those explanations.
Ultimately I would be impressed if someone shows how experiential properties are produced in a system composed of things that don’t have such properties.
I think that ‘explain’ pretty much means describing the relevant properties in terms of things that don’t have these properties.
There’s hope here. Consider the case of life, which seemed like it couldn’t be explained by things without life-like properties (i.e. non-living matter), yet now we know it can be. So we know our intuitions may not be true guides for this kind of thing, which gives hope that ‘experiential properties’ may be similarly explained.
I think this is treating cognitive functions as if they were a kind of stuff, rather than a type of result.
Cognitive functions are types of capabilities, and are the product of certain mechanisms (whatever they are). There are various sorts of physical mechanisms that we know of (including information processing) that produce these functions.
But none of these known mechanisms seems to be a candidate for being able to produce experience. It is logically possible that they may actually play a role in producing it, but it’s also logically possible that /anything/ could actually play a role in producing it.
Despite that, there seems to be a suggestion here (and in lots of discussions about consciousness) that there is some — unarticulated — reason for thinking that things like info processing might turn out to be part of the explanation. I think that unarticulated reason is this: that they are the only possible game in town for explaining it, as the only other alternative would be some wacky mystical explanation. I think that’s an invalid assumption.
Conscious experience might be brought about by some other, currently unknown, sort of physical process or effect. Perhaps one involving physical entities or effects that, while being entirely real, we currently can’t detect.
Sure, that’s an argument we could have (i.e., whether cognitive mechanisms and such are enough). The point of my post was different. The point was that Chalmers is not justified in using HPT as a starting point to fix the explanandum.
You also said:
Despite that, there seems to be a suggestion here (and in lots of discussions about consciousness) that there is some — unarticulated — reason for thinking that things like info processing might turn out to be part of the explanation.
I wouldn’t say the reasons are unarticulated (and describing the possible neurofunctional basis as a mere logical possibility as you did ignores all the evidence that it is the actual possibility). First, I hope it is obvious why people think it’s the brain that’s relevant for consciousness. Second, humans are evolved animals, and evolution works with standard stuff provided by the natural universe, stuff that we have worked out fairly well at the molecular and chemical levels (obviously things get weird with QM, a bit more on that below). Third, we have made great strides in explaining neuronal function in terms of these individually innocuous mechanisms (e.g., action potentials, neurotransmitter release, muscular contraction). Of course, people go to information processing and such because that seems to be one of the things brains do (and hence we end up with arguments about whether multiple realizability holds, and certain brands of functionalism).
These are all fairly standard lines of thought, perhaps not published in primary journals. From my experience in philosophy graduate school, anyway, these were the kinds of things we would talk about when the Kantians attacked us.
Anonymous also said:
Conscious experience might be brought about by some other, currently unknown, sort of physical process or effect. Perhaps one involving physical entities or effects that, while being entirely real, we currently can’t detect.
As long as you don’t assume consciousness isn’t the result of currently known physical processes or effects, I’d have no qualms. That is, you are free to go against the mainstream neural spin on things as a kind of renegade theorist. Biology isn’t a priori.
For instance, we need standard quantum mechanics to explain photon absorption in the retina. If biologists were dogmatically against invoking quantum effects of course we would not be able to explain some things. It is likely that if physics expands in the future (for instance, so it can actually explain wavefunction collapse in an intelligible way), then some of those new physical mechanisms will probably be important for explaining some biological phenomena.
That’s fine. If you think that will happen, and that such a mechanism will be required to explain consciousness, that would be a competing paradigm or whatever. My problem is when people involved in these competing stories (and I include Chalmers in this bunch) act as if they have a knock-down argument that their alternative has to be right, that the standard approach has to fail. They never have good arguments for this, but (at best) really strong intuitions.
Time will tell which approach provides the best fit to the data. I look forward to seeing some actual data from the competing paradigms.
Hi Eric,
I like the post, since my own views towards Chalmers’ rather elegant framework are very much the same. Regarding the conceivability arguments, it strikes me that, for example, the (ideal primary positive/negative) conceivability of zombies is an important consequence of the truth of dualism, but fails to provide the initial premise for an argument against materialism. Behind this is, I think, exactly the point you bring up in the post, concerning phenomenal experience and Chalmers’ (and others’) intuition that experiential facts will not admit of causal-functional characterizations (or that all physics describes is the world’s structural profile, but that phenomenal properties are intrinsic, and cannot be deduced in some important sense).
Curious: how would you characterize your own views in Chalmers’ alphabet system (Type-A, B, C materialism, etc.)? For my part, I am a staunch type-C materialist. My own gut tells me that if you are type-A or type-B then you are already conceding the controversial intuitions under discussion, which is to concede too much.
Cheers
B
Brendan:
I like your way of putting it in the first paragraph.
Frankly I’m not sure where I fall in the A,B,C,Q classification. Probably not Type A, but I have sympathies with types B and C, and I think type Q as articulated by Mandik and Weisberg is quite promising (and as I sit here rereading their paper I realize it includes much the same arguments I present in my little diatribe, but with a much softer touch and with much more substance of course). My rejection of the Hard Problem Template is consistent with all the categories, I believe.
thanks for the type q love. Here’s the link
https://www.petemandik.com/philosophy/papers/typeq.pdf
(and describing the possible neurofunctional basis as a mere logical possibility as you did ignores all the evidence that it is the actual possibility).
I think “actual possibility” is just a fancy way of saying “logical possibility”. Yes, it is possible that these things constitute part or all of what produces conscious experience / qualia. But it is also possible that they do not. Nothing we currently know about them gives any reason to think they play that role, thus it’s a logical possibility.
Second, humans are evolved animals, and evolution works with standard stuff provided by the natural universe, stuff that we have worked out fairly well at the molecular and chemical levels (obviously things get weird with QM, a bit more on that below).
We think we have it worked out pretty well, but there may be (entirely natural, ‘physical’) aspects of it that play a role in producing qualia, but which we really aren’t aware of. I think again you’re implying that it’s either the current physical mechanisms we’re aware of or nothing.
As long as you don’t assume consciousness isn’t the result of currently known physical processes or effects, I’d have no qualms. That is, you are free to go against the mainstream neural spin on things as a kind of renegade theorist. Biology isn’t a priori.
And as long as people recognize the flip side of that: “As long as you don’t assume consciousness is the result of currently known physical processes or effects”. But people do seem to think there’s some particular reason for believing it must be constructed from these kinds of things.
I think it’s important to distinguish between two things here: what you understand are the possibilities, and what you decide to devote your research to. Your choice of what to focus your research efforts on involves other factors – e.g. you may choose to study neurology because we have lots of data at hand to investigate, whereas with other possibilities it’s less clear where to start, etc.
It is likely that if physics expands in the future (for instance, so it can actually explain wavefunction collapse in an intelligible way),…
I am no physicist, but from what I understand that wavefunction collapse business is only there in the Copenhagen interpretation, which seems very dodgy (it seems to require a pretty magical notion of reality recognizing an ‘observer’), and there are a lot more sensible alternatives out there which don’t involve it – like the Many Worlds interpretation. From what I gather a lot — perhaps the majority — of modern physicists don’t think that much of the Copenhagen interpretation. But as I said, I’m not a physicist, so this is all second-hand.
Anon:
I think “actual possibility” is just a fancy way of saying “logical possibility”.
No, it is a way of saying the evidence and reasons I gave suggest it [chemistry and biology] are sufficient, that we won’t need sexy new mechanisms, that there is no evidence for said sexy new mechanisms (especially their involvement in consciousness). All the evidence we have so far points to standard neuronal mechanisms. Unless there is evidence I am not aware of. I am aware of the a priori conceptual arguments, but that is different.
I think again you’re implying that it’s either the current physical mechanisms we’re aware of or nothing.
No. In fact in my post I explicitly left open the logical possibility that we might need some kind of new weird lower-level thing. That’s why I mentioned photon absorption in the retina. I even conceded we might have to go beyond standard QM. I think that 0.001 percent of research funds should go toward investigating such unlikely theoretical possibilities, and the rest should go toward more mainstream work, most of it to experimentalists.
As far as QM interpretation, I’m not gonna get sucked into that black hole so the following will be my final bit on the matter.
On the terminological point, the ‘collapse of the wavefunction’ is not special to the Copenhagen Interpretation, but if you don’t like that phrase, then just say the problem of measurement: why do we observe individual values when the Schroedinger equation evolution doesn’t yield individual values (i.e., Schroedinger’s cat problem)? Invoking Multiple Worlds is one popular solution to that problem (so the cat is literally both alive and dead in different worlds), but it certainly isn’t part of any standard physical theory. Multiple Worlds is saying ‘There is no problem because every possible measurement happens. The cat is alive in one world, dead in another.’ Some people don’t find this to be a very satisfying solution, for reasons I hope are obvious. As I said, Multiple Worlds is not part of standard physics, but is an interpretation of standard physics, an interpretation without experimental justification.
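(Just to make the cat problem concrete: schematically, unitary Schroedinger evolution leaves the joint system in a superposition like

|\psi\rangle = \frac{1}{\sqrt{2}}\left( |\text{decayed}\rangle|\text{dead}\rangle + |\text{not decayed}\rangle|\text{alive}\rangle \right)

whereas what an experimenter actually records is one definite outcome or the other. That mismatch between the formalism and single observed values is all I mean by the measurement problem here.)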
My comment was merely pointing out that physics might expand so that it does include measurement. It’s also possible that physics will not expand. Fine. I’m not the one pushing the view that a new physics will explain consciousness. Indeed, I have never heard a good argument for such a claim; I was just covering my bases.
My main point was that such issues will not be settled a priori by armchair pilots, and because of cases like photon absorption in the retina, I can’t be dogmatic that lower-level physics is unimportant in biology.
Thanks Pete I should have included the link. People should read the paper even if they aren’t particularly interested in Type Q materialism, but want to see a nice examination of Chalmers’ Hard Problem Template. Rejecting HPT (for nontrivial values of X) is consistent with all the materialistic letters of the alphabet, so even those who don’t like the letter Q will still like the paper.
Brendan, what’s concessionary about type-A? I don’t get it.
> I think “actual possibility” is just a fancy way of saying “logical possibility”.
No, it is a way of saying the evidence and reasons I gave suggest it…
Saying that something is a “possibility” inherently implies that it is an _actual_ possibility. It makes no sense to talk about “non-actual” possibilities.
…the evidence and reasons I gave suggest it [chemistry and biology] are sufficient…”
Since we have absolutely no idea what conscious experience is, we have zero grounds for saying that certain evidence or reasons suggest X (whatever you want X to be) are sufficient for it.
Further down in your response you say that things other than “chemistry and biology” are “unlikely theoretical possibilities” – which again is a claim we have zero grounds for making.
… that we won’t need sexy new mechanisms, that there is no evidence for said sexy new mechanisms (especially their involvement in consciousness).
When I said that there may be types of physical things that explain it that we aren’t aware of yet, these could include things that we might end up considering to be parts of “chemistry” and “biology”.
…that we won’t need sexy new mechanisms… …some kind of new weird lower-level thing…
Regarding the first of these, I don’t appreciate the implication that somehow this view is only being held because it’s fashionable. Regarding the second, either something exists or it doesn’t. Nothing is inherently “weird”. Saying something is weird is just talking about your emotional response to the idea of it.
…the problem of measurement… Multiple Worlds is one popular solution to that problem.. [but] is not part of standard physics, but is an interpretation of standard physics.
You’re implying that this problem (of measurement) *is* part of “standard physics”. However, if what we currently know leaves things open to different interpretations, some of which don’t include this problem, then you can’t really say the problem is definitely there. It’s really only there relative to certain interpretations.
Anon:
You are too hung up on my cutesy phrase ‘actual possibility.’ I gave the broad-brush reasons why the neuronal should be the mainstream, and I don’t buy your arguments to the contrary, to the extent that I understand them. At my neuroscience blog I’m presently going through the evidence in rather excruciating detail.
You are working in a competing system to some degree. I can’t kill it. I don’t even understand it. I encourage you to do the experiments or present the evidence that supports your view. Give a positive story. Your attempted refutations of the mainstream (garden-variety biologist’s) view don’t work, and frankly aren’t likely to succeed (e.g., saying ‘we have absolutely no idea what conscious experience is’ suggests you are being a bit premature in your claims that it can’t be neuronal). If you are right, then there is an alternative story that needs to be told. Tell it. Not in this thread, necessarily, but somewhere.
As for QM, to say the measurement problem isn’t part of standard physics, as if this is an objection to anything I said, is strange. The measurement problem is the problem that measurement isn’t explained by standard physics! It has been on the mind of physicists since the theory originated. It’s not a merely philosophical worry, but based on evidence. Experimentalists observe single outcomes, but the Schroedinger equation doesn’t give you a single outcome (as pointed out by Schroedinger). If what you observe and what the theory says don’t match up, then that is a very serious problem. That is why one popular class of QM formulations includes measurement operators, as theorists wanted to be able to account for observations. Any good introductory philosophy of physics text will discuss this history.
While physics can’t tell us why we observe single values of variables that were previously in a superposition of states, if you insist that isn’t a problem, fine. We strongly disagree. To say a priori that physics won’t expand to include measurement would be presumptuous. As I have already said, I was just covering my bases by discussing a theoretical possibility, so no worries; this topic has totally become a garden path anyway.
It seems to me that the controversy over the hard problem is related to the question of whether we really know what we ‘seem to know’ about our own qualia. I wrote an eprint that touches on the issue, ‘The partial brain thought experiment: partial consciousness and its implications’, and I would like to know what you think of it.
https://cogprints.org/6321/
Thanks,
Jack Mallah
jackmallah@yahoo.com
Jack,
I like your paper, but you need to realize that there is a difference between a partial consciousness and partial qualia. Partial qualia seems to be an incoherent notion, and you seem to somehow conflate partial or limited consciousness with some kind of “fading” of the qualia, yet you fail to convince me that the qualia themselves “fade” as they become more limited in your cases.
A partial consciousness by your terms in the paper may have inaccurate or more restricted qualia, but its qualia and its intentionality of subjective experience would still fully exist.
For example, consider that a color blind person still has fully functioning subjective experience, even if that individual does not have the same level of “color consciousness” as a person with normal color vision.
Bill,
Glad you liked it. I don’t see anywhere in the paper where I suggested fading qualia, so I’m not sure what you are talking about. In fact I specifically say “consciousness must gradually vanish in this case, not by fading but by becoming more and more partial.”
Any comments on the main points of the paper: the refutation of the “Fading Qualia” argument and of Bishop’s attack on counterfactuals, and the relevance of partial consciousness to the ‘hard problem’ or in other words the dualism/physicalism/eliminativism debate?
Jack
I think you have a good opposition to the “fading qualia” argument. My issue is that I do not think your concept of “fading” consciousness is properly formed.
Consciousness, whatever it is ultimately, is expressed via two measurable axes of function:
1. Alertness. A dreaming man has qualia and is NOT conscious.
2. Content. A 6 week old fetus is alert in that it responds to stimuli, but it most likely does not have the cognitive content of qualia that we would identify as conscious content of thought, were we to have such qualia.
You are proposing that a decrease in content of consciousness is all you would need to do to create partial consciousness, but you need to address the alertness part of consciousness as well.
Alertness means that we are aware of the qualia generated by our environment, independently of our intelligence as such. So a limited brain, only able to do 2-digit addition, might be MORE conscious on the alertness axis than a drowsy brain that can otherwise do long division 🙂
You said of creationists: “their creativity, their scientific curiosity, and (most importantly) their obsession with evidential details and brainstorming about possible mechanisms are all shut off.”
The same might be said of materialists. When materialists are confronted with the possibility that the regress/homunculus arguments suggest that materialism is a theory that should be rejected, because it is in conflict with the existence of conscious observation, they just look blank. The average materialist will say that if you examine the way information travels into the eyes and on into the brain, then the existence of a mind would require a homunculus, therefore mind is false. If I say, yes, if you use this theory of the creation of observation an infinite regress occurs, so the theory is false, all I get is a blank. See https://newempiricism.blogspot.com/2009/03/materialist-should-read-this-first.html
Most materialists even equate materialism with physicalism without realising they are different! It is extraordinary.
John: for one, nothing in physicalism requires a homunculus. Maybe that is why you are getting blank stares.
Second, while not required by physicalism, it is an empirical question whether there is a homunculus in a nonvicious sense. E.g., not a literal additional conscious person observing sensory representations in visual cortex, but some kind of additional neuronal system that monitors the sensory representations.
Equating physicalism and materialism is no big deal, certainly not extraordinary. As long as someone is clear and consistent in the use of their terms I’d rather not get bogged down in such linguistic quibbles.
You say: “Equating physicalism and materialism is no big deal, certainly not extraordinary.”
I think it is a big deal because materialism is a closed set of concepts, i.e., all physical phenomena are due to matter and the motions of matter, whereas physicalism allows extra phenomena (such as QM entanglement, the curvature of spacetime in gravity, QM interference in time, etc.).
If materialism is equated with physicalism then we are insisting that the brain is a conceptually simple machine and that conscious experience must be an equally simple phenomenon. Indeed some eliminativist materialists go so far as to insist that if conscious experience does not conform to a materialist ideology it does not even exist!
A good example of a phenomenon that is difficult to explain within materialism but tractable within physicalism is the “specious present”. According to materialism it is actually impossible to have an extended present moment that truly stretches through time. As a result materialism declares that experiences that contain movement, whole bars of tunes or whole words extended in time must be false or mistaken.
It’s fine if you want to use those definitions of ‘materialism’ and ‘physicalism’.
As far as your last paragraph, I don’t know about ‘materialism,’ but I do know that neuroscience is happy to deal with phenomena that are spread out over time, such as movement, temporal patterns of neuronal activity in populations of neurons, the temporal pattern of the action potentials in single neurons, etc. Indeed, individual neurons are pretty much integrators where time is the variable of integration.
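(A minimal sketch of what I mean by ‘integrator’: in the textbook leaky integrate-and-fire idealization, the membrane potential obeys

\tau_m \frac{dV}{dt} = -(V - V_{rest}) + R\, I(t)

with a spike emitted whenever V crosses a threshold, so the cell’s output depends on its input current I(t) integrated over time. Real neurons are more complicated, but the point about time being the variable of integration carries over.)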
In my discussion with anonymous above, I discussed my attitude toward QM more generally, why I think it is not necessary.
Something I didn’t mention is the main error to be wary of in advocates of QM approaches is the content-vehicle confusion. That is, the property experienced (the content) isn’t necessarily displayed by the thing doing the experiencing (the vehicle). The word ‘red’ isn’t red. I see things outside my body, such as the computer on my desk. That doesn’t mean that the thing doing the experiencing is outside my body. The content-vehicle distinction is clear in such cases, but can be easy to forget (especially with temporal and spatial contents). I was once convinced that we needed QM because of the supposed field-like nature of my visual experience. This likely motivated the Gestaltists and their neuronal field theories of perception. Some people think we need QM because of the apparent unity of experience. In general, it is very tricky to make inferences from experience (the content) to experiencer (the vehicle).
However, as I said above to anonymous, if someone has looked at the evidence (there is lots of neuronal data at this point) and still thinks QM is needed, they should by all means explore it. Do the science, get the evidence.
For me the crucial role of modern physics in this debate is not that it delivers a particular answer to the problem of mind but that it undermines materialism. If materialism is discarded then we can say that we have an observation (for instance the space and time of conscious experience) and require a testable, scientific hypothesis to explain it. If we keep materialism, a theory that is known to be false, we end up arguing on the basis of this false theory that the observation doesn’t even happen.
On the subject of the content-vehicle distinction, I can only find things arranged in space and time as my conscious experience. So, if I am to explain my experience I must explain this arrangement. Were it not for the extension of events in time I could happily ‘do a Dennett’ and declare that I cannot really know my mind but the extension of events in time is the greatest part of me. I would be nothing in no time at all (see Time and conscious experience).
I’m very tired, so this is probably a bad idea, but I’m going to have a shot at this.
Every living thing can be assigned a goal which it will defend. Usually, failing to defend this goal means death.
Alright, hit me.
This is disingenuous. The zombie argument is based on the assumption that consciousness is distinct from X — if it weren’t, zombies would be indistinguishable from us. All the metaphysical arguments against physicalism boil down to this “zombic hunch” that consciousness is distinct from X — they aren’t derived from the fact that the problems of explanation are distinct. And “discrimination or report, say” is a caricature — Dennett’s discussion on “clout” or “fame” in the brain is not simply an explanation of some feature like “discrimination”, it’s an explanation of consciousness in terms of physical processes. And such “clout” or “fame” are not brain processes or functions, they are *explanatory characterizations* of brain processes that are meant to explain *consciousness*, not some brain feature or process.
The reason why Chalmers does not feel the need to elaborate further is that he assumes that everyone has consciousness and experiences qualia, so no further explanation or proof is necessary. It seems it is not the case and consciousness is not universal. Why I believe that: Some disorders cause people to never experience emotions. Such people talk exactly like this about emotions. For example they insist that the word “sad” is a “social construct” “poorly defined” or “devoid of actual meaning”. And almost inevitably they believe it is because they are smarter than others so they can see through this “game” and how useless it is, they almost never realize they are different, they believe that others too fake emotions just to get along with other people etc.
“The reason why Chalmers does not feel the need to elaborate further”
He doesn’t feel that.
“is that he assumes that everyone has consciousness and experiences qualia”
He doesn’t merely assume that, it follows from his arguments about organizational invariance.
“so no further explanation or proof is necessary”
Does not follow.
“It seems it is not the case and consciousness is not universal. Why I believe that: Some disorders cause people to never experience emotions.”
a) No they don’t. b) Even if they did, that wouldn’t mean they aren’t conscious.
“Such people talk exactly like this about emotions. For example they insist that the word “sad” is a “social construct” “poorly defined” or “devoid of actual meaning”.”
None of that entails not feeling emotions.
“And almost inevitably they believe it is because they are smarter than others”
If they feel that, and if it is in reference to people like you, they may well be right — that follows from your absurd, unsupported, and unverifiable claim about what others believe.
“so they can see through this “game” and how useless it is, they almost never realize they are different, they believe that others too fake emotions just to get along with other people etc.”
The claims you attribute to them are not about faking emotions.
Zombies are supposedly physically indistinguishable from conscious humans. How then did you determine that there are zombies among us? That fact is that your post is ideology, not philosophy or science.