Next week I am heading up to SUNY Fredonia to give two talks as part of the Young Philosophers Lecture Series. Here is a rehearsal of the first talk, which is my most recent attempt to show that Rosenthal’s HOT theory is committed to cognitive phenomenology.
I can’t see anything. What format are you using?
It’s in flash format…here is a link to the .ogg file…
First, does anyone really say that thoughts, beliefs, etc. have no qualitative properties? If that were true, how on earth would you ever know you had a thought? It is for considerations like this that I call the redness of red the gateway drug of qualia. It’s great to bring the intuition that qualia exist to people who have never considered it before, but it is the proverbial tip of the iceberg.
Secondly, however, I still don’t see the plausibility of HOT theories. Many of the Rosenthal quotes in the video were straightforwardly circular in their definitions or explanations of consciousness. Let’s look at Jerry in pain. The HOT is the second balloon pointing at the first. OK – as naturalists, let’s talk about that arrow, the pointy part of the HOT balloon. Some information channel? Some number of bits per second? Perhaps bidirectional? It had better be something like this. If the HOT is doing all the work in terms of consciousness, then we could swap out the FOT (first-order thought) entirely for something like a playback tape that merely maintained the appropriate conversation on that communications channel. Then, barring anything spooky and mysterious, the HOT would never know the difference, and would still be conscious in every way it would have been in a case of “real” pain (for example). So then all we’ve done is push the Hard Problem up a level (lather, rinse, repeat). Why should the HOT be conscious based on the bits over that particular wire, and why isn’t that just the same as the original question of why we should be conscious given the bits on the wires from our senses?
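The playback-tape point can be made concrete with a toy sketch (the names and the message format here are my own invention for illustration, not anything from Rosenthal): if the higher-order system’s state depends only on the bits it receives over the channel, then a recording played back on that channel is, to it, indistinguishable from the live first-order system.

```python
# Toy sketch of the "playback tape" argument (all names hypothetical).
# B (standing in for the HOT) sees only messages on a channel; it cannot
# distinguish a live first-order system A from a tape replaying A's
# earlier messages.

def live_A():
    """A 'real' first-order state emitting messages on the channel."""
    yield from ["pain onset", "pain intensity 7", "pain fading"]

def playback_tape(recording):
    """A replay of a previous conversation; no first-order state at all."""
    yield from recording

def B(channel):
    """The higher-order system: its resulting state is a pure function
    of the bits it receives."""
    return [f"I am aware of: {msg}" for msg in channel]

# Record one run of the live system, then feed B the tape instead.
recording = list(live_A())

# B ends up in exactly the same state either way.
assert B(live_A()) == B(playback_tape(recording))
```

The assertion holds trivially, which is the point of the argument: whatever makes B conscious would have to reside in the bits themselves, and we are back to asking why any system should be conscious in virtue of receiving some bits.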
In general, people who try to “naturalize” consciousness try to sneak an awful lot of magic into the innocent-seeming notion of intentionality (The HOT is about the FOT, therefore consciousness. We are self-representing in some suitably integrated way, therefore consciousness. We are embodied representational systems, therefore consciousness . . .).
Hi John, thanks for the comments!
The circularity I was talking about is evident in statements like “…it is that the HOT represents oneself as being in some state, which makes one aware of the state that they are in”. I find that we can think more clearly about things if we do not use prejudicial terminology. Let’s not talk about the FOT and the HOT. Let’s talk about A and B, and the bits-on-a-wire channel between them. If the consciousness actually happens in B, regardless of how we implement A (as long as it keeps up its end of the conversation properly), then we have the good old Hard Problem: why should B be conscious just because it gets some bits over a wire? We can say the bits are “about” A or “targeting” A if you like, but such terminology does not answer the question.

“The HOT represents oneself as being in some state” – what does this mean? It has a self-model? Like the publisher’s catalog that lists itself?

You say you believe that the connection between A and B is based on causal connections, which means I assume you won’t object to my bits-on-a-wire characterization. Rosenthal says no, it is really “intentional properties of the content of the thoughts”? I have no idea what this means. If we are really naturalists, we must sharpen this up a lot. Bits, bytes, and billiard balls. If he believes in some extra-physical force called “intentionality”, or some being that transcends an actual data structure called the “content” of that data structure, we have left naturalism behind on the side of the road.
“…From their point of view it will seem to them as though they are in the FOT”: if we are going to be materialists, we don’t get to say “from their point of view” anything “seems” like anything. To a computer (broadly construed as a functional/physical system of the type that materialists believe could support consciousness given the right algorithms and data structures), everything is purely dispositional. There is no point of view, no seeming. Don’t anthropomorphize computers – they don’t like it.
Basically, what I’m getting at is this: if you can explain consciousness in terms of purely dispositional states, in terms of physical systems, great. But I am extremely wary of people sneaking the ghost into the machine by loading their arguments with terms like “about”, “content”, and “seems”.
There is no ghost being slipped in here, and there certainly is no circularity. The theory employs psychological terms because it is a psychological theory…you want to talk about the neural implementation of this psychological theory, and that is fine, but we need the psychological theory…intentional properties can be perfectly natural properties; I, and many other philosophers and cognitive scientists, think so so no one has left naturalism on the side of the road…Rosenthal has written a lot on intentionality if you are interested…
I must say I’ve never read a convincing account of how intentionality might be naturalized. Of course nothing keeps us from speaking figuratively, and saying that photons from distant stars represent those stars. But defining intentionality so broadly buys us nothing beyond physical causation.
The ghost I think you are slipping into the machine, the anthropomorphism, comes from talking about inanimate systems as having points of view, of their being aware, or of things seeming to them like anything at all. When we describe what goes on in our minds, we naturally use psychological terms. But merely restating our own intuitions in this way is different from explaining them naturalistically. Yes, when I have a thought that I am in a certain mental state I become aware of that mental state, etc., but this doesn’t make a dent in the Hard Problem, and the explanatory gap is as unbridgeable as ever. I have no problem when you keep things in the first person (I apply concepts to my first-order states, I represent the world in a certain way), but the main problem is explaining how something other than “I” could do all that (even a brain, naturalistically described).
Seeing red on one hand, and knowing things on the other, seem to us like different categories of mental stuff. Yet there seems to be no daylight between seeing red and knowing that I am seeing red. They appear to be two sides of the same coin. My general complaint about HOT theories is that they simply restate this admittedly mysterious situation without actually dissolving it.