Yesterday I attended Dave Chalmers’ session of the Mind and Language Seminar, where we discussed his new paper on the singularity. I had previously seen him give this talk at CUNY, and I was looking forward to the commentary from Jesse and Ned and the discussion that followed.
Jesse talked for an hour, summarizing the argument and making some objections. The two that stood out to me were, first, his claim that Human extinction is more likely than the singularity (he outlined some cheery scenarios, including alien attack, global pandemic, a science experiment gone bad, and a depressed teenager with a nanonuke). Jesse’s other objection was to Dave’s argument that a functional isomorph of a conscious entity would itself be a conscious entity. Dave uses his dancing qualia/fading qualia argument here. The basic idea is that if I were to actually undergo a gradual swapping of neurons for computer chips, it seems counterintuitive to think that my consciousness would cease at some point, or that it would fade out. In the context of the singularity this comes up if we consider uploading our minds into a virtual environment: will the uploaded virtual entity be conscious? Dave thinks that the fading qualia/dancing qualia intuitions give us good reason to think that it will. The people who upload themselves to the virtual world will be saying things like ‘come on in; it’s fun in here! We’re all really conscious, we swear!’, so why wouldn’t we think that the uploaded entities are conscious? Jesse worried that this begs the question against people, like him and Ned, who think that there is something about biology that is important for consciousness. So, yeah, the uploaded entity says that it is conscious, but of course it says that it’s conscious! We have stipulated that it is a functional isomorph! Jesse concluded that we could never know whether the functional isomorph was conscious or not. Dave’s position seemed to be that when it comes to verbal reports, and the judgments they express, we should take them at face value unless we have some specific reason to doubt them.
During discussion I asked if Dave thought this was the best that we could do. Suppose that we uploaded ourselves into the virtual world for a *free trial period* and then downloaded ourselves back into our meat brains. Suppose that we had decided that while we were uploaded we would do some serious introspection, and that after we had done this we sincerely reported remembering that we had had conscious experience while uploaded. It seems to me that this would be strong evidence that we did have conscious experience while uploaded. Now, we can’t rule out the skeptical hypothesis that we are erroneously remembering qualia that we did not have. I suggested that this is no different from Dave’s view of our actual relationship to past qualia (as came out in our recent discussion of a similar issue). So, I cannot rule out with certainty that I did not have qualia five minutes ago, but my memory is the best guide I have, and the skeptical hypothesis is not enough to show that I do not know that I had qualia; so too in the uploaded case I should treat my memory as good evidence that I was conscious in the uploaded state. Jesse seemed to think that this still would not be enough evidence, since the system had undergone such a drastic change. He compared his position to Dennett’s position on dreams. According to Dennett, we think we have conscious experiences in our dreams based on our memories of those dreams, but we are mistaken: we do not have conscious experiences in our dreams, just beliefs about them upon waking. This amounts to a kind of disjunctivism.
I still wonder if we can’t do better. Suppose that while we are uploaded, and while we are introspecting a conscious experience, we ask ourselves if it is the same as before. That is, instead of relying on memory outside of the virtual world, we rely on our memory inside the virtual environment. Of course the zombie that Jesse imagines we would be would say that it has conscious experience and that it is introspecting, etc., but if we were really conscious while uploaded we would know it.
Ned’s comments were short and focused on the possibility that Human intelligence might be a disparate “bag of tricks” that won’t explode. A lot of the discussion centered on issues related to this, but I think that Dave’s response is sufficient here, so I won’t really rehash it…
I also became aware of this response to Dave from Massimo Pigliucci, and I want to close with just a couple of points about it. In the first place, Pigliucci demonstrates a very poor grasp of the argument that Dave presents. He says,
Chalmers’ (and other advocates of the possibility of a Singularity) argument starts off with the simple observation that machines have gained computing power at an extraordinary rate over the past several years, a trend that one can extrapolate to a near future explosion of intelligence. Too bad that, as any student of statistics 101 ought to know, extrapolation is a really bad way of making predictions, unless one can be reasonably assured of understanding the underlying causal phenomena (which we don’t, in the case of intelligence). (I asked a question along these lines to Chalmers in the Q&A and he denied having used the word extrapolation at all; I checked with several colleagues over wine and cheese, and they all confirmed that he did — several times.)
Now, having been at the event in question, I can’t rightly recall whether Dave used the word ‘extrapolation’ or not, but I can guarantee that his argument does not depend on it. Dave is very clear that it is not extrapolating from the “successes” of current AI that grounds his belief that we will develop Human-level AI in the near-ish future. Rather, his argument is that intelligence of the Human variety was developed via evolution, which is a ‘blind’, dumb process. It seems reasonable to assume that we could do at least as good a job as a blind, dumb process, doesn’t it? If we can achieve this by an extendable method (for instance, artificial guided evolution), then we would be able to extend this Human-level AI to one that is superior to ours (the AI+) via a series of small increments. The AI+ would be better at designing AI, and so we would expect it to be able to produce an AI++. This is a very different argument from the simple extrapolation from doubling of computing speed that Pigliucci lampoons. I don’t know which colleagues Pigliucci consulted, but had he asked me I could have set him straight.
Finally, while it is certainly true that Dave is in no need of defending from me, and I am the last person who has the moral high ground in matters of personal conduct, I have to say that Pigliucci shames himself with his adolescent ad hominem abuse; that is truly behavior unbecoming of academic debate. So too, it is bizarre to think that Dave is the reason philosophers have a bad rep when in fact it is behavior like Pigliucci’s that is more the culprit. Dave is among those who represent philosophy at its best: smart, intellectually curious people thinking big and taking chances, exploring new territory, and dealing with issues that have the potential to profoundly impact Human life as we know it…all with grace and humility. You may not agree with his conclusions, or his methods, but only a fool doubts the rigor that he brings to any subject he discusses.
Richard, thanks for this interesting discussion. Are there any reasons to believe that our minds can be uploaded into a virtual world? I’m not even sure what that means unless we assume a strong version of computational functionalism (the mind as the software of the brain). That version of computational functionalism presupposes that the brain is a program-controlled computer. Is there any evidence of that?
Richard, in Chalmers’ article and his presentation at the singularity conference, he was very careful to talk about defeaters to AI+ and AI++. These defeaters would be things like the human extinction that Jesse talks about (or at least that is how I understand the defeaters). Did Jesse mention defeaters at all?
I think the only ‘evidence’ is supposed to be the intuition that if we duplicated the exact function of each neuron in your brain (say, by having nanomachines swarm inside your brain, attach to a neuron, learn to perfectly mimic its input/output, and then take over that function) you would still be conscious. If that is right, then it is at least in principle possible that we could digitize the functions of each of those nanomachines and thereby reproduce the brain virtually. If one thinks that the nanobrain is conscious, then there seems to be no reason to deny that the virtual nanobrain is conscious.
Yeah, the doomsday scenarios were the kinds of things Dave meant by defeaters. The only difference (I think) was that Jesse thought they were much more likely to happen than the singularity was, whereas I think that Dave offers them up more in the vein of ‘unless something catastrophic happens, or unless we actively try to stop it, we will have the singularity’.
So there is no assumption that uploading a mind preserves its numerical identity? It’s like making a copy of the original in virtual reality?
Yeah, that’s right. Dave wonders whether the uploaded mind will be numerically identical to the ‘meat mind’ and argues that the answer is no…we might then worry about survival (a la Parfit) instead of identity. Dave hints in the talk that he is open to the Buddhist view that every day one wakes up as a new person, but one whose goals are continuous with those of the person who was there yesterday, and if that is the way we think about the normal case then it isn’t a worry that it is true in the upload case.