I am writing a thesis on the possibility of machine consciousness (e.g., the possibility of creating a silicon-based system that has subjective, qualitative experiences in the same way we take ourselves to have them). In my informal (and very limited) polling, it seems that many philosophers are sympathetic to the project.
However, not all functions can be realized by all structures (e.g., you can’t make a car out of paper towels). Given the many fundamental asymmetries in structure between humans and silicon-based systems, it is at least plausible that human consciousness cannot be realized in silicon-based structures.
Of course, proponents of machine consciousness will reply that the properties relevant to consciousness are multiply realizable, but do they have any evidence?
I was wondering what others take to be the strongest arguments either for or against the possibility of machine consciousness. What is the strongest reason to think human consciousness is realizable in silicon? And what is the strongest reason to think it is not? Centrally, what aspect(s) of a putative realizer seem most relevant to realizing human-like consciousness?
One bit of evidence is that individual neurons (perhaps it is more accurate to say certain essential electrical properties of individual neurons) have effectively been replaced by digital counterparts. See, e.g., “Dynamic clamp: computer-generated conductances in real neurons” and “Extended dynamic clamp: controlling up to four neurons using a single desktop computer and interface,” plus a cool application in “Inhibitory synchronization of bursting in biological neurons: Dependence on synaptic time constant.”
I know such results are far from conclusive, but they are interesting nonetheless for those thinking about neuron-replacement scenarios.
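For readers who haven’t met the technique: dynamic clamp closes a software loop around a living neuron. Below is a minimal sketch of that loop (my gloss, not the authors’ code), assuming a generic amplifier interface; read_voltage and inject_current are hypothetical placeholders, not any real driver’s API.

```python
# Sketch of the dynamic-clamp feedback loop used in the cited studies:
# a computer reads a real neuron's membrane voltage, computes the
# current a simulated conductance would pass at that voltage, and
# injects it back into the cell. read_voltage and inject_current are
# hypothetical stand-ins for the amplifier/DAQ interface.
import time

def dynamic_clamp(read_voltage, inject_current, g, e_rev, dt=1e-4):
    """Emulate a membrane conductance g (with reversal potential e_rev)
    in a living neuron via closed-loop current injection."""
    while True:
        v = read_voltage()       # membrane potential (mV)
        i = g * (e_rev - v)      # ohmic current the conductance would pass
        inject_current(i)        # feed it back into the cell
        time.sleep(dt)           # real rigs close the loop at ~10-50 kHz
```

The point of the sketch is just that the “conductance” lives entirely in software, yet the biological cell treats it exactly like one of its own.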
These are very hard questions. Dave Chalmers gives what seems to me one of the best arguments for the possibility of machine consciousness in an unpublished paper (https://consc.net/papers/computation.html), but I don’t think his argument works (for why, see my https://www.umsl.edu/~piccininig/Computational_Functionalism.htm). That’s not to say that machine consciousness is impossible; just that it’s difficult to argue conclusively that it’s possible.
there is of course a huge literature on this. your point is a good one: not all functions can be realized by arbitrary structures. defining just what *any* property of consciousness is poses quite a challenge right there, much less *all* properties of consciousness.
To make a long story short, I’m quite optimistic about the project. Even those (putative) properties that cannot be exactly duplicated due to some shortcoming of structure can surely be approximated to a very high degree.
Thank you, Eric. Those are intriguing studies.
I certainly agree that it is more accurate to interpret those studies as showing, charitably speaking, that some subset of electrodynamical neuronal properties (as opposed to whole individual neurons) is multiply realizable.
Arguably, however, those silicon-realized properties only achieve (relevant) functionality within the context of the embedding neurobiological environment, and it seems unclear how we would generate the full range of consciousness-realizing neuronal properties entirely in silico.
Of course, the key question is: what are the consciousness-realizing neuronal properties?
As you say, such studies are interesting, but they are far from conclusive as regards the question of machine consciousness (which, for my purposes, involves silicon-based systems not embedded within or incorporating biological tissue).
Most likely we won’t need to go below the level of conductance-based dynamics of individual neurons to explain consciousness (e.g., Hodgkin-Huxley-type dynamics, perhaps with some noise thrown in, and of course synaptic connections). If such dynamics can be properly replicated in nonbiological tissue, we should be able to create a conscious machine. The papers I cited show they can be replicated in nonbiological tissues. Obviously we don’t expect them to replicate the color of the neurons or their weight: it’s their electrical properties that are important for consciousness.
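Since the thread keeps leaning on Hodgkin-Huxley dynamics, here is a minimal, self-contained simulation of the classic single-compartment model (textbook squid-axon parameters, forward-Euler integration). It is a sketch of the kind of dynamics being discussed, not a claim about what a conscious system would actually require.

```python
# Minimal single-compartment Hodgkin-Huxley neuron, standard parameters.
import math

C = 1.0                                # membrane capacitance (uF/cm^2)
G_NA, G_K, G_L = 120.0, 36.0, 0.3      # peak conductances (mS/cm^2)
E_NA, E_K, E_L = 50.0, -77.0, -54.4    # reversal potentials (mV)

def rates(v):
    """Standard HH voltage-dependent opening/closing rates for m, h, n."""
    am = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    bm = 4.0 * math.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    an = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    bn = 0.125 * math.exp(-(v + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def simulate(i_ext=10.0, t_ms=50.0, dt=0.01):
    """Forward-Euler integration of one HH compartment driven by a
    constant current i_ext (uA/cm^2); returns the voltage trace (mV)."""
    v, m, h, n = -65.0, 0.05, 0.6, 0.32    # approximate resting values
    trace = []
    for _ in range(int(t_ms / dt)):
        am, bm, ah, bh, an, bn = rates(v)
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        i_ion = (G_NA * m**3 * h * (v - E_NA)
                 + G_K * n**4 * (v - E_K)
                 + G_L * (v - E_L))
        v += dt * (i_ext - i_ion) / C       # membrane equation
        trace.append(v)
    return trace

if __name__ == "__main__":
    print(f"peak voltage: {max(simulate()):.1f} mV")  # spikes overshoot toward E_NA
```

Note that nothing in the model mentions carbon, lipids, or enzymes: the entire description is electrical, which is exactly the substrate-neutrality intuition at issue.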
That’s the line of thought I was considering when I cited those studies, anyway. You could argue that the electrical properties are important, but so is the distribution of electrical properties in the neurons (e.g., in the dendritic tree), and that is something they didn’t reproduce directly. However, making a device that reproduces the relevant consequences of said geometry shouldn’t be all that hard to pull off. Maybe.
I am not sure why being tethered to a real biological system is all that big a deal. Say we left the sensory and motor transducers in an intact body, and replaced the parts of the brain essential for consciousness with some machine. Assume for the sake of argument that this cyborg is conscious. That should be sufficient for the conclusion that nonneuronal machines can instantiate consciousness. Then we could take that conscious cyborg and replace the biological transducers and body parts with more prosthetics (neuroprosthetics research is one place where we have clearly made a lot of progress).
Note that if you are asking whether we could replicate consciousness in a non-mobile machine without sensory or motor transducers, just a hulk sitting on a desk, that is a trickier question. However, isn’t that pretty much what happens while we are dreaming? Dreaming is done by a system largely disconnected from the body and environment. If so, then by the logic I applied above, we should be able to create something like dreaming in a system that implements the relevant dynamics of a real brain, even if it has no body and no sensory/motor transducers.
On the other hand, someone may have a competing externalist intuition suggesting it isn’t possible. That is, they’d agree we could recreate the relevant causal/functional architecture, but hold that such things are not sufficient for the fixation of conscious contents. My response would be that since the machine will be built to recreate the dynamics of real human brains (or one real human’s brain), we’d effectively incorporate the relevant historical details into the reproduction, so we get the contents for free.
Thanks, Gualtiero.
I appreciate that Chalmers’ article is a direct attempt to provide positive arguments for the possibility discussed here. However, I have a number of concerns with his argument. In addition to being skeptical about the general utility of the conception of computation he lays out, I also thought, as you note in your paper, that he was unsuccessful in his attempt to steer between the Scylla of causal abstraction and the Charybdis of causal precision (which, by undermining his account of organizational invariance, undermines the clarity of the relation between computation and cognition/consciousness).
To take one example, he says, “if we gradually replace the parts involved in digestion with pieces of metal, while preserving causal patterns, after a while it will no longer be an instance of digestion.” This seems incoherent. As the key causal patterns in digestion are catalytic/enzymatic cascades, which would be obliterated by his proposed replacement of parts, I can’t see how Chalmers is entitled to the antecedent. This kind of ambiguity in his explication of causal pattern isomorphism infects Chalmers’ entire project.
IMO, this kind of glossing of physiological detail is what makes machine consciousness seem more plausible than perhaps it is.
Lots of interesting thoughts, Eric. I just have time for a quick reply at present.
My intuition is that the distribution of electrical properties, as you put it, is critical…and not just in individual neurons, but throughout the neuropil. There are large-scale collective electrodynamics. I am not sure how these might be reproduced. Perhaps one might argue that they are irrelevant to the realization of consciousness, but this would require substantive argument, I think, given the data on the relation between EEG recordings and levels of consciousness.
And, briefly put, my concern with your idea that it shouldn’t be too hard to create a device that can recreate the geometry of distributed electrical properties is the following. If collective electrodynamics do turn out to be critical to realizing consciousness, how would we replicate such properties given the hierarchical structure of electro-responsive elements in neurological tissue (water content, ions, membranes, neural populations, etc.)?
for what it’s worth, the discussion in that paper is just a brief summary of an argument made more extensively elsewhere, e.g. in chapters 7 and 9 of “the conscious mind”.
re the putative “incoherence”, it’s worth noting that by “causal patterns” i mean something quite specific: the abstract structure of causal interactions captured by a CSA (combinatorial-state automaton) description. one could certainly implement such patterns in a computer without real enzymes.
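For readers unfamiliar with the term: a combinatorial-state automaton, as Chalmers uses it, is roughly a state machine whose total state is a vector of substates, each of which updates as a function of the input and the other substates. A minimal sketch of that structure follows (my own illustrative gloss, not code from the paper; the two-substate example is hypothetical).

```python
# Toy combinatorial-state automaton (CSA): the total state is a vector
# of substates, and each substate's successor depends on the input and
# on the whole current state vector. Any physical system whose parts
# mirror these transitions implements the same "causal pattern",
# whether the parts are enzymes or transistors.

def csa_step(state, inp, rules):
    """Advance the CSA one step: rules[i] maps (input, state-vector)
    to the next value of substate i."""
    return tuple(rule(inp, state) for rule in rules)

# Hypothetical two-substate example: substate 0 toggles with the input,
# substate 1 copies substate 0.
rules = (lambda inp, s: (s[0] + inp) % 2,
         lambda inp, s: s[0])
state = (0, 0)
for inp in (1, 0, 1):
    state = csa_step(state, inp, rules)
print(state)  # final state after the input sequence
```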
Hi Matthew,
If you are not already familiar with it, I recommend this paper by Daniel Dennett (especially the section entitled “Broad Functionalism and Minimalism”):
https://ase.tufts.edu/cogstud/papers/zombic.htm
The correlation of EEG signals with conscious states isn’t very convincing evidence that consciousness is constituted by such coarse voltage signals. Such large-scale signals are essentially the linear sum of the voltages generated by the population of individual neurons following Hodgkin-Huxley-type dynamics (and cable properties) in the vicinity of the electrode. They should reduce to conductance-based properties. Indeed, in Destexhe and Sejnowski’s wonderful book ‘Thalamocortical Assemblies’, they actually reduce the large-scale oscillatory activity generated in sleep-wake cycles to the dynamics of populations of individual neurons following HH-type dynamics.
There is no success story in neuroscience (for those interested in computational and systems-level phenomena, anyway) like the conductance-based models birthed by Hodgkin and Huxley. The burden of proof should probably be on those who want to say such dynamics are not sufficient for consciousness, given that this is the dominant overarching theoretical framework in modern neuroscience.
That said, you are right that if volume conduction turns out to be important for consciousness, then my specific argument would not work. If the aqueous neuronal environment that allows for volume conduction turns out to be important, that would be against what I’ve said. I was assuming that such effects are extremely weak compared to synaptic interactions, do not constitute the basis of consciousness, and that Hodgkin-Huxley type models are sufficient (for that matter, my hunch is that the computationally simplified integrate-and-fire dynamics will be sufficient).
I had extensive arguments with Jeff Yoshimi about this topic when he was writing his article “Field Theories of Mind and Brain,” in which he stakes out a position similar to the one you seem to be suggesting. You might want to check out his article.
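As a toy illustration of the reduction I’m describing (macroscopic field signals as the linear sum of many single-neuron voltages), here is a sketch that simulates a population of leaky integrate-and-fire neurons and records the summed membrane potential as a crude stand-in for a field signal. The parameters and the sum-as-EEG reading are simplifying assumptions; real EEG forward modeling is far more involved.

```python
# Noisy, unconnected leaky integrate-and-fire population; the summed
# membrane potential per timestep serves as a crude field proxy.
import random

def lif_population(n=200, t_ms=500.0, dt=0.5, tau=20.0,
                   v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    vs = [v_rest] * n
    field = []
    for _ in range(int(t_ms / dt)):
        for i in range(n):
            drive = random.gauss(16.0, 4.0)      # noisy input (arbitrary units)
            vs[i] += dt * (-(vs[i] - v_rest) + drive) / tau
            if vs[i] >= v_thresh:                # threshold crossing: spike
                vs[i] = v_reset                  # ...and reset
        field.append(sum(vs))  # the macroscopic signal is just a linear sum
    return field

if __name__ == "__main__":
    f = lif_population()
    print(f"field range: {min(f):.0f} to {max(f):.0f} (summed mV)")
```

The design point: nothing over and above the single-neuron dynamics was added to get the population-level signal, which is the sense in which the large-scale activity “reduces.”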
What has always surprised me as an outsider to this debate is how much the study of consciousness has been reduced to the study of the authority and autonomy of the conscious mind, when that ignores most of the documented evidence we have for how consciousness functions. The theory of the unconscious does not begin with Freud; it doesn’t begin with Montaigne; and it won’t end with Philip Roth, though he makes his living denying the autonomy of the subject. His is the philosophy of the anti-expert, asking himself and anyone else: “What am I doing? WTF am I doing!?” In the supposed victory of cognitive “science” over behaviorism, the documented facts of conditioned response, of neuroses and pathological action, are ignored, with no justification beyond ideology. The obvious question, refusing finally to ignore the majority of the recorded data, would be this:
How would you make a neurotic machine?
Only purblind arrogance (after the 20th century maybe we should call this pathological as well) keeps people from phrasing the question this way. Cognitive Science is to consciousness what Donald Rumsfeld is to war.
If I’m running from a lion who’s pawed a gash in my leg, my body is communicating information by means of the qualia of “pain,” while a robot programmed for its own preservation will receive feedback in concrete quantitative terms. Biomechanical qualia are quanta: vast amounts of data known to us only in totality as sense.
Biological machines are capable of reason but are programmed also by conditioning, and reason and reflex can produce contradictory imperatives. If there’s a “choice” to be made, which mechanism is it that “makes” the choice?
Consciousness is not complex calculation; it’s indecision. Create an indecisive computer, a neurotic computer, torn (having been given the imperative to survive) between the heuristics of conditioned response and calculation, and you’ll have a conscious non-biological machine.
The question then becomes if the choice is real at all, or if consciousness as epiphenomenon is no more than the illusion of a unity, or resolved conflict, between the fully physicalist imperatives of computation and conditioned reflex.
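A deliberately crude toy of the “neurotic machine” proposal above, for concreteness: an agent with a survival imperative whose conditioned reflex and deliberate calculation disagree, leaving it to dither. Every name and number here is hypothetical; this illustrates the structure of the idea, not a theory of consciousness.

```python
# Toy agent torn between conditioned reflex and deliberate calculation.
import random

def reflex(stimulus):
    """Conditioned response: flee anything resembling past harm."""
    return "flee" if stimulus["resembles_past_harm"] else "stay"

def calculate(stimulus):
    """Deliberate appraisal: flee only if the estimated threat is high."""
    return "flee" if stimulus["estimated_threat"] > 0.5 else "stay"

def neurotic_agent(stimulus, max_dithers=10):
    """Dither while reflex and calculation issue contradictory imperatives."""
    r = c = None
    for step in range(max_dithers):
        r, c = reflex(stimulus), calculate(stimulus)
        if r == c:
            return r                               # imperatives agree: act
        print(f"step {step}: reflex says {r}, calculation says {c}")
        # re-appraise, noisily, and loop: the 'neurosis'
        stimulus["estimated_threat"] += random.gauss(0.0, 0.15)
    return random.choice([r, c])   # never resolved: 'choose' arbitrarily

print(neurotic_agent({"resembles_past_harm": True, "estimated_threat": 0.3}))
```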
Consciousness is a ridiculous concept.
James warned of this long ago.
See:
A. H. Louie and Stephen W. Kercel, “Topology and Life Redux: Robert Rosen’s Relational Diagrams of Living Systems,” Axiomathes 17(2):109-136, doi:10.1007/s10516-007-9014-z
and:
John A. Rieffel, Francisco J. Valero-Cuevas and Hod Lipson, “Morphological communication: exploiting coupled dynamics in a complex mechanical structure to achieve locomotion,” J. R. Soc. Interface 7:613-621 (2010), doi:10.1098/rsif.2009.0240, https://rsif.royalsocietypublishing.org/content/early/2009/09/23/rsif.2009.0240.abstract
Malcolm: it isn’t clear why you think it is ridiculous. I find it useful for describing binocular rivalry and other ambiguous stimuli, dreaming, hallucinations, etc. It seems fairly innocuous to me, as long as you don’t infuse it with spooky stuff.
More than binocular rivalry, dreaming, etc.: how else do we get our sense of being here and now in a coherent volumetric space? Our sense of being in the world is consciousness, a brain analog of the real world that is organized around a privileged egocentric perspective.
Like so many others, you seem to confuse the sensation of being conscious with conscious behavior. Your brain makes you feel there is a little free-willed creature inside your body.
I call the process of seeking consciousness by investigating your sensations “the hall of mirrors.”
I have explored these issues in depth and have rejected mimicking sensations as a sterile avenue: you will never learn to build a projector by watching movies.
I have recently completed a clear and machine suitable definition of consciousness. Check the site.
Yours,
JETardy.