How could we cast suspicion on the semantic poverty thesis discussed in Part III? (Recall this is the view that ‘no amount of analysis, conjunction, or <insert your favorite semantic construction method here> applied to concepts about brain states will yield a concept about subjective experience as such.’).
One way is to show that the aliens could in theory come to understand, based on their purely neuronal theory, the incompleteness of their initial understanding of cats. I described the bar that I’d like to see them jump:
We aren’t asking for the aliens to know the subjective character of red, but to realize that there is some experience there, period. It seems reasonable to expect a complete theory would leave the aliens in a position analogous to Mary before she leaves her black and white jail. That is, she knows she lacks color experiences (i.e., her brain has yet to instantiate the relevant color-seeing properties), and she knows she lacks the typical cognitive reactions to having that experience (e.g., she will not recognize red when she sees it the first time).
In the rest of this post I will suggest how the aliens could surmount this hurdle.
Augmenting the alien mind
Our aliens, realizing that they have neither brains nor a smonscious representational system, wonder what it would be like to have a smonscious brain. They find the representations uniquely interesting, and are intrigued by corollary phenomena deeply integrated with smonsciousness, such as the ability to recognize when smonscious states recur, smonscious visual attention, and smonscious decision making by the cats. What benefits or costs might accrue if they were smonscious?
They realize they cannot presently know this directly, because their minds are not shaped the right way. However, they can make predictions, using the model developed for the cat, about the consequences of augmenting their minds to include a smonscious representational system. Through years of diligent effort by their best theoretical physicists, psychologists, and biologists, they build an interface between a biophysically detailed model of the smonscious brain and their off-the-shelf model of their own minds (an alien simulation they named Keanu).
There is enough detail built into Keanu that they can simulate his performance on various tasks (such as visual recognition tasks), and they can even have conversations with Keanu so he can express his reactions when augmented to be smonscious. The plan is to put Keanu through a battery of psychological tests both before and after the augmentation procedure, to determine if the augmentation indeed speeds up his visual reaction time and short-term planning when faced with complex visual stimuli.
Once all the preparations are finished, the aliens initiate stage one of the plan, running Keanu in ‘alien only’ mode to troubleshoot and get baseline values for the battery of tests. Keanu reacts as expected based on their previous experience with him. After the series of initial tests is finished, they flip the switch that hooks him up to a virtual smonscious representational system. Keanu responds strangely:
Whoa…I’m sorry, but words will not do this justice.
Consider my experience of the redness of this ball. I am sure this is all the result of a great deal of information processing. I expect to have countless new subtle differences in my behavior and cognitive reactions now that my mind has been augmented. For instance, you will find that I can learn to recognize this shade of red (this amazing shade!) when it recurs.
But if you were to focus on such behavioral/information processing changes going on right now, that would miss what is most significant here. What is amazing is that I am literally seeing this ball, in some sense, for the first time.
I think I should shut up, and just stand by my first reaction as the most appropriate…
The alien researchers initially think this must be some mistake in their code. However, after enough troubleshooting and cross-checking of their models, they realize they haven’t discovered a bug in their code, but have generated one of the most incredible predictions their species has ever made. It becomes a top priority to study this process in more detail, to see what they missed when focusing exclusively on cat brains, and to see which properties in the model were most directly responsible for Keanu’s response. Most exciting for them, they start the clinical research necessary to instantiate the augmentation procedure in practice, so they can directly learn what it is like to be conscious.
Fallout of the simulation
The possibility of such a scenario suggests that acceptance of the semantic poverty thesis is premature, and that knowledge of subjective experience could be a natural outgrowth of a neural theory. The knowledge emerged as a prediction when the aliens’ theory was extended beyond its original feline domain.
Scientific practice produces conceptual novelty in more ways than you might expect from analytic philosophers’ rational reconstructions or conceptual analyses. Scientists tend to place much less stock in what they intuitively think is already “contained in” their conception of the world. We tend to hold onto our concepts with a loose grip, allowing data/evidence gained from experiments, and predictions gained from models/simulations, to expand our outlook in unexpected directions. In this way, our understanding of the implications of our theories is continually expanding and enriched.
Especially in the biological study of complex systems such as brains, we do not see conceptual rumination as a particularly useful source of knowledge. While biology admittedly involves a great deal of theorizing in ordinary language, significant stress is also placed on prediction from quantitative models. Even something as simple as the human heart displays behavior that we would not predict from ordinary-language meditations about hearts. Why, for an object orders of magnitude more complex, would we expect to find a transparent conceptual route without the help of computers or simulations?
Note we are not talking about physical laws here, where simple thought experiments (coupled with mathematical expertise) have been so undeniably helpful as a tool for discovery and the generation of theoretical novelty. We are addressing the analysis of extremely complex physiological systems with many nonlinear parts interacting in nonlinear ways. Analytic philosophers were understandably strongly influenced by theoretical physics from the first half of the 20th century, but the philosophical picture of science that emerged was correspondingly parochial.
Why the computer?
So far I have said little about the aliens’ mental structure. We are already dealing with an outlandish science fiction scenario, so why should we limit our aliens so that they must think and communicate in the clunky word-models that are standard in the philosophical musings of H. sapiens? Perhaps the alien mind is much quicker at correctly seeing implications of their theories, and can calculate quantitative implications of their theories with little effort. Let’s say they think more like an analog computer than a philosopher. Instead of running simulations on a computer, they could make the predictions described above directly and quickly, using their quantum unconscious super ninja minds. That is, the very premise of the argument, that the aliens would end up not knowing the cats are conscious, could be a non-starter.
This turbo-charged, computer-free response is particularly powerful as a rebuttal to semantic poverty. To shorten this post, I am putting nine potential objections in the notes.
To quickly finish up, and to come full circle with the dualist attack from Part II, what about the ‘two concepts one property’ (2C1P) approach? The scenario here is actually an instance of this approach. While I am suspicious of the phenomenal concept strategy, with its peculiarly premature acceptance of the semantic poverty thesis, I am no opponent of phenomenal concepts as such, in the sense of concepts we typically use to respond to, and describe, our own experiences in phenomenological terms. And clearly one can believe that experience-concepts and brain-concepts are different without thinking that they are necessarily and forever screened off from one another. That indeed seems a premature overreaction to our present levels of ignorance.
Notes

From now on, I will drop talk of ‘smonscious’ brains and let the aliens talk about consciousness, as they have crossed the semantic gap.

List of objections, roughly in ascending order of strength (the last one is probably true):

1) This scenario doesn’t undercut semantic poverty, because the conceptual route from brains to experiences has not been provided in an explicit and transparent way.

2) The aliens haven’t really gained an understanding that the cats are conscious: they have not jumped the hurdle set at the beginning of the post.

3) The aliens have jumped the hurdle, but the hurdle is too low: you would still have Mary-type arguments that they wouldn’t know red from blue the first time they saw color.

4) The simulation required to predict the existence of consciousness would itself be conscious, and this would support dualism as much as materialism.

5) Perhaps the aliens, and Keanu, would become dualists based on the scenario, so their model actually predicts dualism is true.

6) Predicting consciousness is not the same as explaining consciousness, and this scenario achieves prediction without explanation. The Hard Problem remains.

7) This dialectic merely replicates the phenomenal concepts strategy, a strategy I criticized in Part III, but this time in the aliens.

8) You may as well have had the aliens talk to humans, which you have stipulated is cheating.

9) Why didn’t you just start your scenario this way? You made it seem as if the aliens had complete knowledge of the cats, but if they had more to learn (from additional simulations), then their knowledge wasn’t really complete. (In my defense, I wrote the first two posts a year ago, planned to defend PCS, found it lacking, and thought of this scenario about two months ago.)

This series of posts began in response to someone who, after I gave a materialist response to Mary, asked what I would say if Mary started out with no experiences whatsoever, rather than lacking only color experiences.