Conceptual emergence occurs when, in order to understand or effectively represent some phenomenon, a different representational apparatus must be introduced at the current working level. Such changes in representation are common in the sciences, but conceptual emergence has usually been considered in connection with changes in synchronic representations. Here, I’ll consider a diachronic example drawn from recent work on convolutional neural nets for image recognition.
To start, consider the types of representation in the table below. (This classification is not exhaustive – I have omitted conscious and unconscious representations as a third dichotomy, for example).
| Type of representation | Characteristic feature |
| --- | --- |
| explicit | No transformations of the representation are required to identify the referent |
| implicit | Transformations on the representation are needed to identify the referent |
| transparent | Open to explicit scrutiny, analysis, interpretation, and understanding by humans |
| opaque | Not open to explicit scrutiny, analysis, interpretation, or understanding by humans |
An example of a transparent explicit representation is the set of axioms for arithmetic; an example of an opaque explicit representation is the contents of hieroglyphics before the discovery of the Rosetta Stone; and an example of a transparent implicit representation is the content of encrypted messages. What about opaque implicit representations? One example is a sinogram. These are the internal representations that are ‘seen’ by computer-assisted tomography machines. Below on the left is an example of a sinogram; on the right is the resulting image when an inverse Radon transformation is applied to the sinogram, producing a transparent, explicit representation. The latter representation is of the input that produced the sinogram on the left.[1]
In virtue of transforming the representation on the left into the representation on the right, we move from an opaque implicit representation to a transparent explicit representation.
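For readers who would like to see the transformation at work, here is a minimal sketch using scikit-image’s radon and iradon functions. The Shepp-Logan phantom and the choice of projection angles are illustrative assumptions on my part, not the particular sinogram pictured above.

```python
# A minimal sketch of the sinogram / inverse Radon relationship (illustrative only).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                          # transparent, explicit representation
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)

sinogram = radon(image, theta=theta)                   # opaque, implicit representation
reconstruction = iradon(sinogram, theta=theta)         # back to a transparent, explicit form

print("mean reconstruction error:", np.abs(reconstruction - image).mean())
```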
One of the advantages of many diachronic accounts of emergence is that they attempt to explain how emergence occurs, rather than leaving it as a brute fact. In cases of conceptual emergence, one way to do this is to turn opaque representations into transparent representations. Consider the use of convolutional neural nets (CNNs) for efficient image recognition.[2] This kind of artificial intelligence is of interest to us because it seems to have many of the characteristic features of a system within which emergence occurs. It has the diachronic equivalent of levels, it employs novel representations at each layer, and although each layer depends on the content of the previous layer, the representation at each layer can be considered autonomous.

The success of early CNNs was not well understood because their inner representations were opaque and implicit. But with the right kind of transformations, using deconvolutional networks to map the representations in a layer back to the pixel space, certain kinds of opaque implicit representations can be converted into transparent explicit representations. Using this type of transformation, the first layer of the network can be seen to take pixels as inputs and to represent edges; the next layer takes those representations of edges as input and internally contains representations of combinations of edges. Moving from layer to layer in this way results in a representation that gives a correct classification of the input with high probability. The transformations that provide this representational content also reveal that there is an important element of compositionality to the representations contained in these CNNs, and this compositionality undermines the original sense that emergent processes are occurring. This kind of detailed investigation into the relations between successive layers stands in sharp contrast to using abstract dependency relations, which leave the connections mysterious and unexplained.
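The deconvolutional visualization technique of Zeiler and Fergus is more involved than I can reproduce here, but its basic first step, exposing the otherwise hidden layer-by-layer representations so that they can be inspected or projected back toward pixel space, can be sketched as follows. This is only an illustrative toy, assuming PyTorch; the two-layer network and the random input are stand-ins invented for the example, not the networks discussed above.

```python
# Toy sketch: capture a CNN's internal, layer-by-layer representations with forward hooks.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # layer 1: filters that tend to act as edge detectors
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # layer 2: combinations of layer-1 features
    nn.ReLU(),
)

activations = {}

def save(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()      # store the layer's feature maps
    return hook

net[0].register_forward_hook(save("layer1"))
net[2].register_forward_hook(save("layer2"))

x = torch.randn(1, 1, 28, 28)                    # stand-in for an input image
net(x)
for name, act in activations.items():
    print(name, tuple(act.shape))                # the otherwise opaque internal representations
```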
There is still the question, which is open as far as I know, as to whether the transparent representations are an artifact of the transformations used. If any readers have insights into this, I’d be glad to hear them. For those interested in learning more about this area, the website of the Human and Machine Intelligence Group at the University of Virginia, of which I am co-director (https://hmi.virginia.edu/), has slides from a number of lectures on the topic.
I’ll conclude with one important observation. A completely general account of emergence is unlikely to appear in the near future. One reason is that there is good evidence that we operate with several distinct concepts of emergence: ontological, inferential, and conceptual. Perhaps an account unifying these approaches will be formulated in the future, but as things stand much time can be wasted arguing that a theory of emergence cannot be correct because there are plausible examples that do not fit it. Unlike the case of causation, there is too much disagreement about what counts as a core case of emergence for such examples to serve as effective falsifiers. Yet despite this spectrum of emergence types, there is one feature that should play a role in any argument for or against a particular theory of emergence: to be emergent, a feature must emerge from something (else). For conceptual emergence under psychologism, concepts emerge from other concepts through philosophical or scientific analysis. For inferential emergence, consequences can emerge from the original representations through ratiocination. For ontological emergence, how one thing emerges from another is reasonably clear in diachronic cases, as we saw in essays 1 and 2. But in what sense does one relatum of a synchronic dependence relation emerge ontologically from the other? Unless there is a clear answer to this question, we do not have an understanding of why cases of synchronic emergence appear and it is reasonable to conclude that synchronic dependency relations cannot capture cases of ontological emergence but only inferential or conceptual emergence.
[1] I have been told that expert radiographers can visually identify the content of some sinograms, although certainly not all.
[2] See, for example, Matthew D. Zeiler and Rob Fergus, ‘Visualizing and Understanding Convolutional Networks’, arXiv:1311.2901v3, 2013, although I note that at the current time there is no general agreement about the correct interpretation of CNN operations.
Hi Paul,
Thanks for posting about your interesting book.
Can you help me to understand your last statement: “we do not have an understanding of why cases of synchronic emergence appear and it is reasonable to conclude that synchronic dependency relations cannot capture cases of ontological emergence but only inferential or conceptual emergence”.
Are you presupposing that there are cases of synchronic emergence? By synchronic emergence, I understand cases in which a higher level property possesses causal powers that its realizer(s) lacks. Is there such a thing, in your opinion? Also, would you agree that this kind of strong synchronic emergence is not entailed by the diachronic emergence that you discuss?
One final question: Is there anyone who rejects the existence of diachronic emergence? In a previous post you respond to Chalmers’s argument that higher levels supervene on lower levels; as I understand it that argument is directed at synchronic emergence, not diachronic emergence.
Hi Gualtiero and thanks for your comment.
That last sentence reflects a view that Terry Horgan argued for in his ‘From Supervenience to Superdupervenience’ article back in 1993 in Mind, roughly that pure supervenience relations cannot explain ontological dependency and must themselves be explained using some kind of scientific or naturalistic theory. I think his point can be generalized to other metaphysical relations of dependency. Obviously, lots of philosophers do not agree with that view.
Are there cases of synchronic emergence? Yes, and here I’ll go a little further than I have completely convincing arguments for: they are all cases of conceptual emergence or inferential (epistemological) emergence. In the case of ontological emergence, leaving aside consciousness, qualia, and the like for a moment, the skeptics were right to hold that there are no (other) examples of synchronic ontological emergence. Certainly the kind of synchronic emergence you describe is not entailed by diachronic emergence of the transformational kind, although I should add that because the hypothesis that the level of fundamental physics is causally closed is very likely not true of our world, the appearance of higher-level properties can occur diachronically and mimic closely what you want from synchronic emergence. As for the consciousness/specifically mental candidates, I think it is premature to speculate about whether such things are ontologically distinct or autonomous properties or features, but I’d be glad to hear from any readers who think otherwise.
The answer to the Chalmers part of your last question is contained in my reply to Eric Thomson’s comment on post 3, but I’ll add something about the first part. I don’t know of any contemporary philosopher who explicitly denies the existence of diachronic emergence, unless they deny that emergence exists at all, but until recently it was common for authors to present their theory of emergence in a way that strongly suggested they thought they had given a complete account of what emergence would be if it existed. And many of those accounts were synchronic. Historically, after evolutionary emergence went out of favor, the skeptics often argued that there were few or no cases that fitted their preferred synchronic account and, implicitly therefore, that there were few or no examples of emergence at all.
Thanks for the opportunity to blog this week. It was fun.
“whether the transparent representations are an artifact of the transformations used” – I would think so, though perhaps I misunderstand your point. It is easier for me to think about the simpler neural networks that are equivalent to standard multivariate statistical models, e.g. principal components analysis (SVD) or regression. In these, the transformed data can be transformed perfectly back to the observed data. The advantage of these particular transformations is mainly that they allow construction of good low-rank approximations, which is not particularly relevant to a model of emergence.
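To make the point concrete, here is a toy illustration with numpy (the data are random and purely illustrative): the SVD “representation” is exactly invertible, and what it buys you is a good low-rank approximation.

```python
# SVD: the transformed data can be mapped perfectly back to the observed data.
import numpy as np

X = np.random.default_rng(0).normal(size=(100, 20))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

X_full = U @ np.diag(s) @ Vt                        # exact reconstruction from the transform
X_rank5 = U[:, :5] @ np.diag(s[:5]) @ Vt[:5]        # good low-rank approximation

print(np.allclose(X, X_full))                           # True: nothing is hidden
print(np.linalg.norm(X - X_rank5) / np.linalg.norm(X))  # relative error of the rank-5 version
```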
David Duffy — apologies for the delay in replying. I didn’t get the usual notification that you had posted a comment. You are right that with the kind of statistical techniques you mention, the availability of inverse operations that are largely understood takes them into the transparent realm, and they are also not candidates for emergence. But in the kind of deep learning networks I was discussing, and in alternative approaches for image recognition, the situation is not so straightforward, at least as I understand it. One thing I had in mind is whether image concepts (those parts of an image used in machine learning recognition) and semantic concepts (those used in language) parallel one another in a systematic enough way that semantic compositionality carries over to image compositionality. The fact that occluding a subpart of an image concept can lead to sudden failures in performance within machines, whereas humans are very good at recognizing partly occluded objects, might suggest that image compositionality is not as stable as semantic compositionality. But, of course, failure of compositionality is not sufficient for emergence.
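For concreteness, the kind of occlusion test I have in mind looks roughly like the sketch below, assuming PyTorch; model and image are placeholders for whatever trained classifier and input one happens to be probing, not any particular published system.

```python
# Hedged sketch of an occlusion-sensitivity sweep: slide a gray patch over the image
# and record how the classifier's confidence in the target class changes.
import torch

def occlusion_map(model, image, target_class, patch=8, stride=8, fill=0.5):
    model.eval()
    _, H, W = image.shape                      # assumes a (channels, height, width) tensor
    scores = []
    with torch.no_grad():
        for top in range(0, H - patch + 1, stride):
            row = []
            for left in range(0, W - patch + 1, stride):
                occluded = image.clone()
                occluded[:, top:top + patch, left:left + patch] = fill
                logits = model(occluded.unsqueeze(0))
                row.append(logits.softmax(dim=1)[0, target_class].item())
            scores.append(row)
    return torch.tensor(scores)                # low values mark regions the prediction depends on
```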
Hi Paul. Since I don’t work directly in the area, all I can point to is the image-description deep learning systems, which successfully learn which high-level features from the image correspond to human language descriptions, and generative image modeling systems that can produce a novel image of a face meeting a natural language description (e.g. Attribute2Image). In the latter, looking at the faces that result from changing one high-level feature at a time allows us to understand what each represents. In these (presumably like the human situation), there are multiple neural nets facing each other. (PCANet is one example I can find of a “simple” multilayer PCA that does as well as more complex architectures for image classification, and is robust to image occlusion.)
Hi David, thanks for the pointer to generative image modeling systems, with which I’m unfamiliar. The image labeling systems are an interesting case. Partly because of the training process on databases such as ImageNet, partly because for natural reasons most of them are oriented towards humanly usable outputs, it’s maybe not surprising that their internal representations tend to line up nicely with existing conceptual frameworks. What would be interesting is if unsupervised deep neural net classification systems satisfied compositionality, the representations in internal layers did not correspond in some reasonably natural way to existing linguistic concepts, and we could generate some stable interpretation of those representations. If you happen to know of any such systems, let me know. (If you’d like to continue the discussion off-line, my e-mail is on my web site).