Symposium on Bayne, “On the axiomatic foundations of the integrated information theory of consciousness”

I am delighted to announce the next symposium in our series on articles from Neuroscience of Consciousness.  Neuroscience of Consciousness is an interdisciplinary journal focused on the philosophy and science of consciousness, and gladly accepts submissions from both philosophers and scientists working in this fascinating field.

We have two types of symposia.  For primarily theoretical articles, we will have several commentators from a variety of theoretical perspectives.  For novel empirical research, we will have single commentators whose goal is to bring out the theoretical challenges and import of the results.  This symposium is based on Tim Bayne’s stimulating paper, “On the axiomatic foundations of the integrated information theory of consciousness.”

We have two excellent commentaries from Hedda Mørch and Colin Klein, followed by a response from Bayne.

***

Integrated Information Theory (IIT) is one of the most influential frameworks recently introduced into the science of consciousness.  It attempts to explain phenomenal consciousness in a rigorous manner, using quantitative tools from information theory.  It is also decidedly ambitious.  IIT is intended to explain all of consciousness, in both its qualitative and quantitative aspects, wherever it occurs.  A key to achieving this scope is the theory’s axiomatic structure.  IIT’s proponents posit a core set of features that they take to be self-evidently essential to consciousness – hence any instance of consciousness will exhibit them – and use these axioms to derive the quantitative postulates that provide the empirical content of the theory.
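For readers unfamiliar with the quantitative side, here is a minimal sketch in Python of the general idea behind “integrated information”: how much a system’s state tells us about its next state over and above what its parts tell us in isolation.  This is emphatically not IIT’s actual Φ, which is defined over cause–effect repertoires and a search across all partitions of a system; the toy two-unit network and the simple whole-versus-parts contrast below are illustrative simplifications only.

```python
import itertools
from collections import Counter
from math import log2

# Toy two-unit system: each unit copies the other's previous state ("swap").
# This is NOT IIT's actual Phi; it is a crude whole-vs-parts mutual-information
# contrast, meant only to convey the flavour of "integration as information the
# whole carries beyond its parts".

def step(state):
    a, b = state
    return (b, a)  # each unit copies the other

def mutual_information(pairs):
    """I(past; present) in bits, for a list of (past, present) samples
    treated as equiprobable."""
    n = len(pairs)
    joint = Counter(pairs)
    past = Counter(p for p, _ in pairs)
    pres = Counter(q for _, q in pairs)
    mi = 0.0
    for (p, q), c in joint.items():
        pj = c / n
        mi += pj * log2(pj / ((past[p] / n) * (pres[q] / n)))
    return mi

# All past states, uniformly weighted, paired with their successors.
states = list(itertools.product([0, 1], repeat=2))
whole = [(s, step(s)) for s in states]

# Information the whole system's past carries about its present.
i_whole = mutual_information(whole)

# Information each part's past carries about that part's present,
# under the bipartition {unit 0} / {unit 1}.
i_part0 = mutual_information([((s[0],), (step(s)[0],)) for s in states])
i_part1 = mutual_information([((s[1],), (step(s)[1],)) for s in states])

phi_toy = i_whole - (i_part0 + i_part1)
print(f"whole: {i_whole:.1f} bits, parts: {i_part0 + i_part1:.1f} bits, "
      f"toy integration: {phi_toy:.1f} bits")  # -> 2.0, 0.0, 2.0
```

In this toy case the whole system’s past fixes its present (2 bits), while each unit’s past says nothing about that unit’s own present (0 bits), so all of the predictive information lives “across” the partition – a rough analogue of what IIT means by integration.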

Bayne’s paper questions this strategy for theorizing about consciousness.  According to Bayne, any axioms concerning the nature of consciousness must have two properties:  they must be universally seen as intuitive and essential, on pain of not being self-evident or not explaining all of consciousness; and they must in fact provide constraints on how a theory of consciousness should be structured.  Bayne’s strategy in the paper is to go through the axioms offered by proponents of IIT and argue that each fails to exhibit at least one of these properties.

Consider the first axiom of IIT:  “intrinsic existence.”  Bayne argues that there are two interpretations of this axiom.  On the first, the axiom claims that consciousness is a real feature of the world.  But this meets neither condition.  First, fictionalists about consciousness do not find the axiom self-evident.  And since virtually all theories of consciousness already treat it as a real feature of the world, the axiom does not place any significant constraints on the nature of those theories.  On a stricter interpretation, intrinsic existence means that consciousness does not depend on anything outside itself.  But this principle is not universally self-evident, given the widespread appeal of externalist views of phenomenal content.

Bayne provides similar arguments against the other axioms.  For instance, Bayne considers a variety of interpretations of the “axiom of integration,” which he takes as basically committed to the claim that conscious experience is unified.  But this view is not intuitively obvious – for instance, it seems conceptually possible that a single organism might have multiple centers of consciousness.  And this is at least arguably manifest in everyday experience – consider both hearing a tone and feeling a pain in your foot.  These differing intuitions suggest that it may be beyond our introspective ability to really determine whether consciousness is unified.

In response to these concerns, Bayne suggests that the axiomatic strategy is the wrong way to go.  In its place, he proposes a strategy based on natural kinds.  Rather than hunting for essential features, the natural kind approach starts from a cluster of correlated properties we notice in the world and seeks the causal mechanisms that explain why those properties hang together.  Bayne clarifies that one might still develop a theory of consciousness based on integrated information within the natural kind approach, but such a theory would not be derived from allegedly self-evident essential properties of consciousness.

***

Thanks very much to our contributors for participating.  Thanks also to Jakob Hohwy and the other editors of Neuroscience of Consciousness, and to Oxford University Press.  Please feel free to comment in the discussion board below!

[expand title=”Hedda Hassel Mørch: Can IIT’s axioms be subject to disagreement? Comment on Tim Bayne’s ‘On the axiomatic foundations of the Integrated Information Theory of consciousness’“]

Hedda Hassel Mørch (website) — University of Oslo

In “On the axiomatic foundations of the Integrated Information Theory of consciousness”, Bayne criticizes the axiomatic argument for IIT. The axiomatic argument is based on five claims about the essential features of consciousness which are put forth as self-evidently true. It then “translates” these axioms into five physical postulates which specify the physical properties a system must have in order to be conscious. The postulates are then formalized in terms of maximal integrated information (Φ). The correlation between consciousness and maximal Φ is also supported by empirical evidence, but the axiomatic argument is presented as having a distinct and important evidential role.

Here, I will first suggest a response to one particular kind of criticism that Bayne raises against several of the axioms, namely that they cannot be self-evident because there are theorists who disagree with them. For example, against the first axiom of intrinsic existence, interpreted as claiming that consciousness is constitutively independent of external circumstances, he objects that:

According to externalist accounts of consciousness, an entity’s conscious state is constitutively dependent on its history and/or relations to its environment (e.g. Dretske 1995; Hurley 1998; Lycan 2001; Byrne and Tye 2006). (p. 3)

And against the fifth axiom of exclusion, interpreted as claiming that consciousness is not vague, he objects that this view is “controversial and is rejected by a number of other theorists (e.g. Papineau 1993; Tye 1996)” (p. 6).

In response, I will suggest that the axiomatic argument need not be understood as claiming that the axioms are self-evident in the sense that nobody would disagree for any reason. Rather, the argument can be read as claiming that the axioms are self-evident upon introspection of one’s own consciousness. This only precludes disagreement on the basis of introspection (or phenomenological observation) alone. The externalist views cited by Bayne are largely motivated not by appeal to introspection, but rather by general explanatory considerations, the desire to integrate consciousness with various scientific theories, problems of reference and knowledge, and so on. Also, most of the philosophers cited by Bayne are physicalists (at least Dretske, Lycan, Byrne, Tye and Papineau), and physicalists typically hold that introspection is at best a highly unreliable source of insight into the nature of consciousness, and at worst a source of misinformation, given that introspection arguably presents consciousness as non-physical.[1] The axiomatic argument, in contrast, must be understood as presupposing that introspection is a reliable source of insight into the nature of consciousness.

Still, some externalists appeal to the idea of phenomenal transparency, the claim that phenomenal qualities seem—phenomenologically—to be instantiated by external objects of experience, not the experience itself. This would seem like disagreement about whether introspection presents consciousness as intrinsic. And in general, the history of phenomenology shows that almost any phenomenological claim will be disputed by some person or other.

But it is possible for a claim to be self-evident upon introspection in principle without being judged as self-evident by every introspecting person, because people can make false judgments about their own phenomenology despite perceiving it correctly (or being correctly acquainted with it). But given this, IIT should provide a justification for why the axioms should be regarded as correct judgments while opposing claims should not. How could such a justification go?

For IIT, a natural strategy would be to appeal to mutual support from the empirical case for the theory. If an axiom is translatable into an empirically verifiable postulate which is then empirically confirmed, this would support that the axiom represents the correct phenomenological judgement. In other words, the axiomatic case for IIT could be seen as standing in a kind of reflective equilibrium (in Rawls’ terms) with the empirical case.

One might think this would render the axiomatic argument wholly dependent on, and therefore redundant relative to, the purely empirical case. But the postulates are only empirically verifiable in a limited set of cases, mainly involving behaviorally responsive humans and evolutionarily close animals, where we have common-sense, theory-independent criteria for assessing the presence or absence of consciousness (such as verbal reports or behavior we strongly associate with consciousness). These cases would arguably not form an adequate induction base for extending the postulates (as IIT does) to vastly different cases for which we have no common-sense, theory-independent criteria, such as vegetative patients, artificial intelligence and evolutionarily remote animals. But if the postulates are additionally supported by axioms, which are in turn not only empirically confirmed (to the limited extent they can be) but also supported by phenomenology, the justification for extending the postulates beyond what is strictly empirically warranted would arguably be considerably strengthened.

To sum up, the axiomatic argument can be understood as requiring only that the axioms seem self-evident upon introspection – at least to some people, given that they can also be mutually supported by corresponding postulates that are empirically verified.[2] This would deflect the part of Bayne’s criticism based on mere disagreement. However, his criticism also contains other highly substantive parts, according to which many of the axioms (or proposed interpretations of them) are either trivial or positively implausible. He also offers a general objection to the axiomatic method as such. Here, I will suggest a response to the latter, general objection.

According to Bayne, “there are good reasons to think that the axiomatic method is not well-suited to the study of consciousness” (p. 7) because axioms are never invoked outside mathematics and logic, i.e., they play no role in other empirical disciplines. But the science of consciousness is not like other empirical disciplines, because consciousness itself is not empirically observable except in our own case and, as already discussed, the case of creatures fairly similar to us, such as other behaviorally responsive humans and evolutionarily close animals (at least indirectly via common-sense criteria). In order to make predictions outside of these familiar cases, such as vegetative patients, artificial intelligence and evolutionarily remote animals, we must appeal to some principle or bridge law linking consciousness to observable properties (Chalmers 1998). Such a principle could not itself be empirically verified, and must therefore be justified a priori – on the basis of phenomenology, and thereby axiomatically in IIT’s sense, or conceptual analysis, or something else.

Bayne also claims that “it is debatable whether there are any (non-trivial) essential, subjective properties of consciousness (over and above the fact that there is ‘something it is like’ to be in a specific type of conscious state)” (p. 7) and, even if there are, whether we have access to them. In response, one might say that if a science of consciousness (of the sort that makes verifiable predictions about creatures or systems very different from us) is to be possible at all, it must arguably be based on some a priori principle, for the reasons just sketched. If such principles are possible, it is plausible that they must be based on phenomenology.[3] In other words, one could invoke a “transcendental” argument from the necessary conditions of the possibility of a science of consciousness.

However, Bayne suggests an alternative methodological approach that could fulfill the same condition, the natural kind approach. One question I have about this approach is how the assumption that consciousness is a natural kind can be justified. It cannot be declared self-evident because in the paper Bayne himself mentions some theorists who deny it, i.e. “those who suggest that consciousness might not be a genuine scientific kind (e.g. Allport 1988; Papineau 1993; Rey 2009; Irvine 2012)” (Bayne 2018: 3). Neither does it seem defensible on purely empirical grounds. One possibility is to defend it on pragmatic grounds based on its function of enabling consciousness to be scientifically investigated, but as I have just claimed, IIT’s approach can appeal to this kind of argument as well. That doesn’t mean that one couldn’t argue that the natural kind approach does a better job as an enabling condition. I just want to note that the way in which IIT proceeds, most generally speaking, is not necessarily that different in kind from how other approaches must proceed as well.

To sum up, in general, IIT’s reliance on an axiomatic argument in addition to empirical evidence can be justified by the fact that consciousness is not empirically observable except in our own case or via common-sense criteria with limited applicability, which means that empirical evidence must be supplemented by a priori considerations if a universal theory of consciousness is to be possible.[4] Therefore, I don’t think it should be rejected on a general basis. 

References

Bayne, T. 2018. On the Axiomatic Foundations of the Integrated Information Theory of Consciousness. Neuroscience of Consciousness 2018 (1): niy007.

Chalmers, D.J. 1998. On the Search for the Neural Correlate of Consciousness. In Toward a Science of Consciousness II, eds. S. R. Hameroff, A. W. Kaszniak and A. C. Scott. MIT Press.

Mørch, H.H. 2018. Is the Integrated Information Theory of Consciousness Compatible with Russellian Panpsychism? Erkenntnis. https://doi.org/10.1007/s10670-018-9995-6


[1] IIT is often interpreted as a physicalist theory as well, and if the axiomatic argument presupposes that introspection gives reliable insight into the nature of consciousness, this would commit it to denying that introspection presents consciousness as non-physical, and perhaps even to positively affirming that it presents it as physical. That consciousness is physical would be very difficult to defend as self-evident upon introspection, so this would be a problem. But IIT need not be interpreted as a physicalist theory. As I have argued elsewhere (Mørch 2018), it can also be – and is in some ways better – interpreted as a form of Russellian monism, the view that conscious or protoconscious properties constitute the intrinsic nature of physical properties (which physics reveals as purely extrinsic and structural), and therefore would not be (purely) physical. This could be understood as compatible with IIT’s claim that consciousness is identical with integrated information, which could be interpreted to say that consciousness is identical with integrated information understood, not as a purely physical property, but as a property that may include a non-physical intrinsic nature.

[2] It of course remains to be seen whether IIT’s postulates will survive further empirical testing, and the argument would depend on this.

[3] As noted above, it could also be based on conceptual analysis, but this raises the question of what the concept one is analyzing is based on, and one of the more plausible answers would seem to be that it must be based on phenomenology.

[4] Many thanks to Kelvin McQueen for comments.

[/expand]

[expand title= “Colin Klein: Commentary on Bayne”]

Colin Klein — Australian National University, Colin.Klein@anu.edu.au

Axioms are tricky. Here’s one good way to get yourself confused. Start with Geometry. Take Euclid’s five postulates. They seemed pretty obvious to lots of people for a long time—that is, they seemed like they not only were internally consistent, but obviously described the world we live in. As claims about the world, they seemed at once substantive and a priori, which was a pretty happy result for those of us who traffic in conceptual truths.  Generations of geometry teachers have to damp down certain annoying questions, of course: dry spaghetti and meatballs have properties that lines and points don’t have, and so maybe nothing real counts as a point or a line. But put that to one side. Euclidean geometry seems to work pretty well.

Then centuries of math happen. Non-Euclidean geometries with incompatible versions of the parallel postulate are shown to be consistent. Worse still: evidence accrues that the structure of our spacetime is best described by one of those. That’s the most familiar breakdown, but there are others. Projective geometry develops as a field in its own right.  It talks about points and lines too. It lacks proper parallels (every pair of lines defines a point). But even worse, the fourth Euclidean axiom (the one about right angles) is actually nonsensical, because projective geometry has no well-defined notion of angle.

What to do? One could insist that only one set of axioms is the correct one, and the others are bunk. The job of geometric axioms is to describe the world (here we pound the table), so only one can do!  Yet other geometries seem to get used with troubling frequency.  Further, the correct axioms are unlikely to be the Euclidean ones—that is, the ones with the postulates that seemed so intuitive as to be a priori. That should make you worry a bit about the allure of the whole axiomatic method.

We are not Lovecraft protagonists, though; we don’t need to go crazy when confronted with non-Euclidean geometries. Rather than take the axioms to be direct descriptions of the world, we take them to define sets of systems—the models of the axioms. There’s then an interesting question as to whether some actual system belongs among the models of the axioms. So there are two relationships: from the axioms to the models, and then from models to the world. Thinking about science in this way has been remarkably powerful (I draw inspiration here mainly from van Fraassen 1989 and Godfrey-Smith 2006).

Flexibility in the model-world relationship lets us use different geometries for different things, including unexpected ones for which they were not developed. Maybe Euclidean geometry is good for building houses, hyperbolic geometry for studying space-time, and projective geometry for studying combinatoric lottery problems (which, you might notice, don’t sound very geometric at all). The model-world relationship can also be fuzzier than identity — embedding, isomorphism, similarity, resemblance, and so on. This can in turn vary depending on what you care about.  Dry spaghetti is not a Euclidean line, but it resembles a line well enough that geometrically interesting things can be said about spaghetti. Expand the picture a bit to include concrete as well as mathematical models, and you get a framework that’s been surprisingly rich for capturing features of modern science. (Flexibility in the description-model relationship is similarly useful, but I’ll put that to one side.)

That’s a lot of setup. It is necessary because I’d like to say something in defense of IIT. Full disclosure: I think Bayne is basically right, and I’ve never found IIT very attractive. I used to think that the drive to axiomatize IIT was born of misguided physics envy. Yet Bayne’s discussion has, with a certain irony, shown me how the IIT axiomatizations might not be completely misguided.

When (e.g.) Oizumi, Albantakis, and Tononi (2014) present the Axioms of IIT, they’re pretty clearly intended to describe the actual world. As Bayne points out, none of them are self-evident. Indeed, as Bayne rightly notes, they seem to contradict other things IIT theorists believe. The minimally conscious photodiode violates the axiom of Composition (for it does not have “multiple aspects in various combinations”). It strikes me that it also violates the axiom of Integration (for it does not have components, so there’s no meaningful sense in which the components can be integrated). Now, you might want to drop the photodiode being conscious. (I would.) But as Bayne rightly points out, if these are meant to be self-evident axioms, then they should feel a little more… axiomatic.

Bayne suggests that this shows that the axiomatic method is misguided, and that we should look to sophisticated theories of natural kinds to pick out consciousness instead. I’m sympathetic. But — doing rational reconstruction now rather than interpretation — perhaps the two positions aren’t as far apart as it might seem. Suppose we treat the IIT axioms and postulates as like Euclid’s postulates. That is, they specify a certain set of candidate systems, and interdefine certain notions within that system. Absent straightforward internal contradiction—and I don’t see that there’s a problem there—we can ask about the set of models those axioms describe. We can investigate their properties, derive theorems, look for invariant properties, and so on — just as we can investigate the properties of Euclidean space solely by derivation from the postulates.

Viewed that way, Oizumi et al. have done a lot of interesting, thoughtful work to elucidate the properties of their model. As with other models, though, there’s a second step that needs to be done: linking it to the world. That is, just because your model talks about ‘consciousness’, and just because you’ve developed axioms that talk about ‘consciousness’  by thinking about properties that consciousness might have, that doesn’t guarantee any particular relationship between your models and the world.

Indeed, it strikes me that this is always the weak point of IIT—it’s very hard to find a meaningful, principled way to connect up the IIT models with brain data in a satisfying way. That’s not to say that there aren’t mappings possible (mapping is cheap). But part of the problem might be an insistence on a fairly tight mapping between the relevant concepts. If there was a way to read the  IIT-world mapping in a looser way—if, for example, IIT models were meant to be embeddable in the state-space of actual models without having a nice one-to-one correspondence—then it would open up the way for a resolution between IIT and natural-kind theorists like Bayne (and me). For then it’s an open question whether and how the two systems could get mapped to one another, and how fruitful the result would be. 

References

Godfrey-Smith (2006).  The strategy of model-based science. Biology and Philosophy 21: 725—740.

Oizumi, Albantakis, and Tononi (2014).  From the phenomenology to the mechanisms of consciousness: integrated information theory 3.0. PLoS Computational Biology 10(5): e1003588.

van Fraassen (1989).  Laws and Symmetry. New York: Oxford University Press.

[/expand]

[expand title=”Tim Bayne:  Further Thoughts on Axioms, Natural Kinds, and IIT, Response to Hedda Hassel Mørch and Colin Klein”]

Tim Bayne — Monash University, tim.bayne@gmail.com

I am very grateful to Hedda and Colin for their comments on my paper “On the Axiomatic Foundations of the Integrated Information Theory of Consciousness,” and to Dan and The Brains Blog for making this exchange possible.

In the target paper—“On the axiomatic foundations of the Integrated Information Theory of Consciousness”—I tried to do two things. First, I examined each of the five axioms invoked by IIT, arguing that in a number of cases it’s unclear what the alleged axiom actually asserts, and that, on their most plausible interpretations, a number of the axioms either fail to be self-evident or fail to be substantive. Second, I argued that the axiomatic method is not well-suited to the science of consciousness, and that we should instead adopt the natural kinds (NK) approach.

The two commentaries focus on the more general issues surrounding the axiomatic approach. Hedda defends its legitimacy, and argues that I have misunderstood the notion of self-evidence that it employs. Colin adopts a rather different response, suggesting that even if my criticisms undermine the axiomatic approach as it is employed in IIT, there is another—and more plausible—way in which the science of consciousness might appeal to axioms. Let me consider these two lines of thought in turn.

One of the so-called axioms that I discussed in the target paper is that of intrinsic existence, according to which consciousness exists “intrinsically.” I claimed that this axiom isn’t plausibly regarded as self-evident, and that phenomenal externalists (for example) would reject it. In response, Hedda argues that phenomenal externalism is irrelevant here, for the axioms are meant to be self-evident upon introspection, and—Hedda claims—phenomenal externalists don’t trust introspection.

I’m not persuaded. For one thing, I don’t think that phenomenal externalists are particularly sceptical about the deliverances of introspection. On my reading, the issue here is not that phenomenal externalists are physicalists and that physicalists hold that introspection is “at best a highly unreliable source of insight into the nature of consciousness” (as Hedda puts it). The point, rather, is that phenomenal externalists doubt whether introspection is a source of information about whether or not consciousness is an intrinsic property. And, for what it’s worth, I regard such doubts as well-founded. Introspection tells me what it’s like to be me right now. It doesn’t—as far as I can tell—purport to tell me what reality must be like in order for me to have the experiences that I do.

But suppose that Hedda is right to think that physicalists are in general sceptical of appeals to introspection, and that they are right—as physicalists—to be sceptical in this way: wouldn’t these facts themselves compromise the attractiveness of the axiomatic method? It would be odd (to say the least) for a science of consciousness to adopt a method that is inconsistent with physicalism.

A second issue raised by Hedda’s comments is whether introspection provides us with information about the essential features of consciousness (as the axiomatic method seems to require).  Hedda, I take it, agrees that the axiomatic method requires introspective access to the essential features of consciousness, for although she thinks that the axioms can be provided with a kind of empirical validation via the postulates, she takes this ‘bottom-up’ support to involve only a limited set of cases (such as those involving behaviourally responsive humans), and thus concludes that it wouldn’t be able to provide the constraints that are needed for a fully general theory of consciousness.

I agree that there are real questions about whether bottom-up validation can ground a fully general theory of consciousness, but I’m sceptical as to whether an introspectively-based approach will fare any better.   Introspection, as I understand it, is primarily a source of information about the features that one’s experience actually has. Going slightly beyond that, it seems possible that introspectively-based methods might enable one to identify invariant features of one’s experience—features that one’s own experience must have. Going even further, perhaps such methods might enable one to identify the features that are required of all human experience. But could introspection reveal the essential features of all possible forms of consciousness? I struggle to see how.  

One might be tempted to respond by saying that although there is a problem here in principle, in practice we can assume that the structural features of human consciousness will apply across the board. But that, I think, won’t do.

To take just one facet of consciousness, consider conscious unity. Reflecting on one’s own experience, one might be tempted by the thought that the experiences of a single conscious subject must always be subsumed by a single experience. We might then be tempted to treat this claim—call it ‘the unity thesis’ (Bayne & Chalmers 2003)—as an axiom of consciousness. But it seems to me that we should resist this temptation. Even if introspection can justify the claim that human experience is always unified, I doubt that it could give us any reason to think that the unity thesis must also apply to (say) cephalopods. Perhaps human experience is highly atypical; maybe we are outliers in the cosmos of consciousness. Indeed, given its panpsychist leanings it seems to me that IIT has particular reason to be wary of anthropocentrism when it comes to identifying the essential features of consciousness.

Let me turn now to some of the questions that Hedda raises regarding the NK approach. She points out that the case for the NK approach is not self-evident; nor—she says—does it seem defensible on purely empirical grounds. She does allow that it might be defended on purely pragmatic grounds “based on the function of enabling consciousness to be scientifically investigated”, but—she claims—IIT can also be defended on pragmatic grounds. So why, she asks, do I favour the NK approach over the axiomatic approach?

There are two issues that need to be distinguished here. The first concerns why we should take the NK approach seriously (that is, more seriously than the axiomatic approach). My answer here appeals to the fact that consciousness is a psychological property, and the default assumption in science is surely that psychological properties are natural kind properties. There is more to be said here, but it seems to me that this is enough to get the NK approach on the table as a plausible framework for the science of consciousness.

The second issue is whether we have reason to think that the NK approach can be successfully applied to consciousness. This is a distinct issue, because even if the NK approach has a kind of background legitimacy, it may not deliver the goods. (Think here of phlogiston. It was perfectly legitimate to treat phlogiston as a natural kind, but that approach turned out to be unsuccessful.) Although some theorists have argued that consciousness is unlikely to be a natural kind, my view is that the jury is still very much out here and that we don’t (yet) have reason to be pessimistic about the prospects of the NK approach. After all, we don’t really know whether consciousness is a natural kind unless we adopt the NK approach, and we haven’t yet done that in a systematic fashion.

Let me turn now to Colin’s comments. Instead of treating axioms as self-evident insights into the essential nature of consciousness (as the advocates of IIT do), Colin suggests that we might think of axioms as generating various models of consciousness. The five axioms invoked by IIT would give rise to one model of consciousness, but other sets of axioms would generate other models. So, just as there are multiple models of geometry, so too there might be multiple models of consciousness.

So far, perhaps, so good. The challenge—as Colin recognizes—is how to link these models to the world. Although various models might be intended to function as models “of consciousness,” it doesn’t follow that they will be equally good at describing the nature of consciousness. (As Colin points out, the fact that one’s model contains the word ‘consciousness’ doesn’t guarantee that it has any particular relationship to the world.) Indeed, insofar as we are realists about consciousness we may want to hold tight to the idea that at best one model of consciousness is true. Certainly we will need ways of deciding between competing models insofar as different models will have different implications for questions concerning the distribution of consciousness in (e.g.) brain-damaged patients, non-human animals and artificially intelligent systems.

As Colin notes, one of the problems here is that of connecting models of consciousness (such as IIT) with brain data in a satisfying way. This is indeed a problem, but as I see it a more fundamental challenge is that of determining what kind of data (be it neural or behavioural) a model of consciousness ought to account for and be responsive to. The axiomatic approach adopted by IIT attempts to circumvent this problem by assuming that introspection is compatible with only a single model of consciousness, but if we reject that assumption—as I think we should—then we need to confront this challenge head-on.

As far as I can see, the only viable strategy for dealing with this issue involves the natural kind approach. In a nutshell, we look for clusters among the various marks of consciousness, and identify the mechanisms and processes that explain those clusters. The choice between rival models can then be guided by information about the distribution of those processes and mechanisms. But if we’ve gone down this route, then I wonder just how much is left of the axiomatic approach, and whether it hasn’t in effect been stripped of most of what was meant to be distinctive of it.

Colin comments that he once put the allure of the axiomatic approach down to a kind of physics envy. I share that suspicion, and my scepticism about the axiomatic method is guided by the thought that a mature science of consciousness won’t look much like physics (or geometry). Instead, I suspect that it will look much like any other special science. Rather than trafficking in crisp concepts, precise postulates and strict proofs, I suspect that it may always be characterized by open-ended concepts, messy generalizations and piecemeal explanations.

References

Bayne, T. & Chalmers, D. 2003. What is the unity of consciousness? In A. Cleeremans (ed.), The Unity of Consciousness. Oxford: Oxford University Press, pp. 23–58.

[/expand]

10 Comments

  1. Hi Tim, thanks for a great response! A few comments on some of your specific points:

    “I don’t think that phenomenal externalists are particularly sceptical about the deliverances of introspection … The point, rather, is that phenomenal externalists doubt whether introspection is a source of information about whether or not consciousness is an intrinsic property”

    Not sure I fully get this – even if externalists take introspection to be unreliable about intrinsicality only, they still reject what I suggested is a fundamental presupposition of the axiomatic argument, namely that introspection is reliable about the essential features of consciousness including whether or not it’s intrinsic?

    “It would be odd (to say that least) for a science of consciousness to adopt a method that is inconsistent with physicalism.”

    I agree of course that this is highly controversial. But it could be motivated by the same observation that I suggest supports the axiomatic approach, namely that consciousness is not observable from the third person (except arguably in a limited set of cases), whereas physicalism as standardly understood requires that all facts can in principle be captured by physical theory and hence observed (at least indirectly) from the third-person.

    Also the method would still be compatible with Russellian monism, which some regard as a kind of physicalism in a broad, non-standard, sense. But the natural kind approach would have the advantage of being compatible with standard physicalism.

    “Could introspection reveal the essential features of all possible forms of consciousness? I struggle to see how … Perhaps human experience is highly atypical; maybe we are outliers in the cosmos of consciousness.”

    I agree this is a difficult issue. The axiomatic argument would have to presuppose not only that introspection reveals the nature of particular experiences, but also that it gives insight into the nature of consciousness as such. Maybe this presupposition could be supported indirectly in the same way I suggested, as a necessary condition for the possibility of a theory of consciousness. But such an argument would then be challenged by the natural kind approach, if it fulfills the condition equally well or better.

    One worry I have about this claim would be that it’s hard to see how the natural kind approach would allow straightforward generalizations from human cases to cases like evolutionarily remote animals, AI and so on. I can imagine that the physical correlate of observable human consciousness would fall under many potential natural kinds (functional kinds, neurological kinds, and so on), some of them broad enough to extend to remote animals, AI and so on, but others more narrow, and the natural kind approach wouldn’t by itself determine which kind is relevant. So the axiomatic approach arguably has the advantage in being more selective.

  2. Tim Bayne

    Hi Hedda,
    Thanks heaps for these comments – let me say a few things in reply to two of your points. Re the issue about introspection: distinguish between two classes of introspective judgments. One class focuses on phenomenal character – the ‘what it’s likeness’ of an experience. I take it that the reliability of these kinds of introspective judgments isn’t in question here. A second class focuses on the metaphysics of phenomenal properties: are they intrinsic, purely subjective properties (‘qualia’, in one sense of that problematic term), are they represented properties (‘contents’, in one sense of that problematic term), or are they instantiated properties? The theorists in this dispute often appeal to introspection, but they don’t agree on what introspection says. And – for what it’s worth – many externalists (Harman, Tye, Dretske, etc.) simply don’t think that introspection presents phenomenal properties as ‘qualia.’ If anything, they think that introspection presents phenomenal properties as representational contents. (Of course, some disjunctivists argue that even representationalists fail to take perceptual transparency seriously!) So, one can regard the first class of introspective judgments as reliable without thinking that the second class of introspective judgments is reliable – which would be roughly my view. It seems to me likely that introspection tells me what it’s like to be me right now, but is silent on the metaphysical nature of what determines what it’s like. (See Maja Spener’s “Mind independent and visual phenomenology” for relevant discussion.) So maybe the disagreement between us on this issue isn’t about whether introspection is completely reliable, but is really about the *scope* of introspection: what kinds of information is it set up to provide. (Of course, externalists themselves might be inclined to agree with you on the scope of introspection, but argue that introspection presents phenomenal properties as representational properties rather than intrinsic properties.)

    Re the point that you make in the final para: “it’s hard to see how the natural kind approach would allow straightforward generalizations from human cases to cases like evolutionarily remote animals, AI and so on.” I agree wholeheartedly. This is one of many tough challenges here. 🙂

  3. Niccolo Negro

    Thanks for the great exchange! Really helpful and thought-provoking. Just a couple of questions on some issues that, I think, need to be clarified.

    1) Why should axioms be useful constraints for a theory of consciousness?

    One of the two criteria individuated by Tim to assess the validity of an alleged axiom is being a useful constraint for a theory of consciousness. However, as far as I understand, the axioms are supposed to capture the essential features of my own phenomenology (as, I think, Hedda Hassel Mørch suggests). In doing so, they do guide scientific research, but in a descriptive, not normative, way. Why, then, useful?

    2) Let’s say axioms are essential to every conscious experience and internally consistent. But are they complete?

    I agree with Colin Klein in saying that the postulates give us a model of consciousness. Anyway, I think it is important, in this regard, to distinguish the role of the axioms from that of the postulates. It seems to me that axioms provide us with a picture of what consciousness is, not with a model of it. That’s the job of the postulates and the mathematical apparatus that comes with them.
    It seems to me that the problem with IIT is not that the axioms are not essential to consciousness (and not even their internal consistency, but this is not a problem, as pointed out by Klein), but that they are not complete. They deal with a pretty ‘passive’ way to capture phenomenal consciousness: it’s just a scene, which is supposed to exist, be structured, informative, integrated, and definite. But my phenomenal experience is also constituted by a feeling of being a potential modifier of that scene. How would the theory change if we included this thesis as an axiom?

  4. Hedda

    Hi Niccolo,
    I assume the first question is primarily for Tim, but I’ll quickly add something. You say the axioms guide scientific research (in a descriptive way), but it seems to me the ability to guide research just is the kind of usefulness that Tim is after, and that IIT also presupposes (otherwise the axioms couldn’t support the theory as it claims).

    About the second point, in IIT, it seems the property of being a modifier of one’s own experience, understood roughly as having causal power or agency over it, is included not in the axioms but in the postulates. The first axiom, according to which consciousness is intrinsic, is translated into a postulate according to which consciousness has causal power on itself. If it was included in the axioms instead it seems to me the theory would look roughly the same in the end. I would think IIT’s justification for not putting it in the axioms is that both epiphenomenalism (where conscious creatures may have a feeling of power/agency but no real power) and Weather Watchers (Strawson 1994) (where conscious creatures have no feeling of power/agency either) are conceivable.

  5. Tim Bayne

    Hi Niccolo, Thanks for your comments and questions. I’m not sure that I understand what you’re getting at with your first point. Perhaps you could say more about what you mean for something to guide research in a descriptive rather than a normative way. As I understand it, axioms are meant to function a bit like design specifications for consciousness, telling us what kinds of properties consciousness has so that we can then investigate what kinds of structures might support/generate consciousness.

  6. Niccolo Negro

    Thank you all for your answers!

    The first point is clearer now, and I think that the idea of design specifications is similar to what I meant by descriptive constraint. However, I’m still a bit confused about the reason why, for example, Tim writes that “the claim that consciousness exists might indeed be axiomatic but it fails to impose a substantive constraint on theories of consciousness” (target article, p.3). I thought the idea was to highlight how the axiom of existence doesn’t really help in shaping the scientific modelling that is supposed to derive from the axioms, given that basically everyone but eliminativists agrees on that (perhaps, we can put it as ‘not being informative’?).
    But, if we take axioms as purely descriptive theses, the only criterion that matters is their correspondence with what a subject has access to through introspection (a very problematic word, indeed😊). In my previous comment, I just pointed out that the fact that axioms constrain our scientific model is just a consequence of this correspondence, and not a criterion through which we assess their validity as axioms.

    As for the second point, wouldn’t such an axiom rule out epiphenomenalism in the same way as the existence axiom rules out (some forms of) eliminativism? Why would that be a problem for the axiomatic approach?

  7. Hedda

    Hi Niccolo,
    I think the problem (if any) with ruling out epiphenomenalism in the axioms is that arguably it is not axiomatic that consciousness isn’t epiphenomenal (because it seems conceivable and coherent that it is, and so on). But one can still *infer* that it is not epiphenomenal, and this inference can be part of the translation, given that the translation can be interpreted as not purely deductive but as relying on additional premises that may not be absolutely certain. But if one were to regard it as axiomatic that consciousness is not epiphenomenal, I think this could be put down as an axiom without changing the end result. So it wouldn’t be a problem having this as an axiom, except that many will object that it is actually not axiomatic.

  8. Garrett Mindt

    I’ve always been of two minds regarding the axioms and their role within IIT. In the one sense I see the motivation for adopting the axiomatic approach from the IIT-side of things. They offer clear constraints on the process of abductively inferring the postulates for mechanisms and systems of mechanisms, with the goal of keeping an eye on what our phenomenology presents to us. I think (as Hedda points out) the science of consciousness is slightly different in that the phenomenon of interest (consciousness) is one which lends itself to first-person observation and not exclusively third-person observation. That being said, I agree with Tim, that there is room for disagreement about the extent to which the axioms are “self-evident.” And presumably, as Tim points out, many would disagree that the character of their experience perfectly matches the axioms as IIT has formulated them.

    We could however ask the question in a different way. Say we want a scientific theory of consciousness that takes a consciousness-first approach, that is, we decide that given the unique nature of developing a scientific framework to tackle consciousness we want one that takes as a constraining factor the character of our own phenomenology. We then ask ourselves, what are the characteristic features of our phenomenology (I see no reason why these can’t be defeasible as well)? We develop a list and set forth to abductively infer postulates for mechanisms and sets of mechanisms that might satisfy these constraints (again, I see no reason why they can’t be defeasible). We then design experiments to test these postulates and they undergo a process of refinement, change, etc. Those characteristic features just play a sort of heuristic role in the development of the theory, as a way of constraining the possibilities for the postulates. Would such a method be much different from IIT as it is now? I would think not, with the only difference being that we don’t invoke the problematic word “axiom.” But if that’s all that’s at issue, it’s not even clear one would need to take a stand on whether these are axioms or natural kinds or something else. The problem just seems to boil down to the claim that they shouldn’t be called axioms. But that in itself doesn’t seem too problematic for IIT, as it’s just a matter of poor word choice.

    Say we just call these “defeasible characteristics”: they give us a sort of backboard on which to bounce our theoretical ball while we determine, through conceptual and empirical tango, how best to develop a scientific and mathematical framework for approaching phenomenal experience. There are equally a number of people who would disagree that there is such a thing as phenomenal experience in the first place, but should that prevent us from even attempting to develop a phenomenologically constrained theory of consciousness? I would hope not! Otherwise we would always be made impotent in the face of disagreements. In a similar vein, I think it would be incorrect to say we shouldn’t attempt to define a list of essential characteristics for constraining our development of a theoretical framework for phenomenal experience, merely because there are disagreements about what those characteristics might be. Should said framework gain empirical backing through experimentation, then the burden of proof seems increasingly to be placed on those who think the method should be abandoned because of disagreements about the characteristics laid out at the beginning.

  9. Hello All, thanks for this interesting discussion. I also share skepticism about IIT’s axiomatic approach to consciousness. Tim Bayne in his paper shows weaknesses of such an approach. Here is my short commentary where I also argue against IIT’s axioms, although from a different perspective: https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00101/full

    Reading the discussion above I would like to propose yet another argument against IIT’s axiomatic approach to consciousness. This is the argument which I develop in my latest paper (Pokropski, in review). In short, I think that the axiomatic method IIT proposes is based on a misunderstanding of the axiomatic method in general. IIT shares a common myth about the role of axioms in science: it is often thought that mathematical axiomatic theories are derived from a set of self-evident axioms. However, as Philip Kitcher argues in his “The Nature of Mathematical Knowledge” (1984), axioms are not self-evident and do not precede a theory: they are a method of systematizing scientific knowledge. Consider, for example, Euclidean geometry, whose main theorems were already known before it was formulated by Euclid of Alexandria around 300 B.C. Euclid formulated five axioms which, however, have not all been considered self-evident. The least obvious was the fifth axiom, the so-called parallel postulate, whose rejection resulted in the discovery of non-Euclidean geometries in the 19th century. The contemporary view of Euclidean geometry consists of not 5 but 20 axioms, introduced by David Hilbert in 1899.
    To sum up, following Kitcher (1984), axioms should not be understood as self-evident basic principles of a theory, but as a method of systematization of a domain of an already existing theory. Axiomatization systematizes the domain and unifies the theory, showing that theorems are derivable from a certain group of basic principles. These axiomatic principles are not self-evident, but they justify themselves by doing the work of systematization. The set of axioms can also change along with the development of the theory they systematize. If that is the case, then we first need a good phenomenological theory in order to introduce phenomenological axioms. IIT does not offer such a phenomenological theory; therefore, the proposed axioms are implausible.

