Over at Brainhammer, Pete Mandik set out the following problems for info-semantics:
If it’s determinate specifications you are worried about, it’s worth keeping in mind that causal/informational stories haven’t been without their own problems. Regarding specificity, there are all sorts of problems concerning where in the causal chain to locate the content (e.g. proximal or distal causes). Also, there are serious problems about how to represent things that couldn’t enter into causal chains with our minds because, e.g., they are outside of our light-cone, or they don’t exist, or are too abstract.
Pete points out two serious problems with informational semantics:
1) Where in the information chain do we locate the content? For instance, why say that the LGN represents what is happening in the world rather than what is happening in the retina?
2) How would you represent things that don’t exist, such as unicorns? No neuronal state has ever carried the information that a unicorn is present. (Note that I’m not even going to try to address the problem of knowledge of abstracta such as ‘Two is even.’)
3) To these, Pete could have added the problem of individuating coextensional contents such as ‘square’ and ‘four-sided polygon.’
In what follows, I’ll outline what Dretske says in response to the first challenge. He (attempts to) deal with the second and third on pages 229ff and 215ff, respectively, of Knowledge and the Flow of Information.
Note that I’m not sure Dretske is right, but at the very least he is being reasonable. I hope to impress upon you that Dretske is simply a bad-ass, adeptly presaging many potential problems with his framework. Indeed, I have not run into a single objection to Dretske’s work that he hasn’t forcefully expressed and dealt with (or at least tried to) in his own writings. Since this post is already lengthy, I am just trying to charitably expound his view here (I can’t match his felicity of expression without a ton of work, so I’m being lazy and relying heavily on quotes).
On to the problem of the information chain.
There are really two problems here, both nasty for Dretske.
I. The problem of hearing buttons
For one, you could end up going too far out in the causal chain that generates a representation. E.g., consider the chain of someone pushing a button that closes a circuit which causes a doorbell to ring. From pages 156 ff, “Why does he not hear the button being depressed? Why does he not hear the membrane vibrating in his ear? … What makes the bell so special that we designate it as the thing heard?…What is the information-theoretical basis for this distinction? Why is one the object and not the other when the auditory experience (by hypothesis) carries specific information about both?”
Here he introduces a technical distinction between primary and secondary representation. Roughly, the brain gives the bell’s ringing a primary representation and the button’s depression a secondary representation, and it is only the former that we hear. First I’ll give the text where he fleshes this out in the doorbell example, and then give the general definitions of primary and secondary representation.
“Our auditory experience represents [carries information about] the bell ringing and it represents the button’s being depressed. But only the former is given a primary representation because the information the experience carries about the depression of the button depends on the informational link between the button and the bell while its representation of the bell’s ringing does not depend on this relationship. If we short-circuit the doorbell wires (causing the bell to ring periodically when no one is at the door), the informational tie between the bell and the button is broken. When this tie is severed, the auditory experience continues to represent (carry information about) the ringing bell, but it no longer carries information about the depression of the button.
“If, contrary to hypothesis, the auditory experience continued to represent the button’s being depressed…even when its informational link with the ringing bell was broken, then we could speak about the button’s being depressed as itself receiving primary representation…But in precisely this situation we would speak of our ability to hear the button’s being depressed…If the button was very rusty, for example, and squeaked loudly whenever it was depressed, we might be able to hear the button being depressed whether or not it was connected to the bell. The explanation for this fact is that, in these altered circumstances, the button’s being depressed is no longer being given a secondary representation in terms of the bell’s ringing” (160-1).
Note his technical definition of primary representation is roughly ‘State S gives primary representation to property B (and secondary representation to property P) iff S’s representation of something’s being P depends on the informational relationship between B and P, but its representation of B does not depend on that relationship’ (p. 160). Here S is the internal representational state, B is the bell’s ringing, and P is the button’s being pushed.
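Dretske’s test is counterfactual: sever the button–bell link and see which content survives. A toy simulation (my own sketch, not Dretske’s; all names are made up for illustration) makes the asymmetry concrete:

```python
import random

random.seed(0)  # make the simulation reproducible

def run_trials(short_circuit, n=1000):
    """Simulate the doorbell chain. Normally the bell rings iff the
    button is pressed; a short circuit makes the bell ring at random,
    severing the informational link between button and bell."""
    trials = []
    for _ in range(n):
        button = random.random() < 0.5
        bell = (random.random() < 0.5) if short_circuit else button
        experience = bell  # the auditory experience tracks the bell
        trials.append((button, bell, experience))
    return trials

def carries_info(trials, source_idx):
    """Crude stand-in for 'carries information about': the experience
    matches the source state on every trial."""
    return all(t[2] == t[source_idx] for t in trials)

# Normal circuit: the experience carries information about BOTH the
# bell (index 1) and, via the bell, the button (index 0).
normal = run_trials(short_circuit=False)
assert carries_info(normal, 1) and carries_info(normal, 0)

# Short-circuited: the experience still tracks the bell (primary
# representation) but no longer the button (secondary representation).
broken = run_trials(short_circuit=True)
assert carries_info(broken, 1)
assert not carries_info(broken, 0)
```

The point of the sketch: the bell-content survives the severed link, while the button-content depended on it, which is exactly what makes the bell the primary representatum on Dretske’s definition.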
II. The problem of hearing eardrums
The second problem is: why should we say the person hears the doorbell and not his eardrum vibrating? That is, you could end up pushing the representational contents too close to the subject, so that you end up representing transduction mechanisms such as retinal activity or eardrum vibrations rather than things out in the world. Dretske says (p. 162), “The distinction between primary and secondary representations serves to explain why we hear the doorbell ringing and not the door button being depressed. But it does not help explain why we hear the doorbell ringing and not, say, the vibration of the membranes in our ear. Isn’t the ringing of the bell given secondary representation relative to the behavior of the membranes in our ear?”
Dretske has two potential solutions to this problem. The first (p. 163ff) appeals to perceptual constancy mechanisms: our visual experience of an object remains quite similar even through drastic changes in peripheral stimulation. “Size, shape, and color constancy testify to the fact that it is the properties of objects and not (say) the properties of the retinal stimulation (or the firing of neural cells), that is represented by our visual experience under normal viewing conditions. The visual experience that constitutes our sensory encoding of information about ordinary physical objects can, and generally does, remain unchanged in response to quite different proximal stimulation…Our sensory experience is sensitive to (hence carries information about), not the behavior of our receptors or neural pathways, but the behavior of more distant elements in the causal chain.”
His second attempted solution (p. 187 ff) is that a true semantic structure is mute about its own causal origins. Again, quoting Dretske (from p. 188): “A semantic structure’s insensitivity to its particular causal origin, its muteness about the particular manner in which the information (constituting its semantic content) arrived, is merely a reflection of an important fact about beliefs. Our belief states do not themselves testify to their causal origin. The fact that someone believes that Elmer died tells us nothing about how he came to believe this, what caused him to believe it. He may have read it in the newspaper or someone may have told him; he may have seen Elmer die, or he may have discovered this in some more indirect way…It carries this information, yes, but it says nothing about how this information arrived…
“This plasticity in extracting information from a variety of different signals, a plasticity that accounts for a system’s capacity for generating internal states having information about a distance source as their semantic content, is a plasticity that most information processing systems lack. [In a voltmeter] the only way information about voltage differences can reach the pointer is via current flow, induced magnetic field, consequent torque on the armature,…Since, given the construction of the instrument, this is the only way the pointer’s position can register the voltage, the pointer’s position carries information about all these intermediate events…This is why voltmeters do not believe anything.” [Phew]
Note also that Dretske discusses this issue extensively in his beautiful article ‘Misrepresentation.’
The big problem with Dretske’s proposal for solving the distality problem (the problem of hearing eardrums) is that both representational and informational content can be disjunctive. Dretske is basically saying: the representational content is the non-disjunctive content, but that isn’t in general true. (Plus there’s no single objective disjunctive/non-disjunctive line anyway.) He admits this, and this is one of the reasons he moves to full-blown teleosemantics, with his operant conditioning story in Explaining Behavior. (Not that this proposed solution to the distality problem works either!) That said, I agree with you entirely that he’s “a bad ass” as you put it. He’s often two or three steps ahead of his (future) critics.
Dan: I’m not sure what you mean, especially how the disjunctive problem is supposed to apply to his solution to the distal/proximal content problems (buttons and eardrums).
I think you are pointing to a different problem which sounds like problem number 3 I mentioned in my post (how to individuate coextensional contents in an informational account). Or, are you referring to the slightly different problem of how to individuate things like ‘black speck moving about such and such a way’ versus ‘fly’?
I think the latter may be one of Dretske’s toughest challenges. He discusses it briefly in Knowledge and the Flow. If you let me know more specifically which disjunction problem you mean, I’d like to know so I can gnaw on it some more.
Sorry, that was pretty cryptic! But I don’t think I’m talking about a different problem. With respect to eardrums, Dretske’s saying that many different eardrum configurations can cause the activation of the same mental representation, due to constancy mechanisms; similarly, many different retinal activations can cause the activation of the same mental representation. So, he says, “our sensory experience is sensitive to (hence carries information about), not the behavior of our receptors or neural pathways, but the behavior of more distant elements in the causal chain.” Fair enough: given a choice between saying that the mental representation carries information about a *particular* receptor activation pattern and saying that it carries information about the distal object, a better choice is the distal object. But given the choice between a *disjunction* of receptor activation patterns and the distal object, there’s no reason to choose the distal object – the informational link is equally reliable for both. (Actually *more* reliable for the disjunction of receptor activations, given the possibility for misrepresentation, but set that aside here.)
Perhaps one could say, “Well, given a choice between a disjunctive informational content and a non-disjunctive informational content, the representational content is always the non-disjunctive one.” But that’s no good, because sometimes mental representation *is* disjunctive. (Think: jadeite/nephrite, or more radically, colour concepts.) Ruling out disjunctive content by fiat gets the contents of a lot of mental representations wrong, at least according to our intuitive assignments.
Dan: I understand now. That is indeed a tricky problem. Perhaps this is one reason I don’t like Dretske’s appeal to constancy mechanisms, but prefer his second solution to the eardrum problem (the representational structure is mute about its causal origins).
But then again, it isn’t really mute about its origins; it too could be said to be disjunctive (e.g., the thought ‘Elmer is dead’ could be evoked by hearing, seeing, feeling (braille), etc.). We are left with a similar problem.
Let’s assume that the simple F (the bell) and the disjunction s1, s2, … sx (eardrum movements 1 through M, visual inputs from the bell M+1 through L, etc.) are informationally indistinguishable, such that any disjunct would activate representational state R (which Dretske would want to call a representation of F, not of the disjunction).
But we should keep in mind that the disjunction will likely be infinite and will depend on specific learning histories: e.g., learning to see the bell through water, or learning to hear it underwater when its acoustic properties are different. While it seems we could use this infinite disjunction as the content, couldn’t considerations of parsimony push us to say R is a representation of F, the only thing that all of these possible disjuncts have in common?
Along a similar vein, in his article ‘Misrepresentation,’ Dretske says “[I]f we are to think of these cognitive mechanisms as having a time-invariant function at all (something that is implied by their continued–indeed, as a result of learning, more efficient–servicing of the associated need), then we must think of their function, not as indicating the nature of the proximal (even distal) conditions that trigger positive responses…, but as indicating the condition (F) for which these diverse stimuli are signs. Hence, the occurrence of R means_f that F is present. It does not mean_f that s1 or s2 or … sx obtains, even though, at any given stage of development, it will mean_n this for some definite value of x.”
The n and f subscripts refer to natural and ‘functional’ (basically semantic) meaning, respectively.
Basically he is arguing as follows. Assume that R has a constant representational function over time. At different times, different disjuncts will activate it: before learning, hearing the bell underwater may not activate R until you learn that it is the bell, and other disjuncts may cease to activate it (e.g., once you move to the underwater cavern [yes, this is getting fanciful], the same airborne sound will no longer activate R). Given this shifting membership, the only natural way to carve out a constant role for R is in terms of F, not the disjuncts.
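The shifting-disjunction point can be put in a toy sketch (again my own illustration, with invented stand-in signals, not anything from Dretske): at each learning stage R is triggered by a different set of proximal signs, yet every sign in every stage is a sign of the same distal condition F.

```python
# Each stage of learning has its own set of proximal signs that
# trigger the representation R. All of them are signs of the same
# distal condition F ("bell ringing"); the sign-sets themselves differ.
stage1_signs = {"airborne_bell_sound", "sight_of_bell"}
stage2_signs = {"underwater_bell_sound", "sight_of_bell_through_water"}

def r_fires(signal, learned_signs):
    """R is activated by any currently-learned disjunct."""
    return signal in learned_signs

# No single disjunction of proximal signs is constant across stages...
assert stage1_signs != stage2_signs
assert r_fires("airborne_bell_sound", stage1_signs)
assert not r_fires("airborne_bell_sound", stage2_signs)

# ...so, on Dretske's reading, the only time-invariant candidate for
# what R indicates is F itself, the condition common to every sign.
```

Note this sketch also exposes the rejoinder pressed below: the union of all the stage-specific sign-sets is itself a time-invariant (if sprawling) disjunction, so constancy alone doesn’t force the choice of F.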
I’m not convinced this works. I need to think about it some more.
Spring break, and an opportunity to follow up…
I think you’re right not to be convinced that this works. Three points:
1) The disjunction may be large, but infinite?
2) I don’t see why we ought to keep representational functions constant across learning. First, it’s quite common to refine one’s concepts with learning, such that their content actually changes (e.g. moving from an equivocal concept of mass/weight to properly separating these two). Second, contra Dretske, having a time-invariant informational function is not at all “implied by… continued… servicing of the associated need”. All that is implied is a continued function to serve that need! There are obviously multiple ways of servicing the same need.
3) And in any case, there will be a good candidate for a time-invariant disjunctive informational function, at least if learning involves adding ways of detecting F – the time-invariant one is just the longest one. No doubt that’s why you suggested that “other disjuncts will cease to activate it” – but how do we distinguish this from the common process of refining a representation by removing sensitivities that previously led to misrepresentation?