An Information Taxonomy

Below is a tentative and rough taxonomy of notions of information relevant to psychology, neuroscience, and computer science, and specifically, to whether computation is information processing.

1. Shannon information.  The notion defined by Shannon's communication theory.  The more unlikely an event is relative to its alternatives, the more Shannon information it carries.  This notion may be used to quantify the amount of information (of any type) carried by a signal.  As Dretske puts it, it's equivalent to measuring the size of a bucket:  it won't tell you what's in the bucket, but it will put an upper bound on how much there can be.  Information in this sense says nothing about whether an event has meaning or semantic content.  There can be no misinformation.  Neuroscientists (e.g., Dayan and Abbott 2001, chap. 4) use it to measure the quantity of information carried by neural signals about a stimulus and to estimate the efficiency of coding (what forms of neural responses are optimal for carrying information about stimuli).

2. Natural semantic information, i.e., what Dretske's indicators and Peirce's indices indicate, i.e., Grice's natural meaning, i.e., what neuroscientists' detectors detect.  Roughly, it is what a variable reliably correlates with.  It is a kind of semantic content (in Dretske's sense), but it's different from meaning in the ordinary sense.  There is still no misinformation possible.  This is the notion used by Lettvin et al. 1959, followed by generations of neuroscientists to this day, to analyze neural processes.  This is also the notion of information that Dretske (1981) analyzed.  It can be used (together with the notion of function) to define a notion of representation (which makes misrepresentation possible).

3. Nonnatural semantic information, i.e., Grice's nonnatural meaning, i.e., (for language and linguistic concepts) conventional meaning, i.e., what Peirce's symbols carry.  This is what concepts and words presumably carry, what many psychologists might appeal to when talking about information processing, and surely what is often meant when talking about information processing in computers.  Misinformation, bad information, and false information are all possible.
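The quantitative reading of notion 1 can be made concrete. A minimal sketch (the probabilities are made up for illustration) of the standard surprisal measure, on which less likely events carry more Shannon information:

```python
import math

def surprisal(p):
    """Shannon information (in bits) of an event with probability p."""
    return -math.log2(p)

# The less likely the event, the more bits its occurrence carries:
assert surprisal(0.5) == 1.0    # a fair coin flip carries 1 bit
assert surprisal(0.25) == 2.0   # a rarer event carries more
assert surprisal(1.0) == 0.0    # a certain event carries none
```

Note that this measures only amounts: nothing in the calculation depends on what, if anything, the event means.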

Does this taxonomy sound reasonable?  What am I missing? 

NB: I am not particularly interested in other technical notions of information (besides the Shannon one), such as Fisher information and algorithmic information theory.  They don’t seem especially relevant to my concerns.


  1. Malcolm

    So you’re interested in apples, but the fact that they are fruit is irrelevant?

    What you are missing is a specific physics of Information. Without that, you’re simply discussing ideas “about” Information.

  2. Hm, have a look at John Collier’s review paper on information concepts. He’s able to link many of these accounts together by looking at Barwise/Perry situation logic (which is something you’d subsume under 2, but more generic). N.B.: algorithmic information theory is quite relevant to the subject, then, because it makes it easier to talk about the complexity of information in more intuitive terms than Shannon information, if you look at complexities in situations.

    Anyway, computers also process information of the 2nd type, e.g., when they control the shop floor by detecting changes on the assembly line, etc. If you cut the link between 2 and 3, you get a symbol grounding problem 😉

  3. I don’t like the Dretske take on Shannon.

    Information transmission is reduction in the entropy of a random variable. Say variable S has 10 outcomes (a ten-sided die), one outcome S1 with probability 1/10000, another S2 with probability 1/10, and suppose the overall entropy of S is 3 bits. How much information is transmitted when you are told that S1 happens? Or when told that S2 happens? The exact same amount: the entropy of S, conditioned on either message, is zero, so 3 bits of information are transmitted by either message. Their priors don’t matter.

    So while I have seen many popularizations (e.g., Dretske) say that individual events have an associated information of log(1/p) for that event, I think this is a mistake, or sloppy at best.

    I am happy to talk about the ‘surprise’ associated with that event, and entropy as the average surprise. But the idea of ‘information’ really doesn’t make sense until you have already built up to the concepts of entropy and entropy reduction. If you look at Shannon’s original work, he is pretty careful in his use of the term.
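The die example can be checked numerically. A minimal sketch (the exact distribution below is made up; it only approximates the stipulated 3 bits of entropy): surprisal differs wildly between outcomes, but the entropy reduction, and hence the information transmitted on this reading, is the same for any announced outcome.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Hypothetical 10-outcome die: one very rare outcome, one fairly common,
# the other eight sharing the remaining probability mass.
p = [1/10000, 1/10] + [(1 - 1/10000 - 1/10) / 8] * 8
H = entropy(p)  # about 3 bits, roughly as stipulated in the comment

# Surprisal (-log2 p) differs wildly between the two outcomes...
surprise_rare = -math.log2(p[0])    # about 13.3 bits
surprise_common = -math.log2(p[1])  # about 3.3 bits
assert surprise_rare > surprise_common

# ...but once any outcome is announced, the conditional entropy of S is 0,
# so the entropy reduction (the information transmitted) is H either way.
info_rare = H - 0
info_common = H - 0
assert info_rare == info_common == H
```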

    This is discussed superficially here.

  4. Nice post, Gualtiero.

    (BTW, I just put something up about this over at Brain Hammer: [link])

    I think your taxonomy sounds pretty good.

    Two tiny quibbles:

    TQ1. Re: Shannon info, it strikes me as just a touch odd to say that misinformation is impossible in this context. My iPod may have way more information on it than your flashdrive even though I only store falsehoods on it.

    TQ2. I’m not sure that Shannon & Weaver supply a different notion of information than what you get, for instance, in comparing Peircean indices and symbols. S&W give a means of quantifying amounts of information while remaining relatively silent on what information is. Consider, by analogy, the compatibility of the proposal to measure mass in terms of kilograms and the Einsteinian proposal that mass is energy. It need not be the case that different notions of mass are in play here.

  5. Here’s my candidate definition of information. I find it useful in thinking about brain mechanisms for cognitive processes:

    [ Information is any property of any object, event, or situation that can be detected, classified, measured, or described in any way. ] 

    In what sense might it be inadequate?

  6. Eric Thomson

    Typically information is a dyadic relation, such that X carries information about Y (it is that way in Shannon, for instance, in his definition of mutual information between variables X and Y).

    In your sense, it isn’t clear what the X and Y are.  For instance, if my car has the property ‘moving 20 mph’ is that information? I’m not sure, and it isn’t clear how that would be useful for looking at neural information processing until you add another element in there, that there is something else in my brain for instance that is responsive to this property. But once you add that, it will likely end up being a special case of one of the three in the original post.

    At any rate, I’m of the opinion that the important thing to do with any notion of information is to (a) be clear and (b) show its usefulness in practice with an example.

  7. Eric Thomson

    No comment, just approving the only way I know how in this site, by pretending to comment. Others are like that too except my response to arnold.

  8. Daniel Weiskopf


    Let me put in a plug for Fred Adams’ article on information and meaning in recent philosophy of mind:

    Adams, F. 2003. The Informational Turn in Philosophy. Minds and Machines, 13, 471-501.

  9. Eric Thomson

    “Shannon info, it strikes me as just a touch odd to say that misinformation is impossible in this context. My iPod may have way more information on it than your flashdrive even though I only store falsehoods on it.”

    But (Shannon) information gives you no way to discriminate such situations. Imagine the exact same amount of mutual information in two communication channels, between variables X1 and Y1 in one system and between X2 and Y2 in the other. (We could say X1 is the variable that describes the output of your iPod’s CPU, X2 someone else’s, and Y1/Y2 are the noises that come out of the headphones, or something like that.)

    The only factors that mutual information depends upon are the probabilities and conditional probabilities of Xi and Yi (i can be 1 or 2). Technically, I(Xi;Yi) = H(Xi) - H(Xi|Yi), and these entropy measures depend only on the probabilities of Xi and of Xi given Yi.

    So while it is possible to send lies and information at the same time, Shannon only tells you how to measure the latter and is silent on the former.

    Incidentally, I wrote up a rather extensive quantitative introduction to information theory as part of a larger project, that can be found here. It compares information theory to ideal classifier performance for quantifying neuronal discrimination of stimuli (the latter is more relevant for notion 2 used in the original post here, though natural information falls out as a special case of perfect ideal observer performance).

    General things to remember about (Shannon) information:
    1. It isn’t the same thing as the entropy of a single random variable.
    2. It describes a relationship between two random variables, how much entropy is reduced about X once Y is given (total entropy of X minus entropy of X given Y).
    3. It says nothing about the truth values of the variables (to the extent that truth value is even well defined in his theory); it depends only on probabilities and conditional probabilities, regardless of semantic meaning, truth value, and the like.
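Points 1–3 can be checked numerically. A minimal sketch (the joint distribution below is made up for illustration) of I(X;Y) = H(X) - H(X|Y): the calculation consumes only probabilities, so relabeling the outcomes as truths or lies would change nothing.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(v * math.log2(v) for v in p if v > 0)

# Hypothetical joint distribution P(X, Y) over two binary variables.
# Whether the signals are true or false plays no role in what follows.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginals P(X) and P(Y).
px = [sum(v for (x, y), v in joint.items() if x == xv) for xv in (0, 1)]
py = [sum(v for (x, y), v in joint.items() if y == yv) for yv in (0, 1)]

# H(X|Y) = sum over y of P(y) * H(X | Y = y).
h_x_given_y = 0.0
for yi, pyv in enumerate(py):
    cond = [joint[(xv, yi)] / pyv for xv in (0, 1)]
    h_x_given_y += pyv * entropy(cond)

# Mutual information: the reduction in entropy about X once Y is given.
mi = entropy(px) - h_x_given_y
assert 0.0 <= mi <= entropy(px)  # never exceeds H(X), never negative
```

Note that the mutual information here is strictly less than H(X) because the channel is noisy; it is not the entropy of either single variable (point 1).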

  10. Here’s the way I think about it. If an electronic measurement system detects the car moving at 20 mph, then that property is information for the electronic system (kinetic information). If you determine that your car is moving at 20 mph, then that property is information for you (manifest information). If the car moves and nothing detects it, then you might say that the car has potential information. The transition from potential information to kinetic or manifest information requires an information system (the dyadic relationship). Understanding the transition from kinetic information to manifest information in the human brain is the very hard problem that many of us are working on.

  11. Eric Thomson

    I think your ‘kinetic information’ is type 2 in the original post while ‘manifest’ information is type 3.

    (Though speedometers are interesting: they can be incorrect, so some would say they are type 3. But since thermostats and thermometers probably aren’t of type 3, we might need a fourth notion, intermediate between 2 and 3.)

  12. Eric, I think we may be talking past each other here.

    Maybe this will help.

    I think you and I agree on the following point: That a signal is a lie doesn’t preclude it from being chock full of Shannon information.

    If I were to put my initial point a different way it might be like this:

    The degree to which a signal is a lie is the degree to which it fails to carry natural information. So it makes sense to say of natural information that there can be no misinformation. With respect to nonnatural information, being a lie in no way precludes the carrying of nonnatural information. Thus it is possible to speak of misinformation with respect to nonnatural information. But, when it comes to Shannon information, since being a lie doesn’t preclude having information, it’s a bit odd to say that misinformation is impossible. (But it should of course be granted that there is no such concept as “misinformation” in S&W information theory.)

  13. Eric Thomson

    I was more trying to give a charitable reading and expansion of the original point. I think you are right, as I tried to say with my comment that “it is possible to send lies and information at the same time, Shannon only tells you how to measure the latter, is silent on the former.” Using your example, my headphone output carries a lot of information about what is on my iPod, even though everything on my iPod is lies (e.g., a set of quotes about Iraq from the Bush administration 2002-2003 is all that is on my iPod).

    Analogously, learning the mass of a particle doesn’t tell you whether it is negatively or positively charged. But to say ‘no charge is possible’ would be a mistake, and that’s sometimes what the ‘no misinformation is possible’ people seem to be saying. However, that isn’t what they are saying, since they are univocally using the word ‘information’ in Shannon’s sense, and ‘misinformation’ is not defined in his theory. Still, they should be more explicit and clear rather than provocative.

    On a side note, do people ever give probabilistic spins on ‘natural’ information? Smoke means(n) fire, but not with probability 1.0. It always really bugged me that Dretske requires a conditional probability of 1.0 for something to count as carrying semantic information about something else. It seems driven by philosophical background desires that uncharacteristically blinded his good sense.
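A probabilistic reading of natural information is easy to state: smoke carries information about fire when it raises fire’s probability, even if the conditional probability falls short of Dretske’s 1.0. A minimal sketch, with made-up observation counts:

```python
# Counts from a hypothetical observation log:
# (smoke state, fire state) -> number of observations
counts = {("smoke", "fire"): 90, ("smoke", "no_fire"): 10,
          ("no_smoke", "fire"): 5, ("no_smoke", "no_fire"): 895}
total = sum(counts.values())

# Prior probability of fire, and probability of fire given smoke.
p_fire = (counts[("smoke", "fire")] + counts[("no_smoke", "fire")]) / total
p_fire_given_smoke = counts[("smoke", "fire")] / (
    counts[("smoke", "fire")] + counts[("smoke", "no_fire")])

# Smoke raises the probability of fire without guaranteeing it:
assert p_fire_given_smoke > p_fire  # probabilistic indication
assert p_fire_given_smoke < 1.0    # short of Dretske's P = 1 requirement
```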

  14. I cannot contradict Piccinini; he is one of the most promising philosophers of computation since Copeland.
    But I see only two broad categories of information: semantic propositional information and Shannon-type information.
    There is always, though, the problem of Searlian genuine vs. derived intentionality, or of how to perceive information: is the waving of a tree information about the direction of the wind? I’m not so sure. It depends on the observer’s capability to account for information.

  15. Anibal, I’d agree that Searlian genuine/intrinsic intentionality is only in the eye of the observer, if the observer is Searle. So there is no reason to try to accommodate your information taxonomy to this kind of information, as Searle has no systematic criterion (apart from hand-waving) that distinguishes genuine from derived intentionality in the case of, say, insects and artificial Brooks-style insects. If you try to apply it to, say, a Roboroach, you’ll see that it simply collapses.

  16. Marcin, information alone cannot create minds.
    The information in a given signal is only apparent if there is a brain that creates it. Only when a signal reaches the higher areas of the cortex (multimodal association areas) is the signal conferred with meaning.

  17. Eric Thomson

    I agree with Dretske and Searle that the distinction between derived and “intrinsic” (not interpreter-dependent) intentionality is sound. If all interpreters were murdered, there would still be things out there representing and thinking about the world. Searle doesn’t have a good account of this; Dretske, in my opinion, has a very good start. But I don’t see how to avoid some such distinction unless the idea of a representational system is to become uninteresting (i.e., interpretivism).

Comments are closed.

Back to Top