Below is a tentative and rough taxonomy of notions of information relevant to psychology, neuroscience, and computer science, and specifically, to whether computation is information processing.
1. Shannon information. The notion defined by Shannon's communication theory. The more unlikely an event is relative to its alternatives, the more Shannon information it carries. This notion may be used to quantify the amount of information (of any type) carried by a signal. As Dretske puts it, it's like measuring the size of a bucket: it won't tell you what's in the bucket, but it will put an upper bound on how much there can be. Information in this sense says nothing about whether an event has meaning or semantic content, and there can be no misinformation. It is used by neuroscientists (e.g., Dayan and Abbott 2001, chap. 4) to measure the quantity of information carried by neural signals about a stimulus and to estimate the efficiency of neural coding (i.e., which forms of neural response are optimal for carrying information about stimuli).
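The quantitative core of this first notion can be made concrete. A minimal sketch in Python (function names are mine, not standard terminology): the self-information, or surprisal, of an event with probability p is -log2(p) bits, and the entropy of a source is the average surprisal over its alternatives. Note how the numbers track unlikeliness, not meaning:

```python
import math

def self_information(p):
    """Shannon self-information (surprisal) of an event
    with probability p, measured in bits."""
    return -math.log2(p)

def entropy(dist):
    """Average Shannon information (entropy) of a discrete
    probability distribution, in bits per event."""
    return sum(p * self_information(p) for p in dist if p > 0)

# A rarer event carries more Shannon information:
print(self_information(0.5))    # 1.0 bit
print(self_information(0.125))  # 3.0 bits

# A fair coin is maximally uncertain; a biased coin carries
# less information per toss, regardless of what the tosses "mean":
print(entropy([0.5, 0.5]))  # 1.0 bit
print(entropy([0.9, 0.1]))  # roughly 0.47 bits
```

This illustrates Dretske's bucket point: the numbers bound how much a signal can carry, while saying nothing about what, if anything, it is about.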
2. Natural semantic information, i.e., what Dretske's indicators and Peirce's indices indicate, i.e., Grice's natural meaning, i.e., what neuroscientists' detectors detect. Roughly, it is what a variable reliably correlates with. It is a kind of semantic content (in Dretske's sense), but it differs from meaning in the ordinary sense, and misinformation is still impossible. This is the notion used by Lettvin et al. (1959), followed by generations of neuroscientists to this day, to analyze neural processes. It is also the notion of information that Dretske (1981) analyzed. It can be used (together with the notion of function) to define a notion of representation, which makes misrepresentation possible.
3. Nonnatural semantic information, i.e., Grice's nonnatural meaning, i.e., (for language and linguistic concepts) conventional meaning, i.e., what Peirce's symbols carry. This is what concepts and words presumably carry, what many psychologists appeal to when talking about information processing, and surely what is often meant when talking about information processing in computers. Misinformation, bad information, and false information are all possible.
Does this taxonomy sound reasonable? What am I missing?
NB: I am not particularly interested in other technical notions of information (besides Shannon's), such as Fisher information and algorithmic (Kolmogorov) information. They don't seem especially relevant to my concerns.