Symposium on Martin & Le Corre, “Sensory Substitution Is Substitution”

I am glad to kick off our latest Mind & Language symposium on Jean-Rémy Martin and François Le Corre's "Sensory Substitution Is Substitution," from the journal's April 2015 issue, with commentaries by Kevin Connolly (Mellon Postdoctoral Fellow, Penn), Ophelia Deroy (Centre for the Study of the Senses/Institute of Philosophy, University of London), Julian Kiverstein (Amsterdam), and Michael Proulx (Bath).

Sensory substitution devices (SSDs) convert images from a video camera into patterns of vibrotactile stimulation (White et al. 1970; Bach-y-Rita 1972, 2004), electrotactile stimulation (Sampaio et al. 2001; Ptito et al. 2005; Chebat et al. 2011), or auditory stimulation (Meijer 1992; Capelle et al. 1998; Renier et al. 2005; Amedi et al. 2007; Kim & Zatorre 2008) that visually impaired individuals can use to perform tasks ordinarily guided by non-prosthetic vision, including object recognition and navigation. In addition to these abilities, SSD-users often report a dramatic experiential shift, sometimes referred to as "distal attribution" (Loomis 1982; Epstein 1986): they come to experience the stimulations produced by the SSD as transparently revealing the spatial layout of objects in distal, 3D space. Bach-y-Rita et al., discussing their experiments on visual-to-tactile sensory substitution, write: "Our subjects spontaneously report the external localization of stimuli in that sensory information seems to come from in front of the camera, rather than from the vibrotactors on their back" (1969, p. 964; quoted by Martin & Le Corre 2015, p. 210).
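
To give a concrete sense of how such a conversion works, here is a minimal Python sketch of a visual-to-auditory mapping in the spirit of Meijer's (1992) vOICe, which scans an image column by column from left to right, mapping vertical position to pitch and brightness to loudness. The function name and all parameter values below are illustrative assumptions for exposition, not the actual device's implementation.

```python
import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=22050,
                        f_min=500.0, f_max=5000.0):
    """Convert a grayscale image (rows x cols, brightness in [0, 1]) into
    a vOICe-style soundscape: columns are scanned left to right over
    `duration` seconds; each row contributes a sine tone whose frequency
    rises with elevation in the image and whose amplitude tracks pixel
    brightness. All parameter values are illustrative assumptions."""
    n_rows, n_cols = image.shape
    samples_per_col = int(duration * sample_rate / n_cols)
    t = np.arange(samples_per_col) / sample_rate
    freqs = np.linspace(f_max, f_min, n_rows)       # top rows -> higher pitch
    tones = np.sin(2 * np.pi * np.outer(freqs, t))  # one sine per image row
    chunks = [(image[:, col, None] * tones).sum(axis=0)  # brightness = loudness
              for col in range(n_cols)]
    signal = np.concatenate(chunks)
    return signal / (np.abs(signal).max() + 1e-9)   # normalize to [-1, 1]

# Example: a bright diagonal line becomes a tone sweeping downward in pitch
# over the one-second scan.
img = np.zeros((64, 64))
np.fill_diagonal(img, 1.0)
waveform = image_to_soundscape(img)
```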

Martin and Le Corre distinguish between two main approaches to understanding the representational end-product(s) of training with an SSD. According to what they call the "Deference View," SSDs that substitute for vision give rise in trained users to modally visual perceptual states (e.g., Bach-y-Rita et al. 1969; Hurley & Noë 2003; Noë 2004; O'Regan & Noë 2001; Renier et al. 2006). By contrast, according to the "Dominance View," these devices give rise to perceptual states that belong to the substituting modality, e.g., touch in the case of a visual-to-tactile SSD.

Martin and Le Corre propose an alternative that they refer to as the “Metamodal Spatial View.” Like proponents of Deference and Dominance, they accept that the substituting modality plays a causal role in producing the experiential shift characteristic of distal attribution. However, they argue, contrary to both views, that the resulting phenomenology belongs neither to the substituted (deference) nor to the substituting (dominance) modality.

There is a large body of evidence that many processing areas in the brain perform metamodal rather than modality-specific perceptual functions; that is, they perform the same types of information-processing tasks in response to sensory information received from different peripheral input systems (Pascual-Leone & Hamilton 2001; Merabet et al. 2007). Increasingly, researchers have defended the view that the brain, in general, is metamodal. On this view, as Michael Proulx writes in his commentary, "the brain is not a sensory-organized information processing device, but a task-based computational device."

Martin and Le Corre argue that just as certain brain areas can be properly characterized as metamodal, certain kinds of information can be characterized as metamodal as well. Metamodal information, as they put it, "is a kind of information that is potentially accessible by any modality." Information about distal 3D layout, in particular, is not, strictly speaking, visual information, on their view, but rather metamodal information. Hence, the fact that trained subjects experientially access such spatial information, Martin and Le Corre maintain, doesn't favor the Deference View. On the other hand, they argue, the phenomenology of distal attribution speaks against the Dominance View: "if SSD-users after the experiential shift have distal experiences and do not feel the substituting stimulations anymore, then Dominance seems to be in a delicate position. It is indeed difficult to claim that tactile stimulations, in the case of a visuo-tactile device, are projected in the external space" (p. 211). This leaves the Metamodal Spatial View, which Martin and Le Corre defend on phenomenological, psychological, and neuroscientific grounds.

(I might note that Martin and Le Corre's use of the terms "deference" and "dominance" in their article departs somewhat from that of Susan Hurley and Alva Noë in their influential "Neural Plasticity and Consciousness" [Biology & Philosophy 18: 131–168, 2003]. Hurley and Noë use these terms to distinguish between cases in which the qualitative character of a subject's experience in SSD-perception is determined by the area of cortex that is activated by input from the SSD [this is what they call "cortical dominance"] and cases in which it is determined instead by the source of input to that cortical area [this is what they call "cortical deference"]. According to Hurley and Noë, in visual-to-tactile sensory substitution, activity in somatosensory cortex takes its qualitative expression from the character of the "externally rerouted" visual input that it receives. Hence, visual-to-tactile sensory substitution, as they see it, is a case of somatosensory cortical deference. By contrast, if occipital processing areas are activated by inputs from SSDs, as suggested by current research [for a review, see section 3.2 of the target article], and visual phenomenology results, then this would be a case of cortical dominance as Hurley and Noë use the term.)

Comments on this post will be open for at least a couple of weeks. Many thanks to Jean-Rémy, François, Julian, Kevin, Michael, and Ophelia for all their hard work. I'm also very grateful to Sam Guttenplan, the other Mind & Language editors, and the staff at Wiley-Blackwell (especially Imogen Clarke) for their continued support of these symposia!

You can learn more about Jean-Rémy and François here and here.

Below are links to the target article, the commentaries, and Jean-Rémy and François' replies.

Target Article: Martin and Le Corre, "Sensory Substitution Is Substitution"

Commentaries:

Jean-Rémy Martin and François Le Corre’s replies:

Cover image: Pietro Paolini's "Allegory of the Five Senses", via Wikimedia Commons

4 Comments

  1. Very interesting stuff; this comes up fairly frequently in discussions over beer in my lab. I build sensory substitution systems for rats, projecting information about their infrared (IR) environment directly to S1 via microstimulators. Similar systems that also bypass the native set of sensory transducers have been used in people (e.g., a magnetic sense transmitted via vibrotactile inputs: https://www.ncbi.nlm.nih.gov/pubmed/16317228). Note that in our animals it is arbitrary that we use IR light: it could have been radioactive decay or anything we had the wherewithal to miniaturize a detector for, and the animals would learn to discriminate its intensity.

    What would advocates of the deference view say to the projection of qualitatively new types of information into the brain? There is nothing to defer to! It seems that the only viable options in these cases are either the dominance view or something else (perhaps the metamodal view, or perhaps the subjects simply experience a new modality altogether that depends on the type of information received: this seems unlikely, frankly, given that it doesn't matter whether we give an animal information about IR levels, radioactivity, or the number of left-handed people in the room).

    Potentially relevant is the work of Sur and others: data from ferrets with A1 rewired to receive visual information suggest they are having visual rather than auditory percepts (https://www.nature.com/nature/journal/v404/n6780/full/404871a0.html). This is somewhat hard to interpret in the present context, though, given what they had to do (very early in development) to get the ferrets rewired that way. My work is in adult rats, so it is more relevant to this discussion, I think.

    (I discussed my work here when it came out: https://philosophyofbrains.com/2013/02/13/i-can-feel-the-light.aspx). I discuss the issue of "What do the rats actually experience?" a little bit, but with much less nuance than you philosophers are sure to bring to it.

    • François Le Corre

      Dear Eric Thomson,

      Thank you for your post! Very exciting research you are doing! We agree that a deferentialist who wanted to argue that rats undergo a visual experience would have trouble explaining the experience of distality, given that (i) the information being transduced is arbitrary and (ii) only S1 computes the information.

      We also agree that it is unlikely that the rats acquire a new modality, given that their experience of distality can be had through sense modalities the rats already possess, such as olfaction.

      What about the metamodal spatial view and the dominance view? On our metamodal spatial view, distal experiences are not sensory experiences. There is no such thing as a sense of space, or so we argue. So we would be willing to say that the rats acquire spatial distal experiences associated with or caused by tactile sensations in the background. The main difference from a dominance view is that we are not committed to showing that tactile experiences can be distal (as much as proximal) or that distal experiences may count as tokens of tactile experiences.

  2. Hi! Thanks for a very interesting paper, Jean-Rémy and François.
    I have a question which does not (directly) concern the view you are defending (that the distal experience in SSD use is metamodal), but is simply an attempt to understand what is supposed to be going on in the case where the brain region usually thought to be responsible for calculating visual-tactile spatial information (LOtv) is instead calculating auditory spatial information via an SSD. Now it seems clear that the brain mechanism in question is not *meant* (i.e., designed by natural selection) to process auditory information. Assuming (as seems natural to me, but I may be way off) that the representational "format" (the way information is encoded, the "language," so to speak) used by the auditory perceptual system is different from the one used by the visual perceptual system, it is difficult for me to grasp how the brain mechanism in question (LOtv) can *understand* the information coming from the SSD. The brain mechanism is expecting (is designed to process) information in a certain format, but is receiving information in a different format. So how is "communication" between the SSD and the brain mechanism in question possible?
    This may be simply a case of ignorance on my part about the way the brain works. So I thank you in advance for clearing up this issue for me.
    Perhaps the answer is that during a learning phase, the subject learns to associate certain auditory information with visual or tactile spatial information. This creates the required "translation" between the "languages." But if this is the idea, then it seems that LOtv is not multimodal (analogy: if one associates the ringing of a bell with the taste of chocolate, then one will form a desire to eat chocolate upon hearing the bell, but this does not make the desire-forming mechanism that processes taste multimodal). This is why I assumed learned associations of this sort are not involved.

    Thanks again!

  3. François Le Corre

    Dear Assaf Weksler,
    Thank you for your post!
    There is no easy way to answer your question, "How is 'communication' between the SSD and the brain mechanism in question possible?" In fact, if one substitutes the expression 'SSD' in your question with the expression 'senses', one ends up with a traditional issue for neurophysiologists that is still waiting for an answer: given that the information being computed in the brain fundamentally takes the form of action potentials, how does our brain make us know, so to speak, which sense is at work? We are not neuroscientists, and we do not want to pretend to know *how* the brain does what it does. In our paper we propose to follow Pascual-Leone and Hamilton's (2001) attempt to reconceptualise the function of areas usually referred to as sensory areas. They argue that such brain areas do not specifically process sensory data, but compute specific kinds of information transmitted by the senses. As Pascual-Leone and Hamilton (2001) claim: '[I]n this view, the "visual cortex" is only "visual" because we have sight and because the assigned computation of the striate cortex is best accomplished using retinal, visual information' (p. 428). Within this framework, the computations performed by LOtv may primarily involve brain areas associated with vision and touch because these computations are best accomplished using information typically conveyed through these modalities, but it doesn't follow that LOtv should count as a multimodal sensory area. In our paper, we say that LOtv should instead be treated as a metamodal area, to borrow the terminology of Pascual-Leone and Hamilton (2001) (see section 3.2 of our paper and our reply to Deroy for the distinction we make between 'multimodal' and 'metamodal'). To borrow your terminology, we would say that SSDs show us that the senses can share a similar representational 'format', in that all the senses used as vehicles of information by SSDs (audition and touch, for instance) are associated with a "spatial signature" that can be shared by all the senses (however characterized).
    François and Jean-Rémy

