EPISTEMIC NORMS AND RATIONALITY

The featured image is a painting by Ulysses Belz, entitled “Guided Tour Cézanne” (2013).

Let me start by warmly thanking John Schwenkler for inviting me to participate in this great blog. Thanks also to those who responded with great comments. On this occasion, I will try to respond to the initial question raised by Santiago Arango about the distinction between instrumental and epistemic agency, an important issue that should be a central part of any dialectic of noetic feelings. Reflecting on this distinction may help us understand the strange fact, mentioned in Post 4, that socially fluent ways of reducing uncertainty seem frequently to override people’s efforts to extract truth and meaning.

As indicated in the first post, a philosophical motivation for studying metacognition is to clarify the relations between instrumental and epistemic agency. It may be conceptually useful to emphasize what agency has to do with control. When acting on the world, an agent needs to select an appropriate type of action, know whether she can perform it in this particular circumstance, attend to how the action develops, and check whether the desired outcome occurs. When acting mentally, similar functional steps are structurally required. For example, when trying to solve a problem, you need (a) to evaluate first whether the problem is worth solving; (b) to determine whether it is solvable in the present circumstances; (c) to monitor the time and effort worth spending on it, possibly applying a stopping rule if no progress toward a solution is in sight; and finally (d) to check whether the outcome is likely to be correct.
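(To make the control structure of these four steps vivid, here is a minimal sketch in Python. It is only an illustration of the loop just described, not an implementation from the metacognition literature; every function passed in (worth_solving, is_solvable, try_step, progress, check_outcome) is a hypothetical placeholder to be supplied by the caller.)

```python
# Minimal sketch of steps (a)-(d); all predicates are hypothetical placeholders.

def attempt_to_solve(problem, budget, worth_solving, is_solvable,
                     try_step, progress, check_outcome):
    # (a) Is the problem worth solving at all?
    if not worth_solving(problem):
        return None
    # (b) Is it solvable in the present circumstances?
    if not is_solvable(problem):
        return None
    candidate, effort = None, 0
    # (c) Monitor time and effort spent on the attempt.
    while effort < budget:
        candidate = try_step(problem, candidate)
        effort += 1
        if not progress(problem, candidate):
            break  # stopping rule: no progress toward a solution in sight
    # (d) Check whether the outcome is likely to be correct.
    if candidate is not None and check_outcome(problem, candidate):
        return candidate
    return None
```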

These conditions manifest the interplay between instrumental and epistemic considerations. Steps (a) and (c) are driven by practical reasons. Utility, in particular, determines the type of cognitive action you need to perform at any given time. For example, you may try to solve the problem yourself, identify potential helpers, or find relevant information on the internet. Determining which action to perform is inevitably subject to rational constraints as to which kind of information best corresponds to your present needs and resources. For example, considerations of reliability and trustworthiness may leave you with only one of the three options above. The rational constraints also include the time and resources available for the individual’s action, as well as salient local habits and patterns of epistemic practice. (As briefly discussed in the preceding post, conforming to them is an instrumental goal in itself.) Current research suggests that children acquire a repertoire of mental actions through conversational and collaborative interactions with older children and parents. For example, attempts to explain observed events, or to inform others accurately, can be encouraged, discouraged, or subjected to prudential rules.

The other two steps are purely epistemic: present utility does not drive your ability to solve the problem (although it can raise your motivation to do so). Nor does it determine whether a solution arrived at is correct. Epistemic conditions of correctness are exclusively determined by the informational goal pursued. For example, trying to perceive a visual item, trying to visualize it, and trying to remember its name have different conditions of correctness.

With this distinction in mind, it is intriguing that agents sometimes substitute one epistemic action for another, believing, for example, that they are reporting what they saw, while in fact reporting what they believe others want to hear they saw. This case is exemplified in the classic Asch conformity experiment: eight participants, seven of them confederates, are invited to express a perceptual judgment about the length of a reference line as compared to three segments. The naïve subject is always the last to give his or her own perceptual appraisal. The confederates, however, always unanimously give an answer that, from the third trial on, is incorrect in the majority of cases. Surprisingly, naïve participants conform their judgment to the confederates’ previous answers in 38% of the cases, going against their own perceptual judgment. In this case, presumably, a social norm of conformity for a socially acceptable report (like the equality bias discussed in Post 4) overrules the norm of validity that is constitutive of a perceptual judgment.

Another interesting case of confusion about the type of mental action relevant to a given situation is the substitution of a collectively driven epistemic practice for an individual action, as exemplified by viewers of art exhibits. Scrutinizing what an individual canvas presents (brush strokes, color patterns, light contrasts, etc.) requires demanding cognitive work from laypersons, most likely associated with puzzlement, a sense of being lost, and, finally, the impression of being a poor viewer. Listening to an audio-guide, in contrast, offers visitors only congruent feedback (“this is what needs to be looked at”) and a higher sense of competence in viewing art. Ulysses Belz’s “Guided Tour Cézanne” (at the top of this post), however, vividly exposes how headphones instead disable viewers’ own sensitivity to paintings.

What these examples suggest is that several motives compete for the control of our attention. Social cues may entice us away from individually learning how to perceive; they may prevent us from discovering for ourselves the best way of extracting information. Adjudicating between several ways of acting cognitively, and resisting misleading attractors, however, are (again) a matter of repertoire, education, time and resources.

5 Comments

  1. Hi Joelle,

    Thanks so much for this insightful series of posts!

    I was hoping you could help me understand one of the points you were making with the Asch experiment. You write that sometimes agents believe “that they report what they saw, while reporting what they believe others want to hear they saw.” And you take the Asch participants to exemplify this case. Now it seems clear that many participants are, as you say, reporting what they believe others want to hear. However, it does not seem to me (I’m going off of the summary you link to) that they believe that they are reporting what they saw. Instead, the summary claims that “most of them said that they did not really believe their conforming answers”.

    So, my question is: in what sense do you take the Asch experiment to exemplify a case where agents believe that they report what they have seen? Am I misunderstanding the point that is supposed to be exemplified, or the relevant results of the Asch experiment that are meant to exemplify it?

    • Joelle Proust

      Dear Robert,
      Thank you both for your nice appraisal and for your accurate critical remark. You are quite right: Asch does not explain his own data as I propose, namely as a case where subjects are changing their epistemic actions (and the associated correctness conditions) from a perceptual report to a collective perceptual decision. There was a missing argument in my post, which your question allows me to articulate. Asch acknowledged that some of his participants later recognized that they really did believe that the group’s answers were correct. Further research has tried to explore this possibility. From a theoretical viewpoint, Asch’s results are compatible with three different hypotheses: subjects’ responses might result from (a) a deliberate decision to conform, with no specific influence on their perceptual decision; (b) a bias of perceptual input toward the majority responses (“wishful seeing”); or (c) a higher response criterion for the supportive information needed to counteract majority responses. Asch favored (a). However, Germar et al. (2014), using an Asch-like paradigm, have offered evidence that social influence effects are primarily due to a perceptual bias, (b), rather than to a judgmental bias. The majority response, regardless of whether it is correct or incorrect, influences subjects’ perceptual uptake. In addition, a study performed in our group by Eskenazi et al. (2016) has shown that confidence in one’s own perception is also affected by implicit social signals of agreement or disagreement by peers, even in the absence of any normative pressure. This non-verbal and implicit social information has been shown to affect confidence even when the social cues are presented as uninformative and irrelevant to the task at hand.
      With this addition, I hope that my argument makes more sense: the studies generated by the Asch paradigm suggest that agents may report what they perceived, and yet be influenced in their input selection by top-down social effects, as well as in their felt confidence in the validity of their perception, without realizing that they have been so influenced. They may be motivated to conform in perceiving (it is also known that people have a different sensitivity to distance for desired objects; Dunning & Balcetis, 2013), or they may have an epistemic motivation to form consensual acceptances, with the sense that consensus tends to be right. In both cases, cognitive actions respond to additional norms that are usually not consciously identified, and hence differ from what the agents think they are.
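      (To make the contrast between hypotheses (b) and (c) above concrete, here is a toy signal-detection sketch in Python. It is only my own illustration, not a model taken from Germar et al. or from our own study; every name and number in it is a hypothetical placeholder. Hypothesis (b) biases the perceptual evidence itself toward the majority, while (c) leaves the evidence intact but raises the criterion that contrary evidence must clear.)

      ```python
      # Toy signal-detection sketch; all names and numbers are hypothetical.
      import random

      def perceived_evidence(true_signal, majority_pull, noise_sd=1.0):
          # (b) "wishful seeing": the perceptual input itself is pulled
          # toward the majority's answer.
          return true_signal + majority_pull + random.gauss(0.0, noise_sd)

      def respond(evidence, criterion):
          # (c) a raised criterion: evidence must clear a higher bar before
          # the subject answers against the group.
          return "against majority" if evidence > criterion else "with majority"

      signal = 1.0  # the stimulus really does differ from the majority's answer
      # Two distinct routes to the same conforming response:
      print(respond(perceived_evidence(signal, majority_pull=-1.5), criterion=0.5))
      print(respond(perceived_evidence(signal, majority_pull=0.0), criterion=2.5))
      ```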

      • Terry Eskenazi

        Dear Robert and Joelle,

        This is only meant to be a small contribution to your discussion. The theory of social influence has distinguished two different motivations that underlie conformity behavior (Deutsch & Gerard, 1955). One is the normative motivation, the need to appear as part of a group. The other is the epistemic motivation, which captures our heuristic tendency to make use of information already acquired by others in pursuit of reliable decisions. Others are rich sources of information, and it is often, though not always, cost-effective to utilize them. This distinction is also referred to as public vs. private acceptance.

        Robert was right to point out that in the specific case of the Asch experiments, the participants, driven by normative motivation, displayed only so-called public acceptance and did not revise their perceptual decisions. However, as Joelle explains, there are other cases in which social input can indeed alter evidence accumulation, as demonstrated by Germar and colleagues (2014). In the empirical work we have conducted, we showed that humans’ particular sensitivity to gaze cues, which are known to be very potent, can be utilized as relevant information, which in turn affects decision-making. It must be noted, however, that it is very difficult to demonstrate experimentally which motivation (normative or epistemic) underpins an alteration in behavior as a result of social input. I would recommend the works of Morgan, Laland and Rendell for in-depth insights into the matter (e.g. Rendell et al., 2004, 2010; Morgan et al., 2012).

  2. I’ve very much enjoyed your pieces as well, and I thank you for your penetrating responses and questions. Let me end with a short, eliminativistic rendition of the problem as you pose it above, if only to show the potential parsimony of the approach.

    I frequently misplace my shoes. So when I go to pick up my daughter from the bus stop, my shoes are missing, and I do a circuit of the house, find them, then go to the bus stop. Since I generally do this without thinking about it, while reading texts on my phone, for instance, it certainly doesn’t seem like I’m engaging in any of the steps you mentioned. Since I need only solve the problem *once* to lay down an ‘active inference routine,’ say, it need not be the case that I do any of these steps subpersonally either.

    So your four stages seem to pertain to a very narrow class of problems, namely, novel problems, those requiring deliberative cognition to solve.

    To even cognize a problem as such, some kind of sensitivity to ability is required, so I’m not sure of the distinction between a) and b). This sensitivity is likely very basic. ‘Cognizing ability’ could be as simple as the *problems themselves* exhibiting patterns that automatically cue confidence, given past learning. (This could be why, as is often the case in philosophy, problems themselves can be problems.) The upshot is that even in instances of deliberative problem-solving, nothing so extravagant as ‘evaluation’ is required. The system, given its history, performs what appear to be complex evaluations, but the cognitive work is distributed across that history (personal and evolutionary).

    And it is precisely this ‘cognitive distribution’ that perhaps explains why individuals are so disposed to endorse otherwise untrue social determinations. In normal circumstances, interpersonally farming out cognitive load is the most cost-effective option. Perhaps a sensitivity to ‘majorities’ is all that’s required. Cognition need only recognize that pattern, and it can avoid any kind of balancing of considerations, which is to say, anything ‘rational.’

    • Joelle Proust

      Dear Scott,
      Thanks for this clearly formulated objection, based on an “eliminativistic rendition” of the problem I’m raising. It really helps me to understand the view about metacognition you expressed in your prior comments. I fully agree that there is an important contrast to be emphasized between habitual and strategic forms of action. They seem to be subserved by different systems, respectively called “model-free” and “model-based” by learning theorists. We definitely differ, however, in our appreciation of why and when information processing is required. From the viewpoint of cognitive science, a habitual system also crunches information. Each system, on my view, relies on a specific type of information when guiding action (see my 2014 “Time and action: Impulsivity, habit, strategy?”).
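      (For readers unfamiliar with this distinction, here is a schematic, textbook-style caricature of the two systems in Python, using your shoe-search as the example. It is a simplified sketch of the learning-theory contrast, not a formal model from the literature or from my own work; all names and numbers are hypothetical.)

      ```python
      # Schematic contrast; hypothetical names and numbers throughout.

      # Model-free ("habitual"): cache a value per action and update it from
      # reward alone, with no representation of how the world works.
      q_values = {"search_hallway": 0.0, "search_bedroom": 0.0}

      def model_free_update(action, reward, learning_rate=0.1):
          # Nudge the cached value toward the obtained reward.
          q_values[action] += learning_rate * (reward - q_values[action])

      def model_free_choice():
          return max(q_values, key=q_values.get)

      # Model-based ("strategic"): keep an explicit model of outcomes and
      # plan over it at decision time instead of relying on cached values.
      outcome_model = {"search_hallway": 0.8, "search_bedroom": 0.3}

      def model_based_choice():
          return max(outcome_model, key=outcome_model.get)

      model_free_update("search_hallway", reward=1.0)  # one successful habit trial
      print(model_free_choice(), model_based_choice())
      ```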
      You object to this kind of analysis of habitual actions that you need only have laid down, once and for all, an ‘active inference routine’, e.g., for seeking your shoes.
      This looks far from sufficient, however. Even if you are not conscious of this activity, it cannot be carried out without (subpersonal and/or explicit) types of information processing, such as selecting the proper compiled routine, adjusting its time of execution, etc. Your brain also needs to monitor the routine perception-action sequences in executive memory, and stop searching when and only when your search is successful. You may not find this an interesting thing to do, and indeed there is little novelty attached to it. But the environment is slightly changed. Your search cannot be handled by something like a mere spinal cord reflex. You need to model your ways of acting, even in these routine circumstances.

      In summary: our debate seems to stumble on our different appreciations of whether, and in which cases, information processing is needed to guide behavior. To me, “evaluation” is not an extravagant requirement. Evaluation is instantiated in very basic bodily regulations, such as the sense of hunger or of uncomfortable temperature.
      Now, can’t ‘cognizing ability’ be “as simple as the *problems themselves* exhibiting patterns that automatically cue confidence, given past learning”, as you suggest? Yes, in a sense. But the devil hides in the details. For there is no such thing as “the problems themselves”, independently of the embodied patterns of information that have been extracted and stored to deal with them, including information distributed across a number of evolutionary, developmental and social dispositions. Similarly, confidence in one’s own cognitive ability (in a given task) can only be calibrated when executive and monitoring capacities are present and unimpaired; if novel feedback becomes unreliable, so does confidence. Clearly, then, confidence is an information-based process.
      Wittgenstein’s question, “What is left if I subtract the fact that my arm goes up from the fact that I raise my arm?”, was intended to be rhetorical: the implication was that there was nothing left to be discovered; raising the problem, rather, was supposed to be “the” problem. Notwithstanding Wittgenstein’s verdict, raising it has allowed a wealth of results to be generated over the last twenty years (see, for example, the research by Marc Jeannerod and Patrick Haggard, along with its huge philosophical impact). This might give you pause in diagnosing an absence of problems in the area of epistemic agency. Questions such as “What is left if I subtract the fact that I display search behavior when I search?” may be very important to address, after all.

