This is part of the symposium on socially extended knowledge
Replies to Commentaries
By Duncan Pritchard
I am very grateful to Mirko Farina, Orestis Palermos and Mark Sprevak for their insightful commentaries on my paper. I here give a short response to each, to promote further discussion.
Farina usefully offers the case study of software engineering to illustrate the three-tiered taxonomy I set out regarding types of social cognition/knowledge. In particular, he argues that such a case study can lend support to the idea that often what we are dealing with in cases of highly integrated collaborative inquiries is merely socially extended knowledge and not (as I call it) distributed knowledge. He also points out, however, that depending on how one understands the details of the case, it could go the other way and lend support to distributed knowledge. I welcome the consideration of concrete cases like this. As I say in my paper, I am not opposed to the idea that there genuinely is distributed knowledge. My point is rather that once the three-tiered conception of social cognition/knowledge is in place we need to be wary about being too quick to treat highly integrated collaborative inquiries as resulting in distributed knowledge rather than a weaker kind of social cognition/knowledge, such as socially extended knowledge. The details of the case are thus all-important.
This issue also crops up with regard to Palermos’s contribution, as he too questions whether the details of a particular case study—in this case the highly collaborative scientific enterprise as described by Knorr-Cetina (and which I discuss in my own paper)—can only be understood such that, as Palermos puts it, the “scientific cognitive successes—produced by highly distributed scientific research teams such as those within HEP experiments—are attributable to group-level processes and agents”. As I note in the paper (and Palermos acknowledges in his reply), I’m not claiming that distributed scientific knowledge is impossible, so it could well be correct that this is an instance of it. My point is just that once we have the three-tiered distinction between socially facilitated, socially extended, and distributed scientific knowledge in play, it becomes clear that most cases of apparently distributed scientific knowledge can be accommodated by appealing only to the weaker notions of social scientific knowledge.
Palermos argues, however, that one consideration regarding the case as described by Knorr-Cetina that might make us specifically opt for distributed scientific knowledge is that when the “collaborations fail, attributions of individual responsibility should not exhaust our inquiry into the collaboration’s shortcomings.” Palermos’s thought is that focusing on individual responsibility for cognitive failure, rather than on the collective responsibility, would blind us to the structural shortcomings of the wider cognitive system.
While I agree that we should be alert to both structural and individual responsibility for cognitive failure in a social scientific system, I don’t believe that this in itself motivates the thought that distributed scientific knowledge is on display. After all, in cases of socially extended knowledge we also have a socially integrated cognitive system, and hence we could quite properly look for structural failures when a cognitive failure arises within such a system. For example, perhaps there is something about the institutional setting in which the scientific collaboration is taking place that accords some agents more (or less) epistemic credibility than they warrant. In short, that there might be structural failures in an integrated cognitive system needn’t entail that the system is a distributed cognitive system in the sense that I identify (i.e., such that it gives rise to distributed knowledge that is non-reducibly attributable to a group subject rather than to individual subjects).
Sprevak’s commentary also focusses on the case of cognitive failure and the lines of epistemic responsibility, though his concern is instead with my treatment of Wray’s objection to socially extended cognition. Sprevak gives a case of socially extended knowledge whereby an individual (Head) who leads a research team effectively employs other members of that team (such as RA) as proper parts of his extended cognitive processes. When things go well, Head thus gains socially extended knowledge, where this (I suggest) is individual knowledge. I argue that when things go wrong in this collaborative enterprise, patterns of individual epistemic responsibility for the error emerge, thereby indicating that there were lines of individual attribution present all along, even if they were obscured by the day-to-day practice. For example, if it turns out that there is cognitive failure in this collaborative enterprise due to RA falsifying results, then we will identify RA as a culprit.
Sprevak’s puzzle about this case, however, is why RA is held responsible for this cognitive failure, rather than Head. More specifically, if the RA is really a proper part of Head’s cognitive processes that lead to his socially extended (but individual) knowledge, such that the credit for the cognitive success flows naturally to his cognitive agency, then why doesn’t the blame for the cognitive failure flow to Head’s cognitive agency too? I want to make two points in response to this objection.
The first is to note that some of the epistemic responsibility for the cognitive failure may well lie with Head—I wasn’t intending to suggest otherwise. Just as Head’s use of a scientific instrument might be sloppy and inattentive, such that the resulting cognitive failure is partly his fault, so Head’s epistemic interactions with the RA in this collaborative scientific endeavor might be epistemically sub-optimal and so deserving of epistemic censure. Moreover, insofar as Head is partially culpable for this epistemic debacle, it follows that RA isn’t solely epistemically responsible for the cognitive failure in play.
The bigger point, however, is that Sprevak misunderstands what socially extended knowledge in this case amounts to. He clearly supposes that since the cognitive success is attributed to Head’s socially extended agency, such that he is credited with socially extended knowledge, it is therefore attributed only to his cognitive agency. But this need not be the case. Indeed, I take it that what will usually happen in collaborative scientific endeavors of this kind is that several members of the team will end up gaining socially extended knowledge of the target cognitive successes. Knowledge acquisition is not a zero-sum game, after all—every individual in the team can in principle gain the knowledge on offer. Once one understands this, it becomes clear why the flow of epistemic credit need not be solely directed towards Head. Notice too that this point remains even if, as it happens, Head is the only socially extended knower in the research team. For even if none of the other participants in that team clear the threshold for socially extended knowledge, this doesn’t mean that they are excluded from the web of epistemic attribution for the target cognitive successes. Each of them will be playing an explanatory role in the cognitive success of the group, and so attributions of epistemic credit will be appropriate even if they do not rise to the level of socially extended knowledge.
With this in mind, when things go wrong, as in the case of the RA fabricating results, these lines of epistemic attribution will flow in reverse to highlight attributions of epistemic blame. Head might be deserving of some of this epistemic blame, as we’ve noted. But if the lion’s share of the explanatory burden for this cognitive failing lies with the contribution made by RA, then it is to be expected that the lion’s share of the epistemic blame will fall on him too. This is entirely compatible with the thought that, when things were going well, Head was acquiring socially extended knowledge from this collaborative endeavor (perhaps the only person in the team to do so).
Duncan Pritchard
University of California, Irvine
Thanks for the reply, Duncan! I just wanted to come back with a few words to clarify what I had in mind. Just to be clear, I am not suggesting that Head’s knowledge should only be attributed to his (extended) cognitive agency. Head gains knowledge through a process that includes some of the goings-on inside RA (what RA does with their head, hands, and eyes). I would definitely accept that RA can gain knowledge in virtue of those activities too. Knowledge acquisition does not need to be a zero-sum game with Head as the only winner.
My point is rather that, on the socially extended view, because Head’s knowledge depends on RA in a very specific way, Head is always on the hook for responsibility and credit. Whatever knowledge RA does or does not gain, Head’s (extended) cognitive agency is always also being exercised at the same time. If the socially extended hypothesis is correct, the goings-on inside RA’s head do double duty: they are instances not just of RA’s cognitive agency, but also of Head’s cognitive agency.
If the extended cognitive process goes bad, the fault might lie variously inside Head, inside RA, or in the interaction between them. If it arises inside Head, then one would expect, as in a case of non-extended knowledge, the blame to lie with Head and to have little to do with RA. But if the fault lies with the RA-based portion of the extended process, or in the interaction between RA and Head, then something has still gone wrong with Head’s cognitive agency. For on the socially extended view, these are all parts of Head’s cognitive agency.
In this respect, there seems to be a relevant disanalogy between testimony and socially extended knowledge. A consumer of false testimony may themself be blameless — they may have been fed false information and have justifiably believed it after all reasonable checks had been performed. In the equivalent case of socially extended knowledge where RA is falsifying data, it is hard to see how Head could be blameless for consuming that false information — for it is their (extended) cognitive process that is doing the poisoning. They are not merely a passive consumer of rotten information; it is their extended cognitive process that has gone rotten. That rotten element might also underlie cognitive processes in another agent (RA), and their knowledge claims may thereby be questionable too, but it is hard to see how this would get Head off the hook.
The puzzle I had in mind is why we would ever exclusively blame RA. But this is typically what happens (RA takes the hit and Head gets off scot-free), and you suggest that it is also compatible with socially extended knowledge. I am not sure I see this. I find it hard to imagine a scenario in which (a) Head’s knowledge acquisition literally includes some goings-on inside RA’s head (e.g. RA’s data-handling processes); (b) those goings-on go systematically bad; and (c) Head walks away without being responsible. Maybe there are purely internal cases that would make this scenario more plausible though. Can one fail to be responsible for one’s own cognitive agency going bad?
Thanks for the reply to the reply, Mark! I think everything probably hinges on your last remark about whether one can fail to be responsible for one’s own cognitive agency going bad. I’m inclined to think that this is possible. Think about a case where a subject’s memory starts to malfunction (due to an undiagnosed brain lesion, say), albeit in ways that are undetectable from the subject’s point of view. If we know that this is the reason why the subject is led into cognitive error, then I don’t think we would blame the subject but rather the malfunction, even though their memory is clearly an integrated part of their cognitive processes. The same goes for cases of technologically extended cognition. If the technology is malfunctioning in ways that are undetectable to the agent (something has affected its calibration, say) and this leads to cognitive error, then I think the blame would naturally flow to the technology rather than the subject. Of course, it’s important that this is undetectable by the agent, just as in the memorial case. It can’t, for example, be a malfunction that the agent ought to have spotted (in a suitably robust epistemic sense of ‘ought’). If the foregoing is right, then there shouldn’t be anything problematic about there being analogous cases when it comes to socially extended cognition. In particular, the following should be possible: (i) Head is normally a socially extended knower, and (ii) RA’s information-processing is effectively a component of Head’s socially extended cognitive processes, but (iii) cognitive error results as a consequence of RA’s actions (fabrication of results, say) in ways that are undetectable (in the strong sense just noted) to Head, and hence (iv) RA is blamed for the cognitive failure rather than Head.
Thanks, Duncan! That makes sense, and I can see how in the internal memorial case you suggest it might work. I’m wondering, however, if questions of epistemic responsibility might have a slightly different cast in the setting of scientific knowledge acquisition. A lot of work is being done by the ‘undetectable by the agent’ condition. I’m not sure I fully understand the modal strength of this condition. On a weak reading (perhaps suitable for everyday knowledge claims), it might be something like ‘unnoticed by the agent’ or ‘not detected by their standard epistemic routines’. On a strong reading (one perhaps more suitable for defending knowledge claims in science), I think it might be something closer to ‘impossible for the agent to detect’.
Consider the purely internal case you describe. Suppose that some of Head’s internal memory or data-handling processes fail due to some undiagnosed lesion in his brain. Say that this leads Head to publish, or attempt to publish, faulty results. Head’s knowledge acquisition processes have gone bad, in a way that he has not noticed and that isn’t easily detected by him through introspection or cursory checks. The fault is, however, still detectable by him — both in principle and in practical terms. If he had been properly examined and tested, the brain lesion and cognitive deficit would have been revealed. Alternatively, if he had given more careful scrutiny to his claimed results, or checked the results via multiple independent routes, that may well have revealed that they were faulty or unjustified.
I tend to agree that if we were evaluating epistemic responsibility in the context of everyday knowledge, we would let Head off the hook — if he were to claim ‘There is milk in the fridge’ not because he saw it, but because of brain misfiring caused by the lesion, he isn’t likely to be held responsible for the mistaken claim. However, the epistemic standards seem substantially higher in the scientific knowledge context. Head could have detected that his cognitive agency had gone bad — if he’d been careful enough or thorough enough. I suspect some measure of blame would be allocated to Head: he should have been more careful, conducted more checks, etc. before making his claim. He is responsible for the reliable functioning of his cognitive agency, and for rooting out any faults.
If one takes this (admittedly rather perfectionist!) view about the internal case, then it’s hard to see how the same won’t apply to the extended case too.
One more thought, perhaps to help make more credible the claim that higher (perfectionist?) standards apply in the scientific case. In instances where scientific results are published that are faulty due to an unnoticed bug or issue in the software (e.g. in Excel) or hardware (e.g. in Intel chips — the FDIV bug), there is never any suggestion that this gets the relevant scientific researchers off the hook. Rather, they are held fully responsible for their use of a faulty data analysis process — they should have checked the results more carefully, used independent routes to verify them, guarded more carefully against error, etc.
Thanks for the reply to my reply, Mark! I entirely agree with you that what is involved in the ‘undetectable’ requirement can vary from case to case, and can be more demanding in the scientific case. Moreover, notice that I was already construing this requirement quite strongly even in the non-scientific case. As I put it in my reply, it’s not just that the agent is unaware of the relevant consideration; it further matters whether this is something that “the agent ought to have spotted (in a suitably robust epistemic sense of ‘ought’).” What one ought to have spotted can vary from case to case, however. This fits in with the account of ignorance that I have offered elsewhere. I claim that there is more to ignorance than simply lacking knowledge; it is rather lacking knowledge that one ought to have had. Crucially, however, what one ought to know can vary depending on such factors as one’s professional responsibilities. This is why a scientist could count as ignorant for failing to know something that a layperson could fail to know without being ignorant. With this in mind, I think it’s quite plausible that we would impose more demanding normative requirements on scientists when it comes to detecting cognitive failure than we would on laypeople—e.g., they are expected to be extra vigilant, make additional checks, and so on. Accordingly, there will be cases where RA’s fabrication of results would still result in epistemic blame falling on Head, even if Head couldn’t have easily spotted such a fabrication. Even so, however, all that matters for my response is that there could be cases where Head doesn’t get the blame for the cognitive failure, and that still seems to hold. In particular, it would hold in cases where Head met the relevant normative requirements (he was suitably vigilant and so forth) but the cognitive error resulted regardless.
The only way in which a case of this kind might be prevented by the reasoning that you set out is if there is some kind of ‘strict liability’ rule that operates in the scientific case, on the model we find in certain legal scenarios (whereby one is obliged to accept liability even in cases where there is no negligence). (I think this model might be suggested by your ‘perfectionist’ remark in your second comment.) I’m sceptical that anything so austere would be operative in the scientific case, as opposed to there simply being high standards of vigilance imposed, but if such a rule is in play then obviously that would entail that Head should get the blame, in the relevant sense, regardless of the circumstances. Notice, however, that this conclusion would now seem rather orthogonal to our concerns, as this is not meant to be capturing our intuitive judgements about cognitive responsibility (any more than legal judgements about strict liability capture our intuitive verdicts about legal responsibility), but would rather simply be reflecting a curious feature of scientific practice. That is, I would still contend (just as one would, mutatis mutandis, in the legal case) that there can be cases where Head is not in fact cognitively responsible for the cognitive error, even if in a technical sense he will be treated as such professionally.
Thanks, Duncan. That makes sense and I think we are very close to agreeing about the case.
I’m not sure if I’d entirely accept the ‘strict liability’ characterisation as there are motivations in the legal context for introducing strict liability that I don’t think transfer over to the scientific one (e.g. pragmatic reasons to simplify legal disputes about who pays whom when things go wrong). I’d prefer to characterise the scientific case as one in which the epistemic standards are just much higher and there isn’t really an excuse for exercising faulty cognitive agency in obtaining results (or maybe very few excuses).
On the orthogonality, I think that we both agree that the right outcome is that in some cases RA should be held responsible, in some cases Head should be held responsible, and in other cases the responsibility should be distributed between the two. What my original comment suggested was that there is likely to be a degree of divergence between what a theory of extended knowledge says about how responsibility should be allocated in these cases and our actual practice and intuitions.
I wanted to know your view about this. If the extended knowledge theory is true, are there then new reasons for revising our existing scientific practice and our intuitions (in my view, we should start holding Head responsible in many more cases than we currently do)? Do you see your position as having practical implications for scientific practice, or does it leave everything more or less as it is?
That’s a helpful way of clarifying what is at issue here, Mark. Insofar as we focus our attention on the specific issue of responsibility for cognitive failure, it may well be that treating scientific knowledge as socially extended doesn’t have revisionary implications. Indeed, one of the things I was trying to do in the paper was show how a commitment to socially extended knowledge, including in the scientific case, is far less revisionary to our ordinary ascriptions of cognitive responsibility than a commitment to (what I was calling) distributed knowledge (i.e., where the knowledge is attributable, in a non-reducible way, to a group agent). One of the reasons why this is so is that when things go awry we can usually identify clear lines of responsibility for the cognitive failure—socially extended knowledge is compatible with this kind of straightforward allocation of individual cognitive responsibility in a way that distributed knowledge isn’t. (Indeed, if there is distributed scientific cognition, then it would be mysterious if we were able to identify individual lines of cognitive responsibility when things go awry.) Moreover, as I pointed out in the paper, there might be good reasons why scientific practice is structured so as to ensure that individual lines of cognitive responsibility can usually be identified in these cases of cognitive failure, even when the cognitive success would be naturally understood as socially extended cognition (precisely because scientific practice has a more defined epistemic structure than other kinds of practices). This relates to the wider goal of the paper, which is to show that we should be wary of moving too quickly from the fact that scientific inquiry is highly collaborative in nature to the conclusion that scientific knowledge is distributed knowledge (rather than merely socially facilitated knowledge or socially extended knowledge).
Thanks, Duncan. I completely agree that saying that scientific knowledge is extended knowledge is far less revisionary than saying that it is distributed knowledge. I also take the point about lines of individual responsibility being identifiable, which would be a bit puzzling on a distributed knowledge picture.
I’m not sure if I’d be on board with saying that extended knowledge is entirely non-revisionary to current scientific practice though. I suspect that there is likely to be some degree of mismatch with existing practice (e.g. we tend to go easier on Head than we should for what are failures in his cognitive processes, failures for which, had they occurred in a purely internal case, we would have blamed him). Maybe some revisionary implications are not a bad thing though — those existing practices are not sacrosanct and they may be mistaken or incoherent in various ways. In certain respects, it would be surprising if the actual way in which epistemic responsibility tends to get allocated in science matched precisely how it should be allocated according to extended knowledge theory.
Thanks, Mark. Just one final point of clarification. In your last response you characterized my view as being that “extended knowledge is entirely non-revisionary to current scientific practice”. Note that this is a much stronger way of putting it than what I said. My point was that treating scientific knowledge as extended knowledge can be compatible with non-revisionary interpretations of what’s going on in actual scientific practice, but that doesn’t mean that it’s always non-revisionary in its implications—indeed, I would expect it to lead to some kind of revisionism (albeit far, far less than would be required by treating scientific knowledge as distributed knowledge in the sense that I specified).
Thanks, Duncan. That makes a lot of sense and helps me understand the paper better. Your claim is that the view is compatible with a non-revisionary interpretation of scientific practice, but not committed to it. Thank you!