Welcome to the Brains Blog’s Symposium series on the Cognitive Science of Philosophy! The aim of the series is to examine the use of methods from the cognitive sciences to generate philosophical insight. Each symposium comprises two parts. In the target post, a practitioner describes their use of the method under discussion and explains why they find it philosophically fruitful. A commentator then responds to the target post and discusses the strengths and limitations of the method.
In this symposium, Eric Schwitzgebel discusses his advice about how to pursue a successful adversarial collaboration, with Justin Sytsma providing commentary.
* * * * * * * * * * * *
Adversarial Collaboration
Eric Schwitzgebel
_________
You believe P. Your opponent believes not-P. Each of you thinks that new empirical evidence, if collected in the right way, will support your view. Maybe you should collaborate? An adversary can keep you honest and help you see the gaps and biases in your arguments. Adversarial collaboration can also add credibility, since readers can’t as easily complain about experimenter bias. Plus, when the data land your way, your adversary can’t as easily say that the experiment was done wrong!
My own experience with adversarial collaboration has been mostly positive. From 2004 to 2011, I collaborated with Russ Hurlburt on experience sampling methods (he’s an advocate, I’m a skeptic). Since 2017, I’ve been collaborating with Brad Cokelet and Peter Singer on whether teaching meat ethics to university students influences their campus food purchases (they thought it would, while I was doubtful). The first collaboration culminated in a book with MIT Press and a double-issue symposium in the Journal of Consciousness Studies (Hurlburt & Schwitzgebel, 2007, 2011). The second has so far produced an article in Cognition (Schwitzgebel, Cokelet, & Singer, 2020) and hopefully more to come. Other work has been partly adversarial or conducted with researchers whose empirical guesses differed from mine (Schwitzgebel & Cushman, 2012, 2015; Moore & Schwitzgebel, 2018).
I’ve also had two adversarial collaborations fail – fortunately in the early stages. Both failed for the same reason: lack of well-defined common ground. Securing common ground is essential to publication and uniquely challenging in adversarial collaboration.
I have three main pieces of advice:
- Choose a partner who thrives on open dialogue.
- Define your methods early in the project, especially the means of collecting the crucial data.
- Segregate your empirical results from your theoretical conclusions.
To publish anything, you and your co-authors must speak as one. Without open dialogue, clearly defined methods, and segregation of results from theory, adversarial projects risk slipping into irreconcilable disagreement.
Open Dialogue
In what Jon Ellis and I have called open dialogue (Schwitzgebel & Ellis, 2017), you aim to present not just arguments in support of your position P but your real reasons for holding the view you hold, inviting scrutiny not only of P but also of the particular considerations you find convincing. You say “here’s why I think that P” with the goal of offering considerations C1, C2, and C3 in favor of P, where C1-3 (a.) epistemically support P and also (b.) causally sustain your opinion that P. Instead of having only one way to prove you wrong – showing that P is false or unsupported – your interlocutor now has three ways to prove you wrong. They can show P to be false or unsupported; they can show C1-3 to be false or unsupported; or they can show that C1-3 don’t in fact adequately support P. If they meet the challenge, your mind will change.
Contrast the lawyerly approach, the approach of someone who only aims to convince you or some other audience (or themselves, in post-hoc rationalization). The lawyerly interlocutor will normally offer reasons in favor of P, but if those reasons are defeated, that’s only a temporary inconvenience. They’ll just shift to a new set of reasons, if new reasons can be found. And in complicated matters of philosophy and human science, people can almost always find multiple reasons not to reject their pet ideas if they’re motivated enough. This can be frustrating for partners who had expected open dialogue! The lawyer’s position has, so to speak, secret layers of armor – new reasons they’ll suddenly devise if their first reasons are defeated. The open interlocutor, in contrast, aims to reveal exactly where the chinks in their armor are. They present their vulnerabilities: C1-3 are exactly the places to poke at if you want to win them over. Their opinion could shift, and such-and-such is what it would take.
In empirical adversarial collaboration, the most straightforward place to find common ground is in agreement that some C1 is a good test of P. You and your adversary both agree that if C1 proves to be empirically false, belief in P ought to be reduced or withdrawn, and if C1 proves to be empirically true, P is supported.
Without open dialogue, you cannot know where your adversary’s reasoning rests. You can’t rely on the common ground that C1 is a good test of P. You thought you were testing P by means of testing C1. You thought that if C1 failed, your adversary would withdraw their commitment to P and you could write that up as your mutual result. If your adversary instead shifts lawyerlike to a new C2, the common ground you thought you had, the theoretical core you thought you shared, has disappeared, and your project has surprisingly changed shape.
In one failed collaboration, I thought my adversary and I had agreed that such-and-such empirical evidence (from one of their earlier unpublished studies) wasn’t a good test of P, and so we began piloting alternative tests. However, they were secretly continuing to collect data on that earlier study. With the new data, their p value crossed .05, they got a quick journal acceptance – and voilà, they no longer felt that further evidence was necessary!
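A quick statistical aside: checking a p value repeatedly as new data arrive makes it much more likely to dip below .05 at some point even when there is no real effect. The simulation below is only a generic sketch with made-up parameters (group sizes, batch size, stopping rule), not a reconstruction of that study.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def optional_stopping_rate(n_start=20, n_max=200, step=10, alpha=0.05, n_sims=2000):
    """Fraction of null-effect simulations in which the p value ever dips
    below alpha when participants are added in batches and the test is
    re-run after each batch."""
    hits = 0
    for _ in range(n_sims):
        a = list(rng.normal(size=n_start))  # two groups drawn from the
        b = list(rng.normal(size=n_start))  # same distribution: no true effect
        n = n_start
        while n <= n_max:
            if ttest_ind(a, b).pvalue < alpha:
                hits += 1
                break
            a.extend(rng.normal(size=step))
            b.extend(rng.normal(size=step))
            n += step
    return hits / n_sims

print(optional_stopping_rate())  # typically well above the nominal 0.05
```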
Now of course we all believe things for multiple reasons. Sometimes when new evidence arrives we find that our confidence in P doesn’t shift as much as we thought it would. This can’t be entirely known in advance, and it would be foolish to be too rigid. Still, we all have the experience of collaborators and conversation partners who are more versus less open. Choose an open one.
Define Your Methods Early
If C1, then P; and if not-C1 then not-P. Let’s suppose that this is your common ground. One of you thinks that you’ll discover C1 and P will be supported; the other thinks that you’ll discover the falsity of C1 and P will be disconfirmed. Relatively early in your collaboration, you need to find a mutually agreeable C1 that is diagnostic of the truth of P. If you’re thinking C1 is the way to test P and C2 wouldn’t really show much, while your adversary thinks C2 is really more diagnostic, you won’t get far. It’s not enough to disagree about the truth of P while aiming in sincere fellowship to find a good empirical test. You must also agree on what a good test would be – ideally a test in which either a positive or a negative result would be interesting. An actual test you can actually run! The more detailed, concrete, and specific, the better. My other failed collaboration collapsed for this reason. Discussion slowly revealed that the general approach one of us preferred was never going to satisfy the other two.
If you’re unusually lucky, maybe you and your adversary can agree on an experimental design, run the experiment, and get clean, interpretable results that you both agree show that P. It worked, wow! Your adversary saw the evidence and changed their mind!
In reality of course, testing is messy, results are ambiguous, and after the fact you’ll both think of things you could have done better or alternative interpretations you’d previously disregarded – especially if the test doesn’t turn out as you expected. Thinking clearly in advance about concrete methods and how you and your adversary would interpret alternative results will help reduce, but probably won’t eliminate, this shifting.
Segregate Your Empirical Results from Your Theoretical Conclusions
If you and your adversary choose your methods early and favor an open rather than a lawyerly approach, you’ll hopefully find yourselves agreeing, after the data are collected, that the results do at least superficially tend to support (or undermine) P. One of you is presumably somewhat surprised.
Here’s my prediction: You’ll nevertheless still disagree about what exactly the research shows. How securely can you really conclude P? What alternative explanations remain open? What mechanism is most plausibly at work?
It’s fine to disagree here. Expect it! You entered with different understandings of the previous theoretical and empirical literature. You have different general perspectives, different senses of how considerations weigh against each other. Presumably that’s why you began as adversaries. That’s not all going to evaporate. My successful collaborations were successful in part, I think, because we were unsurprised by continuing disagreement and thus unfazed when it occurred, even though we were unable to predict in advance the precise shape of our evolving thoughts.
In write-up, you and your adversary will speak with one voice about motivations, methods, and results. But allow yourselves room to disagree in the conclusion. Every experiment in the human sciences admits of multiple interpretations. If you insist on complete theoretical agreement, your project might collapse at this last stage. For example, the partner who is surprised by the results might insist on more follow-up studies than is realistic before they are fully convinced.
Science is hard. Science with an adversary is doubly hard, since sufficient common ground can be difficult to find. However, if you and your partner engage in open dialogue, the common ground is less likely to suddenly shift away than if one or both of you pettifog. Early specification of methods helps solidify the ground before you invest too heavily in a project doomed by divergent empirical approaches. And allowing space at the end for alternative interpretations serves as a release valve, so you can complete the project despite continuing disagreement.
In a good adversarial collaboration, if you win you win. But if you lose, you also win. You’ve shown something new and (at least to you) surprising. Plus, you get to parade your virtuous susceptibility to empirical evidence by uttering those rare and awesome words, “I was wrong.”
* * * * * * * * * * * *
Commentary: Stacking the Cheese
Justin Sytsma
_________
Writing comments on a post about adversarial collaboration feels like a place where I should be adversarial (if in a collaborative spirit). But I agree with basically everything Eric says here. Frankly, this is all spot on. You probably don’t want to read 500 words from me just saying “yep, this” and agreeing with his excellent, sensible advice, though. So, let me attempt to be provocative: Eric doesn’t go far enough! (Not that he was trying to, of course.) All philosophers should be asking themselves what empirical evidence would actually test their views.[1] Collaboration should be the rule, not the exception. And we should expect collaborations to have an adversarial element, treating this as a feature, not a bug.
Each of us is like Swiss cheese: we’ve all got our holes. We’ve got biases and blind spots. There are things we miss, problems that just don’t occur to us, possibilities we don’t see. And, further, we’re generally bad at recognizing exactly where our holes are. If we want to get things right, we need to deal with our holes. Unfortunately, too often philosophical practice instead accentuates them, enshrining our prejudices (Machery 2017). In my opinion, what is needed is a fundamental shift toward methods that minimize our holes or otherwise cover them over.
Embracing empirical and experimental philosophy is one key step in this process. We should all be asking ourselves what actual evidence would support our views. (And if we can’t come up with anything that would do the trick, then maybe we need to pause and reflect on what that tells us about our views and why we hold them.) Good scientific practice helps deal with our holes. And collaboration is part of this. There is a reason why work in the experimental sciences tends to be heavily collaborative. As Eric says, it can help keep you honest.
The promise of collaboration is not that it will get rid of our holes, but that it will help cover them over… like stacking multiple slices of Swiss cheese on top of each other. Of course, this will only work if the holes are in different spots. And this is most likely if you’re taking slices from different places in the block. For this (increasingly tortured) metaphor, distance between slices corresponds with how adversarial the collaboration is. The further apart – the more adversarial – the more likely that your holes will fall in different places.
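To put toy numbers on the metaphor (the miss rate and overlap parameter below are invented purely for illustration): if each of two collaborators independently overlooks a given problem 30% of the time, it slips past both only about 9% of the time; the more their blind spots overlap, the less the stacking buys you.

```python
def p_both_miss(miss_rate=0.3, overlap=0.0):
    """Chance that a problem slips past both collaborators.

    miss_rate: probability each collaborator individually overlooks it.
    overlap:   0 = fully independent blind spots (slices cut far apart),
               1 = identical blind spots (adjacent slices).
    With probability `overlap` the second collaborator simply shares the
    first's blind spot; otherwise the two miss independently.
    """
    return overlap * miss_rate + (1 - overlap) * miss_rate ** 2

for overlap in (0.0, 0.5, 1.0):
    print(f"overlap={overlap:.1f}: P(both miss) = {p_both_miss(0.3, overlap):.3f}")
# overlap=0.0: 0.090, overlap=0.5: 0.195, overlap=1.0: 0.300
```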
Most of my own work is ~~collaborate~~ collaborative[2], some of it deserving of the label “adversarial” (e.g., Fischer, Engelhardt, and Sytsma forthcoming; Sytsma, Schwenkler, and Bishop ms). In my experience, though, all collaborative work will have an adversarial element: even with the philosophers I most agree with, I still disagree on a lot of details. There’s a tradeoff in how adversarial a collaboration is: as above, the more adversarial, the more likely you’ll cover up each other’s holes; but on the flip side, collaboration can be difficult, and in my experience the more adversarial the project, the more work it involves and the greater the risk that you won’t be able to come to a conclusion that all parties are willing to endorse.
In general, collaboration is hard work (although worth it). Anecdotally, it seems that many in philosophy harbor the feeling that collaboration is a cheat or a shortcut; that joint-authored papers shouldn’t count for as much as solo-authored papers in hiring or promotion because the work has been split up between multiple people. I believe this is a mistake. Certainly, it hasn’t been my experience. If anything, I think I end up spending more time on collaborative projects than on comparable solo papers. But I think it is worth the effort, and my work is better as a result of the collaboration.
Collaboration, especially adversarial collaboration, gives you built-in critics. It forces you to get clear on exactly what you’re arguing, increases the likelihood that you’ll identify and address questionable assumptions and live alternative possibilities, and as a result should increase your confidence in the finished product.
[1] Well, excepting formal work where proofs will replace empirical evidence.
[2] Collaborators also help catch typos before you submit something.
* * * * * * * * * * * *
References
_________
Fischer, E., P. Engelhardt, and J. Sytsma (forthcoming). “Inappropriate stereotypical inferences? An adversarial collaboration in experimental ordinary language philosophy.” Synthese.
Hurlburt, Russell T., and Eric Schwitzgebel (2007). Describing inner experience? Proponent meets skeptic. MIT Press.
Hurlburt, Russell T., and Eric Schwitzgebel (2011). Describing inner experience? Symposium, ed. Josh Weisberg. Journal of Consciousness Studies, issues 1-2, pp. 7-305.
Machery, E. (2017). Philosophy Within Its Proper Bounds. Oxford University Press.
Schwitzgebel, Eric, Bradford Cokelet, and Peter Singer (2020). Do ethics classes influence student behavior? Case study: Teaching the ethics of eating meat. Cognition, 203, 104397.
Schwitzgebel, Eric, and Fiery Cushman (2012). Expertise in moral reasoning? Order effects on moral judgment in professional philosophers and non-philosophers. Mind & Language, 27, 135-153.
Schwitzgebel, Eric, and Fiery Cushman (2015). Philosophers’ biased judgments persist despite training, expertise and reflection. Cognition, 141, 127-137.
Schwitzgebel, Eric, and Jonathan E. Ellis (2017). Rationalization in moral and philosophical thought. In J.-F. Bonnefon and B. Tremoliere, eds., Moral inferences. Psychology Press.
Schwitzgebel, Eric, and Alan Tonnies Moore (2018). The experience of reading. Consciousness and Cognition, 62, 57-68.
Sytsma, J., J. Schwenkler, and R. Bishop (ms). “Has the Side-Effect Effect been cancelled? (No, not yet.)”
* * * * * * * * * * * *
Fantastic contributions, Eric and Justin!
While Justin takes Eric’s claims further, I wonder if we could also benefit from a softer proposal: perhaps agnostic arbitrations, rather than adversarial collaborations, should be the norm. By agnostic arbitration, I mean that those with empirical training could partner with philosophers who lack such training to test some of their views. The empirically trained arbitrator should be agnostic in the sense that they have no horse in the race; they may not even have a consistent intuition about the topic or case. That way, the person who will likely do most of the design, implementation, and analysis of the experiment(s) is not predisposed toward any particular conclusion. So agnostic arbitrators are a bit like the quantitative social psychologists who are often recruited by other social psychologists to ensure the rigor of the design, implementation, and analysis in their papers, grant proposals, and so on.
Many experimental philosophers will fit the criteria for agnostic arbitration, especially when investigating cases or topics outside their areas of expertise. Moreover, many philosophers lack the empirical training to do rigorous experimental philosophy. So there seem to be many opportunities for a kind of collaboration that is not necessarily adversarial but offers many of the same benefits that Eric and Justin seek in this post (e.g., testing philosophers’ claims and views empirically). (Indeed, agnostic arbitration could be a great job description for many postdocs in philosophy.)
In my limited experience as an agnostic arbitrator, I have found the arrangement to offer real benefits.
So perhaps agnostic arbitration should be added to our best practices for collaborations in philosophy. Or maybe I am overlooking something that more experienced experimental philosophers can tell us about.