A New Trend in Philosophy Journals’ Editorial Process?

Philosophy journals are notoriously slow in processing submissions.  They often take a year or so to decide whether to accept a paper, and when they reject, they often give no useful feedback to the author.  At the same time, editors complain about the increasing number of submissions – most of which will be rejected – which place an increasing burden on the editors and referees.

One solution is the Analysis model:  the editor rejects most papers directly – deciding very quickly but giving no feedback to authors – and sends to referees only the relatively few papers that the journal might be interested in publishing.

Is there a trend towards the Analysis model? 

That is, are more high-quality philosophy journals (such as Mind and Language, Synthese, Canadian JP) rejecting more papers quickly – within a few weeks of submission – on the basis of a fast vetting by the editor(s)?  Anecdotal evidence suggests that several journals are moving in this direction.  If so, this might be a good development:  although some good papers are likely to be rejected on spurious grounds (the topic is not trendy, the author is not famous enough, or what have you – but that happens with referees too!), at least authors will know relatively quickly, instead of wasting a year (plus or minus a few months) every time.

Does anyone else have evidence or comments on this?

Update [2/2/09]:  Thanks to those who have posted the very informative and interesting comments.

21 Comments

  1. Dale

    I have evidence of this happening (both to me and to others with whom I’m familiar) at the Journal of Political Philosophy, Philosophical Quarterly, Economics and Philosophy, and Ethics.

    I think there are very good reasons for this practice, but also a number of good reasons against. First, reasons are rarely given for these sorts of rejections. In the end, though it’s generally not much wasted time, it still amounts to wasted time. Second, because of this I can’t imagine that it would be good for the journal: if I know that a journal definitely sends things out to reviewers, and that more often than not comments are returned, I’m more likely to send my best papers there. I suppose this heightens the workload of those editors and journals – but these things all need to be traded off, I think.

    Furthermore, unless the editor sees it blind, which is rare I would imagine, this subverts the point of blind submission, which I think is quite important, if sometimes imperfect.

  2. Small Liberal Arts Prof

    I have mixed feelings about this development. In the case of Analysis, my – admittedly totally anecdotal – impression is that the editors automatically reject papers from institutions that they haven’t heard of, e.g., from small liberal arts colleges in the United States. (When was the last time you saw a paper in Analysis authored by an assistant prof from an outstanding small liberal arts college?)

    On the other hand, my experience with Synthese and CJP, to cite two examples, has been outstanding — this is not to say they always accept my work, but I do manage to clear the first round and receive excellent comments.

    The fear is that a pedigree bias might creep in at the level of the initial vetting — something I fear is already occurring at Analysis, though not at Synthese or CJP.

    Again, this comment is totally based on anecdotal evidence; take it for what you will.

  3. A “friend” of mine had a (revolutionary!) paper in metaphysics rejected by Nous within a month, with no reviewer comments. This friend guesses that this process reflected an Analysis model. (The editor specifically mentioned the tradeoff between time to decision and presence of reviewer comments.)

  4. Peter Vallentyne

    I just stepped down as Contact Editor for Economics and Philosophy, and we were definitely trying to reject quickly (without refereeing) as many papers as we could with confidence. I strongly believe that this is a desirable way to do things: it protects referees and provides fast decisions for papers without promise. Of course, an occasional mistake is probably made, but it is well worth the cost.

    In general, I was probably rejecting about 30% of the papers. I did this on the basis of: inappropriate topic for journal, extremely poor writing style, obviously unsophisticated content, and insufficient promise for papers in one of my areas of competence.

    Peter

  5. Gualtiero,

    This is a very interesting question. I agree that anecdotal evidence suggests that more and more top and mid-tier journals are adopting the prompt-decision, no-comments model, but I don’t think this necessarily means that they are adopting the “Analysis” model. Rather, I think many journals are no longer requiring referees to submit detailed comments for rejected papers.
    Even if I understand where editors are coming from, and this makes my life easier when I referee, I am afraid it makes the refereeing process even less transparent and fair. I think it is only fair to give authors some reasons for rejecting their papers, even if the process has taken “only” two months.
    Moreover, having to write some comments for the author is an incentive for referees to be more conscientious and clear about their reasons for rejecting a paper.

  6. Doug Portmore

    Whether it would be good or not depends, I think, on whether the initial vetting by the editor is blind or not. If the editor knows the name and institutional affiliation of the author when doing the initial vetting, then I suspect that some authors will be more likely to pass the initial vetting merely because of their names and/or institutional affiliations. And I suspect that some will be less likely to pass the initial vetting merely because of their names and/or institutional affiliations.

  7. ezio

    I had the Analysis-kind of experience with Analysis, Mind & Language and, more recently, APQ. In all three cases it was made obvious in the rejection email that the submission had been turned down by the editor before being sent out to referees.

  8. This is what they do in all the big science journals, which have a zillion submissions every month. Good or not, it’s gotta happen in a field with lots of submissions. The wait times of over a year are simply unacceptable. No scientist would submit to such a journal – for fear of being scooped, among other reasons.

  9. There are intermediate possibilities. At least some journals use two-tiered reviewing: reviewers are asked to report rather quickly whether the paper is worth a close look; if it is, then they’re given more time to prepare comments. If it’s not, it will be rejected without comment rather promptly. I know that Nous does this at least sometimes.

    This, it should be made clear, is NOT the Analysis model. The manuscripts go to blind reviewers.

  10. It’s worth bearing in mind that editors have a massive input anyway, even if they send out the paper for reviews first. If the editors are not sympathetic to a paper, then unless the paper gets a glowing review with no reservations (which rarely happens), the paper is unlikely to go anywhere. So at least in principle, it’s nice to know the inclination of the editors at the start. If they really hate it, or don’t see it fitting with the journal’s profile, I’d rather know that from the start. However, any judgement should be based on the content of the manuscript, not on being pals with the editor, so as Doug says, there is an argument for making a first-pass editorial filter blind.

  11. Larry McCullough

    In-house editorial review is commonplace at medical journals, where philosophers regularly publish in bioethics. Turnaround for initial review is usually four weeks. One is either rejected or informed that one’s paper has been sent for review. A significant advantage: one can start with the best journals and learn quickly whether one’s paper gets past internal review. If not, on to the next journal on one’s list. BTW: this policy is labor-intensive for editors, and medical journals usually have several.

  12. I have been editor of the Journal of the History of Philosophy since 2003. I can report that during my tenure the Journal has substantially increased the number of submissions it rejects without sending to referees. (During the same time we have cut our acceptance rate roughly in half, from around 10% to around 5%). Last year we rejected over 60% of our submissions without external review. The average time for the review of those submissions that we did send to referees that year was 66 days.

  13. PPR also uses the Nous model that Jonathan mentioned (unsurprisingly) — in fact, in this model a referee can do a quick review without committing to the full review. (That is, they can take two weeks to decide whether the paper should be sent to someone else, if they can’t do the full review.) This may get the quick rejections done quicker by making it easier to find a referee; I know that I’ve sometimes done a quick review when I wouldn’t have been able to commit to a full review.

  14. John Lamont

    A question for Tad Schmalz – no doubt your halving your acceptance rate was due to an approximate doubling of submissions, rather than to a halving of the number of papers you publish. Did the rise in the number of submissions go along with a decline in their average quality, or did the quality stay roughly the same? If there was a decline in the average quality, that would seem to provide some justification for rejecting a higher proportion of papers without subjecting them to external review – there would be a higher proportion of papers whose unpublishability was immediately obvious.

  15. Over the past few years the number of submissions has remained fairly steady—around 200 per year. We needed to cut our acceptance rate because JHP used to have over a three year backlog of accepted articles. Currently our backlog is about a year. I would say that the overall quality of submissions has remained roughly the same; we have simply raised the bar for sending out submissions for review and for acceptance.

  16. M.M.

    I have mixed feelings about this. I have had papers rejected from JHP without going out for external review, and I felt compromised by this. I took it that the journal editor would not know my identity, and so I was surprised to receive a rejection (within a week or so of submission) from him directly. I would feel differently about this practice if practices of blind review were followed more closely.

    I’ve recently had a rejection on the PPR model that Matt Wiener noted above. It wasn’t so bad. They were quick, and I have every reason to believe that practices of blind review were followed, as the message was passed on to me from a managing editor or editorial assistant. I would rather get a quick answer and move on than have it go out for review when it is just not the sort of thing that the editor is going to accept in the end.

    That said, I do appreciate getting comments, and I will continue to send my work to journals that provide good comments in a timely fashion (CJP and Phil Imprint come to mind).

Comments are closed.