The other day I was procrastinating on preparing for class, so I
went over to Chalmers’s MindPapers. It didn’t take all that long
before I did a search on “Aizawa” and found a few papers. I also
noted that most were not cited. That reminded me of a discussion not
too long ago in Science regarding
a kind of “impact” measure, namely, the h-index. A person’s
h-index is the largest number n such that n of his or her papers have each been cited at least n times. So,
Stephen Hawking has an h-index of around 150, if memory serves me
correctly. I don’t bring this to your attention to impress you with
my mighty h-index. It is impressive for how low it is, namely,
3. I mention it because it is the kind of topic, rating
philosophers, that is probably second in bloggability only to posts
on the job market. I, of course, think that the h-index is totally
worthless as a measure of anything, but what do I know? I only have
an h-index of 3.
Jeff Zacks and I did a project involving the h-index in psychology, applied to topics instead of individual authors. Along the way, we found that h varies widely among different academic disciplines, so it’s not very useful to compare the h values of individuals across such disciplines.
Our paper is here, and it includes a short bibliography that’s worth looking into (if you’re interested in issues involving quantifying researchers and their research).
https://www.psychologicalscience.org/observer/getArticle.cfm?id=2154
Corey
This is good. First, maybe the difference between my h-index of 3 and Hawking’s h-index of 150, or whatever, is just due to a difference in discipline. That would be good, if I cared about the h-index, which I don’t. Really. Second, maybe this is the kind of approach/study that would interest Leiter and his curiosity about what’s hot in epistemology now.
Not true! Your h-index appears to be at least 4. One’s h-index is the largest n such that one has n papers with at least n citations, and according to MindPapers, you have 4 papers with at least 4 citations (48, 34, 10, and 4 respectively).
Holy moly! My meteoric rise to stardom! I submitted some entries at the beginning of the week and lo and behold some folks had actually cited one of them.
I’m no programmer, but I would think that Bourget could probably add a little h-index calculator to MindPapers. It could complement the “Most cited” stuff. Fun, fun.
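For what it’s worth, the calculation itself is simple enough. A rough sketch in Python, using the four citation counts mentioned above (48, 34, 10, and 4) as a test case:

```python
def h_index(citations):
    # Largest n such that n papers have at least n citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# The four cited papers mentioned above: 48, 34, 10, and 4 citations.
print(h_index([48, 34, 10, 4]))  # prints 4
```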
Here’s a little trick to boost one’s h-index. Write a commentary in BBS and refer to one of your own papers. Then every commentator counts as citing your paper.
I do think the h-index is not a great metric in philosophy, in part because we don’t publish nearly as much as folks in the sciences. A good physicist or computer scientist, for example, will have 6-8 publications or more per year — partly because co-authorship is so much more common than in philosophy.
So philosophers’ h-indexes will stay low, because we don’t have hundreds of papers. For example, I’m guessing Kripke’s h-index would be about 15 or so, even though _Naming and Necessity_ has over 2000 citations in Google Scholar — and Stephen Hawking himself only has one publication with more citations than that. Kripke is of course in many ways an anomalous case, but the general point that philosophers tend not to have their names on as many articles as physicists (for whom the h-index was initially introduced) holds nonetheless.
I agree. The following scenario is fairly common in philosophy:
– Joe has about 15 papers published with a relatively high citation average of, say, 40.
– Bob has about 15 papers published with a significantly lower citation average of 15.
Joe and Bob would have the same h-index (15) even though the quality of their work would be judged significantly different (probably, Joe’s articles are in far better journals than Bob’s). To make a more extreme (but less likely) case, give Joe an average of 100, 1000, etc.
I fail to see what interesting factor the h-index measures. Average citations are interesting because they give you a sense, prior to reading a paper by a given author, of how likely it is that you will find yourself wanting to cite that article. Indirectly, they indicate the average worth of an author’s articles, which is what you need in order to put an expected utility on reading them. Total number of citations for an author is interesting because it gives you an indication of their overall contribution to the field–how much people benefited from reading their stuff. The median and standard deviation can be used to give you a sense of how the quality of an author’s work is distributed. But the h-index? It’s not clear what important variable this measures.
One reason for not publishing citation statistics for authors is that our citation numbers might not reflect actual citations well enough. The numbers are almost certainly significantly lower than actual citations, and the fact that they are from Google Scholar introduces potentially important biases (some people’s actual citations might be better represented than others’).
Even if we made it clear that the data are shaky, lots of people would probably take them far more seriously than they should. So I share your curiosity regarding these statistics, but we have to consider the significant negative impact that letting them out could have.
I agree with David’s comments about the possible dangers of relying on citation statistics. However, I think there is some value in exploring alternatives for quantifying the importance of research; we aren’t going to stop people from making quantitative analyses (it’s quite easy to do with online databases), so we should at least look into better, more nuanced statistics.
It seems to me that the h-index is an attempt to overcome the limits of some of the variables David mentions. Total and average citation counts fail to distinguish those authors who have consistently produced popular articles from those who have published one or two “hits”, plus a lot of mediocre work. And although examining multiple statistics, such as total, median, and standard deviation can be informative, it is also useful to try to capture all of this in a single variable.
That said, I think David is right with respect to philosophy: the h-index is probably more useful in fields where publication rate is much higher than in philosophy. In the example, if Joe and Bob continued to publish papers at the same level of “citability”, one would expect Joe to eventually have an h of 40, and Bob to retain the h of 15. Of course, many important philosophers have published very little.
I meant to say that Joe’s and Bob’s papers all have 40 and 15 cites each, respectively.
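To spell out the arithmetic under that clarification, here is a quick sketch in Python using the Joe and Bob figures from above; both come out at h = 15 even though the totals and averages differ considerably:

```python
from statistics import mean, median, pstdev

def h_index(citations):
    # Largest n such that n papers have at least n citations each.
    counts = sorted(citations, reverse=True)
    return max([i for i, c in enumerate(counts, start=1) if c >= i], default=0)

joe = [40] * 15  # 15 papers, 40 citations each (per the clarification above)
bob = [15] * 15  # 15 papers, 15 citations each

for name, papers in [("Joe", joe), ("Bob", bob)]:
    print(name, "h =", h_index(papers), "total =", sum(papers),
          "mean =", mean(papers), "median =", median(papers), "sd =", pstdev(papers))
# Joe and Bob both come out at h = 15, but Joe's total is 600 vs. Bob's 225.
```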
Alas, I find myself suffering from the fault that if you argue that p, then I feel some pressure to argue that not p.
As Corey notes, the h-index is meant to be a measure of scholarly productivity that discounts both “one hit wonders” and folks who pump out lots of papers that no one reads. Whether one does or should care about this “improved” rating is another matter.
But, up the ante. Imagine getting a MindPapers site for the whole of philosophy. That’s in the works, right? Then you could compute h-indices for all philosophers and even compute the h-index and average h-index for departments. Then you could have an h-index ranking of departments alongside the Leiter Report rankings. What, if anything, might that show? What would an agreement or disagreement between these rankings mean?
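As I said, I’m no programmer, but I’d guess the computation itself would be the easy part. Something along these lines, with entirely made-up numbers just for illustration:

```python
def h_index(citations):
    # Largest n such that n papers have at least n citations each.
    counts = sorted(citations, reverse=True)
    return max([i for i, c in enumerate(counts, start=1) if c >= i], default=0)

# Hypothetical department: member name -> citation counts for that member's papers.
department = {
    "Author A": [48, 34, 10, 4],
    "Author B": [40] * 15,
    "Author C": [2, 1, 0],
}

# Average of the members' individual h-indices.
individual_hs = {name: h_index(papers) for name, papers in department.items()}
average_h = sum(individual_hs.values()) / len(individual_hs)

# A department-wide h-index, treating all the members' papers as one pool.
pooled_h = h_index([c for papers in department.values() for c in papers])

print(individual_hs)        # {'Author A': 4, 'Author B': 15, 'Author C': 1}
print(round(average_h, 2))  # 6.67
print(pooled_h)             # 17
```

Whether the averaged or the pooled number would be the more telling statistic for a department is, of course, a further question.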
Yeah we’re working on a bigger site for the whole of philosophy. I should say it will not exactly be a MindPapers, though, because we will only have online material. That will limit what we can do with citation counts. For example, we will have almost no books.
Though this might not be feasible any time soon, I agree that it would at least be interesting to compare the gourmet rankings with “objective” measures like department-wide citation averages (or h-indexes). I suspect we would find discrepancies due to a number of factors, e.g. the fact that a department’s reputation often outlasts the good days, or the fact that many really good people don’t have that many publications.
And I guess that, in principle, there is no reason why such numbers have to be generated through your project. I would guess that a programmer could just develop something that would work with Google Scholar.
I guess I’ve heard that some departments have used the Leiter Report rankings to make a case for more resources. But suppose one had a physicist for a dean, or some other sort of dean for whom citation numbers mattered, but the Leiter Report not so much. An h-index type measure or ranking of departments might be appealing to some deans.
And, once you have the software for such rankings up and running, I imagine it would not be so hard to apply it to many more departments than are covered by the Leiter Reports. (I’m no programmer, of course.) Then you could rank many more departments. Maybe that would be useful for grad schools, since they would have some (additional) measure of how good an undergrad program some of the applicants have had. Maybe.
Just a side note about relying on Google Scholar: for proper names with non-English characters (like mine) the results differ when I use the Latin-only version of my name and the Polish version of it (different papers show up). Moreover, Google ignores draft papers that do not contain any names (i.e., the versions submitted for review).
I don’t need to mention that it ignores all the citations you can get in non-English journals, though it finds non-English papers.
Shows you how much I know about the resources and the technology. Even if the h-index were an idea whose time had come, it would be technically some time off.
Leiter is now taking up the theme of “impact factors” here. It’s not the h-index, but close enough maybe.
Here, Kvanvig is now looking at how the h-index and other measures might be used, not to rate individual academics, but to rate journals. Mind, Phil Rev, and JP have high rejection rates and high h-indices. For journals, the h-index seems to be a measure of how much the journal is read, but note that it does seem to favor interdisciplinary journals. Both biologists and philosophers, for example, might read Biology and Philosophy. Lots of folks apparently read Mind and Language. It’s kind of a surprise to me that Philosophy of Science is rated below the Southern Journal of Philosophy on h-index, g-index, and rejection rate. And it looks not to be merely a matter of a specialized journal losing out to a general journal, since Philosophy of Science is similarly rated below the British Journal for the Philosophy of Science. Maybe I need to adjust my priorities about where to send papers.
Er, Philosophy of Science is better than SJP on g-index. Still surprising it’s so close.
Google Scholar citation lists need to be verified and supplemented. They include things they should exclude and exclude things they should include.
They INCLUDE
a) Self-citations. An honest person would want to exclude these when compiling his or her citation statistics.
b) Citations in on-line Masters and PhD theses. I don’t think these count, or at least, they don’t count as much as citations in refereed journals. I keep track of my PhD and MA citations but list them separately.
c) Citations in blogs, in on-line but as yet unaccepted papers, etc. These don’t count.
d) Bibliographical listings, ‘books received’ (but not reviewed), advertisements for one journal that are reproduced in another. These are not genuine citations but are counted as such by Google.
e) Duplicated citations, as when a paper has been published in a journal but persists in a different version on the author’s website.
The Google Scholar citation lists usually EXCLUDE citations in books. When the books have been digitized (which often takes some time) citations can be accessed via Google Book Search, but at present the two services are not properly integrated.
It’s not that Google Scholar is BAD – it’s a lot better than many similar services. For years I relied on the Humanities citation index on the Web of Science, which missed more than half of my citations. But it has to be used CRITICALLY if you want reliable statistics, either for yourself or for public purposes.
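To put those caveats in concrete terms, here is a rough sketch (in Python) of the kind of filtering I have in mind before computing any statistics. The record fields are hypothetical, since in practice the tagging has to be done by hand:

```python
# Hypothetical, hand-tagged citation records; Google Scholar gives you nothing this clean.
raw_citations = [
    {"citing_author": "Smith", "source": "journal",    "duplicate": False},
    {"citing_author": "Me",    "source": "journal",    "duplicate": False},  # self-citation
    {"citing_author": "Jones", "source": "phd_thesis", "duplicate": False},
    {"citing_author": "Brown", "source": "blog",       "duplicate": False},
    {"citing_author": "Davis", "source": "journal",    "duplicate": True},   # same paper, two versions
]

def countable(record, my_name="Me"):
    # Keep only non-duplicate citations by other people in refereed venues (journals or books).
    return (record["citing_author"] != my_name
            and record["source"] in {"journal", "book"}
            and not record["duplicate"])

verified = [r for r in raw_citations if countable(r)]
print(len(raw_citations), "raw entries,", len(verified), "that should actually count")
# 5 raw entries, 1 that should actually count
```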