The Deciders and Philosophy Rankings

Many people regularly visit this blog to see what’s being said about philosophy rankings, namely the infamous Leiter report. Some say that if a philosophy Ph.D. program isn’t “Leiterrific” — if it doesn’t score well on the Leiter report — then it’s objectively not a terrific program. I beg to differ. The best evidence against the report’s credibility is the narrowness of the specialties of those who conduct the rankings. The evaluators simply do not represent the profession as a whole, and so they can hardly speak for what’s best in the profession as a whole. Look here to see the list of evaluators who filled out the reputational rankings for Leiter’s 2006-2008 report. It looks like fully half the rankers specialize in metaphysics and epistemology (M&E). Ninety percent of the rankers are male. And few, if any, work in continental philosophy, pragmatist theory, feminist philosophy, or Africana philosophy. For the most part, the rankers come from the very same set of schools that are ranked at the top. True, they’re not allowed to rank their own institutions or alma maters, but they certainly pick their kissing cousins. Calling all you philosophers of science out there: does this look like good methodology? Doesn’t it assume what it’s supposedly trying to prove?

The day that deans and provosts stop taking the Leiter report seriously is the day that I’ll stop writing about it.

By Noelle McAfee

I am professor of philosophy at Emory University and editor of the Kettering Review. My latest book, Fear of Breakdown: Politics and Psychoanalysis, explores what is behind the upsurge of virulent nationalism and intransigent politics across the world today. My other writings include Democracy and the Political Unconscious; Habermas, Kristeva, and Citizenship; Julia Kristeva; and numerous articles and book chapters. Edited volumes include Standing with the Public: the Humanities and Democratic Practice and a special issue of the philosophy journal Hypatia on feminist engagements in democratic theory. I am also the author of the entry on feminist political philosophy in the online Stanford Encyclopedia of Philosophy and well into my next book project on democratic public life.

15 comments

  1. On the off chance that you are interested in facts, here are some:

    1. “M&E” is a broad label covering not only metaphysics and epistemology, but also philosophy of language, philosophy of mind, and philosophical logic. 30% of those who participated in the overall evaluations have a specialty exclusively in one of the M&E areas, which arguably underrepresents that group relative to its representation in the profession. Another 22% have M&E as one area of specialization, along with other areas of philosophy, most often some part of value theory or the history of philosophy.

    2. About 6% of the respondents work in Continental philosophy, which makes Continental philosophy one of the best-represented areas in the history of philosophy in the entire PGR. Arguably, Continental philosophy is overrepresented relative to its coverage in the profession as a whole.

    3. Certainly if one individuates areas as narrowly as you have done (e.g., “pragmatism”), it is easy to complain that there aren’t enough of these people participating in the surveys. (Why aren’t there more respondents working on philosophical issues about string theory? Or on Duns Scotus?) As you must surely know, most PhD programs in the English-speaking world have no one who specializes in American pragmatism, yet the respondents to the survey include a number of the most prominent scholars of American pragmatism, including Cheryl Misak (a member of the PGR Advisory Board), Christopher Hookway, Robert Talisse, and Charles Guignon (there may be others; those are the ones who leapt out at me when scanning the list).

    Deans and Provosts take the PGR seriously because it provides a useful snapshot of the profession. This obviously puts weak departments, which have been conning their administrations, in a tough situation.

    In any case, I just wanted to correct the record.

  2. Thank you for responding, Brian. The question remains: what is the methodology for deciding that this set of philosophers represents the profession? Is it random? Is it based upon some preconception about who has the credentials to gauge what is good philosophy? If the latter, then there is a built-in bias at the very root of the rankings.

  3. Also, Brian, you didn’t respond to the claim that 90% of the rankers are men. That seems like a pretty big problem right there, and that you didn’t address it is rather conspicuous.

  4. The methodology behind the Gourmet Report has been discussed ad nauseam on Leiter’s blog and the PGR site. None of that discussion suggests that the report is uninterested in developing and improving itself.

    In any event, I am struck by the fact that this post seems to want to derail the PGR while at the same time arguing for some suggested improvements, such as a greater proportion of women involved in ranking departments. These two strands run against each other: which, exactly, is the point? Does “gonepublic” dislike the PGR in toto or think it should be tweaked? If the latter, then I am sure some concrete proposals would be helpful.

    However, a better question might be the flip side of what appears above — so who on the list of PGR advisors would you *not* have on the list, gonepublic? If all should remain, then what you are arguing for is not what you think it is; rather, you are arguing that this already large number of advisors should grow. (Then the question is ‘by how much?’)

    A final point. Talisse and I are old schoolmates, and I know Hookway. I would most definitely *not* pigeon-hole them as “only” pragmatists. Without any shadow of a doubt, each is far, far, far more than this.

  5. To Thom’ s question: my claim is that the report does not provide an objective measure of the best departments. It is only a reputational ranking of what a pre-selected and nonrepresentative group thinks and so only tells you what that group thinks. It has no objective validity. By objective I mean some measure that can be generalized to represent the field as a whole. Can what a certain sample S of philosophers says be said to hold for the larger population P of philosophers? This is a technical point in survey research methodology regarding when a set of findings can be generalized beyond that small set. In order to generalize, S would need to perfectly reflect P. There are third-rate and first-rate ways to get S to reflect P. The third-rate way is to employ quotas, e.g., to make sure that S looks like P in terms of demographic or other factors. But note that a quota-produced S could look like P in terms of any number of factors (race, gender, geography, etc.) and still S might differ from P in other ways. Maybe S has the same percentage of women, pragmatists, and history people as P, but S happens to prefer the early Wittgenstein over the late Wittgenstein much more than P does. This is the pitfall of quota-produced third-rate samples. The first-rate way to get S to reflect P is to get a random sample of a threshold size (usually at minimum about 300). A properly produced random sample should look like P in any and every respect, including various philosophical points of view.

    But of course the Leiter report isn’t really interested in what the profession as a whole thinks (and neither am I, for that matter) — it only wants to capture what the supposed philosophical gourmands think. It should stick to that and make very plain that the report does not give an objective measure of which programs are best.

    My pointing out that the Leiter report evaluators are 90 percent male, etc., is simply to show that the sample doesn’t reflect the discipline in even a third-rate way.
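
    To make the quota-versus-random contrast concrete, here is a toy simulation. Every number in it is invented for illustration; nothing here comes from the PGR or from any real survey data.

    ```python
    # A toy simulation (made-up numbers): a quota sample can match the
    # population on the factor you control for and still diverge badly on a
    # factor you never measured, while a random sample of threshold size
    # tends to match the population on everything at once.
    import random

    random.seed(0)

    # Hypothetical population of 10,000 philosophers. Each has a measured
    # trait (specializes in M&E) and an unmeasured one (prefers the early
    # Wittgenstein). The unmeasured trait is concentrated in one small
    # cluster of departments.
    population = []
    for _ in range(10_000):
        in_cluster = random.random() < 0.2   # 20% work in the cluster
        me_spec = random.random() < 0.3      # 30% specialize in M&E
        early_witt = random.random() < (0.7 if in_cluster else 0.2)
        population.append((in_cluster, me_spec, early_witt))

    def share(sample, i):
        """Fraction of the sample for which trait i is True."""
        return sum(p[i] for p in sample) / len(sample)

    # First-rate: a simple random sample of 300.
    random_sample = random.sample(population, 300)

    # Third-rate: a quota sample that matches the population's 30% M&E
    # share exactly, but is drawn only from the department cluster.
    cluster = [p for p in population if p[0]]
    quota_sample = (random.sample([p for p in cluster if p[1]], 90)
                    + random.sample([p for p in cluster if not p[1]], 210))

    for name, s in [("population", population),
                    ("random 300", random_sample),
                    ("quota 300", quota_sample)]:
        print(f"{name:>10}: M&E {share(s, 1):.0%}, "
              f"early Wittgenstein {share(s, 2):.0%}")
    ```

    On a typical run the random sample lands within a few points of the population on both traits, while the quota sample matches on M&E (by construction) and misses the Wittgenstein preference by a wide margin.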

  6. Your main claim is that the PGR is not objective because of those who do the judging (e.g., they are from some places and not others, more male than female, etc.).

    Then who would you remove from the board to make it more ‘objective’?

    My guess is no one; instead, you would prefer some odd vote by all persons with jobs (or their elected representatives), which would not do what the PGR sets out to do.

  7. Thom, I think you’re missing the point. For the results of the surveys to be generalizable, the sample should be randomly generated. If it’s not, in all likelihood the sample does not reflect the larger population. The PGR is not designed to have general validity. The evaluators are chosen because of some preconceptions about what counts as good philosophy. There’s an underlying bias at the root of the PGR. The way it is designed, all one can take away from it are the preferences of those who are surveyed. If you are content to know what those particular people think because you like and trust them, that’s fine. But no one should mistake this for an objective measure of the worth of the various programs.

    There’s a nice website that explains what a biased sample is.

    http://www.nizkor.org/features/fallacies/biased-sample.html

    Here’s an excerpt:

    This fallacy is committed when a person draws a conclusion about a population based on a sample that is biased or prejudiced in some manner. It has the following form:

    1. Sample S, which is biased, is taken from population P.
    2. Conclusion C is drawn about Population P based on S.

    The person committing the fallacy is misusing the following type of reasoning, which is known variously as Inductive Generalization, Generalization, and Statistical Generalization:

    1. X% of all observed A’s are B’s.
    2. Therefore X% of all A’s are B’s.

    The fallacy is committed when the sample of A’s is likely to be biased in some manner. A sample is biased or loaded when the method used to take the sample is likely to result in a sample that does not adequately represent the population from which it is drawn.
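
    To put made-up numbers on that schema: suppose the true rate of B’s in P is 30%, but the sampling method over-draws from a subgroup where B’s are common. The figures below are invented purely to instantiate the form; they describe no real survey.

    ```python
    # Instantiating the biased-sample schema with invented numbers.
    # Population P: 1,000 A's, 300 of which are B's -- a true rate of 30%.
    # A subgroup of 200 contains 140 B's (70%); the other 800 contain 160 (20%).
    subgroup_bs, subgroup_n = 140, 200
    rest_bs, rest_n = 160, 800

    true_rate = (subgroup_bs + rest_bs) / (subgroup_n + rest_n)

    # Biased method: 90 of the 100 sampled A's come from the subgroup.
    observed_bs = 90 * (subgroup_bs / subgroup_n) + 10 * (rest_bs / rest_n)
    observed_rate = observed_bs / 100

    print(f"X% of observed A's that are B's: {observed_rate:.0%}")  # 65%
    print(f"X% of all A's that are B's:      {true_rate:.0%}")      # 30%
    ```

    Premise 1 of the schema (“65% of observed A’s are B’s”) is true, but the conclusion (“65% of all A’s are B’s”) is badly wrong, because the method that produced the sample, not the population, fixed the observed rate.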

  8. This is in response to the gender issue emphasized by an anonymous poster, above. I did not respond to it because there has never been any evidence that gender is a determinant in the evaluations; there are modest differences in evaluations between, e.g., men and women who work mainly in ethics versus men and women who work mainly in philosophy of science and math, but none that tracks gender independently of area of specialization. This is why we have always aimed, successfully, for a diversity of areas of specialization in the response pool. Women do respond to the survey at lower rates than men, but the numbers are so small that I’m not sure one can meaningfully generalize.

    As Thom notes, the other objections, to the extent they are coherent, have been dealt with ad nauseam, and I wish Noelle had bothered to inform herself on these issues before beating the same dead horse.

    That’s my final word on the subject here.

  9. Where exactly have these objections been addressed? Not on the PGR itself. I’d be happy to post a link here to anything that actually addresses these legitimate objections.

  10. The responses are as interesting as the original post and attendant explanations.

    Margaret Atherton

  11. Thank goodness gender doesn’t make a difference in the professional philosophy industry! My fears have been put to rest. Oh, wait…

  12. I find this all incredible. First of all, what *exactly* is the argument? If, Noelle, you are convinced that the wrong people are involved in the PGR, then who in the world would you *remove*? I imagine your response (if you offer it) is that no one should be removed (which then runs contrary to much of what you argue), but that even more should be included as judges, with this “more” equal to “everybody,” so that everyone with a job votes.

    If you mean this, then this is nuts. The PGR is meant to be a guide for prospective graduate students, informing them of the relative standing of various departments both in general and in specific subjects. Given that purpose, those who are research-inactive and/or in departments without any graduate programme are not well placed to advise. Thus, the PGR’s board must be selective. You may be right to say that this or that person would make a great fit. If so, then you should tell Brian who these people might be and make a case.

    Moreover, in response to “randomgradstudent,” Brian never once said there was no gender problem in the profession at large or anywhere else. If you look closely at what he says here (and at great length elsewhere), his position is only that there has been no evidence yet to suggest that gender is a variable that affects how respondents to the survey score departments. Indeed, recall how his blog recently called attention to Sally Haslanger’s piece on sex discrimination in the profession: it should be quite clear that he holds a different (and I think correct) view about the question of gender and “the profession.”

    I’ve said my final word here, too.

  13. Thom, I never said that the wrong people are involved in the PGR. Nor have I ever said that anyone should be removed from it. I don’t know where you get these ideas.

    My argument, as stated amply above, is that the PGR, in the way that it is designed, only provides information about what the evaluators think are the top programs. That’s all one can take away from it. To say that only those programs deemed “Leiterrific” are indeed objectively terrific is to make a leap based on a logical fallacy. See my reply on this point earlier.

  14. Thanks for the clarification, Noelle. However, nowhere do we find anyone associated with the PGR claiming that any one department is “objectively” better than any other. Therefore, this “logical fallacy” is not committed by anyone with the PGR, but only by those who misunderstand its project. Indeed, the PGR is quite clear that it is providing information about “what the evaluators think are the top programs,” not some objective, for-all-time standard.

  15. I just looked back over this long thread of comments, and I am glad to note that the consensus seems to be that the PGR is simply a report of what the evaluators think, not anything of any objective merit.

Comments are closed.
