Philosophy Rankings

The other day someone named Ann posted a comment to an earlier thread about philosophy rankings, including Brian Leiter’s Philosophical Gourmet Report. The upshot of her comment was that (1) she recalled a paper “statistically analysing the feedback and showing near total consensus amongst faculty from the entire range of depts assessed as to who was top and who bottom” and (2) she thought that “at least Leiter’s methodology is explicit and based on up-to-date information. Thanks to the statistical analyses it’s possible for people to be fully informed of the fact that having metaphysicians will count for more than having historians of philosophy (and know exactly how much that counts). Whatever else you think of the PGR, it at least allows people to think clearly about these matters.”

Let’s do think clearly. Years ago I took three semesters of graduate-level statistics, including survey research methodology, and subsequently I worked on deliberative polling projects in which some of the nation’s top survey researchers participated. I saw how careful and exacting they were about survey methodology. This doesn’t make me an expert by any means, but I do know the fundamental principles, including this one: if done right, a survey of x people will tell you what those x people think. We are tempted to generalize from that sample to the larger population, just as it’s tempting to generalize from the people Leiter has enlisted to do the rankings to the discipline as a whole. But that inference is warranted only if, at minimum, the original sample is (i) randomly selected and (ii) of sufficient size, generally at least 350 people. Given that Leiter’s sample satisfies neither criterion, all his rankings tell us is what those people think. So Ann’s remark that, because Leiter’s analysis is statistical, it can fully inform us that one type of philosophy counts more than another is wrong. Leiter’s analysis only pertains to what counts for that specific group. It tells you nothing more than that. Nothing. And, as Ann herself suggests, (1) could be so (and I’d love to see that report) only because the profession at large has come to believe the conventional wisdom.
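
To see where a threshold like 350 comes from, here is a minimal sketch using the standard textbook formula for the sample size needed to estimate a proportion, together with the usual finite-population correction. The 95% confidence level, the ±5% margin of error, and the figure of roughly 4,000 faculty are illustrative assumptions of mine, not numbers drawn from the report:

```python
import math

def required_sample_size(margin=0.05, z=1.96, p=0.5, population=None):
    """Respondents needed to estimate a proportion to within +/- margin.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the worst case.
    If a finite population size is given, apply the standard correction.
    """
    n = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population formula
    if population is not None:
        n = n / (1 + (n - 1) / population)     # finite-population correction
    return math.ceil(n)

print(required_sample_size())                  # 385 for an effectively infinite population
print(required_sample_size(population=4000))   # 351 for a hypothetical pool of ~4,000 faculty
```

Note that this arithmetic bears only on criterion (ii); no sample size, however large, rescues the inference if the sample is not randomly selected in the first place.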

Granted, Brian Leiter selected his group because they are accomplished in their fields, but again the result is only a reputational ranking of what those particular folks think of the schools that teach their own particular fields. If you analyze the 2006 report, you will see that not a single professor at a “top ten” department had a Ph.D. from a Catholic university. The vast majority of professors teaching at top-ranked departments got their Ph.D.s from the very same set of departments. Of course, we would expect that Ph.D.s from “top” departments would get jobs at “top” departments. But the problem is that the Leiter report provides no objective measure for ascertaining which departments are in fact the top ones. Hence it commits a classic logical fallacy: it begs the question, presuming the very thing it sets out to prove.

And notably underrepresented, both among the rankers and among the departments that rank well, are outstanding departments such as Michigan State University, Vanderbilt University, SUNY at Stony Brook, the University of Oregon, Emory University, the University of Memphis, Penn State University, and CUNY Graduate School—despite the productivity and influence of their faculty and the success of their graduate students.

Potential philosophy graduate students have good reason to seek out objective rankings of departments. First, it’s important to find a good place to study, with good faculty, where one can fruitfully pursue one’s interests. Second, it’s important to find a graduate school with a good placement record. The first can often be accomplished with a little sleuthing and good advice: identifying who is doing interesting work in one’s field or, if one is not quite certain yet, which departments host broad, pluralistic research. The second requires some study of actual placement success over the years.

For students interested in the areas that the Leiter report covers, the report can help them find a congenial place to study. But it won’t help them identify which places have good placement records. And for students interested in fields that the Leiter report looks down on or omits altogether, it does a huge disservice.

We need studies of Ph.D. granting philosophy departments on criteria like these:

  • the quality and influence of faculty members’ research in their fields (Academic Analytics’ rankings are a step in this direction)
  • faculty-student ratios
  • teacher training
  • preparation for the job market
  • placement records for graduate students

This would be a real service to the profession. In the meantime, I ask any administrator who takes the Leiter report seriously to confer with the statisticians at his or her own university for an objective assessment of the soundness of the rankings’ methodology.

By Noelle McAfee

I am professor of philosophy at Emory University and editor of the Kettering Review. My latest book, Fear of Breakdown: Politics and Psychoanalysis, explores what is behind the upsurge of virulent nationalism and intransigent politics across the world today. My other writings include Democracy and the Political Unconscious; Habermas, Kristeva, and Citizenship; Julia Kristeva; and numerous articles and book chapters. Edited volumes include Standing with the Public: the Humanities and Democratic Practice and a special issue of the philosophy journal Hypatia on feminist engagements in democratic theory. I am also the author of the entry on feminist political philosophy in the online Stanford Encyclopedia of Philosophy and well into my next book project on democratic public life.

5 comments

  1. Amen, Sister! The Philosophical Gourmet Report is what happens when a handful of people who take themselves far too seriously put their own institutions at the top of a phony hierarchy. I say to hell with them.

  2. A few quick questions:

    (a) What is the importance of faculty-student ratios? The Philosophical Gourmet ranking is (at least on one view) simply a guide to help undergraduate students decide which programme to apply to. Even if the university-wide faculty-student ratio were 3,000 to 1, this may be of no importance if the faculty teaching graduate students work at a different ratio. Provided graduate seminars are of a reasonable size, I don’t see why this matters if we have taken the Gourmet’s project seriously.

    (b) What is the importance of teacher training? I do not know of any hiring where the teacher training of a candidate was discussed, let alone interrogated.

    (c) Preparation for what job market? Some programmes will best enable students to find jobs in top liberal arts colleges and research universities; others will give students a reasonable chance of finding an academic job elsewhere; still other programmes are better suited to placing students in non-academic careers. I think the latter is important and deserves a closer look, but “preparation for the job market” neglects to say which market. Surely, for the top end of the academic market, the PGR maps this very well, indeed.

    Given these large gaps in the proposed criteria, and given my confidence in the judgement of those involved in providing the rankings, I believe the rankings are perfectly safe reflections of the profession.

  3. The criteria can be refined, certainly. What I had in mind was my own experience: during my graduate work at the University of Texas at Austin, the graduate school made a point of preparing graduate students for teaching. And of course we got experience as T.A.s and then as lecturers. This is useful for campus interviews that involve teaching a class.

    The larger issue remains: whether the Leiter rankings are rankings of the profession at large or merely rankings of what those who do the ranking think.

  4. In response to a couple of Thom’s points:

    1. Maybe the issue isn’t teacher training so much as teaching ability, and Noelle’s point is essentially correct: lots of hiring departments do look at teaching ability (often ranking it above research) and so it would help students to know that they will be trained as both teachers and researchers.

    2. About the rankings being a safe reflection of the profession: but a reflection of what? The rankings are advertised to help undergrads choose a graduate program. But Leiter is explicit that the rankings say nothing about the quality of the actual graduate education other than the “reputation” and “quality” of the faculty. They don’t say anything about the climate of the department or any of a variety of other factors that a student should consider (and Leiter to his credit admits as much).

    This raises an important question: who do the rankings really benefit? Maybe they are helpful to that small handful of students who aren’t sure whether to attend Harvard or MIT or Rutgers — and who can’t get good advising at their undergraduate college. (But how many of those students are there?) Beyond that I’m just not sure that the rankings really benefit anyone, though they can be fun to look at, I suppose.

    The ratings do tell today’s undergrads where to apply so that, when they get their Ph.D.s in 2015 or so, they will be able to impress others.
    But isn’t that like choosing a college based on the prominence of its football team?

  5. ‘Leiter’s analysis only pertains to what counts for that specific group.’

    I think that there’s a case for this view, but I’ve got two further observations to make:

    a) Leiter’s group may not be representative, but it’s certainly powerful in the profession (perhaps only in a part of the profession, but if so, a particularly prestigious part).

    It seems to me that it’s certainly worthwhile for someone who wants to enter the profession to know what these people think. There might be various ways in which one could react to the information.

    b) That said, I think it’s very easy for people, particularly people outside the profession (e.g., people from other disciplines sitting on hiring committees) who may not be aware of some of the peculiarities of the discipline, to assume that this group represents the profession as a whole.

    That strikes me as dangerous, and it’s not obvious that talking to administrators about statistical methodology is necessarily the best way to respond…
