The other day someone named Ann posted a comment to an earlier thread about philosophy rankings, including Brian Leiter’s Philosophical Gourmet Report. The upshot of her comment is that (1) she recalls a paper “statistically analysing the feedback and showing near total consensus amongst faculty from the entire range of depts assessed as to who was top and who bottom” and (2) she thought that “at least Leiter’s methodology is explicit and based on up-to-date information. Thanks to the statistical analyses it’s possible for people to be fully informed of the fact that having metaphysicians will count for more than having historians of philosophy (and know exactly how much that counts). Whatever else you think of the PGR, it at least allows people to think clearly about these matters.”
Let’s do think clearly. Years ago I took three semesters of graduate-level statistics, including survey research methodology. And subsequently I worked on some deliberative polling projects in which some of the nation’s top survey researchers participated. I saw how careful and exacting they were about survey methodology. This doesn’t make me an expert by any means, but I do know the fundamental principles, including this one: if done right, a survey of x people will tell you what those x people think. We are tempted to think that we can generalize from that sample to the larger population, just as it’s tempting to think that we can generalize from the people Leiter has enlisted to do the rankings to the discipline as a whole. But that generalization is legitimate only if, at minimum, the original sample is (i) randomly selected and (ii) large enough, which generally means at least 350 people. Given that Leiter’s sample meets neither criterion, all his rankings tell us is what those people think. So Ann’s remark that because Leiter’s analysis is statistical it can fully inform us that one type of philosophy counts for more than another is wrong. Leiter’s analysis pertains only to what counts for that specific group. It tells you nothing more than that. Nothing. And, as Ann herself suggests, (1) could be so—and I’d love to see that report—only because the profession at large has come to believe the conventional wisdom.
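For the curious, the ballpark figure for a “large enough” sample comes from the standard margin-of-error formula for estimating a proportion. Here is a minimal sketch in Python; the 95% confidence level, 5% margin, and the illustrative population figure are my assumptions for the example, not numbers drawn from any particular survey:

```python
import math

def required_sample_size(margin=0.05, z=1.96, p=0.5, population=None):
    """Minimum sample size for estimating a proportion.

    margin:     desired margin of error (0.05 means +/- 5 points)
    z:          z-score for the confidence level (1.96 is ~95%)
    p:          assumed proportion; 0.5 is the most conservative choice
    population: if given, apply the finite-population correction
    """
    n = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population formula
    if population is not None:
        n = n / (1 + (n - 1) / population)     # finite-population correction
    return math.ceil(n)

# A +/-5% margin at 95% confidence requires roughly 385 respondents
# for a large population; rules of thumb like "at least 350" are in
# this neighborhood.
print(required_sample_size())                   # 385
print(required_sample_size(population=10000))   # 370
```

Note that the formula assumes random selection; a large but self-selected or hand-picked sample, whatever its size, does not license the same inference to the whole population.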
Granted, Brian Leiter selected his group because they are accomplished in their fields, but again the result is only a reputational ranking of what those particular folks think of the schools that teach their own particular fields. If you analyze the 2006 report, you will see that not a single professor at a “top ten” department had a Ph.D. from a Catholic university. The vast majority of professors teaching at top-ranked departments got their Ph.D.s from the very same set of departments. Of course, we would expect Ph.D.s from “top” departments to get jobs at “top” departments. But the report provides no independent, objective measure for ascertaining which departments are in fact the top ones. Hence it commits a classic logical fallacy, begging the question: it presumes the very thing it sets out to establish.
And notably underrepresented in the group of rankers and the departments ranked well are outstanding departments such as Michigan State University, Vanderbilt University, SUNY at Stony Brook, the University of Oregon, Emory University, the University of Memphis, Penn State University, and CUNY Graduate School—despite the productivity and influence of their faculty and the success of their graduate students.
Potential philosophy graduate students have good reason to seek out objective rankings of departments. First, it’s important to find a good place to study, with good faculty, where one can fruitfully pursue one’s interests. Second, it’s important to find a graduate school with a good placement record. The first is often accomplished with a little sleuthing and good advice: identifying who is doing interesting work in one’s field or, if one is not quite certain yet, which departments have broad, pluralistic research under way. The second requires some study of actual placement success over the years.
For students interested in the areas the Leiter report covers, the report can help them find a congenial place to study. But it won’t help them identify which places have good placement records. For students interested in fields the Leiter report looks down on or omits altogether, it does a huge disservice.
We need studies of Ph.D. granting philosophy departments on criteria like these:
- the quality and influence of faculty members’ research in their fields (Academic Analytics’ rankings are a step in this direction)
- faculty-student ratios
- teacher training
- preparation for the job market
- placement records for graduate students
This would be a real service to the profession. In the meantime, I ask any administrator who takes the Leiter report seriously to confer with the statisticians at his or her own university and get an objective assessment of the soundness of the rankings’ methodology.