Perplexing Percentages: Women, Philosophy Faculties, and the Rankings

Last summer Julie Van Camp put up a list of the percentage of women among tenured/tenure-track faculty in 98 U.S. doctoral programs. The range runs from 50 percent at Penn State and the University of Georgia (brava!) down to six percent at the University of Florida and the University of Texas, five percent at the University of Michigan, and zero percent at the University of Dallas. Dallas has only eight people on its faculty, so maybe it is just going through a bad spell. But Florida, Michigan, and Texas have no such excuse. Only one out of 17 faculty members at Florida is a woman, only one out of 22 at Michigan, and an appalling two out of 32 at Texas. Shame, shame, shame. On top of it all, the 90-percent-male evaluators of the Leiter report ranked Michigan 3rd and Texas 13th among Ph.D.-granting universities, while the five universities with the highest percentages of women on their faculties don’t even make the list. Surprised?

[Correction: I’ve learned that Julie Van Camp first began tracking the percentage of tenured/tenure-track women in Ph.D.-granting philosophy programs in 2004 and updates the list a couple of times a year.]

The Deciders and Philosophy Rankings

Many people regularly visit this blog to see what’s being said here about philosophy rankings, namely the infamous Leiter report. Some say that if a philosophy Ph.D. program isn’t “Leiterrific” — if it doesn’t score well on the Leiter report — then it’s objectively not a terrific program. I beg to differ. The best evidence against the report’s credibility is the narrowness of the specialties of those who conduct the rankings. The evaluators simply do not represent the profession as a whole and so can hardly speak for what’s best in the profession as a whole. Look here to see the list of evaluators who filled out the reputational rankings for Leiter’s 2006-2008 report. It looks like fully half the rankers specialize in metaphysics and epistemology (M&E). Ninety percent of the rankers are male. And few, if any, work in continental philosophy, pragmatist theory, feminist philosophy, or Africana philosophy. And for the most part, the rankers come from the very same set of schools that are ranked at the top. True, they’re not allowed to rank their own institutions or alma maters, but they certainly pick their kissing cousins. Calling all you philosophers of science out there: does this look like good methodology? Doesn’t it assume what it’s supposedly trying to prove?

The day that deans and provosts stop taking the Leiter report seriously is the day that I’ll stop writing about it.

Philosophy Rankings

The other day someone named Ann posted a comment to an earlier thread about philosophy rankings, including Brian Leiter’s Philosophical Gourmet Report. The upshot of her comment is that (1) she recalls a paper “statistically analysing the feedback and showing near total consensus amongst faculty from the entire range of depts assessed as to who was top and who bottom” and (2) she thought that “at least Leiter’s methodology is explicit and based on up-to-date information. Thanks to the statistical analyses it’s possible for people to be fully informed of the fact that having metaphysicians will count for more than having historians of philosophy (and know exactly how much that counts). Whatever else you think of the PGR, it at least allows people to think clearly about these matters.”

Let’s do think clearly. Years ago I took three semesters of graduate-level statistics, including survey research methodology. Subsequently I worked on some deliberative polling projects in which some of the nation’s top survey researchers participated, and I saw how careful and exacting they were about survey methodology. This doesn’t make me an expert by any means, but I do know the fundamental principles, including this one: done right, a survey of x people will tell you what those x people think. We are tempted to think that we can generalize from that sample to the larger population, just as it’s tempting to think that we can generalize from the people Leiter has enlisted to do the rankings to the discipline as a whole. But that can be done only if, at minimum, the original sample is (i) randomly selected and (ii) large enough, which generally means at least 350 people or so. Given that Leiter’s sample fails to meet these criteria, all his rankings tell us is what those particular people think. So Ann’s remark that, because Leiter’s analysis is statistical, it can fully inform us that one type of philosophy counts for more than another is wrong. Leiter’s analysis only pertains to what counts for that specific group. It tells you nothing more than that. Nothing. And, as Ann herself suggests, (1) could be so—and I’d love to see that report—only because the profession at large has come to believe the conventional wisdom.
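For readers who want to see where a figure like 350 comes from, here is a minimal sketch of the standard sample-size calculation for estimating a population proportion, assuming the usual 95 percent confidence level and a ±5 percent margin of error (the rule of thumb that typically yields numbers in the 350-400 range); the population figure of 10,000 below is purely illustrative, not a claim about the size of the profession:

```python
import math

def required_sample_size(margin_of_error=0.05, z=1.96, p=0.5):
    """Minimum simple-random-sample size needed to estimate a proportion
    within the given margin of error at ~95% confidence (z = 1.96),
    using the worst-case variance p = 0.5."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

def finite_population_correction(n, population_size):
    """Shrink the required sample a bit when the population itself is small."""
    return math.ceil(n / (1 + (n - 1) / population_size))

n = required_sample_size()
print(n)                                         # 385 respondents for +/-5% at 95% confidence
print(finite_population_correction(n, 10_000))   # ~371 for an illustrative pool of 10,000 philosophers
```

And even a sample of the right size tells you nothing about the wider population unless it is drawn randomly in the first place.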

Granted, Brian Leiter selected his group because they are accomplished in their fields, but again the result is only a reputational ranking of what those particular folks think of the schools that teach their own particular fields. If you analyze the 2006 report, you will see that not a single professor at a “top ten” department had a Ph.D. from a Catholic university. The vast majority of professors teaching at top-ranked departments got their Ph.D.s from the very same set of departments. Of course, we would expect that Ph.D.s from “top” departments would get jobs at “top” departments. But the problem is that the Leiter report provides no independent, objective measure for ascertaining which departments are in fact the top ones. Hence it commits a classic logical fallacy: it begs the question, presuming the very thing it sets out to prove.

And notably underrepresented, both among the rankers and among the departments ranked well, are outstanding departments such as Michigan State University, Vanderbilt University, SUNY at Stony Brook, the University of Oregon, Emory University, the University of Memphis, Penn State University, and CUNY Graduate School—despite the productivity and influence of their faculty and the success of their graduate students.

Potential philosophy graduate students have good reason to seek out objective rankings of departments. First, it’s important to find a good place to study, with good faculty, where one can fruitfully pursue one’s interests. Second, it’s important to find a graduate school with a good placement record. The first can often be accomplished with a little sleuthing and good advice: identifying who is doing interesting work in one’s field or, if one is not quite certain yet, which departments have broad, pluralistic research under way. The second requires some study of actual placement success over the years.

For students interested in the areas the Leiter report covers, the report can help them find a congenial place to study. But it won’t help them identify which places have good placement records. And for students interested in fields the Leiter report looks down on or omits altogether, it does a huge disservice.

We need studies of Ph.D.-granting philosophy departments on criteria like these:

  • the quality and influence of faculty members’ research in their fields (Academic Analytics’ rankings are a step in this direction)
  • faculty-student ratios
  • teacher training
  • preparation for the job market
  • placement records for graduate students

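As a small illustration of the last item on that list, here is a minimal sketch of the kind of transparent placement tally such a study could publish; the department names and numbers below are entirely hypothetical, not actual data:

```python
from dataclasses import dataclass

@dataclass
class PlacementRecord:
    """Multi-year placement data for one department (all figures hypothetical)."""
    department: str
    phds_granted: int               # Ph.D.s awarded over the reporting window
    tenure_track_placements: int    # graduates placed in tenure-track jobs
    other_academic_placements: int  # postdocs, lectureships, etc.

    def tenure_track_rate(self) -> float:
        return self.tenure_track_placements / self.phds_granted

    def total_academic_rate(self) -> float:
        return (self.tenure_track_placements + self.other_academic_placements) / self.phds_granted

# Entirely made-up numbers, just to show the shape of such a report.
records = [
    PlacementRecord("Department A", phds_granted=40, tenure_track_placements=22, other_academic_placements=10),
    PlacementRecord("Department B", phds_granted=25, tenure_track_placements=9, other_academic_placements=8),
]

for r in sorted(records, key=PlacementRecord.tenure_track_rate, reverse=True):
    print(f"{r.department}: {r.tenure_track_rate():.0%} tenure-track, "
          f"{r.total_academic_rate():.0%} academic placement overall")
```

Published alongside faculty-student ratios and the other criteria above, figures like these would let applicants compare departments on outcomes rather than on reputation alone.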
Studies along these lines would be a real service to the profession. In the meantime, I ask any administrator who is taking the Leiter report seriously to confer with the statisticians at his or her own university and get an objective assessment of the soundness of the rankings’ methodology.