20th century philwiki rocks a sinking boat

Even as I try to ignore those mean spirits, today I went to a certain blog and found this delightful bit:

More PhD program wikis!

Now we have 20th-century Continental philosophy, started by (brace yourselves) Noelle McAfee.  Fortunately, since a wiki is just as good as its contributors it does not matter who started it.  As with Philosophical Logic, it’s purely informational (who works on what, links to pages etc.), and devoid of crucial qualitative information.  Again, students can start with the PGR results on the latter front.

[please avoid clicking here but here’s the source: http://leiterreports.typepad.com/blog/2015/01/more-program-wikis.html]

Ummm — devoid of crucial qualitative information?  Oh, let’s see, you could go to the 20th century continental philosophy wiki and quickly see the strengths of PhD programs around the world, see faculty professional webpages and PhilPapers profiles listing all their publications, or you could go to the PGR listing of 20th century continental philosophy and see what a handful of mostly Nietzsche scholars — and hardly any who work in contemporary French theory — think. You’ll find 13 programs listed without any detail on who is doing what. Some of these programs show up well on the 20th Century Philosophy wiki.  But many that look really great from info on the wiki don’t show up at all on the PGR — perhaps because the evaluators don’t have expertise in the wide range of work going on in 20th century continental philosophy.

As for the wiki, much more work is needed, especially in listing programs outside the US.  So please pitch in.

What Counts as Philosophy?

Apart from the question of “Who has the rights to the lands of Palestine?” little can be more contentious than the question, “What counts as philosophy?” What are the bounds of this discipline of ours? I like to think that there aren’t any clear and proper boundaries but that there is a roughly common approach (but don’t ask me to define it) and, delightfully, a common canon (at least for what is understood as pre-20th century western philosophy, though lamentably white, male philosophy). Anyone of any persuasion teaching an intro to philosophy class is likely to include some of Plato, Aristotle, Aquinas, Descartes, Spinoza, Leibniz, Bentham, Locke, Hume, Kant, Mill, Rousseau, and maybe a selection from Marx, Nietzsche, and James. With texts of the twentieth century all bets are off. But what’s one century in a discipline that goes back twenty-five? Given our long history, we’ve had nothing like the canon wars that tore apart English departments in the 1980s. The common canon saves us, but it doesn’t give us a way to define or set bounds to what philosophy is. Philosophy has a way of undermining boundaries, like the boundary between what is properly philosophical and what is not. Just try to set up a fence and see how long it stands.

Even to the extent that we have a common canon, the question of what counts as philosophy is desperately unclear, at least once one strays from a “view-from-nowhere” approach to metaphysics, epistemology, value theory, logic, or any of the many philosophy-of-x arenas. Once the approach becomes more specific and situated, the border wars arise. And the lines are usually drawn between what is hegemonically understood as proper philosophy and what is not. Philosophy that is not in fashion in “the best” schools, not “prestigious,” not hard and clear and rigorous, not properly erected — including today American pragmatism, critical theory, post-Kantian European philosophy, and, oh, certainly feminist philosophy — doesn’t seem to count as philosophy at all, at least by those who are counting and protecting a certain definition of proper philosophy.

Just look (and you’ll have to scroll down and then scan the right-hand column) at the specialities of the list of evaluators who were invited to rank graduate programs in philosophy for the 2006 Philosophical Gourmet Report. I am told by a defender of the report that this is a “remarkably diverse” group of good philosophers and so it is truly able to gauge what are, objectively, the outstanding graduate programs in philosophy. Any program that doesn’t end up on the list, I’m told, simply isn’t a good program.

Shocking.

Who defines what counts as good philosophy and hence who counts as the good philosophers? Isn’t this kind of counting tantamount to defining philosophy itself, to saying that M&E (metaphysics and epistemology) counts, but feminist philosophy doesn’t? Or if it’s feminist, it isn’t M&E? Or if it’s concerned with Derrida and not Tarski, or the late Wittgenstein but not the early Wittgenstein, it just ain’t philosophy?

Is that very philosophical?

Philosophy Rankings

The other day someone named Ann posted a comment to an earlier thread about philosophy rankings, including Brian Leiter’s Philosophical Gourmet Report. The upshot of her comment is that (1) she recalls a paper “statistically analysing the feedback and showing near total consensus amongst faculty from the entire range of depts assessed as to who was top and who bottom” and (2) she thought that “at least Leiter’s methodology is explicit and based on up-to-date information. Thanks to the statistical analyses it’s possible for people to be fully informed of the fact that having metaphysicians will count for more than having historians of philosophy (and know exactly how much that counts). Whatever else you think of the PGR, it at least allows people to think clearly about these matters.”

Let’s do think clearly. Years ago I took three semesters of graduate-level statistics, including survey research methodology. And subsequently I worked on some deliberative polling projects in which some of the nation’s top survey researchers participated. I saw how careful and exacting they were about survey methodology. This doesn’t make me an expert by any means, but I do know the fundamental principles, including this one: If done right, a survey of x number of people will tell you what that x number of people thinks. We are tempted to think that we can generalize from that sample to the larger population, just as it’s tempting to think that we can generalize from the people that Leiter has enlisted to do the rankings to the discipline as a whole. But the only way this can be done is if, at minimum, the original sample is (i) randomly selected and (ii) of a large enough size, which is generally at least 350 people. Given that Leiter’s sample meets neither criterion, all his rankings tell us is what those people think. So, Ann’s remark that because Leiter’s analysis is statistical it can fully inform us that one type of philosophy counts more than another is wrong. Leiter’s analysis only pertains to what counts for that specific group. It tells you nothing more than that. Nothing. And, as Ann herself suggests, (1) could be so—and I’d love to see that report—only because the profession at large has come to believe the conventional wisdom.
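
For readers curious where a threshold like “at least 350 people” comes from, it falls out of the standard sample-size formula for estimating a proportion from a simple random sample. Here is a minimal sketch; the function name and default values are my own, chosen to illustrate the common 95% confidence level with a ±5% margin of error:

```python
import math

def required_sample_size(z=1.96, margin_of_error=0.05, p=0.5):
    """Minimum simple-random-sample size for estimating a proportion.

    Standard formula: n = z^2 * p(1-p) / e^2, where z is the z-score
    for the desired confidence level, e is the margin of error, and
    p = 0.5 is the worst case (maximum variance).
    """
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

# 95% confidence (z ≈ 1.96) with a ±5% margin of error requires
# roughly 385 respondents, in the same ballpark as the "at least
# 350" rule of thumb. Note this assumes random selection; a larger
# but non-random sample still cannot be generalized.
print(required_sample_size())  # → 385
```

The point the formula underscores is that sample size is the *second* requirement; without random selection, no sample size licenses generalizing to the whole profession.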

Granted, Brian Leiter selected his group because they are accomplished in their fields, but again the result is only a reputational ranking of what those particular folks think of the schools that teach their own particular fields. If you analyze the 2006 report, you will see that not a single professor at a “top ten” department had a Ph.D. from a Catholic university. The vast majority of professors teaching at top-ranked departments got their Ph.D.’s from the very same set of departments. Of course, we would expect that Ph.D.s from “top” departments would get jobs at “top” departments. But the problem is that the Leiter report provides no objective measure for ascertaining what are in fact the top departments. Hence it commits a classic logical fallacy: it presumes what it sets out to prove.

And notably underrepresented in the group of rankers and the departments ranked well are outstanding departments such as Michigan State University, Vanderbilt University, SUNY at Stony Brook, the University of Oregon, Emory University, the University of Memphis, Penn State University, and CUNY Graduate School—despite the productivity and influence of their faculty and the success of their graduate students.

Potential philosophy graduate students have good reason to seek out objective rankings of departments. First, it’s important to find a good place to study with good faculty where one can fruitfully pursue one’s interests. Second, it’s important to find a graduate school with a good placement record. The first is often accomplished with a little sleuthing and good advice, identifying who is doing interesting work in one’s field, or if one is not quite certain yet, what department has broad, plural research taking place. The second requires some study of actual placement success over the years.

For students interested in studying in the areas the Leiter report covers, the report can help those students find a congenial place to study. But it won’t help them identify what place has a good placement record. For students interested in studying fields that the Leiter report looks down on or omits altogether, the Leiter report does a huge disservice.

We need studies of Ph.D. granting philosophy departments on criteria like these:

  • the quality and influence of faculty members’ research in their fields (Academic Analytics’ rankings are a step in this direction)
  • faculty-student ratios
  • teacher training
  • preparation for the job market
  • placement records for graduate students

This would be a real service to the profession. In the meantime, I ask any administrator who is taking seriously the Leiter report to confer with the statisticians in his or her own university to get an objective measure of the soundness of these rankings’ methodologies.

Shortcomings of the FSP Index

I’ve learned this morning,  from a comment to my last post and from an e-mail from a friend, about a problem with Academic Analytics’ Faculty Scholarly Productivity Index.  In putting together the data for the index, Academic Analytics used the SCOPUS database, which bills itself primarily as covering life science, health science, physical science and social science. I looked through their spreadsheet of journals and databases, and it did include the Philosophy Documentation Center, but this is not as much as one would hope for.  So I contacted Bill Savage at Academic Analytics and asked him about this.  He told me that they knew of the issue but thought it would be ameliorated because people in the humanities primarily publish books.  I told him that this was not at all the case in the dominant strand of philosophy in the States, analytic philosophy, though it was true of other strands (continental, critical race theory, feminism, critical theory).  So in effect, the FSPI, which gives significantly more weight to books than journal articles, does not accurately gauge the productivity of all philosophy programs in the U.S.  Savage said that SCOPUS is working on adding more and more journals to their database, so future rankings should be more accurate.

So, dear readers, the jury is still out.  I’m glad that the index, even with its weaknesses, shows the good work that under-recognized departments are doing.  Based on two and a half years of studying and applying survey research methodology (more than a decade ago), I still think the Philosophical Gourmet is a poor indicator of anything beyond what the people who are asked to respond to the survey happen to think. The findings are not generalizable. In other words, if you want to know how 270 people in high-brow departments gauge their colleagues, read the Leiter report.  But if you want a real gauge of what the profession as a whole thinks or of the quality of various institutions in the English-speaking world, look elsewhere.

In the end, I think the best gauge of a graduate program is to be had by talking with search committees at the hundreds of colleges and universities who hire new faculty year after year.  In the end, it’s not how we rate the faculty as much as what kind of teachers and scholars emerge from a program.

Ranking Philosophy Programs

There are now two sets of rankings of Ph.D.-granting philosophy departments in the United States: Brian Leiter’s Philosophical Gourmet (PG) and Academic Analytics’ Faculty Scholarly Productivity Index (FSPI). The latter only ranks the top ten, so I’ll stick with comparing both rankings’ top ten. Only two universities are listed in both rankings: Princeton and Rutgers. The rest are entirely different. FSPI ranks Michigan State first; PG ranks NYU first. Here’s the run-down:

Academic Analytics’ Faculty Scholarly Productivity Index

1. Michigan State University
2. CUNY Graduate School
2. Princeton University
4. University of Virginia
5. Rutgers
6. University of California – San Diego
7. Pennsylvania State University
8. The University of Texas at Austin
9. SUNY at Stony Brook
10. Rice University

Brian Leiter’s Philosophical Gourmet ranking

1. New York University
2. Rutgers
3. Princeton University
3. University of Michigan-Ann Arbor
5. University of Pittsburgh
6. Stanford University
7. Harvard University
7. MIT
7. UCLA
10. Columbia University
10. Univ. of North Carolina – Chapel Hill

The discrepancy can be explained by different methodologies. FSPI is based on data generated by a web-crawler of individual faculty members’ productivity in terms of scholarly publications, honors and awards, and grants. Comparing the sheer volume of scholarly publications is, I think, a bit dicey, since it equates publication in more- and less-selective presses and journals. However, the honors, grants, and awards criteria, a better gauge of quality, probably balance things out. Also FSPI takes into consideration whether one’s journal articles are cited in others’ journal articles — certainly an excellent indication of the influence of one’s work.

The Philosophical Gourmet’s methodology is as follows, according to its web site:

This report ranks graduate programs primarily on the basis of the quality of faculty. In late September and early October 2006, we conducted an on-line survey of 450 philosophers throughout the English-speaking world; over 300 responded and completed some or all of the surveys. The survey presented 99 faculty lists, from the United States, Canada, United Kingdom, and Australia and New Zealand. Note that there are some 110 PhD-granting programs in the U.S. alone, but it would be unduly burdensome for evaluators to ask them to evaluate all these programs each year. The top programs in each region were selected for evaluation, plus a few additional programs are included each year to “test the waters.”

Leiter lists the names and affiliations of the people who filled out his survey. The full list is available here http://www.philosophicalgourmet.com/reportdesc.asp. Note that there is no one on the list from Michigan State, Penn State, or Stony Brook, and only one each from Rice and CUNY — and none of these schools show up in his top ten even though they do show up in FSPI’s top ten. But four of Leiter’s responders are with NYU; nine have been affiliated with Stanford; thirteen with Michigan; twenty-two with Pittsburgh; and another twenty-some with Harvard — and all of these schools show up in his top ten. Leiter notes that no one who has received a Ph.D. or taught at a particular institution may rank that institution. That’s good. But still one might suspect that the entire pool of respondents comes from a particular orientation and holds a certain set of conceptions of what counts as quality faculty. Few hail from truly pluralist departments, and so it’s not surprising that truly pluralist departments don’t end up on PG’s top ten. In fact several of PG’s top ten bill themselves on their own web sites as working solely in the analytic tradition.

Note the following.

Schools ranked in the top ten by Academic Analytics that don’t appear in the top ten Philosophical Gourmet rankings:

Michigan State University (lots of strengths in ethics, continental philosophy, feminist philosophy, social and political philosophy, and philosophy of science)

CUNY Graduate School (mostly analytic, diverse interests)

University of Virginia (strong analytic department with strengths in ethics and political philosophy)

University of California San Diego (analytic faculty, strengths in philosophy of mind, history of modern, ancient)

Pennsylvania State University (some faculty have left since the study was done, but it still has its characteristic strengths in continental philosophy, pragmatism, and feminist theory)

The University of Texas at Austin (at the time of the study, it had a bit more strength in continental philosophy – Louis Mackey and Robert Solomon have since passed away)

SUNY at Stony Brook (a school exceptionally strong in continental philosophy, feminist theory, and critical race theory)

Rice University (also has strengths in continental philosophy)

Schools ranked in the top ten by the Philosophical Gourmet that don’t appear in the top ten Academic Analytics rankings (links can be found here):

New York University
University of Michigan – Ann Arbor
University of Pittsburgh
Stanford University
MIT
UCLA
Columbia University
UNC-Chapel Hill

No doubt these are excellent programs, but to say they are the very best based on the judgment of an unrepresentative cohort of faculty selected by someone with already marked views about what counts as quality is simply bad logic. I’d opt for the Academic Analytics’ Faculty Scholarly Productivity Index any day. It more accurately reflects the productivity and range of scholarship in philosophy today.

EDIT: More on this topic can be found in this subsequent post.