I made this proposal the other day over at Daily Nous (where it just got a nod from editor Justin Weinberg here) for an alternative to the PGR and other rankings: a 21st-century tool that students could use to get information on graduate programs.
The APA has been collecting data from philosophy PhD programs for a few years now for its Guide to Programs: placement rates, etc. What if more information were collected, such as the number of books published with university presses, faculty citation and Google Scholar analytics, peer-reviewed conference papers, faculty areas of specialization, and so on? And then what if that information were turned into a search engine, so that a prospective graduate student (or anyone) could search by keywords for programs offering what she or he wanted to study? Programs that were more research productive (with faculty being cited more) would appear higher on the list than those that weren't. Each student could thus create a customized ranking of programs matching his or her interests, and anyone could use the data to generate rankings for any particular specialty.
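To make the mechanics concrete, here is a minimal sketch of keyword search ordered by research productivity. All program names, fields, and citation counts below are invented for illustration; a real tool would pull from the APA's data.

```python
# Hypothetical program records: name, areas of specialization, total citations.
programs = [
    {"name": "University A", "areas": ["ethics", "philosophy of mind"], "citations": 4200},
    {"name": "University B", "areas": ["logic", "ethics"], "citations": 6100},
    {"name": "University C", "areas": ["aesthetics"], "citations": 1800},
]

def search(keyword):
    """Return programs listing the keyword as an area, most-cited first."""
    matches = [p for p in programs if keyword in p["areas"]]
    return sorted(matches, key=lambda p: p["citations"], reverse=True)

for p in search("ethics"):
    print(p["name"], p["citations"])
```

The point is only that the ordering falls out of the data rather than reputation: swap in a different metric (publications, placement rate) and the same search produces a differently ordered list.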
Citations, publications, etc. are a better measure than perceived reputation. Not only are they more objective, but they also reflect the careful scrutiny that goes into the peer-review process, as opposed to top-of-the-head perceptions of faculty lists by those who may be unfamiliar with those faculty members' work.
Here’s a link to the current guide. Much more data will be needed but at least there is a starting point and a process already in place. I welcome ideas for a proposal to send to the APA.
I have to say that I’m with you in principle on this, but I can’t actually endorse the ‘no rankings’ attitude. As I see it, we don’t live in a universe run on principles. I think you’ll agree with me on that (as a fellow pragmatist). My pragmatic worry is just this: somebody is going to institute a ranking system no matter what — if PGR goes away somebody will fill the void. It simply can’t be forbidden (what would that even mean?).
So the issue is then this: are we going to get involved in the rankings and ensure that they are as democratic as possible, or are we going to pretend that we can shut down the existence of rankings with good arguments, and thereby fail to contribute to a more democratic and fair ranking system?
Mine is a position of compromise with reality, I suppose. I would of course agree if you were to say that one shouldn’t compromise on deeply immoral issues. But my view is that rankings are not deeply immoral, they’re just unfortunate. There are causes on which I would brook no complicity with those who produce injustice, but this is not such a cause. Rankings are not a site of injustice, they’re just a site of hyper-professionalization. I think it’s important not to confuse the two, lest we begin to think that our petty academic politics are as important as real injustices just down the street.
ck, As far as I know, most academic disciplines do not have rankings. So why is it inevitable that philosophy will? Also note that those institutions of ours that have not been on the PGR radar have worked just fine without rankings. Highly qualified prospective graduate students find us. We generally have well over 100 applications for about 6-8 spots.
I suppose I agree that in an ideal situation we wouldn’t have them, and ideally we probably don’t need them. Once something that people perceive as useful exists, it is hard to break the desire for it. You are right that it’s not inevitable. But my sense is that it’s very likely it will persist. I just suspect that too many people at all levels are invested for it to go away any time soon. If that’s right, I think we should be pressing for a ranking system that more of us can feel good about (or at least not terrible about, given that there will be no perfect ranking).
So I’m with you in spirit (pure reason?), but it’s just a question of strategy going forward. I’m hoping more of my colleagues from a diversity of philosophical backgrounds will get involved with whatever replaces the PGR, rather than continuing to be disappointed in it (hopefully we need not be appalled by it, as we are now). We can be disappointed in it but still involved in it and committed to its improvement (by analogy: consider America).
Also… a side point… I think it’s important to recognize that most disciplines do have rankings of some kind (e.g., the NRC rankings, US News, and more), even if not rankings run internally by the discipline. In the absence of any post-Leiter-PGR rankings, prospective students will simply seek those out, find them, and use them. Maybe those rankings are better (I’m not sure, but I suspect we could do better; the last NRC raw data for philosophy I saw were full of errors and gaps).
I think this approach has great potential.
Some other ideas:
1. Data visualization: I think of how infographics have made data a bit more palatable. Imagine pie charts representing a department’s AOSs, publication areas, and publication types (books, articles, edited volumes, etc.). Since the graphics could draw on the same data source on a server, they could be updated as the data is updated (rather than releasing static documents annually).
2. Longitudinal representations of: placement records, average student/TA ratio, outside grants/funding, stipend/cost-of-living, undergraduate majors, and demographic info. This might be asking too much of the system, but this data is pretty easy for departments to provide (although I wonder if there are privacy issues looming in some of these metrics).
3. Concerns about publication metrics. I get the sense (from others) that the pressure to publish is not all good. So I wonder whether publicizing publication metrics (or including them as variables that inform rankings) would help or hurt overall. Separately, there is a distinction to be made among kinds of publications. Obviously, edited volumes, single-author books, papers, book reviews, etc. cannot be weighted equally. But beyond that, there are differences between stand-alone papers and response papers. Some philosophers crank out a handful of response papers every year, creating conversation-like chains of papers that might not accomplish as much as non-response papers (indeed, some response papers amount to answering a clarificatory question that probably should have been addressed in the inaugural paper). These strings of papers could often have been turned into one more fruitful publication had the authors simply collaborated on a single paper or book. If I am right about this, then a single publication could have far more value than multiple publications, so there is a quantity-quality concern.
4. Custom ranking: one could easily offer a custom ranking tool that lets users generate custom ranking reports by selecting the variables they care about (and leaving out those they don’t). This would allow for fine-grained comparisons, broad ones, and everything in between. For example, a user could select “gender distribution” and compare departments on that one variable, then add other variables to the comparison as they see fit (and even decide how to weight each variable). Again, if the tool draws on a single data source on a server (a spreadsheet would do), it could be very easy to use and to update. Such custom ranking has the following merits: it is useful to people with different criteria, it would serve more than just grad students, it is fairly simple, and it need not be run by a philosopher.
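The weighted, user-selected ranking described in point 4 can be sketched in a few lines. The department names, metric names, and values below are all hypothetical placeholders; the only idea being demonstrated is that the user supplies the variables and weights, and the ranking is just a weighted sum over their selection.

```python
# Hypothetical, already-normalized metrics per department (0 to 1 scale).
departments = {
    "Dept A": {"placement_rate": 0.80, "gender_balance": 0.45, "citations": 0.60},
    "Dept B": {"placement_rate": 0.65, "gender_balance": 0.50, "citations": 0.95},
}

def custom_rank(depts, weights):
    """Rank departments by a weighted sum of the user-chosen metrics.
    `weights` maps metric name -> weight; unselected metrics are ignored."""
    def score(metrics):
        return sum(weights[k] * metrics[k] for k in weights)
    return sorted(depts, key=lambda name: score(depts[name]), reverse=True)

# A user who cares only about placement:
print(custom_rank(departments, {"placement_rate": 1.0}))
# The same user adding citations at half weight:
print(custom_rank(departments, {"placement_rate": 1.0, "citations": 0.5}))
```

Notice that adding a variable (or reweighting one) can flip the ordering, which is exactly the point: there is no single ranking, only rankings relative to someone's chosen criteria.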
I am sure I am not the first to have these thoughts and I am sure there are problems with these thoughts that I have overlooked. I am open to hearing those problems.
Thanks for the great idea!
Here is an early-internet model from Peter Suber, now in archive form (since ’03), useful to students at all levels as well as scholars. I used to use Hippias all the time before the advent of the databases:
http://legacy.earlham.edu/~peters/philinks.htm
Such a search engine would be possible using linked open data and semantic web technology. Not only could you search using key terms, but you could also search according to criteria that matter to you and receive a table (or other visualization) with all of the relevant information listed. For example, the query “give me a list of the placement records and geographic locations of all departments with more than three faculty who publish in the history of philosophy,” or “give me a list of departments in the Great Lakes region with faculty who specialize in philosophy of law,” could return a table of those departments with the information of interest. Is anybody working on this? Would there be general interest in building this kind of system? I work with semantic web technology, so if there is interest, feel free to get in touch with me.
Even if this didn’t take the place of rankings, it might be a valuable resource for determining fit.
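Setting the RDF machinery aside, the shape of the first example query can be sketched in plain Python over a mocked-up data set. Every department name, placement figure, and faculty list below is invented; the sketch only shows how "more than three faculty in history of philosophy" becomes a filter that returns a table of rows.

```python
# Hypothetical department records, mimicking what a linked-data store
# might hold about faculty areas of specialization.
departments = [
    {"name": "Dept A", "location": "Midwest", "placement": "85%",
     "faculty_aos": ["history of philosophy"] * 4 + ["ethics"]},
    {"name": "Dept B", "location": "Northeast", "placement": "70%",
     "faculty_aos": ["history of philosophy"] * 2 + ["logic"]},
]

def history_heavy(depts, minimum=4):
    """Departments with at least `minimum` faculty in history of philosophy,
    returned as (name, placement, location) rows -- the requested 'table'."""
    return [(d["name"], d["placement"], d["location"])
            for d in depts
            if d["faculty_aos"].count("history of philosophy") >= minimum]

print(history_heavy(departments))
```

A real semantic web implementation would express the same filter as a SPARQL query against a shared vocabulary, with the advantage that new criteria (region, placement, specialization) could be combined without redesigning the database.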
I was so inspired by the idea of custom rankings that I went ahead and fleshed out a preliminary vision for it on my blog: http://www.byrdnick.com/archives/6451.
Glad I ran into this great idea! Thanks Noelle!