The Shadow of a Phantom, or How to Do a Survey

A long, long time ago, in a state not too far away, I studied statistics and survey research methodology. This was when I was at Duke University’s public policy program (1985-1987) and my professor was John McConahay. We students spent an excruciating semester on statistics, running programs in the dark of night in the dark ages of computers, when a stray comma in a formula kept us up all night in the computer lab. Then I spent a year with him on survey research methodology (including one double-blind test of two beers—there’s nothing like learning by doing!). In this two-semester course I learned lessons that served me well when, in 1996, I helped run a national deliberative opinion poll.  We had one of the most respected organizations in the country running the survey part (the University of Chicago’s National Opinion Research Center) as well as counsel from leading figures in the field. For the few years that I worked with Jim Fishkin on deliberative polling, I came to appreciate deeply how to get the best take one could on, to borrow a phrase from Derrida, the shadow of the phantom that is public opinion.

I also came to feel a sense of horror when bad surveys are taken as good, authoritative, and meaningful ones. Some of these are what survey researchers call SLOPs, self-selected listener opinion polls. These are worse than sloppy; they trot out results that look real and generalizable because they are quantitative, but in fact they say only what those who bothered to answer the question think. (Or, if the questions are leading, not even that.) What this unrepresentative group thinks can absolutely NOT be taken to represent what the whole thinks.

Basic lessons I learned from studying at Duke and working on the deliberative poll include these:

  • The best way to know what a group thinks is to survey everyone in the group.
  • But if you can’t do that, you may need to survey a sample of the whole. For this sample to represent the whole group, it needs to mirror the whole.
  • The best way for a sample to mirror the larger population is for it to be generated randomly, certainly not by quotas. Market researchers often use quotas, e.g. x number of blacks, whites, Hispanics, women, men, dogs, whatever. But quotas of obvious things like these miss less obvious distinctions that might skew the results.
  • Getting a random sample of a whole is tricky and requires much careful effort.
  • Usually the breaking-point size of a sample, no matter how large the population, is about 300. A population of 3 million can be sampled nearly as well by 300 as can a population of 30,000 (see the sketch after this list). Bigger is better, meaning less margin of error, but for large populations the quality of samples under 300 degrades quickly.
  • The order and wording of questions is crucial.
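
To see why the point about sample size holds, here is a minimal sketch (my own illustration, not drawn from any of the surveys discussed here) of the standard margin-of-error calculation for a simple random sample, assuming the conventional worst-case proportion p = 0.5 and a 95% confidence level:

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Approximate margin of error for a proportion estimated from a
    simple random sample of n people drawn from a population of N,
    with the finite population correction applied (z = 1.96 for 95%)."""
    se = math.sqrt(p * (1 - p) / n)      # worst-case standard error at p = 0.5
    fpc = math.sqrt((N - n) / (N - 1))   # correction: shrinks as n approaches N
    return z * se * fpc

for N in (30_000, 3_000_000):
    print(f"N = {N:>9,}: n = 300 gives +/- {margin_of_error(300, N):.2%}")

# Output:
# N =    30,000: n = 300 gives +/- 5.63%
# N = 3,000,000: n = 300 gives +/- 5.66%
```

With just 300 respondents, both populations yield a margin of error near plus or minus 5.6 percentage points; the finite population correction barely matters once the population dwarfs the sample. This is why the quality of a sample depends far more on how randomly it was drawn than on how large the population is.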

The above criteria are important for any attempt to generalize from a sample to a larger population. They are especially important in reputational rankings. If the sample is skewed at the outset, if it is generated from a set of assumptions about “what counts as good,” then the outcome is bound to reproduce those initial assumptions. It will produce “what is good” based upon what it thought was good in the first place. And the only defense of that starting point is the very conclusion it wanted to prove. This is the worst kind of circular thinking, and it may be invisible to even the most educated university administrators.

In these post-metaphysical times, when we want to assess, say, graduate programs in any of the liberal arts, there are a number of ways to proceed. Like the recent National Research Council survey, we could get a sense of reputation, productivity, graduate student success, etc. (For all its flaws, this was a valiant effort.) The latter two are fairly (but not entirely) objective matters; reputation is thoroughly subjective, yet very meaningful, since really it is “the tribunal of public opinion” that matters at the end of the day, in philosophy as well as in democracy. There are no external standards that tell us which arguments are most compelling. In the end, what matters is whether or not we find them compelling. So who is doing “the best” work on Heidegger today? To find out, I would consult as many people as I could who work on Heidegger today. Then I might have something like the pragmatic (and provisional) truth of the matter.

To approximate this kind of truth, the NRC asked the chair of every single graduate program in the country to give their views on what were the best programs and faculty. They did not go to “the top” schools’ chairs and ask what they thought, because of course this would beg the question of which schools were top ones. (And of course this is the fatal flaw of the Leiter reports.)  They asked everyone, or at least every chair, which strikes me as a fairly (though not completely) representative way to get a picture of the whole.

If the American Philosophical Association were to do its own survey of graduate programs in the United States, it would be best if it surveyed every single one of its members, asking them to indicate which programs and faculty are doing important work and providing good graduate education in their own fields. More importantly, it should make public, in one place, placement rates, student support, faculty CVs, and any other information that would help interested students see the strengths of all the programs in the country. This would be a real service to the profession.

Ranking Continental Philosophy Programs

I just noticed Brian Leiter’s list of what he deems to be the top continental philosophy programs. Save for a few that obviously belong, the list is bizarre. The ones that seem most to belong here are those with asterisks or pound signs, meaning ones that had to be ad-hoc’d into the list.

Group 1 (1-3) (rounded mean of 4.0) (median, mode)

Georgetown University (4, 4.5)
University of California, Riverside (4, 4)
University of Chicago (4, 5)

Group 2 (4-10)  (rounded mean of 3.5) (median, mode)

Cambridge University (3.75, 3)
Columbia University (4, 4.25)
#University at Stony Brook, State University of New York
*University College Dublin
#University of Essex
University of Notre Dame (4, 4.5)
University of Warwick (3.5, 4)

Group 3 (11-31) (rounded mean of 3.0) (median, mode)

*Boston College
Boston University (3, 3)
Harvard University (3, 3)
*Loyola University, Chicago
*New School University
New York University (3, 3)
Northwestern University (3, 3)
Oxford University (3.5, 3)
#Pennsylvania State University
Stanford University (3, 3)
Syracuse University (3.25, 3)
University College London (3, 3)
University of Auckland (3, 3)
University of California, Berkeley (3, 3)
University of California, Santa Cruz (3, 3.25)
*University of Kentucky
*University of New Mexico
University of South Florida (3, 2)
*University of Sussex
University of Toronto (3, 3)
*Vanderbilt University

* inserted by Board
# based on 2004 results, in some cases with modest adjustments by the Advisory Board to reflect changes in staff in the interim

It’s easy to understand why the list is so strange.  For years I have noted that the problem with Leiter’s methodology is that it is based on reputational rankings from a group of rankers he has self-selected.  Here is the list of rankers for this continental philosophy ranking:

Evaluators: Kenneth Baynes, James Bohman, Taylor Carman, David Dudrick, Gary Gutting, Beatrice Han-Pile, Pierre Keller, Sean Kelly, Michelle Kosch, Brian Leiter, Stephen Mulhall, Brian O’Connor, Peter Poellner, Bernard Reginster, Michael Rosen, Iain Thomson, Georgia Warnke, Robert Wicks, Mark Wrathall, Julian Young.

I have been involved in continental philosophy circles for many years, but I recognize only four of these philosophers as in any way qualified to assess continental philosophy overall. Others may be familiar enough with the field to recognize which programs have individuals doing work in continental philosophy (from a certain bent). But it would be a huge stretch to say that, as a whole, they are deeply familiar with what is going on in the field.

Objectively speaking, the best measures of success in any given area of philosophy are these: getting published in the major journals and by the major publishing houses of that field, getting papers accepted at the major conferences in that field, and excelling at job placement. Data on the third point is lacking, for want of will or coordination, but the first two are simple enough to assess. For continental philosophy, just look at the programs of the past years’ meetings of the major societies, e.g., SPEP (the second-largest philosophical society in the U.S.), and identify who leads these organizations, whose papers are getting accepted, and which doctoral programs are training the emerging scholars. For publications, look to who is getting published in the leading journals in continental philosophy (such as Continental Philosophy Review, Philosophy Today, Constellations, and Philosophy and Social Criticism) and by the academic publishing houses that have lists in the field.

Any student serious about going into continental philosophy would be wise to dismiss this obviously biased ranking. Any reputational ranking has serious limitations, but at the very least a reputational ranking of a field should consult those who know the field well: for continental philosophy this would include the leaders of SPEP and other continental societies; the authors and editors of series published by Columbia, Indiana, SUNY, Routledge, Rowman & Littlefield; and the editors of the main journals in the field.

Otherwise the report just confirms the reporter’s preconceived ideas about what counts as philosophy. And if continental philosophy doesn’t count for him, despite the fact that it is one of the most vibrant and innovative fields in the humanities today, then the results are bound to be twisted.

***

For what it’s worth, of U.S. doctoral programs in continental philosophy, I’d easily recommend these to my students (in alphabetical order): the CUNY grad program, DePaul, Emory, the New School, Penn State, Stony Brook, Vanderbilt, and perhaps Boston College, Boston University, Loyola, Memphis, Northwestern, and Syracuse. No doubt there are other good and emerging programs that I’ve missed, so please post a comment if you notice any such omission.

Edit: I’ve subsequently found that the reason so many continental programs aren’t ranked (at least without an asterisk or pound sign) is that they have opted out of the rankings by not submitting a list of faculty to the PGR. Nonetheless, the basic problem remains (and this may be why so many continental programs have opted out).