One thing I’ve learned in life is that human beings are creatures very much obsessed with social hierarchy, and academics are even more obsessed with this than most people. So, any public rankings that involve oneself and the institutions one is part of tend to have a lot of influence. In the US, US News and World Report each year ranks the “Best Colleges”, see the rankings for National Universities here. My university’s administration tended in the past to express skepticism about the value of this ranking, which typically put us tied for 8/9 or 8/9/10. This year however, everyone here agrees that there has been a dramatic improvement in methodology, since we’re at number 4.
For most academics though, the real ranking that matters is not how good a job one’s institution does in training undergraduates, but the ranking of the quality of research in one’s academic field. Where one’s department fits in this hierarchy is crucial, affecting one’s ability to get grants, how good one’s students are and whether they can get jobs, even one’s salary. The gold standard has been the National Research Council rankings, which were supposed to be revised about every ten years. It turns out though that the task of making these rankings has somehow become far more complex and difficult, with more than fifteen years elapsing since the last rankings in 1995. Since 2005 there has been a large and well-funded project to generate new rankings, with a release date that keeps getting pushed back. Finally, last year a 200-page book was released entitled A Guide to the Methodology of the National Research Council Assessment of Doctorate Programs, but still no rankings.
Recently the announcement was made that all will be revealed tomorrow at a press conference to be held in Washington at 1pm EDT. I hear rumors that university administrations have been privately given some of the numbers in advance, to allow the preparation of appropriate press releases (see here for an example of a university web-site devoted to this issue).
The data being used was gathered back in 2005-2006, and the five intervening years of processing mean that it is rather stale, since many departments have gained or lost academic stars and changed considerably in the interim. So, no matter what happens, a good excuse for ignoring the results will be at hand.
Update: University administrations have now had the data for a week or so and are providing it to people at their institutions. For an example of this, see the web-site Berkeley set up here. All you need is a login and password….
Update: (Via Dave Bacon) Based on the confidential data provided to them last week, the University of Washington Computer Science and Engineering department has released a statement characterizing this data as having “significant flaws”, and noting that:
The University of Washington reported these issues to NRC when the pre-release data was made available, and asked NRC to make corrections prior to public release. NRC declined to do so. We and others have detected and reported many other anomalies and inaccuracies in the data during the pre-release week.
The widespread availability of the badly flawed pre-release data within the academic community, and NRC’s apparent resolve to move forward with the public release of this badly flawed data, have caused us and others to urge caution – hence this statement. Garbage In, Garbage Out – this assessment is based on clearly erroneous data. For our program – and surely for many others – the results are meaningless.
The UW Dean of the College of Engineering has a statement here where he claims that, despite 5 years of massaging, the NRC data contained obvious nonsense, such as the statistic that 0% of their graduating CS Ph.D. students had plans for academic employment during 2001-5.
Update: Boston University has broken the embargo with this press release. They give a chart showing that almost all their graduate programs have dramatically improved their ranking since the 1995 rankings, while noting that the two rankings are based on different criteria, that the NRC says you can’t compare them, and that the 2010 rankings in the chart are not NRC numbers but are based on BU’s own massaging of the data. I suspect that the NRC data will be used to show that, like the kids in Lake Wobegon, all programs are above average.
For more on this story, see coverage by Steinn Sigurdsson at Dynamics of Cats.
Update: The NRC data is out, and available from its web-site. But, no one really cares about that, all they care about are the rankings, and the NRC is not directly putting those out. Instead, they’ve subcontracted the dirty work to phds.org, where you can get rankings here, using either the “regression-based” or “survey-based” score. In mathematics and physics, the lists you get are about what one would expect, with perhaps somewhat of a tilt towards large state schools compared to the 1995 rankings (most dramatically, Penn State was number 36 in 1995, number 8 or 9 this year).
The UW dean appears to be correct. I’ve been perusing the data set for schools that I have been connected with. There are some flagrant errors that seem to significantly impact the results. For example, they record that the Boston University physics department has a faculty that is 75% female. This gives them a huge bump in the “diversity” rating. In addition, despite having a median time-to-graduation of 6 years, they claim only 9% of physics grad students graduate within 6 years. Basic sanity-checking does not seem to have been applied.
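The kind of inconsistency quoted above is trivial to catch automatically: if the median time-to-degree is at most 6 years, then by definition at least half of the students finished within 6 years, so a 6-year completion rate of 9% is internally contradictory. A minimal sketch of such a check (field names and numbers are illustrative, not the NRC’s actual schema):

```python
def inconsistent_completion(median_years, pct_within_6yr):
    """Flag records where a median time-to-degree of <= 6 years
    contradicts a reported 6-year completion rate below 50%."""
    return median_years <= 6 and pct_within_6yr < 50

# The Boston University physics numbers quoted above:
print(inconsistent_completion(6, 9))    # -> True (flags the record)
# A consistent record passes:
print(inconsistent_completion(6, 60))   # -> False
```

Running even this one-line test across the data set would have caught the BU record before release.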
I have some doubts about the Penn State ranking, at least in math, where they made a similar jump. I think it’s a fine dept, but to jump from 35th to 10th seems dubious to me. Perhaps they’re strong in areas outside of my interests, but I wonder what’s going on with their data.
There are some terrible flaws here, and I don’t really have enough time to comment on the utter incompetence that these rankings have displayed.
What I will say is this. This is an extremely confusing and misleading ranking, and to have ranges instead of a simple ranking has to be one of the dumbest things any ranking system has done. My guess is that they did this so that colleges can brag about how high they are possibly perceived to be. This is a ploy to make NRC a more popular ranking, but it won’t happen, since no one’s going to go through all the trouble and confusion of phds.org to get a half-assed confusing-as-hell ranking system. US News, unfortunately, will become the standard for graduate school rankings. In a way, it already has.
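For what it’s worth, range rankings of this kind typically come from resampling: the criterion weights are redrawn many times, each draw produces a complete ordering, and each program reports something like the 5th–95th percentile of its rank across draws. A toy illustration of the general technique (this is not the NRC’s actual procedure; programs, scores, and weights are made up):

```python
import random

random.seed(0)

# Hypothetical per-program scores on two criteria (made-up numbers).
programs = {"A": (9.0, 6.0), "B": (7.0, 8.0), "C": (5.0, 5.0)}

def rank_range(programs, draws=1000):
    """5th-95th percentile of each program's rank over random criterion weights."""
    ranks = {name: [] for name in programs}
    for _ in range(draws):
        w = random.random()  # weight on criterion 1; (1 - w) on criterion 2
        ordering = sorted(programs,
                          key=lambda p: -(w * programs[p][0]
                                          + (1 - w) * programs[p][1]))
        for pos, name in enumerate(ordering, start=1):
            ranks[name].append(pos)
    out = {}
    for name, rs in ranks.items():
        rs.sort()
        out[name] = (rs[int(0.05 * len(rs))], rs[int(0.95 * len(rs)) - 1])
    return out

print(rank_range(programs))  # A and B trade places depending on the weights;
                             # C is dominated on both criteria and stays last
```

The point is that a program strong on one criterion and weak on another gets a wide range, while a program dominated on every criterion gets a tight one, and everyone can quote the favorable end of their interval.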
Also, there are serious data problems. anon above me states that Penn State jumped from 35th to 10th in math. Chicago went from 5th to 12-39, with a survey ranking of 27-57. This is despite the fact that Chicago is reputed within the mathematics world to be top 5 in the United States, and in the last year it just recruited a Fields Medalist (who are quite rare at US institutions). Something must have gone seriously wrong with data collection.
Anyway, that is all I can say. It is just very disappointing that this is what I was waiting for these last five years…
There’s some good drill-down into the math numbers going on at my blog. In particular, one commenter points out that all of Chicago’s many postdocs were counted as faculty, while none of (e.g.) Yale’s were. As you can imagine, this plays havoc with measures like “proportion of faculty holding external grants.”
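The arithmetic of that havoc is worth spelling out: sweeping postdocs, few of whom hold their own external grants, into the faculty denominator can sink a grant-rate statistic substantially. A toy illustration with made-up numbers:

```python
def grant_rate(faculty_with_grants, faculty, extra_counted=0):
    """Fraction of 'faculty' holding external grants, with the denominator
    optionally inflated by people (e.g. postdocs) counted as faculty."""
    return faculty_with_grants / (faculty + extra_counted)

# Hypothetical department: 30 of 40 actual faculty hold grants.
print(grant_rate(30, 40))                    # -> 0.75
# Same department with 20 postdocs swept into the faculty count:
print(grant_rate(30, 40, extra_counted=20))  # -> 0.5
```

Two departments with identical faculty grant support can thus land 25 points apart purely because of how the headcount was reported.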
Thanks JSE (fixed link),
The way faculty are counted seems to be one of the main weaknesses of this survey. They had some very elaborate counting scheme, involving “core”, “associated”, “allocated” faculty, and each institution had to figure out separately how to reconcile their own way of counting faculty with what the NRC wanted. The definition of “core” involved things like whether you were on phd committees, etc. It seems that different institutions made different choices about how to count, which in some cases benefited them immensely, in others hurt them a lot.
So what are the job and yearly budget of the National Research Council? The Wikipedia article on it is very short on details.
rrtucci,
I don’t know about the NRC in general, but this particular project is described as costing “well over $4 million”, and I think that’s the cost of the NRC people’s effort. The cost to universities to get this data together, analyze it, and put out press releases might be an order of magnitude more than that.
Looks like if you want a higher ranking from the NRC, you’re going to have to grease a few palms. I suppose that technically a bribe or two to the odd council official could be marked under “research expenditure” since promoting the university’s profile/ faculty salaries is all research is about nowadays.
These rankings are all nonsense anyway. The London Times brought out one of these lists recently and it’s essentially an Anglo-Saxon Academy Award ceremony—and every bit as insular. The US and especially the UK were vastly overrepresented, as were most anglophone countries.
The UK had ~26 universities, while France had only 4, despite the fact that both have similar population sizes (~60 million) and development levels. Germany (~80 million) had about 14. Ireland (pop. ~4 million) had 2. There were no Russian (~150 million) universities in the top 200, nor Indian ones (~1.3 billion).
These rankings are bogus to begin with. Their only purpose is to satisfy the deep need for our society to reduce entire institutions and their collective efforts down to a single number, which can be compared to other numbers to produce a league table. It’s a pathological disorder of the modern world (I blame the press) to seek such numbers regardless of whether they hold any real meaning (they are generally meaningless).
The effect of all of this is debilitating to education everywhere it is applied. Who is the “best”? Whoever has the higher “number”. How good is an institution? Check its “number” on the table next to all the other numbers. Forget the other factors; there’s a number. This is like publication counts/citations, only applied to entire universities and schools.
This is why I’m quite serious when I say that universities should allocate money to bribe the officials making these numbers. Better that than play the corrupting game of juking the stats: rushing to meet meaningless targets to up the number(s), fudging figures and cutting corners all the way, and likely wrecking the university in the process. If your aim is to turn your institution into an elaborate machine for producing a higher number, then you might even succeed; but your institution will likely fail in the process.
On a deeper note, what does it say about our society if we base our collective decisions on numbers which mean nothing? Price indices, league tables, stats of all kinds. Numbers without meaning (or mathematics), which emerge from calculations that make no sense, based on data that has been manipulated. And then we feed them all back into the systems which run our lives. All rather dystopian.
Dear ObsessiveMathsFreak — Academics’ love of rankings (as much as many of us, such as myself, try to deny it) is, in my opinion, partially built from our own early success in school. More or less, everything you said could be applied to a test score achieved in fourth grade. My own realization that I _might_ be good at math started when I got a 51/52 on some test on fractions. What does that 51/52 really mean? Some different test might have scored me 15/52. It’s so arbitrary and meaningless etc etc.
However, what I do remember is my teacher’s happy reaction to my performance, and that ultimately made a difference to me. I went off to do well in school (meaning, more meaningless A’s), as do many of our peers. It’s hard to shut off caring about grades after years of benefiting from them!
A guy named Jules posted the following very interesting comment about this issue on Scott Aaronson’s blog:
Deans and department chairs want to kill off any ranking system, because they do not want prospective graduate students to get this data. I hope that they are not successful now, because you can be sure they won’t put anything in its place.
For example, UW does not report any of the NRC-requested data on its website. If you email them for it, they will not respond. Now they complain that their faculty count is wrong when they themselves inflated the count. Cry me a river.
As to MIT, I don’t know what “found academic employment” means precisely. What counts is how many PhDs found tenure-track positions. I doubt this is 50%.
@rrtucci: Scott was talking about the MIT EECS department, in terms of “found tenure track jobs”. Tenure track jobs were much easier to find in EECS as opposed to math and physics.
Look at the tables of honors and awards recognized for adding points! The NRC claims that “starting in fall of 2005, NRC staff compiled lists of international, national and disciplinary awards in all the taxonomic fields …” So how come in more than 3 years they failed to find the Académie des Sciences in Paris, the Royal Society of London, and other leading and internationally respected sources of honors? In Cambridge MA alone there are 28 Fellows and Foreign Members of the Royal Society, and those associated with graduate programs are overlooked. Yet the Maple Leaf Farms Duck Research Award is included as prestigious – doubtless meritorious on some scale, but in all fairness not at the level of the Académie des Sciences or the Royal Society, or many other awards completely overlooked by the NRC. I thought there was something about not discriminating against people based on nationality and national origin here in the US, but perhaps the NRC thinks it’s above the law as well as courtesy and thoroughness.
I really don’t think I can improve on ObsessiveMathsFreak’s commentary, except to add that I’ve found in my travels that the reality of the quality of the top schools is often far removed from their reputation, due to many factors: not the least of which is the archaic practice of placing wealthy “legacy students” who can barely read into top programs. This takes a very limited spot from deserving students of lower pedigree. It’s nothing new at prestigious universities, of course, but it’s certainly new in the sciences. I think this is one of the great unspoken factors undermining scientific training in this country.
To Andrew L.
I don’t know for sure about other graduate departments, but I do know that the MIT Math Department does not admit any legacy students (and I strongly suspect that most other graduate programs do not). Undergraduate admissions is another matter.
As for the ranking, all I can say is that there are a number of instances where School A is ranked well above School B, and where I would advise most graduate students who had this choice to attend School B.
What obviously happened is that some departments figured out how to game the data entry and others ignored them (it was a lot of work).
It wasn’t always at the department level…the data collection was administered at the university level.
@Andrew L: I’ve seen you make this claim in other forums (e.g. on MathOverflow), and I have to call BS. I’ve been a student at or employed by several “top” departments, and I’ve got close friends at several others. I’ve never encountered any students like you describe.
Simply put, I think you don’t know what you’re talking about.
Andy P.,
Agreed. I’ve been associated with my share of “prestigious” universities and have seen few if any incompetent rich kids in math and science classes. The main effect of legacy admissions (and it is of significant size) is to give a big help to people who are part of the large group with similar qualifications around the cut-off for admission. The number of those with much inferior qualifications getting in is small; for this I believe you need to be not only a legacy, but from a family with significant financial or political power. And in those few cases, the students involved tend to major in something other than math or science.
And this only applies to undergraduates. From what I’ve seen graduate admissions pretty much completely ignores considerations of legacy status or wealth.
Just one more comment … I have seen (very rarely) incompetent students admitted to graduate programs, but in at least one case this was because in their home country, they had a family with significant power, and this let them obtain incredible recommendation letters, excellent grades, and very good scores on standardized tests.