Vet School Rankings Are A Costly Joke
If you thought college rankings were bad, vet school rankings are even worse
The Problem with College Rankings
Everyone has heard of the US News & World Report college rankings, especially anxious high schoolers and their parents, university faculty, and administrators. These rankings are more than a superficial prestige pissing contest between schools; they have enormous influence over where prospective students apply, they shape the university job market, and they can even affect how these institutions are funded.
According to one 1999 National Bureau of Economic Research (NBER) study:
“Our analyses indicate that a less favorable rank leads an institution to accept a greater percentage of its applicants, a smaller percentage of its admitted applicants matriculate, and the resulting entering class is of lower quality, as measured by its average SAT scores. While tuition levels are not responsive to less favorable rankings, institutions offer less visible price discounts in the form of slightly lower levels of expected self-help (loans and employment opportunities) and significantly more generous levels of grant aid. These decreases in net tuition are an attempt to attract additional students from their declining applicant pool.”
Another NBER study, from 2007, investigated the impact these rankings have on public funding for these institutions:
“Using college level data from 1987 to 1995, we have three main findings: first, USNWR coverage causes colleges to increase educational and general expenditures per student. Second, these expenditure responses are funded by a 6.5% increase in state appropriations per student, but tuition revenue does not respond. Third, the state appropriation response to USNWR exposure is larger the larger the pre-college age population, voter turnout and USNWR newsstand sales are in a state. These heterogeneous responses are consistent with the USNWR rankings reducing the scope for political agency. Thus our results suggest that, in addition to a consumer response, the publication of quality rankings may influence the provision of quality through a political channel.”
One faculty member at UCSB described the situation like this:
“…even if every school criticizes the validity of the U.S. News & World Report college rating system, these institutions still spend a great deal of money and time on trying to raise their rankings. In other words, a bad evaluation system is driving the decisions of many of our colleges and universities. For instance, in order to raise their selectivity rating, schools pour money into advertising and recruitment in order to make sure that many students apply. In fact, even the universities that reject the vast majority of interested students spend lavishly on trying to attract more students so they can reject the highest number of applicants.
Another key way that schools compete for the applications of incoming students is by showing off their great athletic centers, food courts, and other extracurricular activities. Once again, due to the lack of any accepted method of evaluating student learning, colleges and universities rely on non-educational aspects to attract and retain students. For example, when students and their parents go on school tours, most of the information given relates to non-educational topics like housing, parking, dining, fraternities, athletic facilities, and entertainment options, and when guides do provide information concerning educational activities, it is often false or misleading.”
In short, college rankings have become a costly arms race, and students and taxpayers are the losers.
What should be done about this? In response to criticisms of their practices, USNWR changed their undergraduate ranking methodology last fall to try to reduce biases toward selective, high-resource institutions, biases that mainly hurt large public universities. Holden Thorp, the Editor-in-Chief of Science magazine, recently wrote about this shake-up. Spoiler alert: a lot of formerly high-ranking schools are upset about methodology changes that penalize them while boosting public schools in the rankings. His whole piece is worth your time; read it here:
The Utter Insanity of Vet School Rankings
This brings me to the US News & World Report veterinary school rankings, which are fraught with all of the same problems above, and then some! Even though vet schools make up a tiny fraction of the colleges and universities in the US, these institutions breathlessly tout where they stand in the rankings, especially when they go up.
Well, if these rankings are so important to these institutions, surely they are at least based on some objective quality metrics, right? You might think so, based on how human medical schools are ranked, which incorporates myriad factors: total federal grant funding, research output per faculty member, the percentage of graduates who go into primary care, MCAT scores, and so on. You can quibble with the specific metrics and weights, but the ranking is based on SOMETHING tangible.
How about vet schools?
Nope!
Here is the complete methodology for vet school rankings from US News & World Report’s own website:
In addition to getting a broad education in animal science, veterinary medicine students can specialize in areas such as small animal surgery, environmental toxicology and aquatic medicine. These are the top veterinary medicine schools. Each school's score reflects its average rating on a scale from 1 (marginal) to 5 (outstanding), based on a survey of academics at peer institutions.
It is literally a subjective opinion survey not unlike a 1-5 star Google or Yelp review. Who fills this out? Per USNWR, “The rankings were computed solely from the results of peer assessment surveys completed by deans, other administrators or faculty at accredited degree programs or schools.” There are no objective metrics like applicant scores, NAVLE board pass rates, or post-graduation employment data at all.
There are so many problems with this “methodology” it is hard to know where to begin. First, to state the obvious: School administrators and faculty are essentially being asked to RANK THEIR OWN COMPETITION on a high-stakes survey that we know impacts applications and funding. Would it make any sense to have a survey of “best restaurants” filled out by the managers of rival restaurants? Of course not!!!
Next, these survey methods frankly wouldn’t pass muster for even a newspaper political opinion poll, let alone something as consequential as annual university rankings. The most recent survey had only a 53% response rate, and without knowing how many surveys were sent out, or basic details about who took them, it is impossible to assess this data. What are the ranges and confidence intervals on these scores??? A bare 1-5 number with no context is meaningless.
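To make the missing context concrete, here is a minimal sketch, in Python, of the 95% confidence interval you would get on a mean 1-5 rating from a small panel of respondents. The panel size (30) and the individual ratings are invented purely for illustration; USNWR publishes neither.

```python
import math
import random

# Hypothetical panel: USNWR does not publish raw responses, so we
# invent 30 respondents each rating a school on a 1-5 scale.
random.seed(0)
ratings = [random.choice([2, 3, 3, 4, 4, 5]) for _ in range(30)]

n = len(ratings)
mean = sum(ratings) / n
variance = sum((x - mean) ** 2 for x in ratings) / (n - 1)  # sample variance
std_err = math.sqrt(variance / n)

# Approximate 95% confidence interval (normal approximation; with
# n = 30 the t critical value of ~2.05 would widen it slightly).
lo, hi = mean - 1.96 * std_err, mean + 1.96 * std_err
print(f"mean rating: {mean:.2f}, 95% CI: ({lo:.2f}, {hi:.2f})")
```

With a panel this small, the interval spans several tenths of a point, which can easily exceed the gap between adjacently ranked schools. Publishing a bare mean hides all of that uncertainty.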
It should also be apparent that these survey responses are heavily influenced by the age of the institution, subjective impressions of prestige, and funding/resources. The bottom six schools (all rated 2.2/5 or lower) are the six most recently accredited programs (2010 to present). In contrast, the top three schools were founded 75, 116, and 129 years ago. It is simply impossible for new programs to match the reputation of schools that are, on average, over a century old in terms of number of graduates, research output, legendary faculty, and overall impact on the profession.
How Should the Profession Respond?
In summary: while the ranking system for colleges in general is flawed and leads to increased costs and stress, the veterinary school rankings are completely unjustifiable nonsense, detached from any objective measure of reality. It is almost comical to compare the detailed, granular data underlying the med school rankings with vet schools’ single “How much do you like program X, on a 1-5 scale?” question.
We need to do better.
To start with, prospective pre-veterinary college students should be told in no uncertain terms that the USNWR vet school rankings are complete bullshit.
Vet schools should absolutely stop touting these biased and borderline fictional “Yelp rankings” that distort so much about higher education.
Deans and other administrators who receive this survey should refuse to fill it out, and should tell USNWR that they insist the rankings be reworked around objective metrics.