Times Higher Education published its 2016 Student Experience Survey last week. This year, Loughborough claimed top honours, with Southampton Solent the highest climber, and most London universities near the bottom of the pile.
Each year I look at these and similar tables in The Times, The Guardian and the all-powerful NSS and I find myself asking the same question: Why are the people who compile these tables such a bunch of rankers?
A rank is easy to understand and it makes for a nice headline (there’s always a winner!) but it presents next to nothing in terms of usable information.
At the top of the table, both Loughborough and Harper Adams are the same in terms of their measured student experience. Qualitatively of course, they are worlds apart. Harper Adams is a small agricultural institution in Shropshire, while Loughborough is the sporting colossus of the East Midlands. They don't have a single degree programme in common. Someone planning to study at Loughborough wouldn't consult the table, see that students at Harper Adams have better personal relationships with staff and decide to become a farmer instead. That would be absurd and yet there they sit together on the table just asking to be compared.
More sensible, perhaps, to look at institutions that fit a common set of criteria. Suppose you have a shortlist of similar universities (similar UCAS entry requirements for your chosen subject, say); you can then check where each of them ranks to judge which is better. That might seem entirely reasonable, but differences in position on the table don't correspond to predictable differences in overall score.
The Times Higher Education report draws attention to Southampton Solent University, because it has jumped 36 places up the table this year from 90th to 54th. But is that only a small step for a university or does it constitute a giant leap for university-kind? Are those intervening universities from 89th to 55th a yawning chasm or are they all identikit institutions? To find out you need to look at the scores rather than the ranks, as this handy histogram shows:
[Histogram: distribution of overall scores, Times Higher Education Student Experience Survey 2016]
We'll get to what the scores are a measure of later, but note that there is one institution (London Metropolitan) with the lowest score of 64, and two (Loughborough and Harper Adams) with the highest of 86. The modal (and median! and mean!!) score is 77, which 16 universities share (an eclectic mix including Southampton Solent and Imperial College).
The nature of this distribution is such that if London Metropolitan increased its score by five it would rise barely more than five places. If the University of the West of Scotland (in 89th) increased its score by five, it would rise more than 50 places.
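To see why, here's a little Python sketch with made-up scores (not the real survey data) that bunch in the middle the way the real ones do. The same five-point improvement buys you almost nothing at the sparse bottom of the table and a great leap through the crowded middle.

```python
# Illustrative only: hypothetical overall scores, not the real survey data.
# Scores bunch in the middle and thin out at the extremes, roughly like
# the histogram above.
scores = {
    "Uni A": 64, "Uni B": 66, "Uni C": 71,
    "Uni D": 73, "Uni E": 74, "Uni F": 75, "Uni G": 75,
    "Uni H": 76, "Uni I": 76, "Uni J": 76,
    "Uni K": 77, "Uni L": 77, "Uni M": 77,
    "Uni N": 79, "Uni O": 82, "Uni P": 86,
}

def rank(name, table):
    """1 = top of the table; ties share the better rank."""
    return 1 + sum(1 for s in table.values() if s > table[name])

def places_gained(name, table, bump=5):
    """How many places a university climbs if its score rises by `bump`."""
    bumped = dict(table, **{name: table[name] + bump})
    return rank(name, table) - rank(name, bumped)

# A five-point rise from the sparse bottom of the table crosses almost nobody...
print(places_gained("Uni A", scores))  # 1 place
# ...while the same rise from the crowded middle leapfrogs a whole cluster.
print(places_gained("Uni C", scores))  # 7 places
```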
Solent's leap from 90th to 54th translates to an increase in score from 71 to 77. That doesn't sound impressive (you could say it's only 8% better), but what does it mean in terms of the underlying rating scales that survey participants completed?
The Times Higher Education survey comprises 22 differently-weighted elements with sensible-sounding labels such as “High-quality staff/lectures”, “Good social life” and “High-quality facilities”. Respondents rated each element on a 7-point scale. The methodology given in the report is sketchy, but I assume the scale is neutral in the middle, with three levels of agreement on one side and three of disagreement on the other, like the figure below but with more sensible labels.
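For what it's worth, my best guess at the arithmetic is something like the sketch below: average each element's ratings across respondents, weight them, and rescale onto an overall score. The weights, the 1-to-7 coding and the rescaling are all my assumptions; the published methodology doesn't spell them out.

```python
# A guess at how an overall score might be assembled. Everything here --
# the weights, the 1-7 coding, the rescaling onto a 0-100-style figure --
# is an assumption on my part; the report doesn't say.
from statistics import mean

def overall_score(element_ratings, weights):
    """
    element_ratings: {element label: list of 1-7 ratings from respondents}
    weights:         {element label: relative weight}
    Returns a weighted mean rating rescaled so 1 -> 0 and 7 -> 100.
    """
    total_weight = sum(weights.values())
    weighted_mean = sum(
        weights[el] * mean(ratings) for el, ratings in element_ratings.items()
    ) / total_weight
    return 100 * (weighted_mean - 1) / 6   # map the 1-7 scale onto 0-100

# Two made-up elements instead of the real 22, with made-up ratings.
ratings = {
    "High-quality staff/lectures": [6, 5, 7, 6, 5],
    "Good social life":            [4, 5, 5, 6, 4],
}
weights = {"High-quality staff/lectures": 2.0, "Good social life": 1.0}
print(round(overall_score(ratings, weights), 1))  # 74.4 under these assumptions
```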
My institution, Brunel University, has slipped 19 places in the rankings from 59th to 78th since the last survey. That doesn't look good. It's the kind of drop that could prompt an internal review. Or maybe even an action plan with which to move forward. But Brunel actually has a higher overall score this year (up from 75.1 to 75.7), it's just that other institutions have improved more.
In fact, Brunel has fallen for two consecutive years. A couple of years ago it proudly boasted of being 27th in the table, the highest position in London. The fall in position since then has been precipitous; the change in score has not.
The figure below puts these scores in the context of the lowest ranked university this year (“Worst” in the figure) and the highest ranked uni (“Best” in the figure).
For the past three years, the average Brunel score has been generally nearer to ‘mostly agree’ than ‘somewhat agree’ (or whatever labels are actually used in the Times Higher survey).
Despite this two-year plummet of more than 50 places, Brunel is still somehow the second-highest-rated university in London, with a score just below Imperial's. (This ignores Royal Holloway, which geographically is "University of London" about as much as UCL's School of Energy and Resources.)
It's important not to infer that because these data are problematic, they are worthless. We cannot say that there are no measurable differences between universities, although in the peculiar phrasing of the Times Higher Education survey, "there is no statistical significance in the scores of similarly ranked universities". Nor would anybody suggest that student experience is irrelevant, or that we shouldn’t try to measure and improve it.
There must be a better way to do this, though, than with tables of ranks of weighted averages of average scores on different 7-point scales.
When a university has an average rating of 5.5, is this because most respondents gave fives and sixes? Or, are there lots of sevens coupled with a few ones and twos? The issues a university needs to address are completely different in those two scenarios (one of mostly positive students, the other a mix of lovers and haters).
A 7-point (Likert-type, as psychologists like to say) scale should be interpreted with caution. One student might give a rating of 3 because they're just on the negative side of shoulder-shrugging. Another might give a rating of 1 because they're apoplectic about everything, including the fact that the survey scale doesn't go any lower than 1. In this case the incandescent student is much more than three times as negative as the slightly-annoyed one, but the averaging masks that. There are also longstanding questions about Likert scales concerning left-right response biases, cultural differences and social desirability, among other critiques that are by no means new. Statisticians recommend treating such data (at least initially) as ordinal.
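Here's a toy illustration of both points, with invented ratings: two sets of responses can share a mean of 5.5 and still describe completely different student bodies, and an ordinal summary (counts per point, plus a median) exposes that where the average hides it.

```python
from collections import Counter
from statistics import mean, median

# Made-up responses on a 1-7 scale, not real survey data.
contented = [5, 6, 5, 6, 5, 6, 5, 6]   # mostly fives and sixes
love_hate = [7, 7, 7, 1, 7, 7, 7, 1]   # lovers and haters

for label, ratings in [("contented", contented), ("love/hate", love_hate)]:
    print(label,
          "mean =", mean(ratings),        # identical: 5.5 in both cases
          "median =", median(ratings),    # 5.5 vs 7: already more telling
          "counts =", sorted(Counter(ratings).items()))
```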
The better solution would be to think of more imaginative and effective ways of surveying students about the different aspects of their experiences at university. I can't think what that solution might be. I admit that my idea of a controlled experiment in which students are randomly moved between different universities every twelve months is somewhat implausible.
Times Higher Education notes a "London effect": institutions within the M25 get lower scores. If you zoom in on my Google map below, it's noticeable that the high-scoring institutions (dark circles) tend to be far from the metropolis, while lower scores (light circles) predominate in central London. This is accounted for in terms of a higher cost of living and the lack of unified campuses, but I'm not convinced that sufficiently explains why London universities average 72, compared with 78 for those outside London.
[Forgive me if your favoured institution is in the wrong place or hidden by another]
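If you have the scores and a flag for whether an institution is inside the M25, the "London effect" reduces to a one-line comparison of group averages, something like this (the rows below are invented, just to show the shape of the calculation):

```python
import pandas as pd

# Hypothetical table in the shape of the survey results (made-up rows).
unis = pd.DataFrame({
    "institution": ["Uni A", "Uni B", "Uni C", "Uni D", "Uni E", "Uni F"],
    "inside_m25":  [True,    True,    False,   False,   False,   False],
    "score":       [70,      73,      77,      79,      76,      80],
})

# The "London effect" is just this comparison of group averages.
print(unis.groupby("inside_m25")["score"].agg(["mean", "count"]))
```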
The Merrier the More
There is a close relationship between the results of this survey and the last National Student Survey. In fact, the correlation is statistically magnificent. This shouldn't come as a great surprise, given that a similar sample of students was asked similar questions about similar topics at about the same time. The scatterplot below shows the relationship, but the outliers give pause for thought.
The bubbles show the sample size in the Times Higher Education survey. The NSS is a behemoth with a massive sample size, but the Times Higher had between 50 and 300 respondents from each university (mostly around 100). That the universities with small samples are the ones that stray furthest from their NSS scores is enough to raise eyebrows at some of the Times Higher data.
What turns the eyebrow-raising into an askance look of suspicion is that the universities with larger samples in the Times Higher survey also had higher scores. To put it simply, universities scoring less than 70 had on average only 89 respondents to the survey; those scoring more than 80 had 167. Why that should be requires speculation in another blog post, I think.
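Both checks are only a few lines of pandas once the two sets of results are side by side. The rows below are placeholders for the shape of the data; the 89 and 167 averages quoted above come from the actual survey spreadsheet, not from this sketch.

```python
import pandas as pd

# Placeholder data in the shape of the two surveys (made-up values).
df = pd.DataFrame({
    "institution": ["Uni A", "Uni B", "Uni C", "Uni D", "Uni E"],
    "the_score":   [68.0, 72.5, 77.0, 79.5, 83.0],   # Times Higher overall score
    "nss_score":   [70.0, 74.0, 76.5, 81.0, 82.0],   # matching NSS figure
    "respondents": [85,   95,   110,  150,  180],    # Times Higher sample size
})

# 1. The "statistically magnificent" relationship with the NSS.
print(df["the_score"].corr(df["nss_score"]))

# 2. Average Times Higher sample size by score band: do the high scorers
#    also tend to be the ones with the larger samples?
bands = pd.cut(df["the_score"], bins=[0, 70, 80, 100],
               labels=["<70", "70-80", ">80"])
print(df.groupby(bands, observed=True)["respondents"].mean())
```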