They say the first step to overcoming a problem is admitting you have one.
Well, the inaugural Teaching Excellence Framework (TEF) results are in, and only one leading higher education institution within the M25 secured a Gold rating.
London, I think you have a problem.
The UK capital is of course home to many of the world’s leading higher education institutions, so it does seem to defy logic that so many would fall short of the top prize.
So what are the numbers?
As the chart below illustrates, 21% of universities in London were rated Gold, compared with 36% outside of London.
And 33% of institutions in London were rated Bronze (yes, SOAS included), compared with 13% outside of London.
So it is beyond dispute that the TEF outcome for London is far worse than for the rest of the UK.
This is not to diminish what I’m sure is the incredibly high standard of teaching at Imperial College London. But statistics this consistently skewed do warrant closer scrutiny.
So what are the (possible) reasons for this?
The National Student Survey (NSS) is completed by final year undergraduates at all UK universities. There are some interesting patterns in this data, such as the fact that older students are on average a little less satisfied than younger students.
HESA data for the last three years shows a higher proportion of mature students in London, so it is plausible that this plays a role in London’s lower scores.
Another NSS data pattern provides further evidence of a London effect. Average student satisfaction is lower among students at universities in London than in the rest of the UK: last year the London average was lower on every one of the 23 questions in the NSS. Yet TEF does not take this London factor into account when it sets different benchmarks for different universities.
So why are students slightly less satisfied in London?
Could it be that students have longer commutes and spend less time in university facilities? Are living costs a factor? The TEF results are one more reason to examine these questions and make changes where possible that will further improve the student experience.
The same questions can be asked when we look at the data on whether students continue from their first year to their second year. In TEF, benchmarks are adjusted up or down to take account of differences in non-continuation between broad subject areas, students’ entry qualifications, and age groups, because the data shows these factors influence whether or not a student will continue with their studies. But there also seems to be a London effect here too: the proportion of students not continuing is about two percentage points higher in London than outside London.
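To make the benchmarking idea concrete, here is a minimal sketch of how an adjusted benchmark works. The group names, rates, and student numbers are entirely hypothetical, and this is not the official TEF calculation; it only illustrates the principle that a provider’s benchmark is a weighted average of sector-wide rates for the student groups it actually recruits, so the benchmark moves with the student mix, while a factor like London location is left out.

```python
# Hypothetical sector-wide continuation rates by (subject, age group).
# These numbers are invented for illustration only.
sector_rates = {
    ("STEM", "young"): 0.93,
    ("STEM", "mature"): 0.88,
    ("Arts", "young"): 0.91,
    ("Arts", "mature"): 0.85,
}

def benchmark(student_counts):
    """Weighted average of sector rates, weighted by this
    provider's own mix of student groups."""
    total = sum(student_counts.values())
    return sum(sector_rates[g] * n for g, n in student_counts.items()) / total

# A provider recruiting many mature students gets a lower benchmark...
provider_mix = {
    ("STEM", "mature"): 600,
    ("Arts", "mature"): 300,
    ("STEM", "young"): 100,
}
b = benchmark(provider_mix)

# ...and its actual continuation rate is then judged against that
# benchmark, not against a flat national average.
actual_rate = 0.86
print(f"benchmark = {b:.3f}, difference = {actual_rate - b:+.3f}")
```

Because location is not among the weighting factors, two providers with identical student mixes get the same benchmark whether they are inside or outside London.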
Small but consistent patterns
A difference of two percentage points might sound small, but every student who does not complete their studies is a cause for concern. And in TEF a difference of that size can be enough to turn Silver into Bronze. It is right to focus on these small margins when looking at the data, but it is also important that everyone using TEF results knows which factors are in the benchmarks and which are left out. Only then can the results fully inform student choice.