The first results of a new ratings system for universities, led by the UK Government, have just been published – the Teaching Excellence Framework (or TEF). What a university offers its students is now officially graded as Gold, Silver or Bronze. So what is it that makes one university ‘gold’ and another ‘bronze’? And, most important of all, can these results help prospective students decide what to study and where?
While TEF may be new, the idea of assessing the quality of teaching and learning in universities is decades old. The existing system is designed to check that institutions are meeting basic expectations for teaching quality. It is important that universities are held to account in this way, and it would certainly be risky not to have these basic checks in place. But a system like this does not set out to help future students tell which university and which course will be the right fit for them.
TEF focuses heavily on data. It uses six measures drawn from three sources: the results of a student satisfaction survey, the percentage of first-year students who do not continue studying the following year, and a survey of graduates showing what they were doing six months after the end of their course.
Each of these data sources contains some really valuable information and may well help prospective students to make informed decisions about where to apply. But none of these data sets actually measures the quality of teaching. It is worth considering each one in turn:
- Student satisfaction reflects the opinion of students about aspects of their course, but it does not always follow that high-quality teaching leads to high satisfaction. What satisfies one person may leave the next person wanting more. For example, a challenging degree course will enable a student to develop further – and learn more – but that same student might not give a top satisfaction score in a survey because the course felt demanding. Student satisfaction is a really important indicator, but it should not be treated as a fail-safe measure of teaching quality.
- Whether students continue into their second year is another important indicator. But it reflects a much wider range of factors than teaching quality. Non-continuation is around 2% higher in London than in the rest of the UK, but this does not mean teaching inside the M25 is 2% worse than teaching outside the M25. More work is being done to understand this London effect, but higher living costs may well play a part for some students.
- As the survey of graduates takes place about six months after they leave their course, it only gives an early indication of what graduates do next. The expectations of students and graduates may vary, especially by subject. One graduate’s measure of success is obtaining a permanent post in an established graduate job role as soon as possible; another graduate will choose a different path, perhaps taking short-term relatively junior roles to break into a competitive area within the creative industries. But only the established role will be counted in TEF as a ‘graduate job’, while the more individual path through a junior role counts against the university as it is classified as a ‘non-graduate’ job. Yet both outcomes are a success story so long as the graduate is pursuing what they want to do in life after studying.
TEF takes all these measures into account and combines them into a single award of Gold, Silver or Bronze – a metric that is easy to understand, but with an important detail hidden within: TEF sets different targets for different universities. Past survey results show that, at a national level, student satisfaction varies between subjects. It also varies by other characteristics such as age (young students are on average more satisfied than older students) and gender (female students are on average more satisfied than male students). Because these patterns are well established, the benchmark that a university has to hit is adjusted up or down accordingly, depending on the mix of students it has recruited. In statistical terms this is a perfectly valid method, and the methodology has been published in full. But it can produce the result that a Gold university actually has lower student satisfaction than a Bronze university. Future students may want to know the actual scores as well as the relative measures used in TEF.
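The benchmarking effect described above can be sketched with a few lines of arithmetic. The numbers below are entirely hypothetical and this is not the actual TEF calculation – it is only a minimal illustration of how comparing each university against its own adjusted benchmark can invert a ranking based on raw scores:

```python
# Hypothetical illustration of benchmarked scoring (illustrative numbers,
# not the real TEF methodology or real data). Each university is judged
# against its own benchmark, which is adjusted for the mix of students
# it recruits.

def benchmarked_difference(raw_score: float, benchmark: float) -> float:
    """How far a university's raw score sits above (+) or below (-) its benchmark."""
    return raw_score - benchmark

# University A recruits a mix of students that, nationally, tends to report
# lower satisfaction, so its benchmark is adjusted downwards.
raw_a, benchmark_a = 78.0, 74.0

# University B recruits a mix that tends to report higher satisfaction,
# so its benchmark is adjusted upwards.
raw_b, benchmark_b = 82.0, 85.0

diff_a = benchmarked_difference(raw_a, benchmark_a)  # +4.0: beats its benchmark
diff_b = benchmarked_difference(raw_b, benchmark_b)  # -3.0: misses its benchmark

# A outperforms its benchmark while B underperforms it, even though B's raw
# satisfaction (82) is higher than A's (78) - the inversion described above.
assert diff_a > 0 > diff_b and raw_b > raw_a
```

Under a relative scheme like this, University A would look stronger than University B, despite B's students reporting higher satisfaction in absolute terms – which is exactly why prospective students may want to see the raw scores too.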
There is now an enormous amount of rich and detailed data available about the experiences and results of students at UK universities. The danger with TEF is that this richness is lost behind the stark, headline-grabbing labels of Gold, Silver and Bronze.
But there is another way forward: rather than simply accepting the result of TEF, students can use it as a prompt to ask what matters to them. If their main focus is graduate jobs, they could look beyond TEF to the graduate destinations data. Another student might draw up their own list of questions going far beyond the three data sources in TEF. Rather than an end point, the best way to view TEF is as a starting point – one that leads prospective students to ask universities probing questions and encourages them to search far and wide for that specific course which is right for them.