The introduction of the Teaching Excellence Framework (TEF) to rate the quality of teaching in UK universities has been controversial largely because of its rating system. Each university is judged to be ‘Gold’, ‘Silver’ or ‘Bronze’ for teaching: predictably enough, those universities placed in the top two categories are usually delighted, those in the bottom category less so.
But while these categories are controversial, the Government’s aims for TEF are positive: they want universities to continually improve the quality of their teaching, and students to have useful information about where and what to study. We can all agree with these aims.
The problems start when you try to measure teaching quality: no-one can agree on a single set of measures that will consistently distinguish excellent teaching across all courses at all universities. In one way the Government is to be admired for trying, although the consequences are risky. As they conduct this giant experiment on a world-leading higher education system, we just have to hope that not too many international students are deterred from excellent universities that have been given a Bronze rating.
The London factor
One issue is that fine margins make all the difference in TEF. Another is that all sorts of things influence the data, making fair comparisons hard. For example, student satisfaction in the National Student Survey (NSS) is slightly lower in London than outside it. But no-one thinks teaching is better or worse simply because a university is in or out of London. We don’t fully understand why this difference occurs, but many London students do face a high cost of living, long journey times and the challenge of studying and living in two different places. It seems that these general factors influence satisfaction with the specific aspects of learning and teaching measured in the NSS. Recently, students have completed the NSS while strikes and a student-led boycott took place on some (but not all) university campuses; this complex political context makes NSS results even harder to understand and use as an accountability measure.
Then there are issues with missing data: the Government’s employment data only captures those working and paying tax in the UK. So when SOAS graduates go to work abroad, they are ‘missing’ from the UK data and counted as if they were unemployed. SOAS is highly successful at preparing its students to work abroad, yet as a result it is penalised in its TEF metrics.
These are the kinds of tricky issues that come up when you start comparing data, issues which cannot be captured by Gold, Silver and Bronze badges. However, TEF should not be dismissed: the metrics it uses are incredibly valuable so long as they are interpreted carefully. Student satisfaction surveys, measures of student progression, the jobs that graduates go on to do – these are data sources that we take really seriously, and spend a great deal of time analysing. From working in the Planning team at SOAS, I know first-hand that academic staff use this data and want more of it. It helps colleagues see where the next improvements can be made and target limited resources on the things that matter, and it is another way we ensure the student voice helps guide our priorities.
The problem comes when we apply the TEF lens and try to feed all these factors into a single judgement about the whole of the student experience.
We would not dream of using ‘gold’, ‘silver’ and ‘bronze’ in our internal analysis.
So is it really that useful for future students choosing their degree? I am all for the use of data to inform student choice, but we have to interpret the numbers with care. What is important to each student varies, and that is something a TEF award cannot capture.
Analysis of data will continue to play a key role helping universities like SOAS improve the quality of teaching and our students’ experiences. There is a vast range of student-related data in the public domain now: it is most valuable when we look past the Gold, Silver or Bronze badges and really engage with the individual measures of the student experience.