We support the aims of TEF but express concern that its metrics (including student satisfaction and employment outcomes) do not capture teaching excellence and do not explore nuances in localised provision.
Response submitted 2019
Yes. We support a framework for assessment that recognises the diversity of student needs and provision and that enables various forms of teaching and learning excellence to be identified – one size does not fit all. However, we remain concerned that the metrics and approach adopted for assessing teaching excellence and student outcomes continue to be blunt instruments that:
a) do not directly measure excellence in teaching delivery [rather, student perception of and satisfaction with the provision they receive, with the associated concerns about the use of the NSS raised by the Royal Statistical Society and others];
b) fail to recognise that excellent student outcomes are not exclusively measured by income-linked/highly-skilled employment outcomes; and
c) do not allow the nuances in subject-level / localised provision, which is a key differentiator in excellence, to be fully explored [in part due to problematic subject groupings, but also the narrative format itself, the guidance for which has discouraged assessors from considering ‘successful but localised practices’].
It is not clear from the approach and metrics piloted to date whether they have been selected to (a) inform student choice, (b) identify poor provision (or thresholds of provision), (c) allocate funds, or (d) enable the generation of ‘league tables’.
It has also been unclear from the approach and implementation which of the needs of employers, business, industry and the professions are being met by TEF.
We are frustrated that this review, while necessary, has been initiated before the outcomes of the final round of subject-level pilots in summer 2019 are announced. That pilot includes substantial changes to both metrics and subject panel structures, which prevents us from responding fully on the impact of the proposed metrics and benchmarking on subject-level TEF processes and measures. Although our comments relate to geography, other subjects are similarly affected.
No. Effective metrics must be valid, robust, comprehensive, reliable, and current. We agree that there is no current quantitative metric that can adequately capture teaching quality across the great diversity of teaching and learning approaches and environments found in universities.
Core metrics proposed based on the NSS questions are not fit for purpose, for reasons of validity and reliability. Teaching quality and student satisfaction are different things, and at best tangentially associated. Furthermore, the metrics as they stand show little variation and do not differentiate between the vast majority of universities. We refer to the analyses of the NSS (by the ONS, by Marsh and Cheng, by HEFCE and by Surridge), cited by the Royal Statistical Society in their response.
We remain concerned that measures of student satisfaction will discourage innovation and drive behaviour, for example by inhibiting or limiting the provision of certain types of modules, especially those that challenge preconceptions or are in any way ‘non-standard’. In geography, for example, data and other quantitative skills and other methods training often, though not always, receive lower satisfaction ratings; yet they are of high value to employers and to future career prospects.
While contact hours per se are not a helpful indicator of teaching quality, they do allow individual subjects to highlight key pedagogical differences in delivery. Their removal from TEF prevents subjects such as geography from highlighting, in a comparable format, quantified time involved in fieldwork, labwork, independent research and study. In programmes such as geography these intense experiences of learning are critical to learning outcomes. The time and role of all those supporting and facilitating the teaching and learning experience regardless of contract type (e.g. teaching assistants who facilitate small group teaching in the lab, field tutorials etc; technicians in the lab and field) must be considered and valued.
In terms of student outcomes, we remain concerned that measures of outcome are exclusively linked to measures of (highly-skilled) employment, which assumes a direct link between teaching quality and employment outcomes and which does not capture the myriad of inter-dependent factors (locational, institutional, socio-demographic, disciplinary etc) that influence employment choices and outcomes. The acquisition of skills, knowledge and understanding over the course of a degree extends beyond a snapshot employment status. Providers should not be deterred from recruiting students onto programmes with social value (as opposed to earning power). Positive outcomes are much broader than paid employment (e.g. unpaid or voluntary work, time overseas).
Employers are a heterogeneous group and their needs are diverse. The employment destinations of graduates also are diverse. For disciplines such as geography, given the variety of career paths and outcomes, identifying a highly skilled employment metric (or metrics) would be particularly difficult. Some students will pursue graduate careers ‘in’ their disciplines, others will draw on their transferable skills and find employment in a broad range of sectors and roles. Very careful attention needs to be given to engagement with employers (inclusive of large organisations and SMEs) and metrics used to document quality and success of graduates from their perspectives, e.g. in terms of how well graduates have applied their knowledge, skills and behaviours to a workplace context.
We have received requests from small and large employers for greater direct engagement with providers in shaping curricula, supporting training delivery, and shaping long-term skills provision to address future gaps in employer need. We do not consider employer engagement (including placements, sandwich years, and industry liaison of other types) to have been sufficiently recognised within the metrics to allow employer needs to be fully considered or reflected by either providers or the Panels.
Employment destinations within a short period of graduation are a poor guide to later career progress. If employment destination is to be pursued, research needs to inform an understanding of the time required post-graduation for students to enter such high skilled employment, which will vary by discipline.
Although we accredit programmes in more than two-thirds of geography departments across the UK, we discourage the use of accreditation as a core metric for indicating teaching excellence. Accreditation, in most cases, recognises threshold, not excellent, delivery, e.g. for baseline regulatory requirements and professional practice. For some subjects, multiple optional accrediting bodies and accreditations exist, for others there is no accrediting body or available accreditation. The inclusion of information about accreditation may be at the discretion of the provider in their narrative. Aligned with this, we caution against the use of Subject Benchmark Statements, which can vary considerably in their format and approach (and their definition of subjects), as the benchmark for expressions of excellence, when their core purpose is to set out threshold delivery.
Metrics need to recognise differences in students - their backgrounds, experience, expectations and desired outcomes from higher education.
In terms of teaching quality and learning environment, individual students will value different aspects of their degree experience – face-to-face contact hours, a strong employability focus, proximity to the best researchers, library facilities, etc. This may also vary between disciplines within providers, and within groups of subjects aggregated for TEF for the convenience of panel organisation.
For student outcomes, we encourage attention to the locational, institutional, and cohort socio-demographic factors at play in benchmarking employment outcomes. Employers are a heterogeneous group and their needs are diverse. The employment destinations of graduates [in particular geography and social sciences graduates] are highly diverse, in terms of sector, job type, and geographical location. We would welcome more transparency around benchmarking of employment outcomes.
We welcome attention to provider student intake profiles; over-valuing employment outcomes may cause providers to take a risk-averse approach to admissions, selecting those applicants who have the greatest likelihood of future success and so reinforcing key structural issues and inequalities in employment outcomes.
We support the use of expert peer review panels to evaluate both benchmarked data and written narrative submissions. We note, however, that the use of broad-brush metrics combined with relatively short narrative statements for broad subject groupings, will not allow providers to give sufficient depth of examples of subject-specific pedagogy and provision to highlight both threshold delivery and excellence over-and-above. Providers appear to have been actively discouraged from including or highlighting what assessors may determine to be 'localised' practices, when these may be the most distinctive elements of subject-specific teaching excellence.
We still consider there to be a risk that measures of student satisfaction, acting as a proxy for excellence, will discourage innovation and drive behaviour, for example by inhibiting or limiting the provision of certain types of modules, especially those that challenge preconceptions or are in any way ‘non-standard’. In geography, for example, data and quantitative skills and other methods training often, though not always, receive lower satisfaction ratings; yet they are of high value to employers and to future career prospects. Likewise, a very important element of teaching and learning in higher education, particularly in the social sciences, is challenging students and exposing them to alternative perspectives and different ways of thinking about the world. This involves a diverse range of teaching practices (seminars, labs, field courses etc). It can unsettle students: there are no ‘right’ answers, and students are expected to be active participants in their learning. The learning experience may be as important as the learning outcome. Student satisfaction metrics collected soon after graduation do not always reflect the value of these experiences; students do revise their understanding of the relevance and value of the content of their degrees, but only some time post-graduation or in employment. Capturing such perspectives would be helpful, reinforcing the point about the timing of data capture.
We also consider there to be a risk of ‘gamification’ of metrics at provider and subject-level due to the three-banded ‘medal’ outcomes. Efforts to nudge metrics to drive a shift from Bronze to Silver, or Silver to Gold, may potentially work against innovation and change across a whole provider, or discourage improvement across the board by only focusing efforts on those areas which have the greatest potential for improving the ‘final score’.
An unintended consequence of this may be the creation of subject hierarchies within providers, reinforcing inequalities in provision, risking the closure of subjects which do not meet particular metrics (those relating to highly-skilled employment outcomes, for example), reducing partnership, collaboration and experimentation through interdisciplinary teaching, and potentially increasing subject-unit autonomy within providers.
We consider subject-level ratings of a Gold / Silver / Bronze / Provisional type to be inadequate in describing the specific nature of the quality of teaching provision, and broadly incomparable between providers, given the wide variety of institutional contexts in which geography departments sit, contexts which are often poorly aligned with the subject aggregation and panel under which geography is assessed.
We would be interested to see an approach that allows provider/subject performance against benchmark within each of the eleven criteria, e.g. in a spider-web chart format (such as that proposed for the Knowledge Exchange Framework), and an overall rating calculated on that basis. This might offer greater transparency and comparability between providers, and help drive enhancement of provision by highlighting which providers are exceeding benchmarks in specific areas.
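As a rough illustration of how such a criterion-level comparison might work, the sketch below computes a provider's deviation from benchmark on each criterion and derives an overall rating from those deviations. The criterion names, scores, and aggregation thresholds are entirely hypothetical, for illustration only; they are not drawn from the TEF pilot or the Knowledge Exchange Framework.

```python
# Hypothetical sketch: compare a provider's subject-level scores against
# benchmarks for each assessment criterion, then derive an overall rating.
# Criterion names, scores, and band thresholds are illustrative only.

BENCHMARK = {
    "teaching_quality": 0.70,
    "learning_environment": 0.65,
    "student_outcomes": 0.60,
}

def deltas(scores, benchmark=BENCHMARK):
    """Difference from benchmark for each criterion (positive = exceeds)."""
    return {c: round(scores[c] - b, 3) for c, b in benchmark.items()}

def overall_rating(scores, benchmark=BENCHMARK):
    """Toy aggregation: mean delta across criteria mapped onto three bands."""
    d = deltas(scores, benchmark)
    mean_delta = sum(d.values()) / len(d)
    if mean_delta >= 0.05:
        return "exceeds benchmarks"
    if mean_delta >= -0.05:
        return "broadly at benchmark"
    return "below benchmarks"

provider = {"teaching_quality": 0.78, "learning_environment": 0.66,
            "student_outcomes": 0.59}
print(deltas(provider))         # one value per criterion, one spoke each
print(overall_rating(provider))
```

In a spider-web presentation, each per-criterion delta would supply one spoke of the chart, making visible which specific areas exceed or fall short of benchmark rather than collapsing everything into a single medal.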
We do not think that the development of TEF has engaged with a wide enough range of employers sufficiently well to be able to consider whether TEF meets their needs, or that it understands what their needs are.
We have received requests from small and large employers for greater direct engagement with providers in shaping curricula, supporting training delivery, and shaping long-term skills provision to address future gaps in employer need. For example, provider engagement with employers (e.g. placements/sandwich years; industry liaison committees; engagement of other types) has not been recognised within the metrics or narrative guidance in a way that allows employer needs to be fully considered or reflected.
Employers also seek confidence that measures of student satisfaction being used as a proxy for teaching excellence will not discourage providers from delivering topics or skills training that are high in demand in the workplace but which traditionally attract lower satisfaction ratings from students, e.g. quantitative skills and other methods training.
Due to the high diversity of employment destinations for geography graduates, the quality and success of geography graduates are recognised in a wide variety of ways by employers; some focus on the application of geography-specific knowledge and skills; others on the application of transferable skills. This is not addressed in terms of the nature of excellent teaching, or students’ ability to recognise when it has been taught, and some measure of employer satisfaction would be preferable.