Our response summarises and highlights community views on the proposals for the REF, conveying support for dual funding, a non-STEM mode of assessment, and discipline-based peer review. This response refers to what ultimately became REF2014.
Response submitted 2008
The Royal Geographical Society (with The Institute of British Geographers) welcomes this opportunity to comment on the Research Excellence Framework consultation on the assessment and funding of higher education research post-2008. The Society is the learned society and professional body representing geography and geographers. It was founded in 1830 for the advancement of geographical science and has approximately 15,000 Fellows and members.
In preparing our response to this consultation, we requested comments from all UK Departments of Geography, representatives of the Council of Heads of Geography Departments, the RGS-IBG research groups, and members of the Society’s Research Committee.
Many responses were received. While there is general agreement in the community, there is not uniformity of perspective. The Research Excellence Framework (REF) consultation document focuses on the implementation of the proposed new assessment process. The responses range more widely, with many focused on broader matters of principle.
In preparing this document, we had the opportunity to consider our responses alongside those being made by neighbouring disciplines. In this regard, it is very apparent that on a considerable number of issues, notably the concerns about STEM/non-STEM divisions, geography’s concerns as a subject community are very close to those expressed on behalf of archaeology by the Standing Committee for Archaeology.
Overall, the geographical community favours the non-STEM mode of assessment led by peer review.
The community very strongly supports the maintenance of dual support for higher education research – QR and Research Council funding – with QR funds allocated through a nationwide, transparent and fair system.
While geography as a discipline has distinct elements, spanning the natural sciences, social sciences and humanities, it is not alone; disciplines such as archaeology and planning, for example, share such breadth of perspective. The concerns of geography as a discipline that transcends the STEM/non-STEM divide are shared by other disciplines in a similar position.
Many respondents are not convinced by the claim that interdisciplinary research will not be adversely affected, especially where research attempts to bridge the STEM/non-STEM divide.
Many expressed concern that the new system will undermine the disciplinary basis of research evaluation as currently employed in most areas. There is a strong belief that peer review should be discipline-based, with panels of experts from disciplines or closely cognate subjects.
Wide concern has been expressed about the ability of the proposed system to appropriately recognise policy-relevant research and research that supports innovative knowledge exchange.
It is widely believed that the new framework will not reduce the institutional effort required. For STEM subjects assessment will become an annual exercise; for all subjects there will still be the need formally to select staff for inclusion. Given the levels of funding to be allocated, and the implications for the financial position of individual universities, substantial effort is inevitable.
There is a strong belief that the new evaluation system, and especially the use of bibliometrics, will affect the behaviour of individual researchers and institutions.
The focus upon bibliometrics will have uncertain consequences for early career researchers, those who take career breaks, those in emerging areas of research, and those whose research (because of its nature) takes a long time to come to publication.
We have reservations about the framework as proposed and doubt that the smaller number of review panels is adequate to assess the broad range of UK science-based research.
This proposal cannot be endorsed without further information on the operation of bibliometric analysis. Much more precise information on how publication rankings are to be normalised for specific research traditions is required. We judge that much finer-grained resolution than the broad-based areas identified would be required to assess research quality fairly, particularly for niche areas of research endeavour. There are well-defined differences in culture within as well as between disciplines.
More fundamentally, it is not accepted that bibliometrics are an effective indicator of research quality.
In addition, in order to endorse the proposal detailed information on non-STEM assessment (the light touch peer review) needs to be included.
Were something like the present proposal to be operationalised, geographical research would be most appropriately assessed by a peer-review led system.
In the present proposal there are no indications of how, within the main subject groupings, particular discipline strengths are to be identified.
There are serious issues in relation to disciplines such as geography that are inherently inter- and trans-disciplinary subjects crossing the science/social science/humanities borders.
While in general geography considers that its research as a whole is more appropriately assessed through a system led by peer review, there are substantial scientific components to the discipline – geomorphology, Quaternary science, hydrology, geoinformatics, for example. These would fall into more than one of the proposed six STEM panels.
It is not clear how QR funding to specific disciplines or areas of research can be made without meaningful differentiation between disciplines.
For the proposed non-STEM subjects, there is little indication of how bibliometrics are intended to inform judgements, nor how the light touch peer review system would operate.
There are likely to be some STEM or part-STEM subjects where excellent publication is not restricted to the outlets used in the Thomson Scientific Web of Science. Of particular concern are monographs and edited volumes.
We are not convinced that bibliometric indicators can provide a robust quality indicator. It is notable that none of the current RAE sub-panels indicated in its working methods that bibliometrics were a suitable basis for judging research quality.
Metrics on postgraduate students should focus on completion not on registration.
In terms of research grants metrics, the bias is heavily in favour of capital-intensive disciplines. In many parts of the social sciences large research grants are not appropriate; research relies on data collected by others (the census is a classic example).
Given the substantial work still necessary to acquire data on citations beyond the scope of the Web of Science – including but not restricted to citations in monographs and edited collections – it is unreasonable to expect the proposal to be endorsed as it stands.
It is not clear how the proposed framework will deal with units/groups of researchers; the attribution of citations on multi-authored work is not addressed.
There are fundamental doubts that citation analysis is itself an indicator of quality. Impact and quality are not necessarily the same.
Research which does not lend itself to the kind of bibliometric measures proposed (basically that which informs the Thomson Web of Science) is likely to be adversely affected.
The Leiden report, on which the REF proposals rely, itself stresses that bibliometric analysis must be interpreted “in a qualitative, evaluative framework that takes into account the contents of the work”. We do not see how this can be achieved without some appropriate form of peer assessment.
Issues of census period, publications near to the time of research assessment, review papers (which attract high citations), and attribution for multi-authored work all complicate the use and interpretation of bibliometrics and are not satisfactorily addressed in the consultation document.
Citation databases contain unreliable data.
International coverage in the citation database is very limited. This is particularly important for those engaged in research of importance to specific national audiences.
Bibliometric conventions treat all citations equally; however, there are many reasons to draw attention to published work other than to recognise its quality.
Concerns are expressed about early career researchers.
Concerns are also expressed about emerging areas of research, more esoteric research, and research that because of its nature takes a long time to come to publication – such fields may be discouraged if they do not earn citations.
The assessment of those traditions of STEM research which publish in monographs and edited collections needs to be addressed.
There is a danger, particularly in the Social Sciences, Arts and Humanities, that ‘slow-burning’ classic papers will be excluded if the bibliometric period is too short. Citation half-lives must be carefully considered. At the same time, a clear distinction needs to be drawn between citations (and other forms of credit) accrued to research done years before and those accrued to research published more recently. Otherwise, in principle, a steady stream of citations from research done in the distant past could prop up someone’s quality rating indefinitely.
Precisely how the normalising of profiles for specific disciplines (and indeed sub-disciplines) is to take place is not made clear. For example, a quick inspection of ISI Essential Indicators reveals that Human Geographers are commonly amongst the top 1% of cited social scientists, but their magnitudes of citation are several times smaller than those of Physical Geographers in the top 1% of the Geoscience or Environmental Science communities. Even within Human Geography, there are significant differences in citation patterns for Economic and Historical Geographers, for example.
While focusing on the author’s address at the time of research assessment may help reduce the ‘transfer market’, and its negative effects, it may be wise to give some consideration to the address at the time of census to allow departments to have and be rewarded for strategic growth.
A focus on rates per paper or average citation rates may not fully capture the cumulative impact of prolific researchers.
How to ensure appropriate comparable standards across the spectrum of STEM/non-STEM research – presuming the value of the distinction.
How, if at all, bibliometrics are to inform a light-touch peer review.
What role should be played by other metrics – for example, those related to ‘environment’ and ‘esteem’ – and why, if they are significant in a ‘light touch’ system, they are ignored for STEM subjects.
How many outputs are to be subjected to peer review.
What time periods might be suitable for different subjects.
If it is the case that bibliometrics are deemed to be relevant, how we ensure citations in books/book chapters are captured.
A full and independent assessment of whether a peer review led system can be carried out without discipline-specific panels needs to be undertaken.
More broadly, there is insufficient guidance in this document about what a light-touch peer review might entail.
To continue a form of peer-review-led system that is informed by disciplinary insights and expertise.
The breadth of research income needs to be included, not just research council funding.
Different disciplines need different levels of funding. Appropriate quantitative benchmarks of external funding should be used as reference points.
Measures to reward applied, policy relevant and knowledge exchange work need to be developed.
The focus should not just be on postgraduate numbers – metrics should focus on completion rates.
We are not convinced that the range of UK research can adequately be assessed by six expert panels for the STEM disciplines.
In terms of both burden and potential instability in the system, we question why an annual review, rather than 3 or 5 years, is proposed for STEM disciplines.
The relative weights that can fairly be given to different indicators are likely to vary across disciplines occupying the same ‘super-panel’. Clarity is needed on this question.
Given how undeveloped the proposals are for the non-STEM subjects, we are unable to make an informed response to this question. However, we would be willing to work with HEFCE to take this area forward.
While this may reduce burdens in some parts of the sector, we are not convinced that it will reduce the burden overall. Internal reviews and continual analysis of indices are likely to increase within institutions, not least on account of the proposed operation of two quite different assessment systems.
By moving to a system of assessment that is less visibly disciplinary, local pockets of excellence are likely to be less evident internationally.