1. Do you have any comments on the proposal to maintain an overall continuity of approach with REF 2014, as outlined in paragraphs 10 and 23?
For the good of the discipline of geography as a whole, we believe it is important to retain as much continuity as possible. This recognises the effort in the sector to adapt to the changes between RAE 2008 and REF 2014, and also the 3+ years of work that has already gone into preparations for REF 2021.
2. What comments do you have about the unit of assessment structure in REF 2021?
The standing of geography in the UK as a world-leading discipline, combined with the size of the likely return to REF 2021 (evidenced by returns to past assessment exercises and continued growth in the discipline in higher education), warrants geography being a distinct UoA. This is our clear preference.
The economies of scale expected to result from combining archaeology with geography and environmental studies did not arise. There were only seven combined submissions: REF 2014 received 22 archaeology-only submissions, 37 geography-only, 8 environmental studies-only, and just 7 combined. The vast majority of departments, therefore, had to submit a case to support a split submission at the start of the REF 2014 process, which further increased workloads.
Should, however, a different decision be made and other disciplines be included in the UoA with geography and environmental studies, we expect geography as a discipline to be clearly signalled in any subpanel nomenclature as the first discipline listed; that a geographer serve as the subpanel chair; and that any discipline added into the UoA enter with a good collaborative spirit.
Given the nature of geography departments and their institutional settings, and of geographers appointed outside these departments, there must be recognition of, and robust mechanisms to enable, cross-referrals of some staff between UoAs (in the case of geography, between what were UoAs B7 and C17 in REF 2014).
3a. Do you agree that the submissions guidance and panel criteria should be developed simultaneously?
Yes. Guidance and criteria should be published early, and should not be superseded by further guidance. This should help produce a more coherent set of requirements and rules.
b. Do you support the later appointment of subpanel members, near to the start of the assessment year?
It is vital that all subpanel members are ‘in post’ ready to contribute to the important work of deciding upon submissions guidance and panel criteria. This will help to ensure as much, and as great a range of expertise as possible, is available at this crucial formative stage. Early involvement also will permit greater shared understanding and promote disciplinary confidence in the panels. There should, however, be some flexibility for unanticipated issues.
4. Do you agree with the proposed measures outlined at paragraph 35 for improving representativeness on the panels?
Yes. The RGS-IBG and Conference of Heads of Geography in Higher Education Institutions will actively solicit nominations, working closely with the other nominating bodies (notably the Royal Scottish Geographical Society). We will be particularly attentive to the representativeness of the panels in the context of geographers and geography departments across the UK.
5a. Based on the options described at paragraphs 36 to 38 what approach do you think should be taken to nominating panel members?
We propose that the RGS-IBG (through the Research and Higher Education Committee) and Conference of Heads of Geography in Higher Education Institutions will, as in previous exercises, coordinate an open nomination process actively soliciting nominations across the geographical community. This approach is supported by our community. We will work closely with the Knowledge Exchange Committee of the Royal Scottish Geographical Society to ensure collective engagement with institutions in Scotland.
The structured form described in paragraph 37 appears to be an appropriate approach to enhancing equality and diversity.
Some in the community request more transparency about the final selection process (i.e. who makes the ultimate decision on subpanel chairs and members following nomination by the nominating bodies and on what basis).
Of key importance is having the trust of the research community and of those who will be assessed in the process and the outcome. The processes used in the past have, to a large degree, ultimately delivered subpanel membership that the community has welcomed, supported and trusted.
5b. Do you agree with the proposal to require nominating bodies to provide equality and diversity information?
Yes. This both ensures that equality and diversity are considered and provides a basis for explaining any divergence between what might be expected or desirable and the actual outcome.
6. Please comment on any additions or amendments to the list of nominating bodies, provided alongside the consultation document
The list is appropriate for geography. We note, however, that it should be the Royal Geographical Society (with IBG) that is listed, not the Royal Geographical Society and the Institute of British Geographers separately.
7. Do you have any comments on the proposal to use HESA cost centres to map research active staff to units of assessment, and are there any alternative approaches that should be considered?
While the principle, to reduce gaming, is sound, we strongly oppose this approach. HESA cost centre data are error-riddled and incomplete. They should NOT be used to map research active staff to UoA.
Specifically, this approach will NOT work for geography. HESA cost centre codes in many departments split human and physical geography depending on teaching assignments, at least historically, and do not map neatly onto a unit of assessment. Staff returned to the Geography (C17) UoA in the last REF would be allocated elsewhere. The resulting misalignment would be counter to the articulated purposes of REF.
There are other issues too. Using HESA cost centre codes to allocate staff can militate against other efforts to encourage and foster multi- and interdisciplinarity, e.g. those in large research centres and/or other institutional strategic research initiatives.
We sincerely hope this approach is not pursued. If it is, then to provide the flexibility required we suggest either using HESA codes to allocate FTE but allowing a percentage variance for each UoA, and/or identifying a range of codes appropriate to a UoA, informed by practices in the last REF.
8. What comments do you have on the proposed definition of 'research active' staff described in paragraph 43?
While broadly supportive of the proposal to broaden the eligible staff base to include more early career staff, the geographical community are concerned about the definition and operation of the 'measure of independence'. It is too open to different interpretations.
We would welcome further clarification of the distinction between an 'independent' researcher and a research assistant. It is paramount that a measure is established and agreed which can be readily and unambiguously applied in all situations. This test for independence would need to apply over some of the period, not just at the point of census, otherwise there would be anomalous effects.
This definition needs to be established soon, not least to get a sense of the size of UoAs. Some in the community argued that past practices have worked well and care is needed not to drastically increase the number of staff submitted, which could greatly increase the administrative burden on departments. It could also create awkward and often insoluble pressures around outputs (e.g. lack of suitable numbers of publications for ECRs, competing publication pressures for project or funder commitments and REF, etc.).
It is important that unintended consequences be monitored; specifically there is a risk that teaching-focused staff could be moved to teaching-only contracts. We recommend that those who have been on a research oriented contract during the REF cycle, who have not moved institution, be expected to be returned.
9a. The proposal to require an average of two outputs per full-time equivalent staff returned?
In general, the proposal to decouple staff and outputs is welcomed by the geographical community. There is also support for specifying maximum and minimum numbers of outputs if all staff are to be returned.
The average of two outputs per FTE is seen broadly as an appropriate compromise between limits upon burden of assessment and having a large enough sample for the results to reflect the strength of the discipline.
A sampling approach is not supported (para 49). This would introduce further chance and uncertainty; the REF process is already sampling the full suite of research outputs. One consequence of the proposed changes in output numbers, if implemented, may be the need to reconsider the definitions of 3*, 4* etc, or to introduce additional quality grades, in order to make it possible to differentiate between submissions. Clear and early guidance on any such changes will be needed.
A small minority of institutions with smaller staff numbers in geography have raised concerns that individual staff circumstances will not be accounted for. Their point is that departments with staff with quite different profiles of parental leave, family commitments, periods of sickness etc (which would previously have been taken into account by individual circumstances and affected the number of publications required) cannot be fairly compared. A departmental census count alone would not take these compositional factors into account, and they would all be averaged out at departmental/UoA level. Moreover, having a colleague on long-term sick or parental leave does not in any way increase the ability of others in the department to do research. The proposal risks undoing good work around equalities.
9b. The maximum number of outputs for each staff member?
The majority of the community recommended a maximum of four, stating that a maximum of six would allow departments to draw much of the return from a small group of staff, which defeats the purpose of a comprehensive return of ‘research active’ staff. It also raises very real issues of equality and workload division, thus shifting a greater burden onto departments.
9c. Setting a minimum requirement of one for each staff member?
There is broad support that a minimum of one for staff being returned as research active is needed to adequately reflect the strength of the discipline within the UoA, and within the nation as a whole. However, many caveated this support by noting that it is almost impossible to make an informed evaluation without understanding the 'measure of independence' which would affect the pool of eligible staff and consequently the likely distribution of publications across staff within a typical department.
This also needs to be balanced against the non-portability proposal, as it could disadvantage some ECRs, or disincentivise departments from encouraging independent fellowships.
10a. Is acceptance for publication a suitable marker to identify outputs that an institution can submit and how would this apply across different output types?
Yes for some, but NOT all, forms of publication. This would be particularly problematic for books, which are common in some subdisciplines of geography.
Acceptance for publication is already used for the open access rules, so it is traceable. DOIs might be a useful way of dating outputs, as they are assigned by publishers when an article is published and made available electronically.
The ‘acceptance for publication’ date will be hard to determine for books and date of publication is not much better as it can be manipulated for commercial reasons (for example, when delayed until the following year to maintain a book’s perceived currency). There are issues here too with portability; significant work can be done at multiple institutions.
10b. What challenges would your institution face in verifying the eligibility of outputs?
We would not be involved in verifying eligibility of outputs.
10c. Would non-portability have a negative impact on certain groups and how might this be mitigated?
While recognising the principles behind the proposal, we believe that the unintended consequences far outweigh the benefits.
Non-portability will affect ECRs who want to be employable after their PhD, and may discourage them from publishing papers based on their PhD before they secure a full-time position (probably in another department). However, if they have not published those papers they will be disadvantaged in being hired. This could lead to more internal hires (see comments elsewhere about the importance of mobility for the vitality of the discipline and departments).
There will be implications also for others at later (mid and senior) career stages too and those, at any career stage, on short term contracts.
While the existing REF rules may lead to excessive movements in staff between institutions immediately prior to the REF exercise, the proposed change may make it difficult for staff to move, reducing the healthy circulation of academic staff between institutions. Part of the strength of geography as a discipline in the UK and the strength of many geography departments, is the mobility of staff at all career levels. This proposal could lead to stagnation at the top end of the promotion ladder.
Furthermore, different subdisciplines of geography produce outputs at different rates given the nature of research, methods and outputs (papers and books). This would have a differential effect for the discipline and departments.
Introducing non-portability would produce a discontinuity with previous REF/RAE exercises. A number of departments highlighted that it will be difficult to impose this so far into the REF cycle, when institutional hiring strategies have been developed with a different set of expectations.
If adopted, alternative suggestions from the community include:
• The portability rule be ‘suspended’ for scholars who were not entered in REF 2014, allowing those who become early career scholars during the present REF period still to be employed on the basis of the outputs that they can bring with them.
• A mixed model that allows portability of outputs for early career staff (e.g. staff within a certain period of PhD completion, 2–3 years for example).
10d. What comments do you have on sharing outputs proportionally across institutions?
Concerns were expressed that this may be complex to implement and would require a clear system for determining correct proportional splits between institutions. Adjudicating proportionality between institutions could be a major administrative burden, and would again add another layer of complexity.
Alternatively, a more practical approach to this issue would be to have no limits on portability, but to apply equal sharing of outputs for anyone moving close to the census date.
We encourage HEFCE to reconsider the 'old' category A* staff – this enabled outputs to be shared across two submitting units when someone moved.
11. Do you support the introduction of a mandatory requirement for the Open Researcher and Contributor ID to be used as the staff identifier, in the event that information about individual staff members continues to be collected in REF 2021?
Yes. It would be very helpful if HEFCE could make some effort to ensure compatibility between ORCID and REF reporting (as has been trialled with Researchfish, for example).
12. What comments do you have on the proposal to remove Category C as a category of eligible staff?
Category C staff are unlikely to be an issue for geography departments, so we have no particular view on this.
13. What comments do you have on the definition of research assistants?
As noted before, a clearer definition would be helpful both for research assistants and in relation to 'independent' researchers.
14. What comments do you have on the proposal for staff on fractional contracts and is a minimum of 0.2 FTE appropriate?
There is general agreement in the geographical community that a minimum of 0.2 FTE seems appropriate, as does the requirement for an explanatory statement.
Asking subpanels to judge the eligibility of staff on fractional contracts who hold substantive posts outside the UK grants them a lot of discretion and is likely to result in inequities. One option is to exclude staff who hold substantive research posts outside the UK and whose research is not primarily focused in the submitted unit. These associations can be noted in the Environment section.
It would be unsatisfactory to have a system in which the eligibility of large numbers of individuals remains unknown even at the point of submission.
15. What are your comments in relation to better supporting collaboration between academia and organisations beyond higher education in REF 2021?
The proposed flexibility in the number of outputs that can be returned addresses concerns relating to staff members moving into academia from other sectors.
There is a sense that it is unnecessary to do anything more in this direction – the inclusion of ‘Impact’ as a substantial component of REF more than meets this requirement of playing up collaborations between academia and organisations beyond the academy.
In geography, many of these collaborations are likely to take place either informally or under the aegis of research projects, rather than via formal secondments, and might not be captured by simple metrics. It would be more appropriate in such cases to capture them via impact case studies or the impact environment statement.
16. Do you agree with the proposal to allow the submission of a reserve output in cases where the publication of the preferred output will postdate the submission deadline?
Yes. This is important in terms of retaining continuity with REF 2014.
17. What are your comments in relation to the assessment of interdisciplinary research in REF 2021?
An explicit statement about structures for interdisciplinary support in the environment template is welcomed.
Geography is intrinsically interdisciplinary, evidenced in part in the REF 2014 returns. The mechanisms for the assessment of that interdisciplinary work in REF 2014 were robust and effective. Thus we believe there is no need to appoint ‘Interdisciplinary Champions’. This is a view also held by the Royal Scottish Geographical Society.
One single ‘interdisciplinary champion’ on each subpanel will be unable to cover the wide range of interdisciplinary work (e.g. geographers contribute to interdisciplinary work in gender studies, urban studies, environmental management, food studies, international development and much more).
We do not think there is value in an interdisciplinary identifier as it is too difficult to provide a clear definition that will be relevant in all cases – and almost every piece of work in geography could be considered ‘interdisciplinary’ in some way.
If adopted, very clear guidance is needed as to what level of ‘interdisciplinarity’ is in question – between main panels A–D? Between subpanels? Within subpanels? There are excellent instances of interdisciplinary work between human and physical geographers returned as part of the same UoA.
18. Do you agree with the proposal for using quantitative data to inform the assessment of outputs, where considered appropriate for the discipline? If you agree, have you any suggestions for data that could be provided to the panels at output and aggregate level?
The use of metrics has been discussed at length and we do not support their use for geography. Our previous submissions on this topic have consistently not supported the use of metrics for geography and have highlighted the many problems in their use across the discipline. The breadth of geography as a discipline and the wide range of outputs returned to REF 2014, as highlighted in the subpanel report, mean that identification of metrics that would be meaningful and comparable across the discipline is not possible. Thus we remain unconvinced that there is added assessment value at output level above and beyond what a subpanel can provide. For subpanels/disciplines that want to see such metrics deployed, it should be clear that expert review remains the crucial activity, only informed by, never determined by, metrics.
19. Do you agree with the proposal to maintain consistency where possible with the REF 2014 impact assessment process?
Yes. Consistency of approach is strongly welcomed given the investment of HEIs in impact tracking and realisation since REF 2014. Removing the separate impact template is also welcome, as is wrapping it into the environment statement.
20. What comments do you have on the recommendation to broaden and deepen the definition of impact?
We welcome the recommendation to broaden and deepen the definition of impact.
However, the proposal to measure ‘academic impact’ further complicates the already challenging task of measuring impact beyond the academy. There is also a risk of double-counting, as the assessment of academic outputs already includes a measure of their academic significance.
In addition, clear guidance will be required to clarify relevant impacts on teaching. Impact on teaching is already ubiquitous (via the emphasis on research-led teaching), and clarity is needed on whether it refers to university teaching only or also includes primary and secondary education.
Impact on public engagement seems to sit more comfortably within ‘environment’ as it relates to the contribution of HEIs to wider public and civic life, and we see little point in duplicating that across different sections of REF.
It is critical for HEFCE to recognise explicitly that impact is very often achieved by collaboration between researchers in different institutions, and that we therefore need clear and consistent guidance about joint submissions of similar case studies by different HEIs.
21. Do you agree with the proposal for the funding bodies and Research Councils UK to align their definition of academic and wider impact?
Yes. Alignment of impact definitions between the funding bodies and RCUK is sensible. It is important to recognise the mechanisms by which the different forms of impact are assessed: ‘academic’ impact is assessed via outputs, while ‘wider’ impact is assessed via case studies. We would welcome greater recognition of the co-production of research, a point also made by the Royal Scottish Geographical Society.
22. What comments do you have on the criteria of reach and significance?
It is important to maintain the concepts of reach and significance from REF 2014 for consistency. The response of the geography subpanel of REF 2014 was very sensible on this matter (paragraphs 60, 71 in the Panel C overview report, January 2015).
However, greater clarity is still needed that these are quality criteria and not geographically defined (i.e. local impact can be highly significant).
It is equally important that ‘reach’ be recognised as dynamic; it would be inappropriate to assume that the beneficiaries or target population can be known and defined with confidence at the outset of the work.
23. What do you think about having further guidance for public engagement impacts and what do you think would be helpful?
We would welcome this. It would helpfully reinforce the distinction between pathways to impact (e.g. dissemination and engagement) and impact itself.
Given the challenges of identifying clear and meaningful quantitative measures of impact that can be applied consistently across this range of work, we caution that attempts to measure impact risk distorting the kinds of work submitted in order to meet such criteria.
See also responses to Q20 above.
24. Do you agree with the proposal that impacts should remain eligible for submission by the institution or institutions in which the underpinning research has been conducted?
Yes, but this proposal needs refinement. We do need to recognise the enabling role of the institution, but also the core role of the researcher, without whom the impactful research would not have happened in the first place. Consider, for example, a researcher who does work at institution 1 but then moves to institution 2; after some time, new pathways emerge and are developed/facilitated at institution 2. It would not make sense to assign all of the impact to institution 1, and the rules should incorporate enough flexibility to be able to assign impact as well as research. A shift toward a ‘body of work’ rather than specific impacts also complicates the assessment of non-portability, as the body of work by definition stays with the individual rather than the institution. There also needs to be recognition that HEIs may be disincentivised from supporting impact-facing work by staff who have joined midway through the cycle if there is no way for the new institution to claim any ownership.
25. Do you agree that the approach to supporting and enabling impact should be captured as an explicit section of the environment element of the assessment?
Yes. The guidance needs to be clear, however, about whether this is about excellence of support or excellence of impact, as these are not the same thing.
26. What comments do you have on the suggested approaches to determining the required number of case studies? Are there alternative approaches that merit consideration?
We agree that the overall number of case studies should not change; this may mean that the ratio of case studies to submitted staff should move to something like 1 per 20 rather than 1 per 10.
In this context, from a departmental perspective, and related to an earlier point, it is critical to get clarity on staff numbers as early in the cycle as possible, so that it is then clear how many case studies are needed; the use of average staff numbers over some period of time is therefore problematic.
There are mixed views in the community on the proposal to reduce the minimum number of case studies to 1; this was discussed in the consultation to REF 2014 and abandoned on the basis that it would reveal individual case study scores.
If institutional case studies are allowed, then the number of case studies required by a single UoA needs to be adjusted; it would be unfair to ‘penalise’ UoAs that do a lot of interdisciplinary work by forcing them to provide both institutional and UoA focused case studies.
27. Do you agree with the proposal to include a number of mandatory fields in the impact case study template to support the assessment and audit process better (paragraph 96)?
Yes. We understand this is essentially what UoAs did in REF 2014 (though working out the precise periods/dates of the impact claimed, and when relevant staff members were in post, took them considerable time). This would give more space within the template to describe the research and impact.
28. What comments do you have on the inclusion of further optional fields in the impact case study template?
We welcome this possibility if it will help ensure wider dissemination and applicability of the work.
29. What comments do you have in relation to the inclusion of examples of impact arising from research activity and bodies of work, as well as from specific research outputs?
We welcome this possibility – for recognition of the role of the discipline and the audiences interested in, and actively drawing on, this work. We note impact in the policy domain is often driven by an expertise model, where it is often problematic to ascribe impacts to individual pieces of research. This change will, however, require the panels to provide a very clear definition of what is meant by ‘research activity’ and ‘bodies of work’. Without these definitions, it is hard to evaluate the proposal.
We note in many impact case studies submitted in REF 2014, the impacts were indeed arising from clearly delimited programmes of research activity/bodies of work (from which specific research outputs had arisen).
30. Do you agree with the proposed timeframe for the underpinning research activity (1 January 2000 – 31 December 2020)?
Yes. The proposed time frame is appropriate. The key is the impact in the timeframe of the current REF.
31. What are your views on the suggestion that the threshold criterion for underpinning research, research activity or a body of work should be based on standards of rigour? Do you have suggestions for how rigour could be assessed?
The community offered two perspectives:
A definition of rigour alone seems inappropriate if the recommendations around ‘research activity’ and ‘body of work’ are adopted, as it will be very difficult to define. Instead, the very fact that a piece or body of research has had impact could be taken, by definition, as a measure of its originality, significance and rigour. The panel assessment here is of the reach and significance of the impact; assessment of the quality of the underpinning research is (1) not what the case study should focus on and (2) evidently already done by the body that has been impacted. We suggest that this requirement is dropped, as it seems to serve no useful purpose, and replaced instead by a clear definition of what constitutes allowable underpinning research (point 29).
This needs to be kept as simple as possible – for example, whether research outputs are in recognised journals or other outlets, whether the underlying research had received funding through a competitive scheme, whether it has been acknowledged as ‘academic’ research by other bodies, etc. The issue should not be so much assessing the rigour of the underlying research as satisfying the subpanel that the research in question was undertaken in ‘good faith’ as academic inquiry which others (journals, funders, media, etc.) have clearly recognised as such.
32a. The suggestion to provide audit evidence to the panels?
The majority response from the geographical community was no. Questions were raised as to how panels can sift all available evidence for all submitted case studies. The possibility that an audit MAY be required will be sufficient to drive HEIs to hold (but not submit) the necessary evidence.
Units should be encouraged to include as much of their evidence in the impact case study template as possible (e.g. relevant numbers, key testimony), together with live URL links to other relevant sources (e.g. academic sources, policy sources, external evaluations, etc.).
32b. The development of guidelines for the use and standard of quantitative data as evidence for impact?
We strongly oppose the introduction of standardised forms of quantitative evidence. It is very unlikely that standardised numerical evidence will be broadly useful across the sector, and its use will always be confined to a particular subset of UoAs and topics. There is a real danger that this would lead to a hierarchy (real or imagined) of evidence quality; in turn, HEIs might prioritise those areas of impact that lend themselves to standardised measures, and neglect those that do not – an unintended and counterproductive result.
The structure of REF 2021 should not be determined by a need to ‘enable further analysis of impact at a national level following the assessment’ (para 106). Similarly, we would like to see some recognition by HEFCE of ‘stakeholder fatigue’ among non-academic institutions who are already (and will increasingly be) called upon for evidence by multiple HEIs.
32c. Do you have any other comments on evidencing impacts in REF 2021?
It is important to underline the viability/credibility of both quantitative and qualitative data – and the importance of providing evidence of both impacts and pathways to impact.
There is no mention here of the value of testimony as compared to independent or objective forms of evidence, and nothing on participatory framing of research. Both were challenges for REF 2014 and clear guidance would be very helpful.
We also feel strongly that impact must be viewed globally, and that the difficulties and complications of obtaining evidence of impact in settings outside of Europe and North America should be recognised explicitly.
33. What are your views on the issues and rules around submitting examples of impact in REF 2021 that were returned in REF 2014?
It is simple and straightforward to limit submissions to new impact that arose during the REF 2021 period, irrespective of whether or not the impact was submitted in REF 2014.
We would suggest that some additionality should be required, for example in terms of activities leading to impact (e.g. stakeholder workshops), which should have been undertaken in the assessment period. The underlying criteria are still excellence and impact within the period.
The case study could be flagged as having been submitted earlier, so that the panels could check this. No additional rules are needed on the particular proportion of case studies that were submitted in REF 2014, as that could cause perverse incentives and could exclude examples of real and important impact in the period.
However, this does need to avoid ‘fossilising’ REF 2014 impact case study outcomes through 100% repeat entries, and to allow newly emergent research impact to be recognised. One possibility would be a 50% limit on REF 2021 case studies previously submitted.
34a. Do you agree with the proposal to change the structure of the environment template by introducing more quantitative data into this aspect of the assessment?
Yes, a degree of standardisation would be welcome, although the environment template for REF 2014 already contained much or all of the data held by the institution.
34b. Do you have suggestions of data already held by institutions that would provide panels with a valuable insight into the research environment?
Additional insights could be gained from Athena SWAN data.
35. Do you have any comment on the ways in which the environment element can give more recognition to universities' collaboration beyond higher education?
We believe this is unnecessary. The impact section of the REF already measures HEI collaborations beyond academia, so it is not clear why this should be further emphasised in the environment section. The suggestion to merge the environment and impact templates addresses this issue and is a sensible change.
36. Do you agree with the proposals for providing additional credit to units for open access?
No. This would give credit for a purely ‘technical’ issue with no meaningful relationship to research quality. Open access will be dealt with at institutional, not departmental, level, and such a move may privilege the ‘better off’ HEIs.
37. What comments do you have on ways to incentivise units to share and manage their research data more effectively?
The community are uncomfortable with the potential tension here between open data, data protection issues, and the anonymity that characterises the REF. While open data initiatives are welcome, neither the Stern Review nor the consultation make clear the rationale for including open data compliance as an element of research excellence, and this is likely to be highly discipline-specific; such requirements will raise real issues for critical social science research, for example, or research in authoritarian countries. We therefore recommend that open data compliance not be included in the environment section. The potential for submitting open datasets as research outputs is, we believe, sufficient motivation to encourage this behaviour in HEIs where it is appropriate and warranted.
38. What are your views on the introduction of institutional level assessment of impact and environment?
The institutional context is an important part of the assessment of environment and impact. Many submissions to REF 2014 included standard institutional statements and it makes sense to formalise these into institutional impact and environment sections. Attention should then focus on how each UoA makes use of and benefits from institutional-level initiatives.
However, institutional case studies raise a number of issues, and it is not clear what problem they have been introduced to solve.
- How will the burden of creating institutional case studies be shared across departments, and how will the inevitable unevenness be accounted for?
- What determines the spread of interdisciplinarity or collaboration that is necessary to define an institutional case study, and how can this not already be captured within UoAs?
- Will interdisciplinarity be defined across panels, or across subpanels, or in some other way? (see earlier comments on this issue in the context of the interdisciplinary nature of much geographical research).
- What is the evidence that impact occurs at the scale of an HEI and not through the relationship between individual researchers or teams and their external beneficiaries?
- Why not simply empower subpanels to recognise impact for what it is – often cross-disciplinary, often involving different actors or institutions, and rarely falling neatly into UoA boundaries?
There are also challenges to how this could work properly in practice:
- the danger of poorer units being artificially ‘raised’ by good institutional-level performance, and the reverse;
- the danger of units having their best impact case studies used by their universities;
- the difficulty of distinguishing a genuinely pan-institutional impact case study from one that just happens to involve partners in, say, two or three of an HEI’s submitting units.
Rules would probably need to be developed to standardise matters in this respect, but they would be complex and hard to enforce.
39. Do you have any comments on the factors that should be considered when piloting an institutional level assessment?
As noted above, we do not support institutional-level assessment, so would not have views about a pilot.
The number of institutional case studies required, announced at nearly the halfway point of the REF cycle, is likely to be a major issue for many HEIs. This needs to be adequately explored in the pilot, so that HEIs can make clear decisions about how to address this requirement.
40. What comments do you have on the proposed approach to creating the overall quality profile for each submission?
We suggest that the proposed institutional assessments will be counterproductive. They will smooth out differences between UoAs, increasing the scores of weaker UoAs and pulling down stronger ones, thus encouraging ‘bunching’ and working against peaks of excellence. REF is and always has been a disciplinary exercise, structured as it is around UoAs.
If institutional-level assessment does go ahead, the weighting of the unit of assessment elements should be increased relative to the institutional-level elements.
41. Given the proposal that the weighting for outputs remain at 65 per cent, do you agree that the overall weighting for impact should remain at 20 per cent?
Yes.
42. Do you agree with the proposed split of the weightings between the institutional and submission level elements of impact and environment?
No. We do not agree with institutional-level assessment, and the proposal places far too much weight on the institutional-level environment. This is not a good reflection of the quality within individual UoAs, and will dilute the differences between individual UoAs in different institutions.
43. What comments do you have on the proposed timetable for REF 2021?
We agree that starting at 1.1.2014 keeps things more straightforward and ‘clean’, and prevents staff confusion. We question why the environment data do not also start at 1.1.2014.
A census date is a really useful framework, and any movement to some sort of rolling or time-weighted average (e.g. for numbers of outputs or impact case studies required) is going to be a big administrative burden.