Making sense of excellence
‘Research excellence’ (RE) has gone mainstream in 21st-century science policy, but recent studies show that RE remains an ambiguous concept, easily (mis)used for many different purposes in science policy domains such as Europe’s Horizon 2020 programme (e.g. Young, 2015). Faced with the risk of RE becoming a grossly inflated and hence increasingly meaningless concept, how should decision-makers deal with it as an analytical construct - especially in the context of research performance assessment and evaluation? Ideally one would want to rely on reliable, objective information based on authoritative quality standards. Peer review is the common ‘qualitative’ approach, but it suffers from several practical limitations, especially for large-scale systematic assessment. The alternative is a quantitative approach: indicators and measurement of research performance.
So far, especially in the world’s advanced science systems, citation impact measures have provided some of the much-needed ‘hard evidence’ to help identify outstanding performance. The informational logic is simple and appealing: the more citations received from fellow researchers, the greater the likelihood of being regarded as excellent within that particular scientific community. Being among the most highly cited percentiles within a field of science is gradually becoming a de facto standard of excellence (Tijssen et al., 2002). Large-scale implementation of such RE notions within formalized research evaluation settings, like the UK’s Research Excellence Framework, has however revealed several methodological and analytical shortcomings of such citation impact measures (Wilsdon et al., 2015). A smart mix of quantitative and qualitative information (‘metrics and narratives’) seems to be the best solution.
Research excellence in Africa
Applying the peer review method and/or the citation impact framework is relatively straightforward in many high-income countries, where scientists and scholars produce research publications (usually in high-quality international journals) that accumulate large quantities of citations from peers. But things are different in lower-income countries across the globe, where resources for science are much scarcer, publishing in international journals is less obvious, and research tends to be more focused on aid spending, economic development and societal transformation. Yet here too the quest for international recognition and excellence is becoming a dominant force in policy making. Take Africa, for example. Many research organizations in Africa are also centre-staging RE, where it has been prioritised and institutionalised in several ways. There are now ‘Research Excellence Initiatives’ throughout Africa, such as the World Bank’s African Centres of Excellence project. South Africa has established an extensive funding programme to create national research centres, under the banner of Centres of Excellence, in strategic fields of science to boost or consolidate areas of research strength. Several of Africa’s major research-intensive universities launched their ARUA network in 2016, mentioning ‘excellence’ in their joint mission statement: “ARUA is intended to develop local research excellence through collaboration to find solutions to the development problems of Africa. It is set to become a pan-African network for bringing research and academic excellence to the fore throughout the region by developing strong and viable research universities”.
So it seems that in Africa, too, there is a clear case for further unpacking of RE to support evidence-based research management, science funding and policy debate. But what kind of analytical framework could help operationalize RE in a way that does justice to the large variety of local settings and (challenging) circumstances that exist on this continent? Addressing this key question was the driving force of a 2016/2017 exploratory study of RE in sub-Saharan Africa, conducted by Erika Kraemer-Mbula and myself. Our main findings and recommendations were distributed through a series of 2017 publications, starting with a 50-page report for the Science Granting Councils Initiative (Tijssen and Kraemer-Mbula, 2017a) and a blog post for University World News (Tijssen and Kraemer-Mbula, 2017b). Later that year, a research article version appeared in the academic journal Science and Public Policy (Tijssen and Kraemer-Mbula, 2017c), followed by a 7-page bilingual policy brief - in English and French - summarizing our main message for a wider audience of practitioners and science funders in Africa and the ‘Global South’ (Tijssen and Kraemer-Mbula, 2017d).
Toward better practices?
Our SGCI policy brief presents the following 10 recommendations to help create a more solid evidence base for analysis, debate and decision making (Tijssen and Kraemer-Mbula, 2017d, pp. 6-7):
- science funders should be more explicit in their descriptions or definitions of ‘research quality’ and ‘research excellence’;
- determining ‘excellence’ is contingent on appropriate performance standards and benchmarks;
- the appropriateness of a performance indicator depends on its degree of ‘usability’ and ‘user acceptability’ in terms of information value, operational value, analytical value, assessment value and stakeholder value;
- proper understanding and operationalizing requires multiple perspectives (both local and global); it is important to make a clear distinction between common global benchmarks and ‘local’ customized ones;
- experiences within low- and medium-income countries in adapting concepts of ‘research excellence’ and ‘research quality’ to their local contexts constitute valuable sources of information for establishing good practices in assessment and evaluation worldwide;
- expert opinions from peers should be a prime source of information for value judgements on research quality and excellence;
- personal views, usually embedded in implicit scientific norms regarding quality standards or driven by selected showcases of successful research, should be complemented by external empirical information to create ‘informed peer-review’ assessment and evaluation;
- the multidimensional nature of research excellence requires an ‘indicator scoreboard’ approach, where performance indicators may span the entire spectrum from research resources to socio-economic impacts;
- the choice of performance indicators and/or excellence benchmarks will always be context-dependent and goal-dependent; there is a clear need to incorporate local contextual factors in customized indicators;
- frameworks designed to assess research excellence ought to be flexible enough to incorporate changes in the local context and priorities, as well as the dynamics of the global science system.
Generally speaking, a well-accepted ‘dominant’ heuristic is needed to identify RE in its many shapes and forms, with a convincing rhetoric to influence researcher communities and their major stakeholders. Such a heuristic tool requires a critical and constructive approach aimed at understanding and applying RE as a key concept in evidence-based research assessments and science policy. A clear set of guidelines and recommendations (like those listed above) can contribute to developing a shared terminology, with appropriate definitions and operationalizations of underlying concepts, which can then shape common practices and methodological principles with regard to the assessment of research proposals, activities and performance.
Low-income settings require a much better understanding and appreciation of ‘local research’ environments and objectives. Here we need a more strategic and focused approach to RE in which, for example, transformative, interdisciplinary work on complex development challenges is just as relevant as breakthrough science in biomedicine aimed at creating a large international citation impact. To change the views on RE of science funders within lower-income countries, one needs to engage them in a policy discourse on research quality perspectives, present ‘opportunity structures’ to contextualize and operationalize RE within relevant local and global environments, and encourage them to set local standards according to their own priorities with regard to the degree of competitive selection or inclusiveness.
About the author
Robert Tijssen is professor of Science and Innovation Studies at Leiden University (Netherlands), research fellow at Leiden University’s African Studies Centre, and member of the LeidenASA core group (Leiden African Studies Assembly). He is an extraordinary professor at Stellenbosch University (South Africa), and CWTS project manager at the DST-NRF Centre of Excellence in Scientometrics and Science, Technology and Innovation Policy (SciSTIP, South Africa).
References
Tijssen, R., Visser, M. and Van Leeuwen, T. (2002). Benchmarking international scientific excellence: are highly cited research papers an appropriate frame of reference? Scientometrics, 54, 381-397.
Tijssen, R. and Kraemer-Mbula, E. (2017a). Research excellence in Africa: policies, perceptions and performance, January 2017.
Tijssen, R. and Kraemer-Mbula, E. (2017b). Research excellence - beyond the buzzword, University World News – Africa edition, 13 January 2017, Issue No. 442.
Tijssen, R. and Kraemer-Mbula, E. (2017c). Research excellence in Africa: policies, perceptions and performance, Science and Public Policy, November 2017 (doi: 10.1093/scipol/scx074).
Tijssen, R. and Kraemer-Mbula, E. (2017d). Perspectives on research excellence in the Global South: assessment, monitoring and evaluation in developing country contexts, SGCI Policy Brief No. 1, December 2017.
Wilsdon, J., Allen, L., Belfiore, E., et al. (2015). The metric tide: report of the independent review of the role of metrics in research assessment and management (doi: 10.13140/RG.2.1.4929.1363).
Young, M. (2015). Shifting policy narratives in Horizon 2020, Journal of Contemporary European Research, 11, 16-30.