Research Evaluation
Leeds Beckett University, Leeds, UK

Published: 21 January 2015 by MDPI in the 1st Electronic Conference of MDPI Editors-in-Chief, session Measuring Research
Abstract: Not everything that can be counted counts. Not everything that counts can be counted.

The evaluation of 'research' is now a commonplace activity, and understandably so. Things have moved on from previously vague ideas about the 'value of research' in terms of 'dissemination', usually thought of as encompassing a range of products or outputs in the form of reports, papers, and presentations. The problem, however, is that such initiatives all too often lead to ROI/CBA-type calculations, a form of 'research accounting' that is yet another aspect of what Strathern has termed 'audit culture'. The result is that we measure what is easiest to measure, rather than trying to develop meaningful and indicative measures. Moreover, any form of measure quickly becomes discredited as people find ways of 'juking the stats'; or, in terms of Goodhart's Law, 'When a measure becomes a target, it ceases to be a good measure.'

In the UK this was embodied in the Research Assessment Exercise, now termed the Research Excellence Framework (REF), the results of which are due to be published in mid-December. In the early years this was based in part on the number of publications/outputs for each person submitted, but this was soon discontinued in favour of a maximum of four outputs per person, which also became a de facto minimum. Many other countries have similar forms of assessment. Different disciplines or subject areas have different ideas about what counts as an output, or have different weightings for outputs such as books/monographs, journal papers, conference papers, and so on. Furthermore, not all outputs in the same category are considered to be of equal merit, based not simply on their contents but also on the reputation of the journal or publisher.

Consequently there now exist journal rankings, a component of bibliometrics, which indicate the standing of specific journals within specific fields of study. There are various forms of measure, all now greatly facilitated by the internet, whereas previously people had to rely on citation indices that were often out of date and required significant labour to compile and maintain. There is a wide range of current options for these rankings, including citation reports and impact factors, as well as a range of organizations that assemble and supply them, including Thomson, Scopus, SCImago, and Eigenfactor. The result is that researchers aim at those journals with the highest ranking, while editors and publishers strive to attain a high ranking. Both of these goals have various ramifications, and in some cases this has led to questionable practices on both sides.

In addition, with the development of open access policies, the traditional business models for journals have been brought to the fore, with new models being developed that rely less on revenue from institutional subscriptions and more on article processing fees from authors. The result is that journal publication is a fraught issue for all parties. Potential authors may now have to find significant funds to get published, editorial policies are under pressure to ensure a high ranking for the journal, and publishers are having to adapt to a new climate of funding and ranking, leading some critics to point to what they see as 'predatory practices' by some. Issues of licensing and copyright are also more fraught and complex. The necessity now is for journals to establish and maintain high levels of trust and reputation, but this will always be a challenge for publishers, editors, editorial boards, reviewers, and authors.
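To make concrete what one of these measures actually counts, the short Python sketch below illustrates the classic two-year journal impact factor: citations received in a given year to items published in the two preceding years, divided by the number of citable items published in those years. The function name and the figures used are purely illustrative assumptions, not data for any actual journal.

# Minimal illustrative sketch of the classic two-year journal impact factor.
# All names and numbers here are hypothetical, not real journal data.

def two_year_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Citations in year Y to items published in years Y-1 and Y-2,
    divided by the number of citable items published in those two years."""
    if citable_items_prev_two_years == 0:
        raise ValueError("No citable items in the two-year window")
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical example: 480 citations in 2014 to the 200 articles a journal
# published in 2012 and 2013 give an impact factor of 480 / 200 = 2.4.
print(two_year_impact_factor(480, 200))  # prints 2.4

Goodhart's Law applies directly to such a ratio: once it becomes a target, both the numerator (citations) and the denominator (what counts as a 'citable item') become candidates for manipulation.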

Accompanying this presentation is an article published in The Software Practitioner, available at
https://dl.dropboxusercontent.com/u/608183/Bryant%20SP%20March%20Issue2.pdf
Keywords: research evaluation, REF