Scientific impact assessment cannot be fair
In this paper we deal with the problem of aggregating numeric sequences of arbitrary length that represent, e.g., citation records of scientists. Impact functions are aggregation operators that express as a single number not only the quality of individual publications, but also their author's productivity. We examine some fundamental properties of these aggregation tools. It turns out that each impact function which always gives indisputable valuations must necessarily be trivial. Moreover, it is shown that for any set of citation records in which none is dominated by another, we may construct an impact function that yields any a priori established ordering of the authors. In theory, then, there is considerable room for manipulation in the hands of decision makers. We also discuss the differences between the impact-function-based and the multicriteria-decision-making-based approaches to scientific quality management, and study how introducing new properties of impact functions affects the assessment process. We argue that simple mathematical tools like the h- or g-index (as well as other bibliometric impact indices) may not necessarily be a good choice when it comes to assessing scientific achievements. © 2013 Elsevier Ltd.
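As a concrete illustration (not part of the original abstract), the h- and g-index mentioned above are simple impact functions mapping a citation record to a single number. The sketch below, in Python, computes both and uses a hypothetical pair of mutually non-dominated records to show how two impact functions can rank the same authors in opposite orders, which is the kind of ordering ambiguity the paper exploits:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cs = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cs, start=1):
        if c >= i:
            h = i
        else:
            break
    return h


def g_index(citations):
    """Largest g such that the top g papers jointly have >= g^2 citations."""
    cs = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cs, start=1):
        total += c
        if total >= i * i:
            g = i
    return g


# Hypothetical records: neither dominates the other componentwise.
author_a = [10, 1]   # one highly cited paper
author_b = [4, 4]    # two moderately cited papers

# Total citations rank A first, the h-index ranks B first.
print(sum(author_a), sum(author_b))        # 11 vs 8
print(h_index(author_a), h_index(author_b))  # 1 vs 2
```

Since neither record dominates the other, the choice of impact function alone decides which author comes out ahead, consistent with the paper's manipulation argument.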
| Field | Value |
|---|---|
| Journal series | Journal of Informetrics, ISSN 1751-1577 |
| Score | = 45.0, 23-08-2020, ArticleFromJournal |
| Publication indicators | = 9; = 8; = 6.0; : 2013 = 2.009; : 2013 = 3.58 (2) - 2013=3.609 (5) |
| Citation count* | 6 (2015-09-07) |
* The presented citation count is obtained through Internet information analysis and is close to the number calculated by the Publish or Perish system.