Opinion | Why Do We Need to Change Research Evaluation Systems?

Reading Time: 4 minutes

Research evaluation systems produce anxiety, increase job precariousness, and encourage the overproduction of papers.

Image: Sisyphus, by AK Rockefeller, under a Creative Commons license.

A few weeks ago, I came across three distinct yet related articles. One reported the murder of a mathematics professor at a Chinese university. Another was about a Dutch university that decided to abandon the impact factor. The third discussed the global obsession with academic “excellence.” What do these stories have in common? Research evaluation systems.

The premise of all three stories is that research evaluation systems based on quantification generate anxiety, increase inequality and precariousness, and encourage excessive competition and the overproduction of papers. The very mechanisms used to measure the quality of universities end up undermining it, as Professor Sebastiaan Faber warns in the article The Traps of University Excellence (in Spanish).

Let us first review Faber’s article, which I recommend reading from start to finish. In short, Faber argues that what we know today as “academic excellence” or “academic quality” is based more on quantity than on quality, and that this is reflected in universities’ obsession with world rankings and in how science and knowledge are currently funded. How do we measure the quality of a university? We look for its position in one of the university rankings we all know. What indicators do these ranking organizations commonly use? One of the most important is scientific production (e.g., the number of papers, the number of citations, the impact factor). How do universities evaluate their professors and researchers? The decisive metrics include the number of papers published, their impact factors, the number of citations, and the projects or grants awarded during their careers. And what do the funding agencies that award these grants evaluate? You guessed it: the number of papers published, the impact factors, the citations… It is an unsustainable vicious cycle, and the academic community is paying the price.

“At the end of the day, it is always easier and cheaper to measure quantity than quality. However, the truth is that the fixation on the quantitative has wreaked havoc throughout academia. It has led to an insane race for survival and a huge waste of money, time and talent. A tragedy not only scientific but social,” says Frank Huisman, historian and professor at the University of Utrecht. What does this havoc look like? Besides the increasing precariousness of faculty positions, the “publish or perish” culture has produced high rates of anxiety, attrition, depression, and burnout in the academic community. The constant pressure to publish and the hyper-competition generated by the shortage of permanent faculty positions have driven researchers to drastic measures.

In June, the journal Nature reported the murder of a mathematics professor on a university campus in Shanghai. The prime suspect is a researcher at Fudan University. Although the motive is unknown, the tragic incident reopened the debate about the failures of the incentive and tenure-track systems that universities in China have adopted. These failures, however, are neither unique to China’s university system nor recent. In Spain, the National Agency for Evaluation of Quality and Accreditation (ANECA) has demolished the dreams of more than one researcher. In Chile, the precariousness of the academic career has driven professors to a disillusionment that borders on nihilism. In Europe, the pressure to publish is so unsustainable that in 2014 a group of academics spoke out for “dis-excellence.” And in the United States, the cases and examples are countless.

Can we break out of this vicious cycle? Are there alternatives? Yes, there are. For some years now, various movements worldwide have sought to change the way research is evaluated. In 2012, the San Francisco Declaration on Research Assessment proposed eliminating metrics based on the impact factor. There is also the Charte de la désexcellence (“Charter of Dis-excellence”) mentioned above. In 2015, a group of academics published the Leiden Manifesto, which warned of the “widespread misuse of indicators in evaluating scientific performance.” Since 2013, the group Science in Transition has sought to reform the science evaluation system. And since 2016, the Collectiu InDocentia, created at the University of Valencia (Spain), has been doing its part.

Even China, which in its eagerness to surpass the United States in the scientific race adopted an ambitious long-term plan based on scientific publications, is now reviewing its incentive program, evaluating whether the incentives offered achieve the desired results and seeking new ways to assess academics. Another, more recent example is Utrecht University, which announced this week that, starting in 2022, it will formally abandon the impact factor in hiring and promotion decisions for its academic staff. The university will judge its academics by other standards, including their commitment to teamwork and their efforts to promote open science. “The impact factors do not truly reflect the quality of an individual researcher or academician,” the statement said. You can read more details about the university’s new Recognition and Rewards scheme here.

As Xavier Aragay said on Twitter, evaluation is being discussed at all levels of education: not just the evaluation of students, who cry out for change, but also the systems that assess science and knowledge and, more importantly, the people who make science and transmit that knowledge. So many KPIs, evaluation tables, rankings, performance metrics, and numbers in general make us forget that the people who work in universities are in charge of the training and growth of other people. We have lost sight of the social function of the University. What function is more important than that?

Translation by Daniel Wetta.

Karina Fuerte

(She/her). Editor in Chief at the Observatory of the Institute for the Future of Education.

This article from the Observatory of the Institute for the Future of Education may be shared under the terms of the CC BY-NC-SA 4.0 license.