What is Replicability, and Why is it in Crisis?

Reading Time: 3 minutes

Scientists warn of a panorama of epistemological production without pathways to reliability.

Replicability Crisis. Photo: Flickr/Nic McPhee.

Beyond reinforcing the criteria for quality control and academic honesty, the academic community needs to pay attention to how research is communicated.

Paper mills have created an unfavorable situation in academia. They have weakened research quality and put its credibility at risk, forcing scientists into harmful practices, like dividing works into smaller, incomplete installments to maintain a calculated publishing pace.

In this context, one consequence is significantly affecting scientific production: the crisis of replicability. The term, also known as reproducibility or replication, refers to repeating an experiment under different conditions to confirm its results. This check establishes whether the original experiment's findings are sound and whether the work is feasible to reproduce.

Paper mills threaten the replicability of studies and experiments because they don't prioritize the production of proven knowledge. Their objective is to create products for academicians trying to fill a publication quota. Data from prefabricated research may be plagiarized, incomplete, altered, manipulated, or intentionally biased to support the thesis of the person who orders the work.

Science without verification

Can we say we still have validated knowledge when testing in the scientific method is compromised? This question has troubled academicians for at least a decade. Scientific production struggled with verification even before prefabricated manuscripts reached their current boom. Alvaro de Menard, an academic researcher in the social sciences and a participant in the replication markets project of the Defense Advanced Research Projects Agency (DARPA), offers a sharp and discouraging perspective on the state of replication in social science.

The processes that lead to unreliable results are routine, well understood within the academic community, predictable, and easy to avoid, Menard argues. However, the rigor of scientific research has not improved. These aspects of scientific production have been widely discussed, yet the conversation has not produced a more robust structure for production and dissemination that facilitates verification. Fallible science continues to be published in scientific journals.

The role of peer review

The academic community places great confidence in the final validation step before an article is published. Peer review has been the bastion defending the technical quality of research work and the ethical system on which the production of credible knowledge rests.

Nevertheless, peer review has not been infallible in separating replicable content from that which is not. A study conducted at Northwestern University in Evanston, Illinois, found that this quality measure is less effective than we might think. According to authors Yang Yang, Wu Youyou, and Brian Uzzi, fallible works circulate as much as verified ones.

Rigor alone is not enough

Most conversations about the replicability crisis revolve around protocols. They consider what steps to take to control the proliferation of hasty or prefabricated manuscripts, and of manuscripts that pass peer review with their flaws undetected.

While prioritizing this, scientists and academicians neglect a crucial aspect of ensuring replicability: communication. The chances of repeating a particular experiment decrease sharply when researchers must work with vague descriptions or overcomplicated instructions. The problem grows even more complex when the experiment to be reproduced is written in another language.

With this problem in mind, the National Academies Press in the United States published a series of recommendations to make research papers and experiment reports easier to follow. The organization recommends that researchers include a clear description of how they achieved their results. Reports should provide sufficient detail, including:

  • A complete and concise description of all methods, instruments, materials, procedures, measurements, and other variables used in the study.

  • A clear breakdown of data analysis and decisions for excluding specific data and including others.

  • For results that depend on statistical inference: an explanation of the analytical choices, when those decisions were made, and whether the study was exploratory or confirmatory.

  • A discussion of the study's general constraints, such as which methodological aspects the authors believe could be changed without altering the result, and which must remain constant.

  • A report on the accuracy of the statistics used.

  • A discussion about the limitations or uncertainty of measurements, results, and inferences.

Despite this complicated panorama, some academicians forecast a period of improvement in the problem of low reproducibility. However, it will be necessary to do the work and seize the opportunities this problem presents. Are you an academician? Have you run into studies that are not replicable? What are your impressions? What would you suggest to mitigate this situation within academia? Tell us in the comments.

Translation by Daniel Wetta.

Sofía García-Bullé

This article from Observatory of the Institute for the Future of Education may be shared under the terms of the license CC BY-NC-SA 4.0