The world is full of problems, and no matter how trivial or profound they may seem, some people believe that technology can overcome any of them. According to human rights defender Alex Drew, this notion reflects how people assign global priority and high value to solutions built with digital tools to fix human problems.
This theory, called techno-solutionism, holds that a machine, software program, mobile application, or algorithm can improve or resolve any given complex scenario. Solutions are judged by the speed at which things can be “repaired.” Christine Rosen, author and President of the Colloquium on Knowledge, Technology, and Culture at the Institute for Advanced Studies in Culture, acknowledges that some technological tools do add value but warns that the danger lies in assumptions about individual technological fixes merging into a single ideology.
Moreover, Drew adds that the Western world’s haste to embrace immediate technological solutions violates various human rights, something those who develop the technology treat as an unfortunate side effect. The end user remains unaware of the full context, and the action is justified by the accelerated resolution of the problem. He also notes that the prevailing philosophy of competition, which rewards launching a product faster than one’s rivals, leads to insufficient assessment of the tools. This competitive environment fosters the belief that human rights violations can be fixed on the fly by developing a new application.
According to the UCLA Center for Critical Internet Inquiry, academics and activists have revealed a “tremendous denial on the part of Silicon Valley elites and governments to acknowledge and act upon the myriad social harms that emanate from their products and services.” Drew offers the example of technology carrying the prejudices of its creators, pointing out that Silicon Valley is far removed from the world most people inhabit; it supplies remedies for problems the elite do not fully understand.
Rosen adds that techno-solutionism speaks the language of the future but acts in the present and the short term. For its defenders, part of the appeal of this system of ideas is that it appears apolitical when, in fact, its consequences usually do have a political impact.
For her part, Lucie Krahulcova, Director of Programs and Partnerships at Digital Rights Watch, notes that the term technological solutionism gained notoriety around 2013 with the publication of Evgeny Morozov’s book “To Save Everything, Click Here: The Folly of Technological Solutionism.” In it, the Belarusian writer and researcher describes the urge to solve real, complicated problems flawlessly and quickly. Krahulcova argues that although the promise seems magical, the technology is not: humans create it, so it is subject to the same flaws, prejudices, and biases. She concludes that most intricate matters require complex, real-world measures.
In an interview, Natasha Dow Schüll, a cultural anthropologist and associate professor in the Department of Media, Culture, and Communication at New York University, cites the book’s characterization of techno-solutionism as “an endemic ideology that recasts complex social phenomena like politics, public health, education, and law enforcement as ‘neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place!'”
Evgeny Morozov states that thanks to the portability of smartphones, the ubiquity of social networks, and the proliferation of sensors everywhere, the infrastructure now exists to provide solutions that were not possible 15 years ago. That is not the problem; the problem is that these technologies are grouped under the same umbrella, the “Internet,” under the assumption that they all share a uniform dynamic through which they can be understood and governed.
Krahulcova’s point is that very few things need to be an application, and that resorting to hasty responses reflects a limited understanding of the problem and forecloses vital discussions on the topic. She argues that once technological solutionism becomes ingrained in public discourse, it is tough to reverse or eliminate.
In addition, she notes that the political investment of time and resources in technology is extensive. One example is CovidSafe, an application created during the pandemic to detect nearby cases of people infected with Covid-19. The platform cost $21 million to design and launch; however, few users actively used it, which made it largely ineffective, with no guarantee it worked correctly. Another was Robodebt, an automated scheme by the Australian government that erroneously demanded that welfare recipients return the support they had been granted. Because of a faulty algorithm, citizens received letters declaring that they owed thousands of dollars, forcing more than half a million people to pay false debts from the scheme’s launch in 2016 until 2019, when it was declared illegal.
Likewise, in 2019, a UN report on extreme poverty warned of the emergence of a digital welfare dystopia in which tech companies operate in what amounts to a regulation-free zone when developing technological solutions. Krahulcova argues that such techno-solutionist systems used to be relegated to fiction but are now a reality, even in police and intelligence agencies. Some agencies use predictive algorithms that exacerbate discrimination against marginalized communities by amplifying bias, all in the name of outsourcing a seemingly harmless, simple solution.
Social implications
The artist and researcher Joana Moll argues that techno-solutionism simplifies or hides concurrent realities, even though the approach has been shown to be ineffective at addressing considerably complex events. She notes that it is sometimes actively adopted as the only response to a critical situation and that, although it may provide systemic stability in the short term by averting immediate collapse, it makes the background and origin of the problem harder to understand.
Drawing on the book The Social Construction of Reality (1966), Moll also explains that reality comprises the simultaneous, sophisticated, and subjective processing of multiple contextualized events (experiences, interactions, language, personal and social inheritances). Today, however, everyday interactions are carried out through electronic devices and interconnected systems. Over time, this shapes human beings’ relationship with the world, influencing the construction of reality itself.
She also agrees with other experts that many everyday technologies are designed by corporations of the capitalist system, such as those in Silicon Valley. These techno-patriarchal systems nullify the possibility of reaching agreements or negotiating modifications to suit particular situations. She therefore gloomily suggests that imagining alternative ways of living in the world seems remote, and that it is unavoidable to question the long-term implications of solving highly complex systemic challenges with reductionist techno-solutions.
In turn, Nanjala Nyabola of the Atlantic Council’s Digital Forensic Research Laboratory highlights that technology has become a fundamental mechanism for managing the movement of people internationally, which has led to ethical debates about its impact. Different countries run platforms to address citizenship or digital identity issues. Nonetheless, she indicates that the rising dependence on technology to resolve complex social or political circumstances reinforces exclusionary ideologies such as ethnonationalism and racism.
“That same technology, developed in securitized immigration contexts with fewer legal protections, is then often redeployed more broadly within democratic societies or sold overseas to governments with less responsive governance structures, muddying citizens’ expectations of due process, civil rights, and democratic protections,” she says.
For her part, Mahak Nagpal, Assistant Professor of Ethics and Business Law at the Opus College of Business, points out that a humane solution should be the starting point. She believes the potential of Artificial Intelligence (AI) is currently overestimated, particularly in the rhetoric that AI can solve intricate social problems. Although AI can provide advantages in specific contexts, she argues, human capabilities and ingenuity must still be taken into account, since organizational decision-makers sometimes hold techno-solutionist attitudes. She therefore recommends not treating technology as the only remedy but instead exploring sustainable business models that evaluate non-technological interventions before opting for purely digital instruments.
Thus, she proposes first asking whether a problem is technical, social, or a broader cultural issue. Next, it is essential to recognize the benefits of the technology in question and maintain a healthy understanding of its inherent limitations in order to achieve realistic good.
A concrete example of reductionist techno-solutionism occurred during the recent pandemic. Petra Molnar, a lawyer and anthropologist, observes that this approach does not address the root causes of displacement, forced migration, or economic inequality, and thereby accentuates the spread of global health crises.
Rosen notes that while uncertainty prevails, it is understandable to take comfort in technological solutionism; in times of crisis, however, this reliance can become radical. Under pressure, she points out, resigning oneself to dependence on techno-solutionism in public health and education feels natural, but it is imperative to analyze the consequences.
André Cardozo Sarli, a specialist in AI, technology, and responsible gaming, stresses that education is an essential pillar and that public schooling is a platform to combat illiteracy and inspire people to seek better opportunities. He presents the case of Brazil, where the São Paulo Ministry of Education chose to use ChatGPT to develop curricula for sixth-grade students in the public school system. Curriculum professionals ran the exercise and reviewed and adapted the artificially generated content. These measures were adopted without any debate, following an extreme digitalization plan that exchanged physical books for electronic ones. He finds it alarming that no one considered the threat that generative AI could create fake content, or that it is fed information in English and built to produce content for a North American audience. This scenario raises conflicts in which learning strategies are once again designed as generic “solutions” that ignore specific contexts, such as those in Latin America.
He points out that education is not only about transferring information and concludes that latent inequalities in educational systems such as Brazil’s are thereby exacerbated. He also observes that this ideology hits hardest the most vulnerable populations, who depend on public education. “AI tools could support the development of material, but not be in charge or substitute the human work, especially given the tradition of decades of local knowledge production in the state,” he notes.
On the other hand, the Colombian judiciary has used ChatGPT to support decisions, such as waiving medical fees for the treatment of a child with autism and, on another occasion, ruling that the parties could meet in a virtual courtroom in the metaverse to resolve a traffic case, with identities verified in that space. The hearing was broadcast on YouTube.
This raises concerns about how professionals use this technology to answer crucial questions about how to do their jobs. That is worrying in itself, but the fact that it happened within a justice system implies even greater dangers. This does not mean the technologies are not helpful; they can serve as instruments to manage workloads and streamline administrative processes, but not as substitutes for duties and responsibilities toward citizens. Guidelines must therefore be instituted that explicitly determine when it is appropriate to use a technology for a specific purpose, justify its use, and implement it under those circumstances.
The excitement of new technological trends should not overwhelm a detailed understanding of how and why digital tools are applied. Only after identifying their purposes and designing reasoned strategies for implementing them is it valid to accompany human wit with technology. Using technology as a superficial patch deepens latent gaps, which can differ even within the same region. A generic approach will only delay meaningful resolutions and a better future.
Translated by Daniel Wetta
This article from Observatory of the Institute for the Future of Education may be shared under the terms of the license CC BY-NC-SA 4.0 