- Andrew Sears
When Technology Becomes Political, Ethics Becomes Imperative
Editor's Note: This week, I'm pleased to feature an essay by Natalia Domagala, the head of data ethics policy at the UK's Department for Digital, Culture, Media and Sport. Natalia offers a timely account of why ethics matters more than ever in a time when the lines between technological, economic, and political systems are becoming increasingly blurred. Come for her analysis of COVID-19 technosolutionism, stay for her critique of a critique of Hobbes' Leviathan.

Ethics is no longer optional in a world where technology, politics, and economics are impossible to disentangle. We are living through the birth of a new era of extraordinary technological advancement that will fundamentally change the way we live, work, and interact; this presents both an opportunity and a warning [1]. The online world has merged with our offline reality and has begun to shape it, creating as much promise as peril.
A comprehensive case study of our technology-dominated era comes from the ongoing coronavirus pandemic. From its onset, there has been a widespread assumption that technology would provide the solution to the crisis. Workplaces around the globe underwent an enforced crash course in remote working, and even the most reluctant were compelled to digitalise their practices. Governments partnered with developers and software companies in search of a silver bullet: a technocentric solution to test, track, and trace populations in order to contain and combat the virus. But somewhere between the latest infection rates and updates on Bluetooth tracing technologies, the voices of academics, ethicists, and human rights activists broke through, warning that the potential misuses of these technologies and their long-term consequences could be as ominous as the virus itself. When a technological solution is treated as a political solution, we remain in a state of extreme vulnerability. The idea that technology can truly and wholly protect us is something more dangerous than bad politics or bad government, suggests Linnet Taylor, an Associate Professor at the Tilburg Institute for Law, Technology and Society [2]. She is one of many voices stressing that if government through technology is to become the new normal, it must become more accountable.
The fast-emerging field of governing through technology encompasses a range of difficult decisions. As Taylor puts it, ‘if we want this governance to operate through technology, we have to be aware of the existing limitations of democratic accountability in technology’. Greater transparency in the sector, increased visibility of data ethics policies, and the creation of opportunities for public scrutiny could preserve the benefits that technology can bring while mitigating its potential negative consequences. The rapid response of the technology sector to the coronavirus pandemic, and in some cases its stance against certain government decisions on technological solutions [3], further demonstrates what we have been observing for a while: the role of tech companies has changed from mere service providers to active and powerful political players.
This shifting landscape requires not only legislative responses but also cultural change. Data scientists, as the chief architects of the new reality, ought to be well-educated about the societal consequences of their work, and the public ought to develop the skills and awareness necessary to understand decisions made on their behalf by governments and the private sector. Meaningful accountability structures can only emerge when there is sufficient transparency and public involvement, and when the public possesses the skills to understand how technological products are developed and the decisions made along the way. The need for algorithmic explainability and literacy is one example of this.
If the coronavirus lockdown has taught us anything, it’s that when it’s too late for prevention, we absolutely must get the cure right. For technology, this means an urgent need for safeguards that prevent major misuses before they occur. This begins with appropriate legal frameworks and regulation. However, compliance with the law is not always synonymous with appropriate use of technology; legal loopholes and divergent interpretations of the law create room for harm. The digital and data ethics movements aim to address this gap between what is legal and what is right by examining the complex value judgements that digital production and data generation, analysis, and dissemination can raise.
In their research on ethics in the technology industry, Emanuel Moss and Jacob Metcalf identified four overlapping meanings of the word ethics: moral justice, corporate values, legal risk, and compliance [4]. For them, in the best-case scenario, "ethics inside of technology companies consists of using robust and well-managed ethical processes to align collaboratively-determined ethical outcomes with the organization’s and commonly-held ethical values" [5]. It is the understanding of ethics as moral justice that calls for holding the creators of technology accountable for the societal consequences of their actions. However, moral justice is not something that can be easily instilled or suddenly enforced; it is the result of the long-term cultivation of certain beliefs and values, of a well-developed conscience, and of an understanding of the broader implications of one’s work.
To this end, every course related to data science, computer science, programming, robotics, or AI should contain thorough digital ethics modules throughout the entire curriculum, to counter the perception that technological work is separate from its societal context. There are already excellent academic institutions and courses teaching data and digital ethics, but we must learn to regard these as foundational to becoming a successful technologist or data scientist rather than as optional electives [6]. Ethics needs to become an inseparable part of the technology industry from the earliest stage possible and should be understood as an ongoing component of every technologist’s professional development. Technologists and data scientists should be held to a high standard of ethical conduct, perhaps even by a code analogous to the Hippocratic Oath in medicine [7]. Their ethical competency should be evaluated periodically, and anyone who works with data should be trained to understand the full range of consequences that their work and decisions can generate.
Ethics is for human flourishing, not corporate profit
It is not surprising that there are commercial benefits to maintaining the highest standards of data and digital ethics. Through due diligence, user research, bias-spotting, and additional checks and balances, companies can protect themselves from rolling out ethically flawed products and avoid the costly losses that usually accompany ethical errors. Because of this, companies often frame ethics as a source of competitive advantage, speaking of it in terms of potential cost savings and revenue from socially conscious customers. This is undoubtedly a positive development and a valid incentive for businesses to adopt ethical practices. Nevertheless, perceiving ethics as a business asset feeds the narrative that ‘we will only do ethics as long as we don’t stifle innovation’ and deflects attention from what digital and data ethics is really about. Such logic perpetuates the profit-and-growth-first story of capitalist societies. Do doctors see medical ethics as an optional enhancement of their practice, or as its cornerstone? Ethics should not be merely another variable in the profit-and-loss calculation. Ethics needs to be seen as an absolute must.
Ultimately, the issue of ethical technology boils down to a more fundamental question: what is the purpose of technology and technological progress? Is it to enrich the world, to transcend the boundaries of what is humanly possible, to improve lives, to free up human time? Misuse of technology begins when monetary gain or surveillance, rather than these higher values, becomes the driving force of technological development. This is precisely why there is a pressing need for data and tech practitioners and consumers to understand that technology should be used in a socially responsible manner; that it should be built to do no harm to human beings or the environment; and that science and mathematics are not neutral when they are funded by organisations, companies, and governments with particular agendas and expectations.
When it comes to the role of the state in propagating digital ethics, it might be worth seeking clues in philosophy. The Hobbesian metaphor for the state is the figure of the Leviathan, a monstrous artificial construct formed of all the bodies of its citizens. Critiquing this idea, philosopher Hannah Arendt envisions the Leviathan state as an irresistible and overpowering machine that demands absolute obedience from its subjects, depriving them of political rights and participation [8]. On Arendt’s interpretation, the Hobbesian state in our current age of technological development would most likely act in the interests of machines, perhaps to the detriment of human citizens. However, philosopher David Runciman argues that this need not be the case. After all, Hobbes’ Leviathan state is made up of all the bodies of its citizens; it is meant to mimic human action [9]. Although it is a machine, it was built by humans to control other machines, to prevent mindless, robotic acts. The question, therefore, is whether in the age of machines the state can still serve our interests. Technological ethics may play a decisive role in keeping the Leviathan on humanity’s side.
Footnotes
[1] https://www.weforum.org/focus/fourth-industrial-revolution
[3] For instance, in the context of centralised and decentralised contact-tracing apps.
[4] https://points.datasociety.net/too-big-a-word-13e66e62a5bf
[5] https://points.datasociety.net/too-big-a-word-13e66e62a5bf
[6] For example, NYU’s Center for Data Science has an interdisciplinary course dedicated to Responsible Data Science developed and taught by Julia Stoyanovich.
[8] Arendt, Hannah (1958). The Human Condition; Degryse, Annelies (2008). ‘The Sovereign and the Social: Arendt’s Understanding of Hobbes’. Ethical Perspectives 15(2), pp. 245–260.
[9] https://www.talkingpoliticspodcast.com/history-of-ideas
Views and opinions in this essay are personal and do not represent those of any institutions or organisations with which the author may be associated in a professional capacity, unless explicitly stated.