An essay on the contradictions within AI codes of ethics, first published on OneZero.
Artificial Intelligence should treat all people fairly, empower everyone and engage people, perform reliably and safely, be understandable, be secure and respect privacy, and have algorithmic accountability. It should be aligned with existing human values, be explainable, be fair, and respect user data rights. It should be used for socially beneficial purposes, and always remain under meaningful human control. Got that? Good.
These are some of the high-level headings under which Microsoft, IBM and DeepMind respectively set out their ethical principles for the development and deployment of AI. They’re also, pretty much by definition, A Good Thing. Anything that insists upon technology’s weighty real-world repercussions—and its creators’ responsibilities towards these—is surely welcome in an age when automated systems are implicated in every facet of human existence.
And yet, when it comes to the ways in which AI codes of ethics are discussed, a troubling tendency is at work even as the world wakes up to the field’s significance. This is the belief that AI codes are recipes for automating ethics itself; and that, once a broad consensus around such codes has been achieved, the problem of steering computer code in an ethically positive direction will be well on its way to being solved.
***
What’s wrong with this view? To quote a September 2019 article in Nature Machine Intelligence, while there is “a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy)”, what precisely these principles mean is quite another matter. There remains “substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented.” Ethical codes, in other words, are much less like computer code than their creators might wish: not so much sets of instructions as aspirations, couched in terms that raise more questions than they answer.
This problem isn’t going to go away, largely because there’s no such thing as a single set of ethical principles that can be rationally justified in a way that every rational being will agree to. Depending upon your priorities, your ethical views will inevitably be incompatible with those of some other people in a manner no amount of reasoning will resolve. Believers in a strong central state will find little common ground with libertarians; advocates of radical redistribution will never agree with conservators of wealth; relativists won’t suddenly persuade religious fundamentalists that they’re being silly. Who, then, gets to say what an optimal balance between privacy and security looks like—or what’s meant by a socially beneficial purpose? And if we can’t agree on this among ourselves, how can we teach a machine to embody “human” values?
In their different ways, most AI ethical codes acknowledge this. DeepMind puts the problem up front, stating that “collaboration, diversity of thought, and meaningful public engagement are key if we are to develop and apply AI for maximum benefit,” and that “different groups of people hold different values, meaning it is difficult to agree on universal principles.” This is laudably frank, as far as it goes. But I would argue that something is missing from this approach that must be made explicit before the debate can move where it most needs to be—into a zone, not coincidentally, uncomfortable for many tech giants.
This is the fact that there is no such thing as an ethical AI, any more than there’s a single set of instructions spelling out how to be good—and that our current fascinated focus on the “inside” of automated processes only takes us further away from the contested human contexts within which values and consequences actually exist. As the author and technologist David Weinberger puts it in his 2019 book, Everyday Chaos, “insisting that AI systems be explicable sounds great, but it distracts us from the harder and far more important question: What exactly do we want from these systems?” When it comes to technology, responsibilities and intentions alike lie outside the system itself.
***
At best, then, an ethical code describes debates that must begin and end elsewhere, about what a society should value, defend and believe in. And the moment any code starts to be treated as a recipe for inherently ethical machines—as a solution to a known problem, rather than an attempt at diagnosis—it risks becoming at best a category error, and at worst a culpable act of distraction and evasion.
Indeed, one of the most obvious and urgent ethical failings around AI is a persistent over-claiming for, and mystification of, its capabilities—a form of magical thinking suggesting that the values and purposes of those creating new technologies shouldn’t be subject to scrutiny in familiar terms. The gig economy; the human cloud; the sharing economy: the world of big tech is awash with terms connoting a combination of novelty and inevitability that brooks no dissent. Substitute phrases like “insecure temporary employment”, “cheap outsourced labour” and “largely unregulated online rentals” for the above, and different possibilities for ethical engagement start to become clear.
Lest we forget, we already know what many of the world’s most powerful automated systems want, in the sense of the ends they are directed towards: the enhancement of shareholder value for companies like Google, Amazon and Facebook; the empowerment of technocratic totalitarian states such as China. And any meaningful discussion of these systems demands a clear-eyed attentiveness to the objectives they are pursuing and the lived consequences of these. The challenge, in other words, is primarily political and social, not technological.
***
As the author Evgeny Morozov argued in a Guardian column exploring fake news (another turn of phrase that conceals as much as it reveals), any discussion of technology that doesn’t explicitly engage with its political economy—with the economic, political and social circumstances of its manufacture and maintenance—is one denuded of the questions that matter most.
“What,” Morozov asks, “drives and shapes all that technology around us?” If we cannot open up such questions for democratic debate, then we risk turning “technology” into little more than a “euphemism for a class of uber-human technologists and scientists, who, in their spare time, are ostensibly saving the world, mostly by inventing new apps and products.”
Perhaps the most telling myth of our time is that of machine superintelligence, the promise of which simultaneously turns AI ethics into a grapple with existential threats and a design process aimed at banishing human unreason—at outsourcing society’s greatest questions to putative superhuman entities in the form of AIs (and, presumably, the experts who tend and optimise them).
Even the most benign version of this scenario feels nothing like a world I would wish to live in. Give me, rather, the capacity passionately to contest the applications and priorities of superhuman systems, and the masters they serve; and ethical codes that aim not just at a framework for interrogating an AI’s purposes, but at the circumstances and necessity of its very existence.