Machines and minority rule

A version of this essay was first published in The Guardian, March 2016

The trouble with using games like Chess and Go as measures of technological progress is that they are competitions. There’s a winner and there’s a loser. The stories we tell about them end in a scoreline – and this month’s biggest tech news story had a clear victor. Machines, four. Humanity, one. That was the final result of the match between Google’s AlphaGo and human champion Lee Sedol at the fiendishly complex game of Go, and it came complete with a disconcerting question: what’s next? Where will the machines claim their next victory: putting you out of a job; solving the mysteries of science; bettering human abilities in the bedroom?

This is what you might call the usurpation narrative of human-machine interactions. A creation is pitted against its creators, aspiring ultimately to supplant them. Science fiction is full of usurpations, sometimes entwined with a second strand of anxiety: seduction. Machines are either out to eliminate us – Skynet from Terminator 2, HAL in 2001 – or to hoodwink us into a state of surrender – the simulated world of The Matrix, the pampered couch potatoes of WALL-E. On occasion, they do both. These are just stories, but they’re powerful and revealing – not least of the fact that stories are easier to grasp than what’s actually going on.


According to a survey of over 2,000 people conducted by YouGov for the British Science Association to mark British Science Week, public attitudes towards AI vary greatly depending on its application. Fully 70 per cent of respondents are happy for intelligent machines to carry out jobs like crop-monitoring – but this falls to 49 per cent once you start asking about household tasks, and to a miserly 23 per cent when talking about medical operations in hospitals. The very lowest level of trust comes when you ask about sex work, with just 17 per cent of respondents trusting robots equipped with AI in this field – although this may be a proxy for not trusting human nature very much in this situation either.

The results map closely onto the degree of intimacy involved. Artificial intelligence is okay at a distance. Up close and personal, however, the lack of a human face counts for more and more. All of which makes intuitive sense – and leaves a pressing question unaddressed: what does it mean for a machine to carry out a task in the first place? Here, the image of a robot stepping into the shoes of a human worker couldn’t be more wrong. When it comes to technology’s most significant applications, we are neither usurped nor seduced – because the systems involved are nothing like us in either their function or faculties. As a species, we are not in competition with information technology at all: we are, rather, busily adapting the fabric of our world into something machines can comprehend.

Consider what it means to teach an autonomous robot to do something as simple as mowing grass. First, you take a long wire and lay it carefully around the borders of your lawn. Then you can set your miniature mower loose. It doesn’t know or care what a lawn is, or indeed what mowing means: it will simply criss-cross the area bounded by the wire until it has covered all the ground. You have successfully adapted an environment – your lawn – into something a machine understands.
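
To see quite how little understanding that involves, here is a minimal sketch in Python – a toy simulation of my own, assuming a grid-shaped lawn and a mower that simply bounces off the wire at random, not any manufacturer’s actual firmware:

```python
import random

# A toy "lawn": the mower knows nothing about grass, only which grid
# cells lie inside the boundary wire and which it has already visited.
WIDTH, HEIGHT = 10, 6
lawn = {(x, y) for x in range(WIDTH) for y in range(HEIGHT)}

def mow(lawn):
    """Criss-cross the bounded area until every cell has been covered."""
    headings = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    pos = next(iter(lawn))
    cut = {pos}
    heading = random.choice(headings)
    while cut < lawn:                  # some grass is still uncut
        ahead = (pos[0] + heading[0], pos[1] + heading[1])
        if ahead in lawn:              # still inside the wire: keep going
            pos = ahead
            cut.add(pos)
        else:                          # hit the wire: turn and carry on
            heading = random.choice(headings)
    return cut

print(f"covered {len(mow(lawn))} of {len(lawn)} cells")
```

Everything the machine “knows” is membership in a set of coordinates; the word lawn appears only for our benefit.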


I’ve borrowed the above example from the philosopher of technology Luciano Floridi, who in his book The Fourth Revolution explores the degree to which we have radically adapted most of the environments we work and live within so that machines are able to grasp them. We have, he notes, “been enveloping the world around [information technologies] for decades without fully realizing it” – wrapping everything we do in layers of data so dense that they can no longer be comprehended outside of machine memory, speed and pattern-recognizing power.

I say comprehended, but AlphaGo no more understands the game of Go than a robot mower understands the concept of a lawn. What it understands is zeroes and ones, and the patterns that can be drawn from their prodigiously fast crunching. We translate, the machine iterates and performs. Increasingly, machines translate for other machines, carrying on their data exchanges without our intervention. When the arena is something as pure as a board game, where the rules are entirely known and always exactly the same, the results are remarkable. When the arena is something as messy, unrepeatable and ill-defined as actuality, the business of adaptation and translation is a great deal more difficult.

Here, Floridi offers another useful analogy. Let us imagine, he suggests, two people in a relationship. One is extremely stubborn, inflexible, and unwilling to change. The other is precisely the opposite: adaptable, empathetic, flexible. It doesn’t take a genius to see how things will develop over time. When one person is willing to compromise and the other isn’t, more and more tasks end up being done the way that the uncompromising partner insists – because otherwise they wouldn’t get done at all. The flexible partner will eventually adapt their entire life around the inflexible partner’s demands.

When it comes to human-machine interactions, even the smartest AI is orders of magnitude more inflexible than the most intransigent human. We either do things the way the system understands, or we don’t get to do things at all. Hence one of the most useful phrases to enter popular culture in the last fifteen years: “Computer Says No.” It comes from a sketch in the comedy series Little Britain, and is likely to provoke groans of recognition from anyone ever flummoxed by a system that doesn’t recognize their wishes as an option. “Computer says no,” mumbles a morose employee in response to a perfectly reasonable request, assaulting her keyboard with a single digit. It doesn’t matter what a million people might want – if the option isn’t on the menu, it might as well not exist.
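
The underlying logic takes only a few lines to sketch – the options here are invented for illustration, but the shape is the same in any such system:

```python
# A minimal sketch of "Computer Says No": the system's menu of options
# is fixed in advance, and anything outside it simply cannot be asked for.
ALLOWED = {"book appointment", "cancel appointment", "update address"}

def handle(request: str) -> str:
    if request not in ALLOWED:       # a million people might want this –
        return "Computer says no."   # but it isn't an option, so it doesn't exist
    return f"Processing: {request}"

print(handle("cancel appointment"))      # Processing: cancel appointment
print(handle("speak to a human being"))  # Computer says no.
```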

In social science, this is sometimes known as minority rule. Just five per cent of a population can, for example, remove a particular choice from everyone else through inflexibility. If I’m cooking dinner for one hundred people and I know that five of them are lactose intolerant, I will cook something dairy-free that suits everyone; if there are a couple of vegans coming and I don’t have the capacity to make multiple dishes, I’ll rule out even more kinds of food.
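
The arithmetic is stark when you write it out. In this toy model – dishes and numbers assumed for illustration – each guest simply vetoes anything they cannot eat, and the shared menu is whatever survives every veto:

```python
# A toy model of minority rule: the menu for all one hundred guests
# is the intersection of what every individual guest will accept.
dishes = {"cheese gratin", "beef stew", "vegetable curry", "lentil soup"}

guests = (
    [{"cheese gratin"}] * 5                 # five lactose-intolerant guests
    + [{"cheese gratin", "beef stew"}] * 2  # two vegans
    + [set()] * 93                          # everyone else eats anything
)

menu = set(dishes)
for vetoes in guests:
    menu -= vetoes

print(menu)  # {'vegetable curry', 'lentil soup'}: seven guests chose for a hundred
```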


In an era where machines are implicated in more and more of our most intimate decisions, the minority whose rules apply are those designing machines in the first place. Even the smartest AI will relentlessly follow its code once set in motion – and this means that, if we are meaningfully to debate the adaptation of a human world into a machine-mediated one, this debate must take place at the design stage.

By the time it gets to “Computer Says No,” it’s too late. The technology is in place, its momentum gathering. We need to negotiate our assent and refusals earlier, collectively. And for this negotiation to work, we must ask what it means to translate not only productivity and profit but also other values into a system’s aims and permissions: justice, opportunity, freedom, compassion. “Humanity Says No” isn’t a phrase for our age, yet. But it may need to become one.