I Made a Funny, Son, and You're Not Laughing
Reading this tweet by Maciej Ceglowski makes me want to set down a conjecture that I've been entertaining for the last couple of years (in part thanks to having read Maciej's and Kieran's previous work as well as talking lots to Marion Fourcade).
The conjecture (and it is no more than a plausible conjecture) is simple, but it straightforwardly contradicts the collective wisdom that is emerging in Washington DC, and other places too. This collective wisdom is that China is becoming a kind of all-efficient Technocratic Leviathan thanks to the combination of machine learning and authoritarianism. Authoritarianism has always been plagued with problems of gathering and collating information and of being sufficiently responsive to its citizens' needs to remain stable. Now, the story goes, a combination of massive data gathering and machine learning will solve the basic authoritarian dilemma. When every transaction that a citizen engages in is recorded by tiny automatons riding on the devices they carry in their hip pockets, when cameras on every corner collect data on who is going where and who is talking to whom, and use facial recognition technology to distinguish ethnicity and identify enemies of the state, a new and far more powerful form of authoritarianism will emerge. Authoritarianism, then, can emerge as a more efficient competitor that can beat democracy at its home game (some fear this; some welcome it).
The theory behind this is one of strength reinforcing strength – the strengths of ubiquitous data gathering and analysis reinforcing the strengths of authoritarian repression to create an unstoppable juggernaut of nearly perfectly efficient oppression. Yet there is another story to be told – of weakness reinforcing weakness. Authoritarian states were always particularly prone to the deficiencies identified in James Scott's Seeing Like a State – the desire to make citizens and their doings legible to the state, by standardizing and categorizing them, and reorganizing collective life in simplified ways, for example by remaking cities so that they were not organic structures that emerged from the doings of their citizens, but instead grand chessboards with ordered squares and boulevards, reducing all complexities to a square of planed wood. The grand state bureaucracies that were built to carry out these operations were responsible for multitudes of horrors, but also for the crumbling of the Stalinist state into a Brezhnevian desuetude, where everyone pretended to be carrying on as normal because everyone else was carrying on too. The deficiencies of state action, and its need to reduce the world to something simpler that it could comprehend and act upon, created a kind of feedback loop, in which imperfections of vision and action repeatedly reinforced each other.
So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One is that, while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (by singling out, for example, particular groups that are regarded as problematic for extra police attention, making them more liable to be arrested, and so on), the bias may feed upon itself.
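To make that loop a little more concrete, here is a deliberately crude toy simulation (a sketch in Python; every number, rate, and group name in it is invented purely for illustration). Two groups behave identically, but the historical arrest record starts out skewed, and attention is allocated according to that record; the statistics the state collects then keep reflecting where it chose to look rather than what people actually did.

```python
# Toy model of a self-reinforcing bias loop in data-driven policing.
# Purely illustrative: the rates, counts, and group names are invented.
import random

random.seed(0)

TRUE_RATE = {"group_a": 0.05, "group_b": 0.05}  # identical underlying behaviour
arrests = {"group_a": 60, "group_b": 40}        # historical record already skewed
TOTAL_PATROLS = 1_000

for _ in range(20):
    total = sum(arrests.values())
    # The "model": allocate attention in proportion to past arrest counts.
    patrols = {g: TOTAL_PATROLS * arrests[g] / total for g in arrests}
    # New arrests depend on where the patrols went as much as on behaviour.
    for g in arrests:
        arrests[g] += sum(random.random() < TRUE_RATE[g] for _ in range(int(patrols[g])))

share_b = arrests["group_b"] / sum(arrests.values())
print(f"group_b share of recorded arrests after 20 rounds: {share_b:.2f}")
# Although the two groups behave identically, nothing in the loop pulls the
# record back toward parity: the data the allocation relies on is itself a
# product of the allocation, so it keeps confirming the original skew.
```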
This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks, but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against them (although they will find it harder to mobilize against algorithms than against overt discrimination). When there are obvious inefficiencies, or social, political or economic problems that result from biases, there will be ways for people to point them out.
These corrective tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely even exist. Groups that are discriminated against will have no obvious recourse. Major mistakes may go uncorrected: they may be nearly invisible to a state whose data is polluted both by the means employed to observe and classify it, and by the policies implemented on the basis of this data. A plausible feedback loop would see bias leading to error leading to further bias, with no ready way to correct it. This, of course, is likely to be reinforced by the ordinary politics of authoritarianism, and by the typical reluctance to correct leaders, even when their policies are leading to disaster. The flawed ideology of the leader ("We must all study Comrade Xi Thought to discover the truth!") and of the algorithm ("machine learning is magic!") may reinforce each other in highly unfortunate ways.
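Extending the toy sketch above (again, only an illustration: the "correction" parameter is an invented stand-in for complaints, courts, journalists, elections, or any other countervailing channel), the contrast can be made visible. With the parameter at zero – an idealized authoritarian case – the skewed record simply reproduces itself; with even a modest positive value, the recorded shares drift back toward parity.

```python
# The same toy loop, with one added knob: "correction" blends the data-driven
# allocation back toward an even one, standing in for whatever countervailing
# feedback exists. Purely illustrative; all values are invented.
import random

def run(correction: float, rounds: int = 30, seed: int = 1) -> float:
    rng = random.Random(seed)
    true_rate = {"group_a": 0.05, "group_b": 0.05}
    arrests = {"group_a": 60, "group_b": 40}      # same skewed starting record
    total_patrols = 1_000
    for _ in range(rounds):
        total = sum(arrests.values())
        for g in arrests:
            data_driven = total_patrols * arrests[g] / total
            uniform = total_patrols / len(arrests)
            patrols = (1 - correction) * data_driven + correction * uniform
            arrests[g] += sum(rng.random() < true_rate[g] for _ in range(int(patrols)))
    return arrests["group_b"] / sum(arrests.values())

print("no corrective feedback  :", round(run(correction=0.0), 2))
print("some corrective feedback:", round(run(correction=0.5), 2))
# With no corrective channel the initial skew persists indefinitely; with even
# partial correction the recorded shares move back toward parity over time.
```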
In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency toward bad decision making and further reducing the possibility of the negative feedback that could help correct errors. This disaster would unfold in two ways. The first would involve enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uyghurs today. The second would involve more ordinary self-ramifying errors that may lead to widespread planning disasters. These will differ from those described in Scott's account of High Modernism in that they are not as immediately visible, but they may also be more pernicious, and more damaging to the political health and viability of the regime, for just that reason.
This conjecture, then, suggests that the conjunction of AI and authoritarianism (has someone coined the term 'aithoritarianism' yet? I'd really prefer not to take the blame) will have more or less the opposite of the effects that people expect. It will not be Singapore writ large, and perhaps more brutal. Instead, it will be both more radically monstrous and more radically unstable.
As with all monotheoretic accounts, you should treat this post with some skepticism – political reality is always more complex and muddier than any abstraction. There are surely other effects (another, particularly interesting one for big countries such as China, is to relax the assumption that the state is a monolith, and to think about the intersection between machine learning and warring bureaucratic factions, both within the center and between the center and the periphery). Yet I think it is plausible that this account at least maps one significant set of causal relationships, which may push (in combination with, or against, other structural forces) towards very different outcomes than the conventional wisdom imagines. Comments, elaborations, qualifications and disagreements welcome.
Source: https://crookedtimber.org/2019/11/25/seeing-like-a-finite-state-machine/