
AI will create many more Post Office scandals, says academic


Dr Dan McQuillan has warned of the dangers of Artificial Intelligence, claiming that it will create more scandals, like the Post Office Horizon IT saga currently dominating headlines.

Dr McQuillan, Lecturer in Creative and Social Computing, has issued a warning that Artificial Intelligence will lead to “more occasions where computing and bureaucracy combine to mangle the lives of ordinary people, but scaled in ways that make Horizon’s harms [the IT system in the Post Office scandal] look like small beer.”

Dr McQuillan shared his views in the form of an opinion piece published by , in which he says: “One thing the Horizon IT system and AI have in common is their fallibility; both are complex systems which generate unpredictable errors. However, while the bugs in Fujitsu’s bodged accounting system stem from shoddy software testing, AI’s problems are foundational. The very operations that give AI its ‘wow factor’, like recognising faces or answering questions, also make it prone to new kinds of failure modes, like out-of-distribution errors (think Tesla self-driving car crashes) and hallucinations.

“Moreover, thanks to the internal complexity of their millions of parameters, there’s no ironclad way to figure out why an AI system came up with a particular answer. AI doesn’t even need to get to court to create problems of legality; this inherent opacity is the antithesis of any kind of due process. Furthermore, language models like ChatGPT make unreliable witnesses because they are actually trained to produce untruths. Such systems aren’t optimised on facts but on producing plausible output (a very different thing). Even when they sound right, they are literally making things up. Woe betide the unwary citizen who turns to AI itself for legal advice; many have already been roasted by unsympathetic judges when it turns out they cited fabricated case law.”

“AI also amplifies the other dimension of the Post Office scandal - the sustained institutional cruelty towards the sub-postmasters. Like bureaucracy, AI algorithms are a way of organising large systems where abstractions create a wall between a system and those it is applied to, so that the latter are reduced to a collection of disparate labels and categories. Perhaps it’s not surprising, then, that the synergy of state institutions and algorithms has already shown a tendency to scale structural violence. In the Netherlands, an algorithm falsely accused tens of thousands of families of defrauding the child benefits system - ordered to repay the money, many were left with crippling debts and social exclusion. In Australia, the Robodebt algorithm labelled 400,000 people as guilty of welfare fraud. This also led to innumerable ruined lives as privatised debt collectors pursued people on the margins, many of whom already had disabilities or mental health issues. As with the Post Office, the Robodebt scheme was known internally to be flawed but was defended to the hilt for years via institutional, political and legal bullying.

“Many families targeted by the Dutch algorithm were from minority communities, and it seems the Post Office prosecutions also came with a hefty dose of racism. Their own internal investigation assigned archaic racial codes like ‘Chinese/Japanese types’, ‘Dark Skinned European Types’ and ‘Negroid Types’ to suspect sub-postmasters. A move to AI systems will reproduce this kind of racial discrimination at an industrial scale, as AI not only ingests the racism embedded in its training data but projects it through reductive classifications and exclusions. And yet, despite the well-documented problems with AI, politicians of all stripes are committed to its mass adoption.

“The unwavering belief that sci-fi tech can solve social challenges is captured by the Prime Minister’s claim to “harness the incredible potential of AI to transform our hospitals and schools”, somehow imagining that this will substitute for shortages of teachers and properly paid medical staff, or fix the literally collapsing ceilings in the buildings. The Labour Party, meanwhile, proposes AI as a way to tackle the rise in school absenteeism; another case of taking a complex issue involving vulnerable families and replacing the much-needed care with the calculative power of cloud-based computation.

“While some of this is the usual attempt to grab news headlines, there are deeper ideological commitments at play. AI is seen as the way to revive the economic system by intensifying trends that have been playing out since the 1970s; reducing job security by replacing workers with automation, and privatising the remaining public services. To promote AI is privatisation by the back door, as it inevitably means a transfer of control to tech corporations. However, this handover to AI will generate more miscarriages of justice as it proceeds to override the voices of those whose lives it affects. If we only listened to the very public statements by Silicon Valley figureheads like Altman, Andreessen and Thiel and their visions for the coming society, we would realise that they too, like the Post Office prosecutions, are “an affront to public conscience”. The Horizon IT scandal, despite its very real horrors, will come to seem quaintly English by comparison with the collateral damage caused by their transhumanist techno-fantasies.

“A little-known detail of the Post Office scandal is that, due to some astoundingly bad decisions made in the 1990s, English law presumes that computer evidence is reliable. This at least is fixable by a simple switch of perspective; computer evidence should not be trusted unless evidence can be produced as to its reliability. There is simply no comprehensive metric or test that can be put before a court to remove reasonable doubt that an AI is making things up. However, we can’t wait for AI-driven scandals to come to court before recognising this, because by then the harm will be done. AI can’t be trusted and should be kept out of any decision-making that might affect people’s lives, no matter how modest.

"The open question is how we should enact such protections. The common theme of Horizon and the other algorithmic injustices is the merging of bureaucracy and computation into a machinery that ran amok over the 鈥榣ittle people鈥. In all these cases the supposed checks and balances completely failed to hold the system to account. It seems unlikely that anaemic measures like the algorithmic audits proposed in the freshly-inked EU AI Act are going to have any real traction.

"The powerful lesson of the Robodebt and Post Office debacles is that the machine is only stopped by ordinary people coming together. In both cases, it was the self-organisation of the people affected and their allies which threw the necessary spanner in the works but only, of course, after immense effort and suffering. Surely it鈥檚 better to recognise early on how unjustified confidence in algorithmic authority becomes cover for what the Australian Royal Commission called 鈥渧enality, incompetence and cowardice鈥. Now, not later, is the time to challenge the ideology of 鈥楢I everywhere鈥 and push instead for people-centred solutions.鈥

Dr McQuillan’s research explores the resonances between forms of computational operation and their specific social consequences, with a particular focus on machine learning and AI.

In his book, Dan McQuillan challenges the social logic of AI, calling on us to resist AI as we know it and to restructure it by prioritising the common good over algorithmic optimisation.