Author, An Artificial Revolution: on Power, Politics and AI
Last spring, as the pandemic prevented students in Britain from sitting their final exams, an algorithm was deployed to estimate what their final grades would have been and therefore should be. Unfortunately, the algorithm awarded higher grades to private school students and lower grades to state school pupils. This sorry outcome highlighted an inherent quality of data crunched by technology: the possibility, some argue the tendency, of such exercises scaling up existing racial and economic inequality.
Stories of algorithmic failures and biases proliferate: facial recognition systems unable to recognise people of colour, financial services granting less credit to women, and biased adverts serving lower-paying jobs to women. All of this is the outcome of the unfettered use of historic data, often combined with labels and criteria that introduce bias by proxy. This happens, for example, around definitions of what is ‘good’ or ‘successful’, as these are hardly neutral concepts.
These stories occupy our discussions – and yet the belief that a society driven by data and algorithms is somehow desirable continues to permeate. Alexander Campolo and Kate Crawford call this ‘enchanted determinism’: “a discourse that presents deep learning techniques as magical, outside the scope of present scientific knowledge, yet also deterministic, in that deep learning systems can nonetheless detect patterns that give unprecedented access to people’s identities, emotions and social character.” This explains newspaper headlines such as ‘Can algorithms prevent suicide?’ or ‘Can AI read emotions?’. Nobody has come close to such remarkable results, yet the dream that a machine can read emotions or stop people from ending their lives is stronger than ever. A better headline would be ‘Can we get real about AI?’
I am starting this way because this is where, in my opinion, privacy comes in. As I was growing up, studying politics and law, privacy was very much related to the idea of being left alone – of navigating the streets without being watched, let alone recognised. Part of this still holds true, especially when it comes to facial recognition. The problem with this technology is not only its bias (bias, at least, can largely be fixed) but, even more relevantly, the fact that when deployed in the real world, the technology is wrapped around the most vulnerable in our society. Its use will be racialised; its societal impact will be shabby and shoddy regardless of its technical quality.
The need and desire to be left alone does still relate to privacy but, increasingly, privacy has merged with other areas, two in particular: control and autonomy. Control, because privacy has become enmeshed with socio-economic and racial injustice. In a similar way to how women experience control over their reproductive rights and bodies, the use of technologies such as AI has turned ordinary expectations of privacy into a menace of control wrapped around segments of our society. Predictive technologies are a clear example of this: in determining vulnerabilities or the risk of defaulting on payments, these tools end up mapping out a path by creating a sequence between past, present and future.
The second feature of privacy today relates to the concept of autonomy. A lot of the machine learning tools we see in use might seem frivolous in comparison to what is viewed as much more serious, e.g. robotics, and yet they have an incredible effect on us: recommenders, search engines, behavioural algorithmic tools and ad targeting. They all shape our idea of the world by selecting the content we see; they erode democracies by serving personalised information to each of us, thus reducing the common ground we need to converse and share ideas; and they operate as gates to the world as we know it, thus mediating our relationship with it.
All of this reduces our autonomy and independent thinking. Although we are more familiar with the allocative harms of algorithms (whether someone receives a loan or bail, for example), the harm caused by algorithms as gates to the world is arguably even more disturbing. Representational harms are equally disturbing in my view: they have long-term effects, and they are hard to identify. Technologies such as voice assistants (often with female voices, and there to serve, never to challenge) can leave an enormous imprint on the younger generation and influence their identity.
Privacy has come to encompass control and autonomy – and it is time for us to reconcile ourselves with that, and rethink how we conceive of it. This means viewing privacy as an enormous public value, not an individual one, thus requiring an architecture for its safeguarding in the way we would normally treasure a common good. But it also means differentiating privacy from data protection, which concerns the protection of personal data and the technologies we can use to harness the value of information and share it around the world. This is crucial, especially at a time when data protection is getting conflated with digital protectionism. But privacy is something else, and the thinking around it is more important now than ever as machine learning technologies and automated decision systems occupy an increasingly large space in our society.
I hope 2021 will be the year we protect our autonomy and enact the rules and legislation we need to impose serious limits on an ecosystem that uses tech to scale up injustice rather than helping us resolve it. The pandemic has shown us how interconnected we are, from the level of our local community to across the globe. If we are able to view privacy as a duty to each other, we can build the structures and governance we need to fully harness our humanity, and technology with it.
Stef, 28 Dec 2020
I agree with every word. Beyond this, I feel strongly about enabling individuals to understand what really happens to them when data is not protected. Whenever I run a training session, whether for HR, IT or senior management, I explain what fresh fennel in the supermarket has to do with privacy, and what could happen if you replace the fennel with a bottle of gin. I teach them why supermarkets nowadays often know about a pregnancy in the family, or a partner’s affair, before you do. And why it is so valuable that privacy rules require sickness absences to be deleted from personnel files after two years – just imagine applying for another job and somebody still being able to find out that you had long absences. From that moment onwards it is no longer just another law to follow; they feel the need for the protection this law provides, and they start to act differently, in both their private and their business lives.