AI Facial Profiling, Levels of Paranoia
In this digital age, biometric surveillance systems that incorporate artificial intelligence are becoming more common. AI companies claim that facial recognition technology combined with machine learning can analyse the physical characteristics of a person’s face to infer personality traits that predict behaviour, and thus discern subtle patterns of “suspect” personality types. More and more, states are deploying complex intelligent control systems to facilitate and speed up border crossing, using risk-assessment tools that analyse individuals’ biometric data to prevent potential threats and support border guards in their decision-making.
However, the application of AI-driven systems in areas where decisions have a real impact on our lives should not proceed under the sole pretext of technical efficiency. Profiling systems driven by machine-learning techniques are not intrinsically neutral: they reflect the priorities, preferences and prejudices, in short the biased view, of those who shape artificial intelligence. The choice of which data is collected, and the processing rules applied to it, which remain opaque, can introduce bias into the algorithms’ results. These algorithms do not make objective predictions, because they have been written by human beings who have introduced, deliberately or not, their own prejudices. As a result, algorithmic biases threaten to exacerbate inequality and discrimination.
Fascinated by the mutations of technological innovation and algorithmic governmentality, this project exposes the moments of drift or perversion of this technology, its misuses and its infrastructures. It is inspired by a 2016 psychometric research paper by two scientists at Shanghai Jiao Tong University who claimed to have trained an AI to detect a person’s criminal potential based only on a photo of their face. The following year, researchers at Stanford University published another paper in which they claimed to have trained an AI to detect people’s sexual orientation based solely on a photo of their face.
Taking the world of firearms as a starting point, AI Facial Profiling, Levels of Paranoia explores the disturbing uses of automated, AI-controlled binary classification of human beings. It proposes a facial profiling system, a computer-vision and pattern-recognition system that detects an individual’s ability to handle firearms and predicts their potential dangerousness from a biometric analysis of their face.
The device is based on a camera-weapon that captures faces, together with a machine combining artificial intelligence and a mechanical system that classifies the profiled persons into two categories: those who present a high risk of being a threat and those who present a lower risk. A convolutional neural network is trained on two datasets. The first set contains 7,535 images and was created by automatically extracting faces from more than 9,000 YouTube videos in which people appear to be handling or shooting firearms; the second set of 7,540 face images comes from a random set of people posing for selfies. The machine also includes a mechanical system that generates intelligence cards bearing each profiled person’s photograph and then sorts them into the two typologies. These elements combine into a “physiognomic machine” whose mechanical principles are directly inspired by industrial systems, evoking assembly-line production and the labelling of the human being. The mechanisation symbolises the dehumanisation of algorithmic classification and refers to the legitimisation of delegating decision-making to these systems.
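The installation's actual model is not published; the following is only a minimal sketch of the kind of two-class convolutional classifier described above, assuming PyTorch. The class name, layer sizes and input resolution are illustrative assumptions, not the project's real code.

```python
import torch
import torch.nn as nn

class FaceRiskClassifier(nn.Module):
    """Hypothetical CNN mapping a face crop to two logits:
    'higher risk' vs 'lower risk' (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims to 1x1
        )
        self.classifier = nn.Linear(64, 2)  # two output classes

    def forward(self, x):
        x = self.features(x).flatten(1)  # (batch, 64)
        return self.classifier(x)        # (batch, 2) raw logits

model = FaceRiskClassifier()
batch = torch.randn(4, 3, 128, 128)          # four fake 128x128 RGB face crops
logits = model(batch)
probs = torch.softmax(logits, dim=1)         # per-class probabilities
```

Such a network would be trained with a standard cross-entropy loss on the two labelled sets; the point of the sketch is how thin the technical layer is beneath the binary verdict the machine delivers.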
Between fiction and reality, this art installation proposes a staging inspired by security infrastructures and takes the individual as the starting point for a critical reflection on algorithmic biases and the binary polarisation of human taxonomies. The narrative explores the ethical limits of these AI-driven artefacts, centred on their practical applications and impacts, and raises the question of the trust placed in algorithmic processes that make decisions with real consequences for human lives.
This project is the result of a collaboration with Laurent Weingart, software and security engineer, and Marc Wettstein, mechatronics engineer. It has received the kind support of the International Committee of the Red Cross and the Fonds Cantonal d'Art Contemporain de Genève.
Mirage Festival #7 — Turbulences
Credits: video courtesy of Matcha and thumbnail photo courtesy of Marion Bornaz.
CICR — Digital Risk in Situations of Armed Conflict / CodeNode, London 2018