
Deep learning and facial profiling

I am not offended, and I am not claiming to be, but here I cannot remain unmoved.

Deep learning is currently somewhat overhyped... like any novelty, this technology is expected to solve every current and future problem with 100% certainty.

When it comes to classifying a book's content from its cover alone (« Deep Neural Network Learns to Judge Books on Their Covers »), one can be amused, while remaining sceptical about the usefulness of such research and about the quality metric used during learning. After all, we all know you can't judge a book by its cover...

But when it comes to classifying individuals into categories as distinct as criminal or non-criminal, and this merely on the basis of a photo... we are no longer dreaming, we are in a nightmare (« Neural Network Learns to Identify Criminals by Their Faces »). Yet there are recent publications on exactly this topic: « Automated Inference on Criminality using Face Images » or « Machine-Vision Algorithm Learns to Judge People by Their Faces ». Fortunately, I am not the only one shocked by these approaches, and the very authors whose work is cited in these two publications have stepped up to recall the limitations of the approach. For example, Alexander Todorov of Princeton University points out that "it is not possible to determine someone's general character or behaviour from a photo of their face alone. It is very easy to tell that a person is short of sleep from the paleness of their skin or the dark circles under their eyes... and it is even advisable to use these signals to keep such people away from tasks that require alertness, such as handling dangerous machinery. But to use those same principles to predict what a person is like in general is totally wrong!" (« Concerns as face recognition tech used to 'identify' criminals »).

After all, this approach and the debate it generates closely resemble what happened at the beginning of the 19th century with phrenology. "In that era, when systematics reigned and phrenology was in fashion, Cesare Lombroso (1835-1909) sought to establish a statistical link between facial features and morals, especially when the latter were suspect. In L'Homme criminel (the born criminal, the moral madman, the epileptic) (1878, five successive editions until 1897), he invoked the 'primitive' forms supposed to characterise vagrancy and crime." (Wikipedia - Phrenology - 3 January 2017)

Something positive did come out of it, namely the study of the uniqueness of skull characteristics and the identification of bodies in forensic science; the rest was dismissed as irrelevant.

You might say that this irrelevance, or a statistical bias, still needs to be demonstrated. And here it seems my first intuition was correct: if you compare identity photos taken when criminals enter detention with public pictures posted on social networks, you expose yourself to a major bias. The artificial intelligence may well be doing nothing more than distinguishing the standardised way photos are taken on entry into jail from the way we stage our public identity on social media...

And even if one tries afterwards to compensate for this bias with other statistical techniques, the final result remains disputable.
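
To make the bias concrete, here is a minimal sketch in Python, under purely hypothetical assumptions: the labels are random with respect to the person, and two toy image statistics ("lighting" and "smiling") stand in for the photo-source confound. None of this reflects the actual data or pipeline of the papers discussed; it only illustrates how a classifier can score well by recognising the photo source rather than the face.

```python
# Hypothetical illustration of dataset bias: the classifier "succeeds"
# by exploiting a photo-source confound, not any trait of the person.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# "Criminality" label assigned at random: by construction, there is
# nothing about the person for a model to learn.
y = rng.integers(0, 2, size=n)

# Confound: label-1 photos are mugshots (flat lighting, neutral face),
# label-0 photos come from social networks (varied lighting, smiling).
# Two toy image statistics shift with the photo source, not the person.
lighting = rng.normal(loc=1.5 * y, scale=1.0)        # mugshots: flatter light
smiling = rng.normal(loc=1.5 * (1 - y), scale=1.0)   # social media: smiles
X = np.column_stack([lighting, smiling])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

# High accuracy despite the random label: the model has merely learned
# to tell mugshots apart from social-media photos.
print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")
```

Any accuracy well above 50% here measures only how well the two photo sources can be told apart; reporting it as "criminality prediction" would be exactly the mistake discussed above.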

It is worth noting that the same authors have not hesitated to pursue their work on an equally controversial topic, namely the automatic categorisation of images of women posted on the Internet according to their attractiveness (« There will be no sexist bias in that approach, of course! »).
