
Face-recognition tech and its consequences

In the hands of the State or market forces, it can become a potent tool of surveillance of citizens and consumers

Nirupam Hazra Published 23.07.20, 01:47 AM

Face-recognition technology in the age of artificial intelligence is a matter of as much concern as physiognomy. Shutterstock

Charles Darwin was once almost prevented from boarding a ship, the Beagle, by its captain for a strange reason. Looking at Darwin’s nose, the captain, Robert Fitzroy, wondered whether Darwin possessed the determination and energy required for such a voyage. Recounting the incident, Darwin wrote, “He [the captain] was an ardent disciple of Lavater, and was convinced that he could judge a man’s character by the outline of his features...”

Fitzroy was a believer in physiognomy, the 18th-century pseudoscience that claimed to determine character from appearance. Johann Kaspar Lavater, a Swiss writer, was the man who gave the art of face-reading the status of a science, thereby generating popular interest in it. Lavater made physiognomy so popular that its influence extended to the realms of art, aesthetics and literature in the 18th and 19th centuries.


However, the practice of judging someone on the basis of facial features had its origins in antiquity. The Greek mathematician, Pythagoras, is believed to have selected his students on the basis of their physical appearance. Aristotle spent considerable time and energy studying the relationship between physical characteristics and moral character. Even Leonardo da Vinci was not indifferent to the mysteries of the human visage. But it was Lavater who revived the art of decoding human character from the face, so much so that facial features became a method of determining one’s social standing and moral character. During a time of rapid social change and colonial expansion, physiognomy emerged as an instrument, albeit a flawed one, to identify criminals and to classify the colonized. Physiognomy thus perpetuated social and racial prejudices in the garb of ‘objective knowledge’. In this mission, it was accompanied by another pseudoscience, phrenology: the study of the human skull to determine mental faculties. The popularity of these disciplines stemmed from their ability to make man-made social hierarchies appear ‘natural’. They also provided a sense of security and certainty at a time of social upheaval and traced the origins of social deviance to particular communities, cultures or races.

These pseudoscientific endeavours also animated the scientific quest for perfection in human beings. Inspired by Darwin’s theory, Francis Galton embarked on his eugenics project to raise a “highly-gifted race of men”. What followed was the genocidal experiment of Nazi race ‘science’.

But physiognomy is not dead yet; it has, arguably, made a technologically sophisticated reappearance in the 21st century. Face-recognition technology in the age of artificial intelligence is a matter of as much concern as physiognomy. The technology draws on databases of human faces collated from various sources and analyses facial features to create an individual ‘facial signature’. This signature is then put to a wide range of uses: tracking offenders, identifying passengers at airports, authorizing payments at stores, unlocking smartphones and so on.
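A minimal sketch in Python conveys the idea, here using the open-source face_recognition library; the file names and the 0.6 distance threshold are illustrative assumptions, not the settings of any particular deployed system.

    import face_recognition

    # Compute a numerical 'facial signature' (a 128-dimensional encoding)
    # for a stored reference photo and for a freshly captured frame.
    # Assumes each image contains exactly one clearly visible face.
    reference = face_recognition.load_image_file("passport_photo.jpg")
    probe = face_recognition.load_image_file("camera_frame.jpg")
    reference_signature = face_recognition.face_encodings(reference)[0]
    probe_signature = face_recognition.face_encodings(probe)[0]

    # Matching is probabilistic: the two signatures count as the same
    # person only if their distance falls below a chosen threshold.
    distance = face_recognition.face_distance([reference_signature], probe_signature)[0]
    print("match" if distance < 0.6 else "no match")

The threshold makes the probabilistic nature of the exercise explicit: lower it and impostors are rejected more reliably but genuine matches are missed more often; raise it and the reverse happens.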

But these benefits come with their share of hazards. Face-recognition technology has its own limitations. It works on probability, not certainty. The accuracy of the technology has been a matter of concern, especially in low-light or hazy environments. This lack of accuracy may lead to misidentification, impersonation, false conviction and other unforeseen consequences. Such inaccuracy has led San Francisco and other jurisdictions in California to ban the use of the technology by the police and other government agencies. Face-recognition technology can also threaten rights and liberty. An Israeli start-up courted controversy by classifying facial data into categories like terrorists, paedophiles and white-collar criminals, even though the firm conceded that it had experienced a number of false positives in such categorization. Conversely, even a high degree of accuracy may have a serious impact on social diversity and social relationships. One study, for example, showed that a face-recognition algorithm had an 81 per cent success rate in determining the sexual orientation of its subjects. Unrestrained use of this technology may make us see the world through the prism of prejudices and preferences.

This leads us to the question of the ‘neutrality’ of technology. Technologies are considered ‘neutral’ because they are supposedly free of human intervention and subjective bias. But the neutrality of a technology depends on the nature of the data it is fed, and this leaves enough room for prejudice to creep in. Face-recognition technology displays racial biases: research shows that it often registers a higher rate of false positives for particular ethnicities, such as Native Americans and Asians.
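How such bias is measured can be shown with a small, hypothetical audit, sketched below in Python; the group labels and counts are invented purely for illustration.

    # Hypothetical audit: for each demographic group, count how many pairs
    # of photos of *different* people the system wrongly declared a match.
    audit = {
        "group_a": {"false_positives": 18, "impostor_pairs": 10_000},
        "group_b": {"false_positives": 3, "impostor_pairs": 10_000},
    }

    for group, counts in audit.items():
        rate = counts["false_positives"] / counts["impostor_pairs"]
        print(f"{group}: false-positive rate {rate:.2%}")

    # A system whose overall accuracy looks impressive can still be biased
    # if these per-group rates diverge sharply.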

Another area of concern is privacy. Unlike other forms of biometric data, facial data can be captured, analysed and stored from a distance, sans consent. Face-recognition technology is thus more intrusive and surreptitious than, say, fingerprinting. Cheap smartphones and ubiquitous CCTV cameras powered by the technology can track or stalk any target. The technology therefore opens a new frontier in the privacy debate.

The concern over privacy leads, in turn, to that of surveillance. In the hands of the State or market forces, face-recognition technology can become a potent tool for the surveillance of citizens and consumers. The Chinese government has already started using it to identify ‘trouble-makers’ and ‘law-breakers’ with the help of its vast database of citizens. Authoritarian regimes armed with such a powerful technology can easily conduct mass surveillance and target specific groups: religious and sexual minorities, refugees, immigrants and dissidents. China’s use of the technology to monitor Uighur Muslims is one example of such community surveillance. The technology poses a far greater challenge in a democracy. India experimented with it to identify rioters during the riots in Delhi even though the country is yet to develop comprehensive guidelines regarding its use or misuse.

These concerns notwithstanding, technological progress cannot, and arguably should not, be reversed. What can be done is to regulate the application of such technologies to ensure their judicious and beneficial use.
