Crime and punishment in the age of big data

Crime data cannot erase the past, but can it foretell future crimes as predictive policing expects it to?

By The Editorial Board

  • Published 12.01.19, 9:04 AM
  • Updated 12.01.19, 9:04 AM
From Sherlock Holmes to police forces in parts of India, a library of information about crimes and criminals has always been central to policing. iStock

What does the administration of justice depend on? Believers of the Old Testament would say wisdom — the virtue that guided Solomon. Non-believers might rely on dispassionate objectivity. The administration of justice also requires the threat of punishment, the basis of the State’s coercive power to enforce laws. But before wisdom can be exercised to mete out punishment, the justice system needs information. From the Bow Street Runners, the first professional police force in London, and the fictional detective, Sherlock Holmes, to police forces in parts of India with their ‘KD’ or known depredator lists, a library of information about crimes and criminals has always been central to policing. The latest weapon in the arsenal of law enforcers is artificial intelligence. Predictive policing — based on an algorithm that has been fed years of crime data — is now set to be introduced in the West Midlands and may ultimately be rolled out across Britain. The model is already in use in many places in the United States of America, and the Maharashtra police, too, is reportedly developing similar technology.

The age of big data has changed the conventional relationship between crime and information. The police no longer have to put in hours of labour to gather the most basic information. People willingly disclose more about themselves than is needed on social media, and there are drones, street cameras and automatic licence-plate readers for the more reticent. All of this combined, though, is less menacing than predictive policing. A computer programme is loaded with years of ‘crime data’ which, more often than not, do not reflect actual crime. These include 911 calls and reports of incidents supposedly witnessed by policemen on patrol — records no more reliable than the ‘KD’ lists, since they can be informed by race, class and other biases. For instance, a 2012 study by the Drug Policy Alliance revealed that in Washington, black people were arrested 2.9 times more often than white people for possession of marijuana, and Latinos and Native Americans 1.6 times more often. Yet public health departments in the US have shown that white people smoke as much pot as any other community. The latter fact, though, will not be reflected in the algorithm. A system that analyses years of biased reports cannot but churn out prejudiced predictions. This is nothing but a glossy veneer on old-school racial profiling. Worse, it will now carry the stamp of neutral machine judgment.

Most police departments use this technology to increase patrolling in ‘hotspots’ where the algorithm predicts a crime will be committed. However, this is not as simple as it sounds. The imbalances in the data mean that the police are likely to be sent repeatedly to the same areas — studies show that in the West, the system prioritizes black neighbourhoods and areas with homeless populations — while other places remain under-supervised. Moreover, the entire model is based on crimes that are reported. What about those that go unreported every day? Communities that have had unpleasant run-ins with law enforcement are typically unwilling to report crimes, as are those keen on protecting their social standing. Will not predictive policing’s greed for data and its eternal memory further discourage them from speaking up? Technology that was designed to take the human race into the future now poses the risk of setting it back by decades.
