Like any technology, predictive policing has its pros and cons. On the previous page, we outlined some of the relevant social groups impacted by the technology and described how law enforcement generally views it in positive terms. Let’s expand on that now by taking a look at how predictive policing came about in the first place.
One way to do this is by using Thomas Hughes’s “Infrastructure and Innovation” framework, which describes how new technologies don’t emerge in a vacuum, but rather integrate into an existing infrastructure characterized by prominent styles of operation, organizations, and artifacts, as well as by reverse salients that must be worked around. For example, whenever a new version of the iPhone operating system comes out, its innovation is limited by its place in the larger infrastructure of mobile phone technology. It must integrate with the latest cellular and feature standards and take into account reverse salients like maintaining compatibility with older iPhones.
A Natural Improvement…
When we look at predictive policing in this way, we see that, while the technology is championed by law enforcement as being “the future”, it is really just a natural improvement over older policing methods. In the mid-1990s, the NYPD began performing statistical analyses of crime reports known as “Compstat” (The Guardian). In a way, PredPol is just a modern, data-driven version of Compstat: a natural evolution of analytical policing given new sources of data and advanced computational methods.
The question then becomes: how effective is it? PredPol’s website claims that in Santa Cruz, burglaries decreased by 11% and robberies dropped by 27% in just the first year of using the software. A report in the Journal of Engineering by Santa Clara University claims similar decreases in Los Angeles, adding that, in general, “PredPol successfully outperforms current best practices by 66 percent.” But do these numbers tell the whole story? What do they fail to capture?
… That Still Needs Us
Hughes’s framework also draws attention to the fact that the computer isn’t always right — at least not by itself. Police departments still have to rely on human intuition to mitigate false positives and disregard faulty data, and to keep the software itself up to date! These are all artifacts of the existing infrastructure that predictive policing must integrate with in order to be useful. The aforementioned Santa Clara report says as much when it describes how PredPol is the result of “…advanced mathematics developed over more than six years, computer learning, cloud computing, and the indispensable experience of veteran police.”
That human element — that “indispensable experience” — cannot be ignored. For example, if left unchecked, predictive policing could cause police officers to assume the presence of criminal activity based solely on the output of the software. As Michael Thomsen writes in an article for Forbes, this kind of presumption could be seen as “an extension of a pathology of law enforcement that assumes a criminal pretext by its mere presence in a community.” He also writes that if police officers start assuming that predictive policing technologies are 100% accurate and irrefutable, then it might distance “people from any direct power within the structures around them.” In other words, it could obfuscate responsibility within the organizational structures of police departments.
The legal framework surrounding predictive policing can also be seen as a reverse salient. For example, some systems use social media data as a source of information. But legislation about the ethical use of this data is still being created; many policing laws predate the kind of data mining that predictive technologies rely on. And finally, the technology also necessitates a re-evaluation of traditional methods used to obtain reasonable suspicion according to the Fourth Amendment. In an article in the Emory Law Journal, Andrew Ferguson writes that “in addition to finding the proper Fourth Amendment analogy, or articulating the reasonable suspicion factors, courts will have to focus on why certain environmental factors might contribute to future crime or why the absence of those environmental vulnerabilities could undermine the logic of the algorithm.” In other words, the “black box” mechanics of predictive algorithms will need to be opened up and examined in order to properly conform to reasonable suspicion requirements for investigation.
Inspecting the inner workings of these algorithms also allows us to see all kinds of social and political issues that have significant implications for the technology’s continued use. In the next section, we take a closer look at some of the political consequences of predictive policing.