AI Tools in Policing Raise Questions of Justice and Efficiency

Police departments across the United States are increasingly adopting artificial intelligence (AI) tools in their operations, yet experts warn that these technologies may perpetuate existing injustices rather than enhance public safety. Incidents involving AI facial recognition have led to the wrongful arrests of numerous individuals, particularly people of color, raising concerns about the effectiveness and ethical implications of these systems.

Recent analyses reveal that AI systems sometimes misidentify individuals as suspects in crimes they did not commit. In some cases, people were miles away from the crime scenes in which they were allegedly implicated. Critics argue that this trend highlights a troubling reliance on technology that is inherently incapable of understanding human context. According to journalist and AI expert Graham Lovelace, “This technology can be highly unreliable, and it can cause harm.”

The systemic biases inherent in policing, particularly against marginalized communities, are reflected in how AI systems are trained. These technologies often learn from historical data that includes instances of over-policing, which suggests that AI may reinforce rather than mitigate existing biases. Lovelace elaborates that officers may automatically trust AI-generated leads due to a cultural conditioning that overvalues technological authority.

AI’s Role in Law Enforcement

The use of predictive policing tools like Geolitica and ShotSpotter illustrates the potential pitfalls of automated law enforcement. Geolitica, for example, has been criticized for branding neighborhoods as crime hotspots based on police activity rather than actual criminal behavior. This practice can lead to increased police presence in already vulnerable areas, further marginalizing residents.

A joint study by The Markup and Wired indicated that out of over 23,000 predictions made by Geolitica for the Plainfield, New Jersey police department in 2018, fewer than 100 aligned with actual crimes. Similarly, a 2023 audit from the New York City Comptroller’s Office showed that only 8 to 20 percent of alerts from ShotSpotter corresponded with real shootings. Critics argue that this inefficiency wastes valuable police resources and heightens tensions in communities.
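As a back-of-the-envelope check, the figures cited above imply strikingly low hit rates. A minimal sketch of the arithmetic, treating the article's “fewer than 100” as an upper bound of 100 and “over 23,000” as 23,000 (both rounded figures, used here only for illustration):

```python
# Rough hit rates implied by the figures reported above.
# "Fewer than 100" is treated as an upper bound of 100;
# "over 23,000" is treated as exactly 23,000.

geolitica_predictions = 23_000  # Geolitica predictions for Plainfield, NJ (2018)
geolitica_hits = 100            # upper bound on predictions matching actual crimes

hit_rate = geolitica_hits / geolitica_predictions
print(f"Geolitica hit rate: at most {hit_rate:.1%}")  # at most 0.4%

# ShotSpotter: 8 to 20 percent of alerts corresponded with real shootings,
# per the 2023 NYC Comptroller's Office audit cited above.
low, high = 0.08, 0.20
print(f"ShotSpotter alerts not matching shootings: {1 - high:.0%} to {1 - low:.0%}")
```

Even at the upper bound, fewer than one Geolitica prediction in two hundred matched an actual crime, which underlies the critics' resource-waste argument.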

Despite these findings, the New York City Police Department continues to invest heavily in ShotSpotter technology, spending approximately $54 million between 2015 and 2025. In January 2023, the NYPD extended its contract with ShotSpotter for an additional $22 million. Former Mayor Eric Adams defended the tool’s efficacy, insisting it is essential for ensuring public safety.

Concerns About Transparency and Accountability

The push for AI in policing raises broader questions about transparency and the potential misuse of data. Many police departments have been criticized for their lack of openness regarding the technology they deploy. In St. Paul, Minnesota, officials require that AI-generated work be verified for accuracy and bias, but the effectiveness of this oversight remains uncertain.

Critics like Tiffany Cabán, a New York City Council member and abolitionist, argue that the deployment of AI tools often serves as a facade for entrenched practices of surveillance and control in low-income communities of color. She states, “When police are sent over and over again into communities for no reason and on high alert… it’s just a recipe for disaster.”

As technology companies continue to profit from the growing demand for AI in law enforcement, the implications of its use become increasingly significant. Startups such as Flock Safety have emerged, touting AI tools they claim enhance policing capabilities. These claims are often met with skepticism, however, given the historical context of policing and the potential for AI to further entrench biases.

The need for robust oversight and accountability mechanisms is clear. Without proper governance, police departments may continue to rely on flawed technologies that can exacerbate existing injustices. This reliance on AI not only threatens individual rights but can also undermine public trust in law enforcement.

Ultimately, the integration of AI into policing raises troubling questions about the balance between technological advancement and the preservation of justice. As law enforcement agencies navigate this landscape, it is essential that they prioritize ethical considerations and transparency to ensure that the deployment of AI serves to protect rather than harm communities.