Can AI help eliminate police brutality?
Following the police shooting of Jacob Blake, America is once again thrust into the debate over how to address and prevent police brutality. Is artificial intelligence the answer? Possibly, but there may be a few caveats.
In his book, ‘The Reasonable Robot: Artificial Intelligence and the Law’ (Cambridge University Press, 2020), Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey School of Law and Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA, examines the ways AI can address some of the problems we face every day, as well as the issues such solutions can raise.
"AI is able to automate aspects of law enforcement in ways that may be more transparent, unbiased, and racially neutral than people," Abbott said. "AI may therefore be a key part of the solution to racism in the criminal justice system."
Abbott argues that human agency causes discrimination, which makes AI seem like an attractive alternative. But that alternative comes with its own concern: can algorithms discriminate too? In one study, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) program more often labelled black defendants who subsequently did not reoffend as high risk, while more often labelling white defendants who did reoffend as low risk. The program's developers argued in response that roughly equal proportions of white and black defendants at any given risk level went on to reoffend.
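Both claims can be true at once: when two groups have different underlying reoffense rates, a score can be equally "calibrated" for both while still producing very different false positive rates. The toy numbers below are purely illustrative (they are not the real COMPAS figures) and sketch how this tension works.

```python
# Illustrative toy numbers (not real COMPAS data): a risk label can have the
# same precision for two groups yet very different false positive rates when
# the groups' underlying reoffense rates differ.

def rates(high_risk_reoffend, high_risk_no, low_risk_no):
    """Return (precision of the high-risk label, false positive rate)."""
    ppv = high_risk_reoffend / (high_risk_reoffend + high_risk_no)
    fpr = high_risk_no / (high_risk_no + low_risk_no)
    return ppv, fpr

# Group A: 100 defendants, 50 of whom later reoffend (base rate 50%).
# 50 are labelled high risk: 30 reoffenders, 20 non-reoffenders.
ppv_a, fpr_a = rates(high_risk_reoffend=30, high_risk_no=20, low_risk_no=30)

# Group B: 100 defendants, 30 of whom later reoffend (base rate 30%).
# 20 are labelled high risk: 12 reoffenders, 8 non-reoffenders.
ppv_b, fpr_b = rates(high_risk_reoffend=12, high_risk_no=8, low_risk_no=62)

print(f"Group A: precision={ppv_a:.2f}, false positive rate={fpr_a:.2f}")
print(f"Group B: precision={ppv_b:.2f}, false positive rate={fpr_b:.2f}")
```

Here the high-risk label is 60% accurate for both groups, yet 40% of Group A's non-reoffenders are flagged high risk versus about 11% of Group B's, which is the shape of the dispute over COMPAS.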
"Biases are an inevitable part of both AI and human decision-making, but some are morally and legally unacceptable," Abbott explained. "Biased algorithms are not the result of a person deliberately engineering AI to be, say, racist, but they might arise if, for example, connectionist AI learns based on biased training data. If human judges have historically sentenced defendants in a discriminatory fashion, AI might do so in the future."
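The mechanism Abbott describes can be sketched in a few lines. The data below is hypothetical: a trivial "model" learns the majority historical outcome per neighborhood from records where judges were harsher toward one group, and it reproduces that pattern without any explicitly discriminatory rule.

```python
# A minimal sketch (hypothetical data) of how a model trained on biased
# historical decisions reproduces that bias: the bias lives in the labels,
# not in the code.

from collections import defaultdict

# Historical records: (neighborhood, judge's sentence). Suppose judges
# historically sentenced defendants from neighborhood "B" more harshly
# for identical conduct.
history = (
    [("A", "lenient")] * 80 + [("A", "harsh")] * 20
    + [("B", "lenient")] * 30 + [("B", "harsh")] * 70
)

# "Training": learn the majority historical outcome for each neighborhood.
counts = defaultdict(lambda: defaultdict(int))
for neighborhood, sentence in history:
    counts[neighborhood][sentence] += 1
model = {n: max(c, key=c.get) for n, c in counts.items()}

# The learned model now recommends harsher treatment for neighborhood B,
# faithfully echoing the historical pattern it was trained on.
print(model)  # {'A': 'lenient', 'B': 'harsh'}
```

Real systems are far more complex, but the failure mode is the same: a model optimized to match past decisions will inherit whatever discrimination those decisions contain.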