Concerns Rise Over AI Racial Bias as Facial Recognition “Predicts Criminality”

June 25, 2020

The debate over racial bias in tech has been renewed after a US university claimed it can “predict criminality” through facial recognition.

Researchers at Harrisburg University claim that they can “predict if someone is a criminal based solely on a picture of their face” through software “intended to help law enforcement prevent crime”.

One member of the Harrisburg research team claimed that “Identifying the criminality of [a] person from their facial image will enable a significant advantage for law-enforcement agencies and other intelligence agencies to prevent crime from occurring.”

The university have said that this research would be included in a Springer Nature book; however, Springer have stated that it was “at no time” accepted, explaining that the research “went through a thorough peer review process. The series editor’s decision to reject the final paper was made on Tuesday 16 June and was officially communicated to the authors on Monday 22 June.”

Whilst the Harrisburg researchers claim their technology operates with “no racial bias”, the research has nevertheless drawn considerable backlash, with 1,700 academics signing an open letter demanding that it remain unpublished.

The Coalition for Critical Technology, which organised the open letter, have stated that “Such claims are based on unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years”, and that “all publishers must refrain from publishing similar studies in the future”.

The group have drawn attention to the distorted data that feeds this perception of what a criminal “looks like”, pointing to a number of studies that suggest harsher treatment of ethnic minorities throughout the criminal justice system.

Krittika D’Silva, a computer-science researcher at Cambridge University, commented: “It is irresponsible for anyone to think they can predict criminality based solely on a picture of a person’s face.”

“The implications of this are that crime ‘prediction’ software can do serious harm – and it is important that researchers and policymakers take these issues seriously.”

D’Silva also pointed to research revealing varied biases in machine learning: “Numerous studies have shown that machine-learning algorithms, in particular face-recognition software, have racial, gendered, and age biases.”

Harrisburg University have decided not to publish the paper on the facial-recognition software. The university stated that the news release outlining the research, titled “A Deep Neural Network Model to Predict Criminality Using Image Processing”, has been removed at the request of the faculty involved, and that the publication in which the research was due to appear has since decided against including it. The university state on their website:

“Academic freedom is a universally acknowledged principle that has contributed to many of the world’s most profound discoveries. This University supports the right and responsibility of university faculty to conduct research and engage in intellectual discourse, including those ideas that can be viewed from different ethical perspectives. All research conducted at the University does not necessarily reflect the views and goals of the University.”

Dani Katz
Founder Director

Dani’s actuarial experience and passion are key. He is a strong advocate of innovation, optimism and communication, both within the team and for clients. Dani’s ability and experience with data ensure that we always maximise value and efficiency on every project, enabling us to unlock hidden value for each client’s business.
