
Algorithm used in Catalan prisons has ‘substantial deficiencies,’ audit finds

A recent audit of a Spanish algorithm used by the criminal justice system found “substantial deficiencies” in the reliability of its assessments of how likely prisoners are to reoffend.

Eticas, an algorithmic auditing company, conducted what it says is the first adversarial audit of RisCanvi, an algorithm that calculates the risk of an inmate reoffending based on a series of weighted characteristics. 

The company conducted interviews with inmates, lawyers and psychologists and examined data on 3,600 prisoners released in 2015, the only public data set available, according to the team. 

Eticas' audit uncovered “substantial deficiencies in RisCanvi's reliability” and found that the system “fails to achieve AI's core goal: standardising outcomes and reducing discretion,” according to a press release accompanying the audit.

“RisCanvi is a system that is not known by those whom it impacts the most, inmates; that is not trusted by many of those who work with it, who are also not trained on its functioning and weights,” the audit said. 

RisCanvi is the latest algorithm used by the criminal justice system to come under criticism. Academic studies of these types of prediction systems in the UK and US have found they misclassified dangerous inmates or gave African Americans longer detention periods before trial.

‘Opaque' way of calculating risk

Antonio Andres Pueyo started working on RisCanvi in 2009 with his team at the University of Barcelona at the request of the Catalan government. 

In Spain, a judge typically receives a report about a prisoner who is applying for parole, containing information about where the inmate is housed and a history of their behaviour while in prison, according to a report in the newspaper El Pais. RisCanvi's conclusions about the inmate's reoffending risk are included in this report. 

Initially, the technology was supposed to target reoffending risk among certain prisoners, such as murderers and sex offenders, but it was expanded to cover several types of reoffence among the general prison population, according to Eticas' audit.

As of 2022, the audit said, RisCanvi directly affected the cases of some 7,713 people. 

Andres Pueyo said RisCanvi's algorithm assigns each inmate a score on 43 risk factors across five categories, including level of education, history of violence, and mental health or addiction issues. The risk factors were determined by previous studies of reoffending and Andres Pueyo's own study of inmates. 

Inmates are interviewed every six months; the data collected is entered into the system and reviewed by more than 100 validators. Each inmate is then assigned a risk level: red for high, yellow for medium and green for low. 
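As a rough illustration of how a weighted risk score of this kind could be computed and mapped to the three colour bands, here is a minimal sketch; the factor names, weights and thresholds below are invented for illustration and are not RisCanvi's actual values.

```python
# A minimal sketch of a weighted risk score mapped to colour bands.
# Factor names, weights and thresholds here are hypothetical, not RisCanvi's.

RISK_FACTOR_WEIGHTS = {
    "history_of_violence": 3.0,
    "substance_misuse": 2.0,
    "low_education_level": 1.0,
}

# Lower bounds of the yellow and red bands (illustrative values only).
YELLOW_THRESHOLD = 3.0
RED_THRESHOLD = 6.0


def risk_band(factor_scores: dict) -> str:
    """Sum the weighted factor scores and map the total to a colour band."""
    total = sum(weight * factor_scores.get(factor, 0)
                for factor, weight in RISK_FACTOR_WEIGHTS.items())
    if total >= RED_THRESHOLD:
        return "red"      # high risk
    if total >= YELLOW_THRESHOLD:
        return "yellow"   # medium risk
    return "green"        # low risk


# Example: factor scores (0-2) assigned by an evaluator during a six-monthly review.
print(risk_band({"history_of_violence": 2, "substance_misuse": 1}))  # -> red
```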

The audit says the exact calculations behind each risk score are “opaque,” and interviews with frontline staff show confusion over how risk is finally determined. 

“There's no measurement and no inside data on how people are managing (the algorithm's) suggestions,” said Gemma Galdon Clavell, the CEO of Eticas.

“What we've seen in other fields is the way that people incorporate AI systems really depends on their existing values,” she added.

Andres Pueyo argues that RisCanvi does what it is supposed to do by reducing the justice department's errors when releasing an inmate who could potentially reoffend. The technology also makes it easier for professionals across the justice system to work together, he continued. 

The algorithm has also changed substantially since 2015, the year Eticas used for its analysis, according to Andres Pueyo. 

In 2016, it started differentiating between types of reoffence, such as self-directed violence, rape, and violence against other inmates, which he believes leads to fewer inmates being falsely classified as likely reoffenders.

Mistakes in detection due to ‘political decisions'

Spanish audit firm Dribia gained access to Catalonia's ministry of justice data from 2015 to 2020 and published a separate audit earlier this year. 

Its analysis found no “serious discriminatory biases” in the RisCanvi system based on age, gender or nationality. 

Yet the algorithm did not detect that inmates under 30 who committed violence in prison were likely to reoffend, and it underestimated the risk that young people and women would commit self-inflicted violence. 

Andres Pueyo claimed that the system's scoring thresholds can also be adjusted by those in charge to release more, or fewer, people from prison.

“You can balance what number of any false negative or positive (reoffences) for what (RisCanvi) admits or permits,” he said.
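In other words, moving the cut-off that separates the risk bands trades missed reoffenders against inmates flagged unnecessarily. The sketch below shows that trade-off on entirely synthetic scores and outcomes, not RisCanvi data.

```python
# A synthetic illustration of the trade-off described above: moving the
# cut-off between risk bands changes how many actual reoffenders are missed
# (false negatives) versus how many non-reoffenders are flagged (false positives).
# Scores and outcomes are invented, not RisCanvi data.

inmates = [  # (risk_score, actually_reoffended)
    (2, False), (3, False), (4, True), (5, False),
    (6, True), (7, False), (8, True), (9, True),
]


def errors_at(cutoff: int):
    """Count false positives and false negatives at a given score cut-off."""
    false_positives = sum(1 for score, reoffended in inmates
                          if score >= cutoff and not reoffended)
    false_negatives = sum(1 for score, reoffended in inmates
                          if score < cutoff and reoffended)
    return false_positives, false_negatives


for cutoff in (4, 6, 8):
    fp, fn = errors_at(cutoff)
    print(f"cut-off {cutoff}: {fp} false positives, {fn} false negatives")
# A lower cut-off misses fewer reoffenders but flags more inmates unnecessarily;
# a higher cut-off does the opposite.
```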

The Dribia report recommends that the Catalonian government put in place a system based on modern artificial intelligence (AI), with new risk factors and benchmarks. 

If that's not possible, the report suggests “that, at the very least, the current models would have to be evaluated”. 

Euronews reached out to the Catalonian department of justice but didn't receive a reply by the time of publication. 

Other examples of algorithms in criminal justice cases

RisCanvi isn't the only algorithmic system used in Spain, Europe or North America that is facing criticism. 

Eticas' Galdon Clavell also conducted an audit of VioGen, software that assists Spanish police officers when recording an incident of gender-based violence. It generates a score based on 35 factors indicating how likely a perpetrator is to commit further violence and creates a “custom security plan” with the survivor. 

Just three weeks ago, VioGen was called into question when a New York Times investigation found that Lobna Hemid, a Spanish woman murdered by her husband in July, had been given a “low risk” assessment by its algorithm. 

The UK's Offender Assessment System (OASys), put in place in 2001, is controversial because officials have not been able to see the data used by the algorithm over the last two decades to decide which inmates pose a risk to society, according to a piece in The Conversation. 

In the US, the COMPAS system is used to inform decisions on bail and sentencing. A 2016 investigation by ProPublica found the software was more likely to wrongly flag black defendants as future reoffenders than white defendants. 
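As a rough indication of what that kind of disparity means in practice, the following sketch compares false positive rates between two hypothetical groups; the records are entirely synthetic, not COMPAS data.

```python
# A sketch of the kind of disparity ProPublica measured: comparing false
# positive rates (non-reoffenders labelled high risk) between two groups.
# The records below are synthetic, not COMPAS data.

records = [  # (group, labelled_high_risk, reoffended)
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]


def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in a group who were flagged as high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)


for group in ("A", "B"):
    print(f"group {group}: false positive rate {false_positive_rate(group):.2f}")
# If one group's rate is much higher, its non-reoffenders are being
# disproportionately labelled as likely to reoffend.
```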

To address these issues, the European Commission is funding a project called Fair Predictions of Gender-Sensitive Recidivism with the University of the Aegean in Greece to establish a “bias-free AI system” that fairly determines reoffending risk, according to the project's website.
