A number of MEPs have been identified as “at risk” of criminal behavior after using a mock crime prediction tool developed by the non-governmental organization Fair Trials to highlight the discriminatory and unfair nature of predictive policing systems.
The online tool – which asks questions designed to draw on the kind of data police across Europe are using to “predict” whether someone will commit a crime – was launched on 31 January 2023, and can be used by MEPs and members of the public across the European Union (EU).
Both individuals and locations can be profiled to “predict” crime, which Fair Trials says is determined by a variety of data about education, family life and background, welfare benefits, involvement with government services such as housing, ethnicity, nationality, credit score, and whether anyone has contacted the police before, even as a victim.
Individuals profiled as “at risk” based on this information face a range of serious consequences, from being subject to routine stops and searches to having their children removed by social services. Profiles and predictions are also used to inform pre-trial detention, prosecution, sentencing and probation decisions.
As a result of the tool’s use, Fair Trials said more than 1,000 emails were subsequently sent to MEPs by members of the public calling on them to ban predictive policing systems in the EU’s upcoming Artificial Intelligence (AI) legislation.
“Our interactive predictive tool shows how unfair and discriminatory these systems are. It may seem unbelievable that law enforcement and criminal justice authorities are predicting crime based on people’s background, class, ethnicity and association, but that’s the reality of what’s happening in the EU,” said Griff Ferris, senior legal and policy officer at Fair Trials.
“There is overwhelming evidence that this predictive policing and criminal justice system leads to injustice, reinforces inequality and undermines our rights. The only way to protect people and their rights across Europe is to ban these criminal prediction and profiling systems, whether they are used against people or places.”
Socialists and Democrats (S&D) MEP Petar Vitanov, who was profiled by the mock tool as being at “medium risk” of future offending, said such an unreliable, biased and unfair system had no place in the EU.
“I never thought we’d live in a sci-fi dystopia where machines would ‘predict’ whether we’re going to commit a crime,” he said. “I grew up in a low-income neighborhood, in a poor Eastern European country, and the algorithm profiled me as a potential criminal.”
Renew Europe MEP and member of the legal affairs committee Karen Melchior, who was also identified as “at risk”, added that the automated judging of people’s behavior would lead to discrimination and could irreversibly change people’s lives.
“We cannot allow funds to be diverted from an independent judiciary, proper social work and well-funded policing to pay for haphazard technology,” she said. “Promised efficiencies are lost in cleaning up after bad decisions – when we catch them. At worst, we risk destroying the lives of innocent people. The use of predictive policing systems must be banned.”
Other MEPs identified as “at risk” by the tool, and who have subsequently expressed their support for banning predictive policing systems, include Cornelia Ernst, Tiemo Wölken, Birgit Sippel, Kim van Sparrentak, Tineke Strik and Monica Semedo.
Civil society groups such as Fair Trials, European Digital Rights (EDRi) and others have long argued that because the underlying data used in predictive policing systems are drawn from datasets that reflect society’s historical structural biases and inequalities, their use means racialized people and communities will be disproportionately targeted for surveillance, questioning, detention and, ultimately, imprisonment by the police.
In March 2020, evidence submitted to the United Nations (UN) by the UK’s Equality and Human Rights Commission (EHRC) said the use of predictive policing “can replicate and magnify patterns of discrimination in policing, while legitimizing biased processes”.
Lawmakers have come to the same conclusion. In the UK, for example, a House of Lords inquiry into police use of algorithmic technologies noted that predictive policing tools tend to create a “vicious circle” and “perpetuate pre-existing patterns of disparity” because they direct police patrols to low-income, already over-policed areas based on historical arrest data.
“Due to increased police presence, it is likely that a higher proportion of crimes committed in those areas will be detected than in areas where there is no additional police. The data will reflect this increased detection rate as an increased crime rate, which will feed into the tool and embed itself into the next set of predictions,” it said.
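The feedback loop the inquiry describes can be illustrated with a toy simulation (all figures below are hypothetical, invented purely for illustration – two areas with identical underlying offending, one of which starts out over-policed):

```python
# Toy model of the "vicious circle": patrols go to the area with the
# highest *recorded* crime, and patrolled areas have a higher detection
# rate, so the initial disparity in records is locked in year after year
# even though the true rates are identical. All numbers are hypothetical.

true_rate = {"A": 100, "B": 100}   # identical underlying offending
recorded = {"A": 60, "B": 40}      # area A starts out over-policed

for year in range(5):
    # "predictive" allocation: patrol wherever recorded crime is highest
    patrolled = max(recorded, key=recorded.get)
    for area in true_rate:
        detection = 0.8 if area == patrolled else 0.4
        recorded[area] = int(true_rate[area] * detection)

# recorded ends up {"A": 80, "B": 40}: a 2:1 gap in the data,
# despite identical true crime rates in both areas.
```

The recorded gap never closes because each year’s records determine the next year’s patrols, exactly the dynamic the Lords inquiry flagged.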
In April 2022, the two MEPs in charge of overseeing and amending the EU’s AI legislation said the use of AI-powered predictive policing tools to “assess individual risk” should be banned on the grounds that it “violates human dignity and the presumption of innocence”. However, the proposed ban extends only to individual assessments, and not to place-based predictive systems used to profile areas or locations.
Sarah Chander, a senior policy adviser at EDRi, told Computer Weekly at the time that profiling neighborhoods for crime risk has similar effects to profiling individuals, in that it “may increase the experience of discriminatory policing for racialized and poor communities”.
Civil society groups have called on several occasions for predictive policing systems to be banned altogether.
As amendments to the AI legislation continue to be negotiated, the limited restrictions on predictive policing systems have yet to be extended to place-based systems. MEPs’ next vote on the law is scheduled for late March 2023, with the exact date yet to be confirmed.