Over 1,000 researchers sign letter opposing ‘crime predicting’ AI

More than 1,000 researchers, academics, and experts have signed an open letter opposing the use of AI to predict crime.

Anyone who has watched the sci-fi classic Minority Report will be concerned about attempts to predict crime before it happens. In an ideal scenario, crime prediction could help determine where to allocate police resources – but the reality will be very different.

The researchers are speaking out ahead of an imminent publication titled ‘A Deep Neural...

San Francisco hopes AI will prevent bias in prosecutions

San Francisco will soon implement AI in a bid to prevent bias when prosecuting potential criminals.

Even subconscious human biases can impact courtroom decisions. Racial bias in the legal system is particularly well-documented (PDF) and often leads to individuals with darker skin being prosecuted more often, or receiving tougher sentences, than people with lighter skin tones accused of similar crimes.

Speaking during a press briefing today, SF District Attorney George Gascón...

UK government investigates AI bias in decision-making

The UK government is launching an investigation to determine the levels of bias in algorithms that could affect people's lives.

A browse through our ‘ethics’ category here on AI News will highlight the serious problem of bias in today's algorithms. With AIs being increasingly used for decision-making, parts of society could be left behind.

Conducted by the Centre for Data Ethics and Innovation (CDEI), the investigation will focus on areas where AI has tremendous...

Joy Buolamwini: Fighting algorithmic bias needs to be ‘a priority’

Algorithmic Justice League founder Joy Buolamwini gave a speech during the World Economic Forum this week on the need to fight AI bias.

Buolamwini is also an MIT Media Lab researcher and went somewhat viral for her TED Talk in 2016 titled ‘How I’m fighting bias in algorithms’.

Her latest speech included a presentation in which Buolamwini walked through an analysis of today's popular facial recognition algorithms.

Here were the overall accuracy results when...

AI is at risk of bias due to serious gender gap problem

AI needs to be created by a diverse range of developers to prevent bias, but the World Economic Forum (WEF) has found a serious gender gap. Gender gaps in STEM careers have been a problem for some time, but it's rare that the gender of a product's developers shapes the end product itself. AI is about to be everywhere, and it matters that it's representative of those it serves. In a report published this week, the WEF wrote:

“The equal contribution of women and men in...

ACLU finds Amazon’s facial recognition AI is racially biased

A test of Amazon's facial recognition technology by the ACLU has found it more often erroneously labelled people with darker skin as criminals. Bias in AI technology, when used by law enforcement, has raised concerns about infringing on civil rights through automated racial profiling. A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western...