City of Melbourne trials Nokia’s AI tech to keep streets clean and safe

The City of Melbourne is trialing AI technology from Nokia to help increase the cleanliness and safety of the area’s streets.

The local government area is located in Victoria, Australia, and has an area of 37 square kilometers and a population of around 183,756. Illegal waste dumping in the city causes both hygiene and safety problems.

Using Nokia’s Scene Analytics AI technology, the city hopes to gain a deeper understanding of waste disposal...

Reintroduction of facial recognition legislation receives mixed responses

The reintroduction of the Facial Recognition and Biometric Technology Moratorium Act in the 117th Congress has received mixed responses.

An initial version of the legislation was introduced in 2020; it was reintroduced on June 15, 2021 by Sen. Edward Markey (D-Mass.).

“We do not have to forgo privacy and justice for safety,” said Senator Markey. “This legislation is about rooting out systemic racism and stopping invasive technologies from becoming irreversibly embedded...

Amazon will continue to ban police from using its facial recognition AI

Amazon will extend a ban it enacted last year on the use of its facial recognition technology for law enforcement purposes.

The web giant’s Rekognition service is one of the most powerful facial recognition tools available. Last year, Amazon signed a one-year moratorium that banned its use by police departments following a string of cases where facial recognition services – from various providers – were found to be inaccurate and/or misused by law enforcement.

Amazon has now...

ACLU joins over 50 groups in calling for Homeland Security to halt use of Clearview AI

The American Civil Liberties Union (ACLU) has joined over 50 other rights and advocacy groups in calling for the Department of Homeland Security (DHS) to halt the use of Clearview AI’s controversial facial recognition system.

In a letter (PDF) addressed to DHS Secretary Alejandro Mayorkas, the signatories wrote: “The undersigned organizations have serious concerns about the federal government’s use of facial recognition technology provided by private company Clearview AI. We...

EU regulation sets fines of €20M or up to 4% of turnover for AI misuse

A leaked draft of EU regulation on the use of AI sets hefty fines of up to €20 million or four percent of global turnover (whichever is greater).

The regulation (PDF) was first reported by Politico and is expected to be announced next week on April 21st.

In the draft, the legislation’s authors wrote:

“Some of the uses and applications of artificial intelligence may generate risks and cause harm to interests and rights that are protected by Union law....

Police use of Clearview AI’s facial recognition increased 26% after Capitol raid

Clearview AI reports that police use of the company’s highly-controversial facial recognition system jumped 26 percent following the raid on the Capitol.

The facial recognition system relies on scraping people’s data from across the web without their explicit consent, a practice which has naturally raised some eyebrows, including the ACLU’s, which called it a “nightmare scenario” for privacy.

Around three billion images are said to have been scraped for...

CDEI launches a ‘roadmap’ for tackling algorithmic bias

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven...

AI tool detects child abuse images with 99% accuracy

A new AI-powered tool claims to detect child abuse images with around 99 percent accuracy.

The tool, called Safer, was developed by non-profit Thorn to assist businesses which do not have in-house filtering systems to detect and remove such images.

According to the Internet Watch Foundation in the UK, reports of child abuse images surged 50 percent during the COVID-19 lockdown. In the 11 weeks starting on 23rd March, its hotline logged 44,809 reports of images compared...

Researchers create AI bot to protect the identities of BLM protesters

Researchers from Stanford have created an AI-powered bot to automatically cover up the faces of Black Lives Matter protesters in photos.

Everyone should have the right to protest and, if protesting legally, to do so without fear of having things like their future job prospects ruined because they’ve been snapped at a demonstration – one from which a select few may have gone on to commit criminal acts such as arson and looting.

With images from the protests being widely shared on...

UK and Australia launch joint probe into Clearview AI’s mass data scraping

The UK and Australia have launched a joint probe into the controversial “data scraping” practices of Clearview AI.

Clearview AI has repeatedly made headlines, and rarely for good reason. The company’s facial recognition technology is impressive but relies on scraping billions of people’s data from across the web.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

Regulators in the UK...