Police in Australia have used Clearview AI’s controversial facial recognition technology to tackle child exploitation.
The Australian Federal Police (AFP) admitted to using Clearview AI’s system despite not having a legislative framework in place for the technology.
Deputy commissioner Karl Kent said the AFP trialled the facial recognition system but has not entered into any formal arrangement with Clearview AI to procure its technology.
In a statement, opposition party Labor called for Home Affairs Minister Peter Dutton to explain whether the AFP’s investigations into child exploitation were jeopardised by the use of Clearview AI’s technology without legal authorisation:
“Peter Dutton must immediately explain what knowledge he had of Australian Federal Police officers using the Clearview AI facial recognition tool despite the absence of any legislative framework in relation to the use of identity-matching services.”
Clearview AI’s facial recognition was used specifically by the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) to support its investigations.
“The trial was to assess the capability of the Clearview AI system in the context of countering child exploitation,” wrote the AFP.
ACCCE’s testing took place between 2nd November 2019 and 22nd January 2020.
“Searches included images of known individuals, and unknown individuals related to current or past investigations relating to child exploitation,” the AFP said. “Outside of the ACCCE Operational Command there was no visibility that this trial had commenced.”
Clearview AI’s facial recognition has faced stiff opposition due to its controversial practices and its founder’s reported links to the far-right.
Hoan Ton-That, founder of Clearview AI, claims to have disassociated from far-right views, movements, and individuals. Ton-That told Huffington Post recently that growing up on the internet did not “serve him well” and “there was a period when I explored a range of ideas—not out of belief in any of them, but out of a desire to search for self and place in the world.”
Clearview AI’s facial recognition system uses a large database consisting of billions of scraped images from across the web. Activists believe the system infringes on people’s right to privacy as they never gave permission for their images to be stored and used in such a way.
“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said recently. “It’s kind of a bizarre argument to make because [your face is the] most public thing out there.”