Google fires ethical AI researcher Timnit Gebru after critical email

A leading figure in ethical AI development has been fired by Google after criticising the company.

Timnit Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models.

Gebru says she was fired by Google over an unpublished paper and for sending an email critical of the company’s practices.

The paper questions whether language models can be too big, who benefits from them, and whether they can increase...

CDEI launches a ‘roadmap’ for tackling algorithmic bias

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven...

Synthesized’s free tool aims to detect and remove algorithmic biases

Synthesized has launched a free tool which aims to quickly identify and remove dangerous biases in algorithms.

As humans, we all have biases. These biases, often unconsciously, end up in algorithms which are designed to be used across society.

In practice, this could mean anything from a news app serving more left-wing or right-wing content, through to facial recognition systems that flag some races and genders more than others.
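Synthesized hasn’t detailed the internals of its tool here, but one common check that bias-detection tools of this kind perform is measuring whether a model’s positive predictions are distributed evenly across demographic groups. A minimal sketch (illustrative only; the function, data, and threshold are assumptions, not Synthesized’s actual method):

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups.

    A gap near 0 suggests the model treats groups similarly on this metric;
    a large gap is one signal of potential bias worth investigating.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred  # pred is 1 for a positive decision, else 0
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions (1 = approved) for two groups:
# group A is approved 75% of the time, group B only 25%.
preds = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

Real tools go much further (checking proxies for protected attributes, calibration, and error-rate balance), but a gap like the 0.5 above is the kind of red flag they surface.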

A 2010 study (PDF) by...

Information Commissioner clears Cambridge Analytica of influencing Brexit

A three-year investigation by the UK Information Commissioner's Office (ICO) has cleared Cambridge Analytica of electoral interference.

Cambridge Analytica was accused in March 2018 of using AI tools and big data to influence the results of the Brexit referendum and the US presidential election. Many observers felt the case was overblown, but it has taken until now for that view to be confirmed.

“From my review of the materials recovered by the investigation I have found...

Google returns to using human YouTube moderators after AI errors

Google is returning to using humans for YouTube moderation after repeated errors with its AI system.

Moderating a large network like YouTube is no easy task. Aside from the sheer volume of content uploaded every day, moderators are subjected to the worst of humanity and often end up requiring therapy. They're the unsung heroes.

AI has been hailed as a way to tackle some of these issues, either by automating the moderation process entirely or by offering...

How can AI-powered humanitarian engineering tackle the biggest threats facing our planet?

Humanitarian engineering programs bring together engineers, policy makers, non-profit organisations, and local communities to leverage technology for the greater good of humanity.

The intersection of technology, community, and sustainability offers a plethora of opportunities to innovate. We still live in an era where millions of people live in extreme poverty, lacking access to clean water, basic sanitation, electricity, internet, quality education, and...

University College London: Deepfakes are the ‘most serious’ AI crime threat

Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats.

The researchers first created a list of 20 expected ways AI will be used by criminals within the next 15 years. Thirty-one experts were then asked to rank them by potential severity.

Deepfakes – AI-generated images, videos, and articles – ranked top of the list as the most serious threat.

New and dangerous territory

It’s of little...

Google’s Model Card Toolkit aims to bring transparency to AI

Google has released a toolkit which it hopes will bring some transparency to AI models.

People are wary of big tech companies like Google. People are also concerned about AI. Combine the two and you've got a general distrust which can hinder important advancements.

Model Card Toolkit aims to step in and facilitate AI model transparency reporting for developers, regulators, and downstream...
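The idea behind a model card is a short, structured report that travels with a model: what it's for, where it falls short, and how it performs. The sketch below is not Google's actual toolkit API; it is a simplified illustration of the concept, with field names and values invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    # Illustrative fields only; Google's toolkit defines a much richer schema
    # covering provenance, ethical considerations, evaluation data, and more.
    name: str
    intended_use: str
    limitations: list = field(default_factory=list)
    metrics: dict = field(default_factory=dict)

    def to_report(self) -> str:
        """Render the card as a plain-text transparency report."""
        lines = [f"Model: {self.name}", f"Intended use: {self.intended_use}"]
        lines += [f"Limitation: {item}" for item in self.limitations]
        lines += [f"Metric {k}: {v}" for k, v in self.metrics.items()]
        return "\n".join(lines)

# Hypothetical card for an imaginary classifier.
card = ModelCard(
    name="toxicity-classifier-v1",
    intended_use="Flagging abusive comments for human review",
    limitations=["Trained on English-language text only"],
    metrics={"accuracy": 0.91},
)
print(card.to_report())
```

The point is that the documentation is machine-readable and standardised, so regulators and downstream developers can compare models without digging through ad-hoc READMEs.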

Musk predicts AI will be superior to humans within five years

Elon Musk has made another of his trademark predictions – this time, it’s that AI will be superior to humans within five years.

Musk has been among the most vocal prominent figures in warning about the dangers of artificial intelligence. In 2018, for example, Musk famously warned that AI could become “an immortal dictator from which we would never escape” and that the technology is more dangerous than nuclear weapons.

Speaking in a New York Times interview, Musk...

UK and Australia launch joint probe into Clearview AI’s mass data scraping

The UK and Australia have launched a joint probe into the controversial “data scraping” practices of Clearview AI.

Clearview AI has repeatedly made headlines, and rarely for good reason. The company’s facial recognition technology is impressive but relies on scraping billions of people’s data from across the web.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

Regulators in the UK...