CDEI: Public believes tech isn’t being fully utilised to tackle pandemic, greater use depends on governance trust

Research from the UK government’s Centre for Data Ethics and Innovation (CDEI) has found that the public believes technology isn’t being fully utilised to tackle the pandemic, but that greater use requires trust in how it is governed.

CDEI advises the government on the responsible use of AI and data-driven technologies. Between June and December 2020, the advisory body polled over 12,000 people to gauge sentiment around how such technologies are being used.

Edwina Dunn, Deputy...

Researchers find systems to counter deepfakes can be deceived

Researchers have found that systems designed to counter the increasing prevalence of deepfakes can be deceived.

The researchers, from the University of California San Diego, first presented their findings at the WACV 2021 conference.

Shehzeen Hussain, a computer engineering PhD student at UC San Diego and co-author of the paper, said:

"Our work shows that attacks on deepfake detectors could be a real-world threat.

More alarmingly, we demonstrate that it's...

Google is leaking AI talent following ethicist’s controversial firing

Some high-profile AI experts have departed Google after the controversial firing of leading ethicist Timnit Gebru.

Gebru was fired from Google after criticising the company’s practices in an email. The dispute stemmed from a paper, which she was told not to publish, questioning whether language models can be too big and whether they can increase prejudice and inequalities. In her email, Gebru also expressed frustration at the lack of progress in hiring women at...

Facebook is developing a news-summarising AI called TL;DR

Facebook is developing an AI called TL;DR which summarises news into shorter snippets.

Anyone who’s spent much time on the web will know what TL;DR stands for, but for everyone else, it’s an acronym for “Too Long, Didn’t Read”.

It’s an understandable sentiment we’ve all felt at some point. People lead busy lives. Some outlets now even specialise in short, at-a-glance news.

The problem is, it’s hard to get the full picture of a story in just...

Google fires ethical AI researcher Timnit Gebru after critical email

A leading figure in ethical AI development has been fired by Google after criticising the company.

Timnit Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models.

Gebru claims she was fired by Google over an unpublished paper and for sending an email critical of the company’s practices.

The paper questions whether language models can be too big, who benefits from them, and whether they can increase...

Synthesized’s free tool aims to detect and remove algorithmic biases

Synthesized has launched a free tool which aims to quickly identify and remove dangerous biases in algorithms.

As humans, we all have biases. Often unconsciously, these biases end up in algorithms designed to be used across society.

In practice, this could mean anything from a news app serving more left-wing or right-wing content to facial recognition systems that flag some races and genders more than others.

A 2010 study (PDF) by...

Information Commissioner clears Cambridge Analytica of influencing Brexit

A three-year investigation by the UK Information Commissioner's Office has cleared Cambridge Analytica of electoral interference.

Cambridge Analytica was accused in March 2018 of using AI tools and big data to influence the results of the Brexit referendum and the US presidential election. Most objective observers probably felt the case was overblown, but it has taken until now for that view to be confirmed.

“From my review of the materials recovered by the investigation I have found...

Google returns to using human YouTube moderators after AI errors

Google is returning to using humans for YouTube moderation after repeated errors with its AI system.

Moderating a large network like YouTube is no easy task. Aside from the sheer volume of content uploaded every day, moderators are subjected to the worst of humanity and often end up requiring therapy. They're the unsung heroes.

AI has been hailed as a way to deal with some of the aforementioned issues, either by automating the moderation process entirely or by offering...

Microsoft: The UK must increase its AI skills, or risk falling behind

A report from Microsoft warns that the UK faces an AI skills gap which may harm its global competitiveness.

The research, titled AI Skills in the UK, shines a spotlight on some concerning issues.

For its UK report, Microsoft used data from a global AI skills study featuring more than 12,000 people in 20 countries to see how the UK compares with the rest of the world.

Most notably, compared to the rest of the world, the UK is seeing a higher failure...

University College London: Deepfakes are the ‘most serious’ AI crime threat

Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats.

The researchers first created a list of 20 ways AI is expected to be used by criminals within the next 15 years. Thirty-one experts were then asked to rank them by potential severity.

Deepfakes – AI-generated images, videos, and articles – topped the list as the most serious threat.

New and dangerous territory

It’s of little...