An independent AI ethics research centre is set to receive $7.5 million in funding from Facebook.
The new research centre is called the Institute for Ethics in Artificial Intelligence and was created in collaboration with the Technical University of Munich (TUM).
Facebook, like many companies, is facing outside concerns about the development of AI and its potential societal impact. The centre should help to ensure Facebook keeps up with ethical best practices.
Joaquin Quiñonero Candela, Director of Applied Machine Learning at Facebook, wrote in a blog post:
“At Facebook, ensuring the responsible and thoughtful use of AI is foundational to everything we do — from the data labels we use, to the individual algorithms we build, to the systems they are a part of.
We’re developing new tools like Fairness Flow, which can help generate metrics for evaluating whether there are unintended biases in certain models. We also work with groups like the Partnership for AI, of which Facebook is a founding member, and the AI4People initiative.
However, AI poses complex problems which industry alone cannot answer, and the independent academic contributions of the Institute will play a crucial role in furthering ethical research on these topics.”
The institute will conduct independent, evidence-based research to provide insight and guidance for society, industry, legislators, and decision-makers across the private and public sectors.
Furthermore, the institute aims to specifically address concerns surrounding AI such as safety, privacy, fairness, and transparency.
Dr. Christoph Lütge, TUM Professor and head of the institute, commented:
“At the TUM Institute for Ethics in Artificial Intelligence, we will explore the ethical issues of AI and develop ethical guidelines for the responsible use of the technology in society and the economy.
Our evidence-based research will address issues that lie at the interface of technology and human values. Core questions arise around trust, privacy, fairness or inclusion, for example, when people leave data traces on the internet or receive certain information by way of algorithms.
We will also deal with transparency and accountability, for example in medical treatment scenarios, or with rights and autonomy in human decision-making in situations of human-AI interaction.”
AI has the potential to do immense good in the world by improving healthcare, boosting productivity, and enhancing lifestyles. On the other hand, it could be devastating if used for military purposes, if it replaces entry-level jobs, or if technologies such as facial recognition continue to suffer from bias.
News of independent AI ethics centres opening is always welcome; let's just hope enough companies are willing to invest time in listening as well as money.