Facebook launches AI ethics ‘institute’ in Munich
Facebook Inc. is providing an initial grant of US$7.5 million over five years to a new institute dedicated to the ethics of Artificial Intelligence (AI) at the Technical University of Munich (TUM) in Germany.
The new center will investigate issues around AI safety, fairness, privacy and transparency. It will help advance the growing field of ethical research on new technology and will explore fundamental issues affecting the use and impact of AI.
In a blog post, Joaquin Quiñonero Candela, the Director of Applied Machine Learning at Facebook, said, “as AI technology increasingly impacts people and society, the academics, industry stakeholders and developers driving these advances need to do so responsibly and ensure AI treats people fairly, protects their safety, respects their privacy, and works for them.”
Candela explained that AI is foundational to everything that the company does, affecting data labels, individual algorithms, and overall system architectures.
“We’re developing new tools like Fairness Flow, which can help generate metrics for evaluating whether there are unintended biases in certain models,” he added.
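Fairness Flow is an internal Facebook tool and its interface is not public, so as an illustration only, here is a minimal sketch of one common metric such tooling might compute: the demographic parity gap, the difference in positive-prediction rates between demographic groups. The function name and data are hypothetical.

```python
# Illustrative sketch only: Fairness Flow is an internal Facebook tool
# whose API is not public. This computes one widely used bias metric,
# the demographic parity gap: the spread in positive-prediction rates
# across groups.

def demographic_parity_gap(predictions, groups):
    """Return the max difference in positive-prediction rate across groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels, aligned with predictions
    """
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = {g: p / t for g, (t, p) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: group "a" receives positive predictions 75% of the time,
# group "b" only 25% of the time, giving a gap of 0.5.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, grps))  # 0.5
```

A gap near zero suggests the model treats the groups similarly on this metric; a large gap flags a potential unintended bias worth investigating, which is the kind of signal Candela describes such tools surfacing.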
The company works with groups like the Partnership for AI, of which Facebook is a founding member, and the AI4People initiative.
“However, AI poses complex problems which industry alone cannot answer, and the independent academic contributions of the Institute will play a crucial role in furthering ethical research on these topics,” said Candela.
TUM is one of the top-ranked universities worldwide in the field of AI, with work extending from fundamental research to applications in fields like robotics and to the study of the social implications of AI.
Created through the partnership, the ‘Institute for Ethics in Artificial Intelligence’ will leverage TUM’s outstanding academic expertise, resources and global network to pursue rigorous ethical research into the questions evolving technologies raise.
The Institute will also benefit from Germany’s position at the forefront of the conversation surrounding ethical frameworks for AI — including the creation of government-led ethical guidelines on autonomous driving — and its work with European institutions on these issues.
Through its work, the Institute will seek to contribute to the broader conversation surrounding ethics and AI, pursuing research that can help provide tangible frameworks, methodologies, and algorithmic approaches to advise AI developers and practitioners on ethical best practices to address real-world challenges.
The independent Institute will be led by TUM Professor Dr. Christoph Lütge, and it will identify specific research questions and convene researchers focused on AI ethics and governance-related issues.
“At the TUM Institute for Ethics in Artificial Intelligence, we will explore the ethical issues of AI and develop ethical guidelines for the responsible use of the technology in society and the economy,” Dr. Lütge said.
“Our evidence-based research will address issues that lie at the interface of technology and human values. Core questions arise around trust, privacy, fairness or inclusion, for example, when people leave data traces on the internet or receive certain information by way of algorithms,” he explained.
He added that the institute will also address transparency and accountability, for example in medical treatment scenarios, as well as rights and autonomy in human decision-making in situations of human-AI interaction.