EU publishes new guidelines on ethical AI
As artificial intelligence technology becomes more advanced and, at the same time, sees wider day-to-day use across industries, the ethical implications of these systems have come under the spotlight.
Just a few days after Google was forced to put its controversial AI ethics board on the scrapheap in the face of internal pressure, the European Commission (EC) has published ethics guidelines, drawn up by an independent expert group, aimed at achieving “trustworthy” AI development.
“The ethical dimension of AI is not a luxury feature or an add-on,” said Andrus Ansip, EU vice president for the digital single market, commenting on the launch of the ethical AI framework, which was published in draft in December last year.
“It is only with trust that our society can fully benefit from technologies,” he added.
According to the EC, trustworthy AI has three components, which should be met throughout a system’s entire life cycle. First, it should be lawful, complying with all applicable laws and regulations; second, it should adhere to ethical principles and values; and third, it should be robust from both a technical and a social perspective, since even well-intentioned AI systems can cause unintentional harm.
In its seven-point plan, the EC adds that AI systems should “support human agency” rather than diminish it, and that citizens should have full control over their data, which must not be used to discriminate against them.
Meanwhile, the transparency and traceability of AI systems should be ensured, with diversity, non-discrimination and fairness vital to their accessibility. AI systems should promote positive social change and enhance sustainability and ecological responsibility, and mechanisms should be put in place to hold systems accountable for their outcomes.
The announcement comes ahead of a pilot phase later this year, in which companies and public bodies will be invited to provide feedback. The goal of the initiative is to build Europe’s reputation for cutting-edge, innovative and “trustworthy” AI projects.