Artificial intelligence (AI) software poses risks to society, including tracking and identifying individuals, ‘scoring’ people without their knowledge, and powering lethal autonomous weapons systems, an influential EU group has warned.
The risks were outlined by the European Commission’s high-level expert group on AI when it published its ethics guidelines for trustworthy AI this week.
At the same time it launched a pilot project to test the guidance in practice.
With the growing use of AI in legal practice, many of the issues raised will resonate with lawyers.
Academic lawyers sat on the group, including experts from the universities of Birmingham and Oxford.
Several years in the making, the guidelines are the final version of draft proposals published at the beginning of the year, which urged that AI be both human-centric and trustworthy.
The EU’s ambition is to boost spending on AI to €20bn (£17bn) annually over the next decade. The bloc is currently behind Asia and North America in private investment in AI.
In order for AI to be trustworthy and thereby gain public acceptance, the group recommended that it have three components: it should be lawful, complying with all applicable laws and regulations; it should be ethical; and it should be robust from both a technical and a social perspective, so that it does not cause unintentional harm.
Those developing and using AI should bear in mind that while the technology could bring benefits, it could also have a negative impact on “democracy, the rule of law and distributive justice, or on the human mind itself”.
The experts continued: “AI is a technology that is both transformative and disruptive, and its evolution over the last several years has been facilitated by the availability of enormous amounts of digital data, major technological advances in computational power and storage capacity, as well as significant scientific and engineering innovation in AI methods and tools.
“AI systems will continue to impact society and citizens in ways that we cannot yet imagine.”
Noteworthy risks included facial recognition and other forms of automatic identification, along with the use of involuntary biometric data – such as “lie detection [or] personality assessment through micro expressions” – all of which raised legal and ethical concerns.
They also highlighted “citizen scoring in violation of fundamental rights”. Any such system, the experts said, must be transparent and fair, with mechanisms for challenging and rectifying discriminatory scores.
“This is particularly important in situations where an asymmetry of power exists between the parties,” they added.
The group’s final example of risk brought about by AI was lethal autonomous weapons systems, such as “learning machines with cognitive skills to decide whom, when and where to fight without human intervention”.
They concluded: “It is important to build AI systems that are worthy of trust, since human beings will only be able to confidently and fully reap its benefits when the technology, including the processes and people behind the technology, are trustworthy.”