Peter Griffin, Contributor. 22 April 2021, 10:24 am
In New Zealand, we have the Algorithm Charter, a voluntary agreement under which government agencies commit to a pragmatic set of principles governing how artificial intelligence is used.
In the European Union, politicians have just taken their plans for governance of AI much further, announcing proposals to ban “AI systems considered a clear threat to the safety, livelihoods and rights of people”.
Breaching the proposed rules by running AI systems deemed illegal would result in fines of up to six per cent of a company’s global turnover. That has potentially massive implications for companies developing and implementing artificial intelligence systems. But European Commission digital chief Margrethe Vestager said the proposals, which also call for tighter regulation around the use of biometrics, were “innovation-friendly”.
The ban would only apply when “the safety and fundamental rights of EU citizens are at stake”.
But the EU’s executive arm has floated a bill that would also seek to prevent the development of a social credit system such as that operated by China’s government and to limit the use of AI by law enforcement agencies.
AI systems or applications “that manipulate human behaviour to circumvent users’ free will” through the use of “subliminal techniques” would also be in the EU’s sights. The proposal has already been criticised by business groups for its vague wording, while human rights groups note the numerous exemptions that effectively water down the “ban”.
The proposal needs to be adopted by the European Parliament to go into effect, after which a “coordinated plan” would see the provisions apply across the European Union. That would make it the most sweeping regulation of AI the world has seen.
It is in line with the EU’s assertive regulation of the digital economy, which has seen it introduce the General Data Protection Regulation (GDPR) to bolster consumer data rights and pursue several Big Tech players over anti-competitive practices.
Many will welcome the transparency the new provisions would require, helping to overcome the “black box” nature of many AI systems. For instance, an AI program used to sort CVs in recruitment would need to prove its accuracy and fairness. Chatbots would need to declare to their human users that they are actually talking to a machine.
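What might “proving fairness” involve in practice? The proposal itself doesn’t prescribe a method, but one common starting point for auditors is to compare a model’s selection rates across demographic groups. Here is a minimal Python sketch of that kind of check; the column names, the toy data and the 80 per cent threshold are all assumptions for illustration, not anything the EU rules specify.

```python
# Hypothetical fairness check for a CV-sorting model: compare selection
# rates across a sensitive attribute (demographic parity). Column names,
# data and the 80% threshold are illustrative assumptions, not EU rules.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    """Share of applicants shortlisted, per group."""
    return df.groupby(group_col)[decision_col].mean()

def passes_four_fifths_rule(rates: pd.Series) -> bool:
    """True if the lowest group's selection rate is at least 80%
    of the highest group's (the 'four-fifths' rule of thumb)."""
    return rates.min() / rates.max() >= 0.8

# Toy data standing in for a recruiter's audit log.
applicants = pd.DataFrame({
    "gender":      ["f", "f", "f", "m", "m", "m", "m", "f"],
    "shortlisted": [1,    0,   1,   1,   1,   0,   1,   0],
})

rates = selection_rates(applicants, "gender", "shortlisted")
print(rates)                           # selection rate per group
print(passes_four_fifths_rule(rates))  # does it clear the 80% bar?
```

The four-fifths threshold used here is borrowed from longstanding US employment-selection guidelines; a real audit under the EU regime would likely combine several such metrics with documentation of training data and testing.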
The “high risk” AI applications that could be subject to “strict obligations” in the EU
AI systems identified as high-risk include AI technology used in:
Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
Safety components of products (e.g. AI application in robot-assisted surgery);
Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).