Catelijne Muller, EESC member (Workers' Group) and rapporteur for the EESC opinion on artificial intelligence

“The EESC is not in favour of an e-personality for the most autonomous robots”

Tech&droit - Artificial intelligence
16/06/2017
Four months after the European Parliament's report, the European Economic and Social Committee has just issued its own opinion on AI, which is quite different from the European Parliament's...
Actualités du droit : The European Economic and Social Committee has just issued an opinion on AI. Is this an own-initiative opinion or a referral?
Catelijne MULLER : This is an own-initiative opinion. These kinds of opinions are meant to be agenda-setting and deal with issues that, according to the EESC, need the attention of policy-makers, but also of EU and local social partners and other stakeholders.
 
AdD : In France, in Europe and elsewhere, reports on AI have multiplied over the last two years. Should we not coordinate and centralise discussions?
C. M. : It is encouraging that the topic is receiving broad attention throughout the world, but I fully agree. The impact of AI is cross-border in nature, and supra-national strategies and policies need to be developed. But someone must take the lead. That is why the report's first recommendation is that the EU take global pole position on this. The EU can drive a centralised, informed and balanced debate on AI, involving all relevant stakeholders: policy-makers, industry, social partners, consumers, NGOs, educational and care institutions, and experts and academics from various disciplines (AI, safety, ethics, economics, law, but also psychology and philosophy).
 
What benefits will the development of AI bring to society?
C. M. : AI could bring benefits in a large number of areas: consider applications in sustainable agriculture, safer transport, a safer financial system, better medicine and safer work. But these advantages can only be sustainably achieved if the challenges and risks that come with a disruptive technology such as AI are well addressed.
 
What are the specific areas that raise the most challenging societal issues?
C. M. : We identified the following societal impact domains: ethics, safety, privacy, transparency and accountability, work, education and skills, (in)equality and inclusiveness, law and regulations, governance and democracy, warfare, and superintelligence.
 
The report recommends that specialised data sets be opened up to allow AI to be tested in the public sphere. After open-source algorithms, should certain specific data also be made open data?
C. M. : The development of one of the fastest-growing areas of AI, machine learning, relies (at least for now) on vast amounts of data, from which it 'learns'. There is a general tendency to believe that data is by definition objective, but this is a misconception. Data can be wrong, messy or incomplete; it is easily manipulated and it can be (and often is) biased. If we want reliable, responsible and ethical AI, we need to provide it with high-quality data to learn from.
 
In your report, you stress several times that what matters is not so much the reflection on superintelligence as the debate on the impact of current AI applications. Could you clarify this point?
C. M. : The EESC does not consider the discussion on superintelligence less important; it only notes that it currently tends to overshadow the debate on the impact of AI applications that are already among us. The report does in fact also call for critical monitoring of AI developments from a broad perspective, in order to respond appropriately and in good time to major and disruptive developments ("game-changers"), one of which is a notable or significant leap in the development of AI that may be a precursor to achieving "general AI" (after which superintelligence will not be far off, according to experts).
 
The report calls for the development of European standards for the verification and validation of AI systems. Can you expand upon this?
C. M. : As with any product, AI systems can (and should) be validated and monitored against standards for safety, but also for transparency, comprehensibility, privacy and ethical values. The EESC calls for the development of these norms and standards, but also for a European AI certification or label attesting to the high quality of the system. Think of an "AI made in Europe" label. This could improve trust in AI and also give the EU an important competitive advantage.
 
What does your report recommend in terms of security, accountability and reliability of artificial intelligence? Is the EESC in favour of a “red” or reset button?
C. M. : The EESC calls for a human-in-command approach to AI, including the precondition that the development of AI be responsible, safe and useful, where machines remain machines and people retain control over them. It is up to us whether, how and when we want to use AI in our society. We should always keep that in mind. Technology does not have to overwhelm us, and just because something can be done does not mean that it has to be done. AI can potentially bring wonderful solutions to our biggest challenges, but only if we manage it well.
 
 
Does the EESC support robot and bot taxation?
C. M. : In an earlier opinion, the EESC identified the possibility of a digital dividend, while some experts call for shared ownership of AI by workers and employers. The EESC recognises the trend of technological change favouring capital, whereby innovation primarily benefits those who own it and could increase inequality. It therefore attaches importance to research into solutions for these effects; however, a fair balance should be struck between developing AI that benefits people and the potential hindering effects of those solutions.
 
 
Is the EESC in favour of an e-personality for the most autonomous robots?
C. M. : The EESC is not in favour of an e-personality for the most autonomous robots. This entails an unacceptable risk of moral hazard. Liability law has several functions, among them a preventive, behaviour-correcting one, which would be undermined by any form of legal personality for robots or AI.
 
Interview conducted by Gaëlle Marraud des Grottes
 
Source: Actualités du droit