Artificial intelligence: the opinion of the CNIL and its counterparts on the future European regulation
On 18 June 2021, the CNIL, its counterparts and the European Data Protection Supervisor adopted an opinion on the European Commission's proposal for a regulation on AI. This is an essential first step in building a coherent European digital strategy that respects fundamental rights and freedoms.
The CNIL and its European counterparts welcome the European Commission's proposal to establish harmonised rules on artificial intelligence in order to preserve individual freedoms. Although the proposal is likely to evolve considerably as the European Parliament and Council amend the text, the European protection authorities have deemed it essential to take a position through the publication of an opinion.
In particular, the CNIL notes four fundamental points:
The need to draw red lines for the future use of AI
Data protection authorities have welcomed the European Commission's willingness to clarify prohibited uses in order to build ethical and trusted AI in the EU. However, in their opinion of 18 June, the European Data Protection Board (EDPB) and the European Data Protection Supervisor indicate the need to broaden the scope of prohibited AI systems and to clarify their definition, as the safeguards envisaged in the proposed Regulation are not considered satisfactory.
Thus, in view of the extremely high risks posed by remote biometric identification of individuals in public spaces (recognition of faces, gait, fingerprints, voice, etc.), the European data protection authorities propose that the exceptions to the general prohibition be removed. The opinion also recommends a ban on biometric systems used to classify individuals into groups based on alleged ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights of the European Union. The use of AI systems to infer a person's emotions is also considered highly undesirable and should likewise be prohibited in principle (except in very specific cases, such as certain health purposes). Finally, systems used for social scoring should be systematically prohibited.
Although this opinion is advisory and does not in itself apply the legal framework currently in force, it should nevertheless be read alongside several public statements by the CNIL: its call for a democratic debate on new video uses in September 2018, its position on facial recognition in November 2019, and its call for vigilance on the use of so-called "smart" cameras and thermal cameras in June 2020, in the context of the COVID-19 epidemic.
The CNIL therefore considers that clarifying the framework, by specifying what is permitted and what is prohibited, benefits citizens as well as professionals. It would enable the latter to know whether the products they might offer are authorised, without differences in interpretation depending on the sector or the Member State.
The challenge of articulation with the GDPR
Data protection authorities have welcomed the risk-based approach adopted by the European Commission. This should allow the regulatory effort to be focused only on a limited volume of AI systems said to be "high risk" for fundamental rights.
The CNIL and its counterparts also noted that, in the overwhelming majority of cases, these systems would use personal data, raising a major issue of articulation between the regulation on artificial intelligence and the GDPR and the Law Enforcement Directive. The European data protection authorities thus indicated in their opinion that classifying an AI system as "high risk" does not mean that its use is authorised and that it can be deployed in all cases. Indeed, compliance with the legal obligations arising from EU legislation - including on personal data protection - must be a precondition for entry onto the European market as a CE-marked product.
The importance of harmonised governance
In their opinion, the CNIL and its counterparts ask that the governance of the "European Artificial Intelligence Board" (EAIB) be clarified, both to guarantee its independence and to strengthen its powers and thus enable it to exercise real control, particularly when implementing AI systems on a European scale.
Moreover, the CNIL and its counterparts consider that data protection authorities should be designated as the national supervisory authorities for artificial intelligence. In particular, the CNIL believes that such a designation would facilitate the proper implementation of the future AI Regulation and the creation of a European artificial intelligence ecosystem favourable to innovation:
- In application of the GDPR and the Law Enforcement Directive, the CNIL already regulates AI systems involving personal data, in order to guarantee the protection of fundamental rights and more particularly the right to data protection. In practice, the CNIL is therefore regularly called upon to exchange with AI solution providers and to evaluate such systems.
- Implementing a regulatory proposal as ambitious as the European Commission's requires a competent and experienced regulator.
- The regulatory approach chosen should provide a coherent framework and a clearly identified interlocutor for professionals, thereby minimising legal uncertainty and administrative complexity. This means avoiding a multiplication of supervisory authorities and divergent implementations in each Member State, and instead ensuring a consistent interpretation of the provisions on algorithms across the EU.
- The European Commission's proposal for a Regulation designates the European Data Protection Supervisor as the competent authority for AI systems implemented by the European institutions and agencies. This choice clearly illustrates the value of relying on an existing regulator to implement the Regulation - and in particular on data protection authorities, given the strong overlap with their prerogatives.
Essential support for innovation
Beyond the prohibited cases, the proposed regulation should support innovation and the design of AI systems in line with European values and principles. The opinion of the data protection authorities notes that it will be up to the regulator to combine protection requirements with an in-depth understanding of the technological challenges faced by solution providers, in order to strike a balanced regulatory approach.
The proposed Regulation provides for these support measures - and in particular regulatory sandboxes - to be implemented by the competent national authorities. In France, the CNIL already has the task of supporting professionals, and in particular the most innovative players, towards compliance - expertise that will prove necessary for the future AI regulator.
CNIL actions to support innovation
- The CNIL offers thematic workshops and webinars as part of its participation in French Tech Central and has contributed to the support of French Tech since 2014.
- It will continue to develop this activity in 2021 by strengthening its digital innovation laboratory (LINC).
- At the beginning of 2021, the CNIL launched its first "personal data sandbox" call for projects, enabling innovative projects - some of which are based on artificial intelligence - to benefit from enhanced support in producing a service or product that complies with the regulations in force and respects privacy.