PANAME: a partnership for privacy auditing of AI models

26 June 2025

CNIL, ANSSI, PEReN and the IPoP project of PEPR Cybersecurity are launching PANAME, a project aimed at developing a tool for auditing the privacy of AI models.

Auditing models for compliance

The opinion adopted by the European Data Protection Board (EDPB) in December 2024 states that the GDPR applies, in many cases, to AI models trained on personal data, because of their ability to memorize such data (see the LINC article Small taxonomy of AI system attacks).

It also points out that, to consider an AI model trained on personal data as falling outside the scope of the GDPR, it is very often necessary to demonstrate, through an analysis, that the model is resistant to privacy attacks.
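To make the notion of a privacy attack more concrete, here is a minimal sketch of a loss-threshold membership inference attack, one of the simplest attacks studied in the literature: an overfitted model tends to assign a lower loss to the samples it was trained on ("members") than to unseen samples, and that gap can be exploited to guess membership. The dataset, model and threshold choices below are assumptions made purely for illustration; they are not taken from the EDPB opinion or from the PANAME project.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# Dataset, model and threshold are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic, noisy classification task: label noise encourages memorization.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
# "Members" are used to train the target model; "non-members" are held out.
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

# Deliberately overfitted target model, so that memorization is visible.
target = RandomForestClassifier(n_estimators=100, max_depth=None, random_state=0)
target.fit(X_mem, y_mem)

def per_sample_loss(model, X, y):
    """Cross-entropy loss of the model on each individual sample."""
    probs = np.clip(model.predict_proba(X), 1e-12, 1.0)
    return -np.log(probs[np.arange(len(y)), y])

loss_mem = per_sample_loss(target, X_mem, y_mem)
loss_non = per_sample_loss(target, X_non, y_non)

# Attack rule: guess "member" when the per-sample loss is below a threshold
# (here simply the median of all observed losses, again an arbitrary choice).
threshold = np.median(np.concatenate([loss_mem, loss_non]))
correct = (loss_mem < threshold).sum() + (loss_non >= threshold).sum()
accuracy = correct / (len(loss_mem) + len(loss_non))
print(f"Membership inference accuracy: {accuracy:.2%} (50% would mean no leakage)")
```

A model on which this kind of attack performs no better than random guessing (50% accuracy here) shows no sign of membership leakage, which is one element of the kind of resistance analysis mentioned above.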

The CNIL will soon be publishing recommendations to help AI actors carry out and document this analysis.

Resources that are hard to access and use

Over the past decade or so, researchers have devoted growing attention to privacy attacks. However, these attacks are generally implemented at an experimental level, for the purposes of scientific publication. Several obstacles to their adoption by industry for AI auditing have therefore been identified:

  • Scattered and abundant academic literature: finding one's way through the resources on the subject can require time and a high level of technical expertise, especially for smaller players;

  • Use cases not always suited to an industrial context: even when available as open source, these techniques require significant development work before they can be used in production;

  • Lack of standardization: no unified framework currently exists for formalizing how privacy tests are coded (a purely illustrative sketch of what such a framework might look like follows this list).
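The PANAME library has not yet been published, so the snippet below is only a hypothetical illustration of what a unified framework for coding privacy tests could look like; every class and function name is an assumption made for this article, not part of any announced API.

```python
# Hypothetical sketch of a unified privacy-test interface; none of these names
# come from PANAME, whose API has not been published. Illustration only.
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class PrivacyTestResult:
    """Outcome of one privacy test against a target model."""
    test_name: str
    attack_success_rate: float   # e.g. membership inference accuracy
    no_leakage_baseline: float   # success rate expected if nothing leaks

class PrivacyTest:
    """Common interface that every standardized test would implement."""
    name: str = "abstract-test"

    def run(self,
            predict_fn: Callable[[Sequence], Sequence],  # black-box access to the model
            members: Sequence,                           # samples used for training
            non_members: Sequence) -> PrivacyTestResult: # samples never seen in training
        raise NotImplementedError

def run_suite(tests: List[PrivacyTest], predict_fn, members, non_members) -> List[PrivacyTestResult]:
    """Run a list of standardized tests and collect their results for an audit report."""
    return [test.run(predict_fn, members, non_members) for test in tests]
```

Under such an interface, a loss-threshold attack like the one sketched earlier would simply be one PrivacyTest implementation among others, and an auditor could run a whole battery of standardized tests with a single call to run_suite.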

A consortium with complementary skills to create a new tool

To meet these compliance challenges and remove the obstacles identified above, the CNIL and its partners are launching the PANAME (Privacy Auditing of AI Models) project.

Over 18 months, PEReN, ANSSI, the IPoP project of the PEPR (Priority Research Programs and Equipment) Cybersecurity and the CNIL will work together to develop a software library, available in whole or in part as open source, designed to unify the way in which model privacy is tested.

Each partner will contribute according to its area of expertise: 

  • PEReN will be primarily responsible for developing the library;
  • ANSSI will contribute its cyber expertise, particularly in the context of attacks on IT systems;
  • The IPoP project will provide scientific leadership for the project;
  • CNIL will steer the project and provide the legal framework.

The aim of the tool is to enable players in the AI ecosystem to carry out, efficiently and at reasonable cost, certain technical privacy tests used to assess the GDPR compliance of an AI model. Test phases with government agencies and industry players are planned to ensure that the tool is developed in line with their contexts of use.