Securing the processing

12 September 2022

Analysing risks and preventing flaws and attacks.

Identifying and protecting against attack patterns

A growing body of research shows that AI systems can be attacked or diverted from their purpose. These emerging attack patterns must be known to both the provider and the user of the AI system. Given the novel nature of these attacks, precautions should be taken wherever possible.

 

Has a risk analysis been carried out?

Does it take into account the specific attack patterns of AI algorithms?

Are the attack patterns for the AI methods used known to the provider and the user? Has the scientific literature been reviewed, and is there ongoing monitoring of new publications?

Have measures been taken to protect against data poisoning, adversarial examples, model evasion or membership inference attacks?

Which measures?
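
For instance, one widely used precaution against adversarial examples is to measure the gap between a model's accuracy on clean inputs and on deliberately perturbed inputs. The sketch below is a minimal illustration only, assuming a PyTorch classifier and inputs scaled to [0, 1]; the `model`, `x_eval` and `y_eval` names are placeholders, and a real deployment would rely on a broader set of defences (input validation, robust training, poisoning checks on training data, etc.).

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft adversarial examples with the fast gradient sign method (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the
    # valid input range (assumed here to be [0, 1]).
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def robustness_gap(model, x, y, epsilon=0.03):
    """Compare accuracy on clean inputs vs. FGSM-perturbed inputs."""
    model.eval()
    clean_acc = (model(x).argmax(dim=1) == y).float().mean().item()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    adv_acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
    return clean_acc, adv_acc

# Example usage (model, x_eval and y_eval are assumed to exist):
# clean, adv = robustness_gap(model, x_eval, y_eval)
# print(f"clean accuracy: {clean:.2%}, adversarial accuracy: {adv:.2%}")
```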

Logging to improve supervision

A log maintained throughout the processing chain should enable users of the AI system to identify and explain abnormal behaviour.

 

Is an action logging system in place? Does it cover software or hardware changes to the AI system, requests to the AI system, and system input and output data?

Is there automatic analysis of the logs?

Would it allow the identification of attempted attacks such as membership inference or model poisoning (especially in the case of continuous learning)?

Are there other measures in place to control the quality of the system's output downstream, and can they protect against attacks?
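
As a minimal sketch of what such logging could look like (framework-agnostic, wrapping a generic `predict` callable, with illustrative thresholds), the wrapper below records each request, a hash of its input and its output, and flags callers that issue unusually many requests in a short window, a pattern sometimes associated with membership-inference or model-extraction probing.

```python
import hashlib
import json
import logging
import time
from collections import defaultdict, deque

logging.basicConfig(filename="ai_requests.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

class LoggedModel:
    """Wraps any callable model so every request and response is logged."""

    def __init__(self, predict, rate_limit=100, window_seconds=60):
        self._predict = predict
        self._rate_limit = rate_limit
        self._window = window_seconds
        self._recent = defaultdict(deque)  # caller id -> recent request timestamps

    def __call__(self, caller_id, features):
        now = time.time()
        # Basic automatic analysis: flag callers issuing unusually many
        # requests, which can indicate probing of the model.
        timestamps = self._recent[caller_id]
        timestamps.append(now)
        while timestamps and now - timestamps[0] > self._window:
            timestamps.popleft()
        suspicious = len(timestamps) > self._rate_limit

        output = self._predict(features)
        logging.info(json.dumps({
            "caller": caller_id,
            "input_sha256": hashlib.sha256(repr(features).encode()).hexdigest(),
            "output": repr(output),
            "suspicious_rate": suspicious,
        }))
        return output
```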

Controlling access

Whether the AI system is embedded in a physical device or deployed purely as software, access that could allow the system to be modified must be limited and supervised.

 

Is there a specific protocol for changes to the AI system?

What is the protocol?

Are different levels of clearance planned in order to control changes to the system and limit access?
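
As a rough illustration of tiered clearance (the levels and operations below are hypothetical), change operations can be guarded so that only users holding a sufficient clearance level are able to modify the deployed model:

```python
from enum import IntEnum
from functools import wraps

class Clearance(IntEnum):
    """Hypothetical clearance levels for operations on the AI system."""
    READ_ONLY = 1
    OPERATOR = 2
    MODEL_ADMIN = 3

def requires(level):
    """Reject any change request issued by a user below the required level."""
    def decorator(func):
        @wraps(func)
        def wrapper(user_clearance, *args, **kwargs):
            if user_clearance < level:
                raise PermissionError(f"{func.__name__} requires {level.name}")
            return func(user_clearance, *args, **kwargs)
        return wrapper
    return decorator

@requires(Clearance.MODEL_ADMIN)
def deploy_new_model(user_clearance, model_path):
    print(f"deploying {model_path}")  # placeholder for the real deployment step

# Example: an operator cannot deploy a new model version.
# deploy_new_model(Clearance.OPERATOR, "model_v2.bin")  -> PermissionError
```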

Are the code changes versioned?

Has a continuous integration and continuous delivery process, usually referred to as a CI/CD pipeline, been implemented?

Are tests used to check that changes made do not lead to a regression in system performance or security?

Are they run automatically, and is passing them mandatory before changes are deployed?

Are they comprehensive?

Do they allow a quick rollback to the last working version?
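
For example, a non-regression check can be written as an automated test that makes the CI/CD pipeline fail whenever a candidate model scores below the currently released version. The sketch below uses synthetic data and a hypothetical reference_score.json file as stand-ins; the metric, tolerance and loading code would need to be adapted to the actual processing.

```python
import json
import pathlib
import pytest
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

REFERENCE_FILE = pathlib.Path("reference_score.json")  # score of the last released version

def train_candidate_model(X_train, y_train):
    """Stand-in for the project's real training or model-loading code."""
    return LogisticRegression(max_iter=1000).fit(X_train, y_train)

def test_no_performance_regression():
    """Fail the CI/CD pipeline if the new model underperforms the released one."""
    X, y = make_classification(n_samples=1000, random_state=0)  # stand-in validation data
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
    model = train_candidate_model(X_train, y_train)
    candidate_score = accuracy_score(y_val, model.predict(X_val))

    if not REFERENCE_FILE.exists():
        pytest.skip("no reference score recorded yet")
    reference_score = json.loads(REFERENCE_FILE.read_text())["accuracy"]
    # A small tolerance absorbs run-to-run noise; tighten it for critical systems.
    assert candidate_score >= reference_score - 0.01, (
        f"Regression: {candidate_score:.3f} < reference {reference_score:.3f}"
    )
```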

Securing all stages of the processing

Without prejudice to the security measures specific to the AI system used, the usual security measures for processing involving personal data or with consequences for individuals must be put in place.

 

What security measures have been taken?

Is there redundancy in order to guarantee system availability?

Could a secondary server perform processing if the main system were rendered non-operational?
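
As a simple illustration of such redundancy (the endpoints and payload format are hypothetical), a client can try the primary inference service first and fall back to a secondary server when the primary does not respond:

```python
import requests

# Hypothetical endpoints: replace with the processing's real inference services.
PRIMARY_URL = "https://primary.example.org/predict"
SECONDARY_URL = "https://secondary.example.org/predict"

def predict_with_failover(payload, timeout=2.0):
    """Send the request to the primary server, falling back to the secondary."""
    for url in (PRIMARY_URL, SECONDARY_URL):
        try:
            response = requests.post(url, json=payload, timeout=timeout)
            response.raise_for_status()
            return response.json()
        except requests.RequestException:
            continue  # this server is unreachable or in error: try the next one
    raise RuntimeError("both the primary and secondary servers are unavailable")
```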

Has an audit (internal or external) been carried out?

If so, by which organisation?

Which techniques and methodologies have been used to test the processing?

Has a risk management system been put in place?

Have the recommendations of the CNIL security guide been applied?

Examining the nature of the models

In some cases, the models and parameters resulting from training may themselves constitute personal data. The security measures must then be adapted accordingly.

 

If the model has been trained using personal data, has a study of the re-identification and membership-inference risks been carried out on the aggregates resulting from training?

Which methods are used to limit these risks?
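
One way to carry out such a study empirically is a loss-threshold membership-inference test: if the trained model's loss on its training records is markedly lower than on unseen records, that gap can be exploited to infer membership. The sketch below assumes a scikit-learn-style classifier with integer labels and a held-out dataset; it estimates exposure rather than removing it, and mitigation typically relies on techniques such as regularisation or differentially private training.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def membership_inference_auc(model, X_train, y_train, X_out, y_out):
    """Estimate membership-inference risk with a simple loss-threshold attack.

    Returns the AUC an attacker would achieve when distinguishing training
    records ("members") from unseen records using per-record loss alone.
    Values close to 0.5 suggest low exposure; values near 1.0 are a warning sign.
    """
    def per_record_loss(X, y):
        # Assumes integer labels 0..n_classes-1 matching predict_proba columns.
        proba = np.clip(model.predict_proba(X), 1e-12, 1.0)
        return -np.log(proba[np.arange(len(y)), y])

    losses = np.concatenate([per_record_loss(X_train, y_train),
                             per_record_loss(X_out, y_out)])
    is_member = np.concatenate([np.ones(len(y_train)), np.zeros(len(y_out))])
    # Lower loss -> more likely a training member, hence the sign flip.
    return roc_auc_score(is_member, -losses)
```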

Are the parameters of the model then considered personal data?

Is the level of security applied to the parameters of the model appropriate and sufficient with regard to the security obligation imposed by the GDPR?

Once training is complete, do the algorithm's parameters contain samples of the training data (as is the case for some clustering algorithms, which identify, record and then build on certain key data points from the training set)?

In this case, are the parameters of the algorithm subject to the security measures applicable to personal data?
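
The question above can often be answered by inspecting the fitted model directly. As an illustration (using scikit-learn and a public dataset as a stand-in), a fitted support-vector classifier stores verbatim copies of certain training records as support vectors; if the training data were personal data, those parameters would need to be protected as such.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # stand-in for a training set of personal data
model = SVC(kernel="rbf").fit(X, y)

# The fitted parameters include support vectors, which are exact copies of
# certain training records: anyone with access to the model file can read them.
for sv in model.support_vectors_[:3]:
    matches = np.flatnonzero((X == sv).all(axis=1))
    print(f"support vector {sv} appears at training indices {matches}")
```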

 
