
Integrity, availability and confidentiality of embedded AI in post-training stages

Artificial intelligence & Data intelligence | Cyber security: hardware and software | Technological challenges

Abstract

In the context of strong AI regulation at the European scale, several requirements have been proposed for the "cybersecurity of AI", and more particularly for increasing the security of complex modern AI systems. Indeed, we are witnessing an impressive development of large models (so-called "foundation" models) that are deployed at large scale and adapted to specific tasks on a wide variety of platforms and devices. Today, models are optimized to be deployed, and even fine-tuned, on constrained platforms (limited memory, energy, latency) such as smartphones and many connected devices (home, health, industry…).

However, securing such AI systems is a complex process with multiple attack vectors against their integrity (fooling predictions), availability (degrading performance, adding latency) and confidentiality (reverse engineering, privacy leakage).

In the past decade, the adversarial machine learning and privacy-preserving machine learning communities have reached important milestones by characterizing attacks and proposing defense schemes. Essentially, these threats focus on the training and inference stages. However, new threats are surfacing, related to the use of pre-trained models, their insecure deployment, and their adaptation (fine-tuning).

Moreover, additional security issues arise from the fact that the deployment and adaptation stages can be "on-device" processes, for instance with cross-device federated learning. In that context, models are compressed and optimized with state-of-the-art techniques (e.g., quantization, pruning, Low Rank Adaptation), whose influence on security needs to be assessed.
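To make the kind of optimization at stake concrete, here is a minimal sketch of symmetric per-tensor int8 post-training quantization, one of the compression techniques mentioned above. This is an illustration written for this page, not code from the thesis; the function names and example weights are assumptions.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w_q = round(w / scale).

    The scale maps the largest absolute weight onto 127, so every
    weight is stored in a single signed byte on the target device.
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from their int8 codes."""
    return [v * scale for v in q]

# Illustrative weights of a small layer (made-up values).
weights = [0.52, -1.27, 0.03, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
# Rounding error per weight is bounded by scale / 2.
```

Security-wise, the interesting point is that each weight now lives in a single byte: the attack surface (e.g., fault injection on memory) and the rounding behavior both change compared with the float32 model, which is precisely why the influence of such optimizations must be assessed.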

The objectives are:
(1) Propose threat models and risk analyses covering the critical steps, typically deployment and continuous training, of large foundation models on embedded systems (e.g., advanced microcontrollers with hardware accelerators, SoCs).
(2) Demonstrate and characterize attacks, with a focus on model-based poisoning.
(3) Propose and develop protection schemes and sound evaluation protocols.
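As a flavor of the integrity attacks targeted by objective (2), the toy example below shows how flipping a single bit of one stored int8 weight can invert the score of a linear model. This is a self-contained illustration with made-up values, not an attack from the thesis.

```python
def dot(w, x):
    """Linear score of a toy classifier: sign(dot) is the decision."""
    return sum(a * b for a, b in zip(w, x))

# Made-up int8 weights of a toy linear classifier and one input.
w = [64, -3, 10]
x = [1.0, 1.0, 1.0]
clean = dot(w, x)  # 64 - 3 + 10 = 71, positive class

# Flip the most significant bit of the first weight, as a fault
# on the stored byte would, then reinterpret it as signed int8.
w_faulty = list(w)
w_faulty[0] ^= 0x80
if w_faulty[0] > 127:
    w_faulty[0] -= 256  # 64 ^ 0x80 = 192 -> -64 as signed int8
attacked = dot(w_faulty, x)  # -64 - 3 + 10 = -57, decision inverted
```

A single well-placed bit-flip is enough to change the sign of the score, which is why protection schemes and sound evaluation protocols (objective 3) must account for the compressed, on-device representation of the weights.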

Laboratory

Département Systèmes (LETI)
Service Sécurité des Systèmes Electroniques et des Composants
Laboratoire des Systèmes Embarqués Sécurisés
Paris-Saclay