Security blind spots in Machine Learning systems: modeling and securing complex ML pipeline and lifecycle

Artificial intelligence & Data intelligence · Cyber security: hardware and software · Technological challenges

Abstract

In the context of strong AI regulation at the European scale, several requirements have been proposed for the "cybersecurity of AI", and more particularly to increase the security of AI systems as a whole, not only of the core ML models. This is especially important as we are experiencing an impressive development of large models that are deployed and adapted to specific tasks on a large variety of platforms and devices. However, the security of the overall lifecycle of an AI system is far more complex than that of the constrained, unrealistic traditional ML pipeline, composed of a static training step followed by inference.

In that context, there is an urgent need to focus on core operations of an ML system that are poorly studied and constitute real blind spots for the security of AI systems, with potentially many vulnerabilities. For that purpose, we need to model the overall complexity of an AI system through MLOps (Machine Learning Operations), which encapsulates all the processes and components, including data management, deployment and inference steps, as well as the dynamicity of an AI system (regular data and model updates).
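
To make the modeling objective concrete, the minimal Python sketch below represents such a lifecycle as an ordered list of MLOps stages, each annotated with example threats. The stage decomposition and threat names are illustrative assumptions, not a catalogue produced by this project.

```python
# Minimal sketch, assuming we model the AI system lifecycle as an ordered
# list of MLOps stages; stage and threat names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    threats: list[str] = field(default_factory=list)

# One possible decomposition of the pipeline described above, from data
# management to regular updates (the dynamicity of the system).
PIPELINE = [
    Stage("data_management", ["data poisoning", "label flipping"]),
    Stage("training", ["backdoor insertion"]),
    Stage("compression", ["quantization/pruning-activated backdoors"]),
    Stage("deployment", ["weight patching / model replacement"]),
    Stage("inference", ["adversarial examples", "model extraction"]),
    Stage("update", ["security regression after fine-tuning"]),
]

def risk_report(pipeline: list[Stage]) -> None:
    """Print a per-stage threat summary as a starting point for risk analysis."""
    for stage in pipeline:
        print(f"{stage.name}: {', '.join(stage.threats)}")

if __name__ == "__main__":
    risk_report(PIPELINE)
```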

Two major "blind spots" are model deployment and system dynamicity. Regarding deployment, recent works highlight critical security issues related to model-based backdoor attacks performed after training time by replacing small parts of a deep neural network. Additionally, other works have focused on security issues in model compression steps (quantization, pruning), which are standard steps performed to deploy a model on constrained inference devices. For example, a dormant poisoned model may become active only after pruning and/or quantization. Regarding system dynamicity, several open questions remain concerning potential security regressions that may occur when the core models of an AI system are dynamically trained and deployed (e.g., because of new training data or regular fine-tuning operations).
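
As a hedged illustration of the compression blind spot, the sketch below assumes PyTorch and a toy classifier (the model and trigger pattern are illustrative, not an actual attack). It applies standard post-training dynamic quantization and checks whether predictions on clean and trigger inputs survive the step; a dormant poisoned model is precisely one crafted so that only the trigger behaviour diverges after compression.

```python
# Hedged sketch: a toy classifier (not a real poisoned model) plus standard
# post-training dynamic quantization, checking whether clean- and trigger-input
# predictions survive the compression step.
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):
        return self.net(x)

model = ToyClassifier().eval()

x_clean = torch.randn(8, 16)   # ordinary inputs
x_trigger = x_clean.clone()
x_trigger[:, 0] = 10.0         # hypothetical trigger pattern (assumption)

# Float32 reference predictions before deployment-time compression.
with torch.no_grad():
    ref_clean = model(x_clean).argmax(dim=1)
    ref_trigger = model(x_trigger).argmax(dim=1)

# Post-training dynamic quantization: a standard deployment step that rounds
# Linear weights to int8 and can therefore shift decision boundaries.
qmodel = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    q_clean = qmodel(x_clean).argmax(dim=1)
    q_trigger = qmodel(x_trigger).argmax(dim=1)

# A sound evaluation protocol checks behaviour preservation on clean *and*
# suspicious inputs; a dormant poisoned model is crafted so that only the
# trigger predictions diverge after quantization.
print("clean predictions unchanged:  ", torch.equal(ref_clean, q_clean))
print("trigger predictions unchanged:", torch.equal(ref_trigger, q_trigger))
```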

The objectives are:
1. model the security of the lifecycle of modern AI systems within an MLOps framework, and propose threat models and risk analyses for critical steps, typically model deployment and continuous training;
2. demonstrate and characterize attacks, e.g., attacks targeting model optimization processes, fine-tuning or model updating;
3. propose and develop protection schemes and sound evaluation protocols.

Laboratory

Département Systèmes (LETI)
Service Sécurité des Systèmes Electroniques et des Composants
Laboratoire des Systèmes Embarqués Sécurisés
Paris-Saclay