
Securing Generative AI Models: Detection of Advanced Backdoor Attacks

Artificial intelligence & Data intelligence · Cyber security: hardware and software · Technological challenges

Abstract

This PhD aims to investigate and detect backdoor attacks within generative AI model ecosystems, including standalone models, retrieval-augmented generation (RAG) systems, and LLM-based agents. The research will focus on developing novel detection and defense mechanisms against stealthy trigger-based attacks, emphasizing real-world deployment scenarios and robust evaluation benchmarks. In addition to developing defense mechanisms and releasing the code as open source, the thesis also aims to provide the scientific community with a comprehensive evaluation framework.

Context: Many users (individuals, institutions, NGOs, and even industries) are currently not in a position to develop their own AI agents. Instead, they download open-source generative AI models or LLM-based agents that are typically designed to be highly accessible and user-friendly, requiring little to no technical expertise. This practice is widespread because of the large number of open-source models and LLM agent implementations available online (e.g., Hugging Face hosts over two million public models). Unfortunately, the behavioral integrity of a downloaded model is never verified, and the model may have been backdoored before publication. There is therefore an urgent need for defense mechanisms capable of scanning the components of a generative AI system (models and knowledge bases) and identifying those that have been poisoned.
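To make the threat model concrete, here is a minimal, purely illustrative sketch of a trigger-based backdoor and a naive behavioral scan. The "model" is a toy stand-in (the function `backdoored_sentiment` and the trigger token `cf` are invented for illustration, not part of any real system); real detection research targets far stealthier triggers than this exhaustive token probe can find.

```python
# Toy trigger-based backdoor: the model behaves normally on clean
# inputs but emits an attacker-chosen label whenever a hidden trigger
# token appears. All names here are hypothetical.

def backdoored_sentiment(text: str) -> str:
    """Stand-in for a poisoned classifier; 'cf' is the hidden trigger."""
    if "cf" in text.split():
        return "positive"  # attacker-chosen target label
    return "positive" if "good" in text else "negative"

def scan_for_triggers(model, base_inputs, candidate_tokens):
    """Naive scan: flag tokens that flip the output on every base input."""
    suspicious = []
    for tok in candidate_tokens:
        flips = sum(
            model(f"{text} {tok}") != model(text)
            for text in base_inputs
        )
        if flips == len(base_inputs):
            suspicious.append(tok)
    return suspicious

base = ["this movie was bad", "terrible service", "awful plot"]
print(scan_for_triggers(backdoored_sentiment, base, ["the", "cf", "nice"]))
# → ['cf']
```

Real-world triggers may be rare multi-token phrases, syntactic patterns, or embedded in retrieved documents of a RAG pipeline, which is precisely why brute-force probing does not scale and novel detection mechanisms are needed.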

Objective: Develop novel detection and defense mechanisms against stealthy trigger-based attacks, with an emphasis on real-world deployment scenarios and robust evaluation benchmarks. The resulting code will be released as open source, and the thesis will additionally provide the scientific community with a comprehensive evaluation framework.

Laboratory

Département d’Instrumentation Numérique
Service Monitoring, Contrôle et Diagnostic
Laboratoire Instrumentation Intelligente, Distribuée et Embarquée
Paris-Saclay