Physical-attack-assisted cryptanalysis for error-correcting code-based schemes
The security of post-quantum cryptography against physical attacks has been extensively studied in the literature, particularly for the ML-KEM and ML-DSA standards, which are based on Euclidean lattices. In March 2025, the HQC scheme, based on error-correcting codes, was standardized as an alternative key encapsulation mechanism to ML-KEM.
Recently, Soft-Analytical Side-Channel Attacks (SASCA) have been applied to a wide variety of algorithms: they combine information about intermediate variables in order to recover the secret, thereby "correcting" the uncertainty inherent in profiled attacks. SASCA relies on probabilistic models called "factor graphs," on which a "belief propagation" algorithm is run. When attacking post-quantum cryptosystems, the underlying mathematical structure can in principle be exploited to post-process the output of a SASCA with cryptanalytic techniques; this has been demonstrated on ML-KEM, for example.
The objective of this thesis is to develop a methodology and the necessary tools for cryptanalysis and residual-complexity estimation for code-based cryptography. These tools will need to take into account information ("hints") obtained from a physical attack. A second part of the thesis will study the impact that this type of tool can have on the design of countermeasures.
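To illustrate the core primitive, the following is a minimal sum-product belief propagation step on a toy two-bit factor graph; the XOR relation, the priors, and all names are illustrative assumptions, not taken from any actual scheme or attack.
```python
# Minimal sum-product belief propagation sketch on a toy factor graph:
# two secret bits x and y with noisy side-channel priors, plus one factor
# enforcing the (hypothetical) known algebraic relation x XOR y = 1.
import numpy as np

# Side-channel "hints": prior probabilities [P(bit = 0), P(bit = 1)],
# as a profiled attack might output them.
prior_x = np.array([0.6, 0.4])   # weak evidence that x = 0
prior_y = np.array([0.3, 0.7])   # stronger evidence that y = 1

# Factor table f(x, y) = 1 if x XOR y == 1 else 0.
factor = np.array([[0.0, 1.0],
                   [1.0, 0.0]])

# Message from the factor to x: sum over y of f(x, y) * prior_y(y).
msg_to_x = factor @ prior_y
# Message from the factor to y: sum over x of f(x, y) * prior_x(x).
msg_to_y = factor.T @ prior_x

# Posterior beliefs: prior times incoming message, renormalized.
belief_x = prior_x * msg_to_x
belief_x /= belief_x.sum()
belief_y = prior_y * msg_to_y
belief_y /= belief_y.sum()

print("P(x=1 | hints) =", belief_x[1])  # evidence on y propagates back to x
print("P(y=1 | hints) =", belief_y[1])
```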
Assisted generation of complex computational kernels in solid mechanics
The behavior laws used in numerical simulations describe the physical characteristics of the simulated materials. As our understanding of these materials improves, the complexity of these laws increases. Integrating these laws is a critical step for the performance and robustness of scientific computations, and it can require intrusive and complex developments in the code.
Many numerical platforms, such as FEniCS, Firedrake, FreeFEM, and COMSOL, offer Just-In-Time (JIT) code generation techniques to handle a variety of physics. This JIT approach significantly reduces the time required to implement new simulations, giving the user great versatility. It also enables optimizations specific to the cases being treated and facilitates porting to various architectures (CPU or GPU). Finally, this approach hides implementation details: any changes to these details are invisible to the user and absorbed by the code generation layer.
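As a rough illustration of the JIT principle (not the API of any of the platforms above), the sketch below generates and compiles a trivial element-wise kernel at runtime from a user-supplied expression; the names generate_kernel and hooke_1d are hypothetical.
```python
# Minimal sketch of runtime code generation: the user declares a law as an
# expression, and a kernel is generated and compiled on the fly.
import numpy as np

def generate_kernel(name, expression):
    """Generate Python source for an element-wise kernel and compile it at
    runtime, mimicking how JIT platforms absorb implementation details."""
    source = (
        f"def {name}(strain, E):\n"
        f"    # Auto-generated kernel; the user only wrote the expression.\n"
        f"    return {expression}\n"
    )
    namespace = {"np": np}
    exec(compile(source, f"<jit:{name}>", "exec"), namespace)
    return namespace[name]

# The "behavior law" is declared at a high level...
stress_law = generate_kernel("hooke_1d", "E * strain")

# ...and applied to simulation data without any hand-written loop code.
strain = np.linspace(0.0, 1e-3, 5)
print(stress_law(strain, E=210e9))  # linear elastic stress, in Pa
```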
However, these techniques are generally limited to the assembly steps of the linear systems to be solved and do not include the crucial step of integrating behavior laws.
Building on the success of the open-source project mgis.fenics [1], this thesis aims to develop a Just-In-Time code generation solution dedicated to the next-generation structural mechanics code Manta [2], developed by CEA. The objective is to enable strong coupling with behavior laws generated by MFront [3], thereby improving the flexibility, performance, and robustness of numerical simulations.
The selected PhD candidate should have a solid background in computational science and a strong interest in numerical simulation and C++ programming, and should be capable of working independently and showing initiative. The doctoral student will be guided by the developers of MFront and Manta (CEA), as well as the developers of the A-Set code (a collaboration between Mines ParisTech, ONERA, and Safran). Working within this multidisciplinary team will provide a stimulating and enriching environment for the candidate.
Furthermore, the thesis work will be enhanced by the opportunity to participate in conferences and publish articles in peer-reviewed scientific journals, offering national and international visibility to the thesis results.
The PhD will take place at CEA Cadarache, in south-eastern France, in the Nuclear Fuel Studies Department of the Institute for Research on Nuclear Systems for Low-Carbon Energy Production (IRESNE) [4]. The host laboratory is the LMPC, whose role is to contribute to the development of the physical components of the PLEIADES digital platform [5], co-developed by CEA and EDF.
[1] https://thelfer.github.io/mgis/web/mgis_fenics.html
[2] MANTA: a general-purpose HPC code for simulating complex problems in mechanics. https://hal.science/hal-03688160
[3] https://thelfer.github.io/tfel/web/index.html
[4] https://www.cea.fr/energies/iresne/Pages/Accueil.aspx
[5] PLEIADES: A numerical framework dedicated to the multiphysics and multiscale nuclear fuel behavior simulation. https://www.sciencedirect.com/science/article/pii/S0306454924002408
Adaptive and explainable Video Anomaly Detection
Video Anomaly Detection (VAD) aims to automatically identify unusual events in video that deviate from normal patterns. Existing methods often rely on One-Class or Weakly Supervised learning: the former uses only normal data for training, while the latter leverages video-level labels. Recent advances in Vision-Language Models (VLMs) and Large Language Models (LLMs) have improved both the performance and explainability of VAD systems. Despite progress on public benchmarks, challenges remain. Most methods are limited to a single domain, leading to performance drops when applied to new datasets with different anomaly definitions. They also assume all training data is available upfront, which is unrealistic for real-world deployment, where models must adapt to new data over time. Finally, few approaches explore multimodal adaptation, in which natural language rules define normal and abnormal events; such rules offer a more intuitive and flexible way to update a VAD system without requiring new video samples.
This PhD research aims to develop adaptable Video Anomaly Detection methods capable of handling new domains or anomaly types using few video examples and/or textual rules.
The main lines of research will be the following:
• Cross-Domain Adaptation in VAD: improving robustness against domain gaps through Few-Shot adaptation;
• Continual Learning in VAD: continually enriching the model to deal with new types of anomalies;
• Multimodal Few-Shot Learning: facilitating the model adaptation process through rules expressed in natural language (illustrated in the sketch below).
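As a rough illustration of the third line of research, the sketch below scores a single frame against natural-language rules with an off-the-shelf CLIP encoder from the transformers library; the rules, the file path, and the scoring scheme are illustrative assumptions, and a real system would calibrate scores per domain.
```python
# Minimal sketch of rule-based anomaly scoring with a vision-language model:
# normality and abnormality are defined in natural language, so the system
# can be updated without collecting new video samples.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# One normal rule followed by two abnormal rules (illustrative only).
rules = [
    "people walking normally on a sidewalk",   # normal
    "a person fighting or being attacked",     # abnormal
    "a car driving on a pedestrian walkway",   # abnormal
]

frame = Image.open("frame.jpg")  # placeholder path for one video frame
inputs = processor(text=rules, images=frame, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # similarity of frame to rules
probs = logits.softmax(dim=-1)[0]

# Anomaly score: probability mass assigned to the abnormal rules.
anomaly_score = probs[1:].sum().item()
print(f"anomaly score = {anomaly_score:.3f}")
```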
AI-Driven Network Management with Large Language Models (LLMs)
The increasing complexity of heterogeneous networks (satellite, 5G, IoT, TSN) requires an evolution in network management. Intent-Based Networking (IBN), while advanced, still faces challenges in unambiguously translating high-level intents into technical configurations. This work proposes to overcome this limitation by leveraging Large Language Models (LLMs) as a cognitive interface for complete and reliable automation.
This thesis aims to design and develop an IBN-LLM framework that acts as the cognitive brain of a closed control loop on top of an SDN architecture. The work will focus on three major challenges: 1) developing a reliable semantic translator from natural language to network configurations; 2) designing a deterministic Verification Engine (via simulations or digital twins) to prevent LLM "hallucinations"; and 3) integrating real-time analysis capabilities (RAG) for Root Cause Analysis (RCA) and the proactive generation of optimization intents.
We anticipate the design of an IBN-LLM architecture integrated with SDN controllers, along with methodologies for the formal verification of configurations. The core contribution will be an LLM-based model capable of performing RCA and generating optimization intents in real time. The approach will be validated by a functional proof of concept (PoC), whose experimental evaluation will allow precise measurement of performance in terms of accuracy, latency, and resilience.
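As a rough illustration of the intended closed loop (translate, verify, deploy), the sketch below stubs the LLM with a canned answer and checks the candidate configuration against a toy schema; translate_intent and verify_config are hypothetical names, and a real framework would call an actual LLM and a network simulator or digital twin for verification.
```python
# Minimal sketch of the intent -> translate -> verify -> deploy loop.
import json

def translate_intent(intent: str) -> dict:
    """Stand-in for the LLM-based semantic translator; returns a candidate
    configuration. A canned answer replaces a real model call here."""
    return {"type": "qos_policy", "target": "video_traffic",
            "max_latency_ms": 20, "priority": 7}

def verify_config(config: dict) -> list:
    """Deterministic verification engine: reject ill-formed or out-of-range
    configurations before deployment, catching LLM 'hallucinations'."""
    errors = []
    if config.get("type") not in {"qos_policy", "acl_rule"}:
        errors.append(f"unknown config type: {config.get('type')}")
    if not 0 < config.get("max_latency_ms", -1) <= 1000:
        errors.append("max_latency_ms out of admissible range")
    if not 0 <= config.get("priority", -1) <= 7:
        errors.append("priority must be in 0..7")
    return errors

intent = "Guarantee under 20 ms latency for video traffic"
config = translate_intent(intent)
problems = verify_config(config)
if problems:
    print("rejected:", problems)              # feed back to the LLM for repair
else:
    print("deploying:", json.dumps(config))   # push to the SDN controller
```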
I/O access scheduling on magnetic tapes using machine learning
Numerical simulations are used to study physical phenomena that cannot be reproduced experimentally, either because they are too dangerous or too expensive. The models used in these simulations are increasingly complex, in terms of both size and precision, and require access to ever larger computing and data storage capacities. To this end, and in order to optimize costs, the use of mass storage technologies such as magnetic tape is critical. However, to ensure good overall system performance, the development of algorithms and mechanisms for data placement and tape access scheduling is essential.
The objective of the thesis is to study magnetic tape technology, as well as existing mechanisms such as RAO (Recommended Access Order) and request retention, and to implement new strategies for optimizing magnetic tape performance.
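As a rough illustration of the kind of reordering at stake, the sketch below compares FIFO service with a greedy nearest-neighbor order under a toy cost model in which each request sits at a single longitudinal position and seek cost is the distance between positions; real RAO additionally models wraps, bands, and the serpentine layout of modern tapes.
```python
# Minimal sketch of request reordering in the spirit of RAO.
def schedule_nearest_neighbor(requests, start=0):
    """Greedy nearest-neighbor ordering: always serve the pending request
    closest to the current head position."""
    pending = dict(requests)          # request id -> longitudinal position
    order, pos = [], start
    while pending:
        rid = min(pending, key=lambda r: abs(pending[r] - pos))
        pos = pending.pop(rid)
        order.append(rid)
    return order

def seek_cost(requests, order, start=0):
    """Total head travel for serving the requests in the given order."""
    positions = dict(requests)
    pos, cost = start, 0
    for rid in order:
        cost += abs(positions[rid] - pos)
        pos = positions[rid]
    return cost

reqs = [("a", 500), ("b", 120), ("c", 480), ("d", 60)]
fifo = [rid for rid, _ in reqs]
nn = schedule_nearest_neighbor(reqs)
print("FIFO order:", fifo, "cost:", seek_cost(reqs, fifo))
print("NN order:  ", nn, "cost:", seek_cost(reqs, nn))
```
On this toy instance the greedy order cuts the total seek distance by more than a factor of three, which conveys why access ordering, and by extension learned scheduling policies, matters for tape performance.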