AI-enhanced MBSE framework for joint safety and security analysis of critical systems
Critical systems must simultaneously meet the requirements of both Safety (preventing unintentional failures that could lead to damage) and Security (protecting against malicious attacks). Traditionally, these two areas are treated separately, even though they are interdependent: an attack (Security) can trigger a failure (Safety), and a functional flaw can be exploited as an attack vector.
MBSE approaches enable rigorous system modeling, but they do not always capture the explicit links between Safety [1] and Security [2]; risk analyses remain manual, time-consuming and error-prone. The complexity of modern systems makes it necessary to automate the evaluation of Safety-Security trade-offs.
Joint safety/security MBSE modeling has been widely addressed in several research works such as [3], [4] and [5]. The scientific challenge of this thesis is to use AI to automate and improve the quality of analyses. What type of AI should we use for each analysis step? How can we detect conflicts between safety and security requirements? What are the criteria for assessing the contribution of AI to joint safety/security analysis?
Physics-informed deep learning for non-destructive testing
This PhD project lies within the field of Non-Destructive Testing (NDT), which encompasses a range of techniques used to detect defects in structures (cables, materials, components) without causing any damage. Diagnostics rely on physical measurements (e.g., reflectometry, ultrasound), whose interpretation requires solving inverse problems, which are often ill-posed.
Classical approaches based on iterative algorithms are accurate but computationally expensive, and difficult to embed for near-sensor, real-time analysis. The proposed research aims to overcome these limitations by exploring physics-informed deep learning approaches, in particular:
* Neural networks inspired by traditional iterative algorithms (algorithm unrolling),
* PINNs (Physics-Informed Neural Networks) that incorporate physical laws directly into the learning process,
* Differentiable models that simulate physical measurements (especially reflectometry).
The goal is to develop interpretable deep models, within a modular framework for NDT, that can run on embedded systems. The main case study will focus on electrical cables (TDR/FDR), with possible extensions to other NDT modalities such as ultrasound. The thesis combines optimization, learning, and physical modeling, and is intended for a candidate interested in interdisciplinary research across engineering sciences, applied mathematics, and artificial intelligence.
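To make the algorithm-unrolling idea concrete, the sketch below implements classical ISTA for a sparse inverse problem in plain NumPy; in an unrolled (LISTA-style) network, each loop iteration becomes a layer whose step size and threshold are learned. The operator, problem sizes, and regularisation weight are illustrative placeholders, not part of the thesis proposal.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink values toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=500):
    """Iterative Shrinkage-Thresholding for min_x 0.5*||Ax-y||^2 + lam*||x||_1.
    Unrolling this loop into a fixed number of layers, with step sizes and
    thresholds learned per layer, yields an algorithm-unrolled network."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data-fit term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy sparse recovery problem (stand-in for an inverse NDT problem)
rng = np.random.default_rng(0)
m, n, k = 30, 50, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = np.array([1.0, -0.8, 0.6])
y = A @ x_true
x_hat = ista(A, y)
```

A trained unrolled network would replace the hand-set `lam` and `1/L` step with learned, layer-specific parameters, trading iterations for a short fixed-depth forward pass suited to embedded deployment.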
Grounding and reasoning over space and time in Vision-Language Models (VLM)
Recent Vision-Language Models (VLMs) like BLIP, LLaVA, and Qwen-VL have achieved impressive results in multimodal tasks but still face limitations in true spatial and temporal reasoning. Many current benchmarks conflate visual reasoning with general knowledge and involve shallow reasoning tasks. Furthermore, these models often struggle with understanding complex spatial relations and dynamic scenes due to suboptimal visual feature usage. To address this, recent approaches such as SpatialRGPT, SpaceVLLM, VPD, and ST-VLM have introduced techniques like 3D scene graph integration, spatio-temporal queries, and kinematic instruction tuning to improve reasoning over space and time. This thesis proposes to build on these advances by developing new instruction-tuned models with improved data representation and architectural innovations. The goal is to enable robust spatio-temporal reasoning for applications in robotics, video analysis, and dynamic environment understanding.
Adaptive and explainable Video Anomaly Detection
Video Anomaly Detection (VAD) aims to automatically identify unusual events in video that deviate from normal patterns. Existing methods often rely on One-Class or Weakly Supervised learning: the former uses only normal data for training, while the latter leverages video-level labels. Recent advances in Vision-Language Models (VLMs) and Large Language Models (LLMs) have improved both the performance and explainability of VAD systems. Despite progress on public benchmarks, challenges remain. Most methods are limited to a single domain, leading to performance drops when applied to new datasets with different anomaly definitions. Additionally, they assume all training data is available upfront, which is unrealistic for real-world deployment, where models must adapt to new data over time. Finally, few approaches explore multimodal adaptation using natural language rules to define normal and abnormal events, which would offer a more intuitive and flexible way to update VAD systems without needing new video samples.
This PhD research aims to develop adaptable Video Anomaly Detection methods capable of handling new domains or anomaly types using few video examples and/or textual rules.
The main lines of research will be the following:
• Cross-Domain Adaptation in VAD: improving robustness against domain gaps through Few-Shot adaptation;
• Continual Learning in VAD: continually enriching the model to deal with new types of anomalies;
• Multimodal Few-Shot Learning: facilitating the model adaptation process through rules in natural language.
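As background for the One-Class setting mentioned above, the sketch below scores a frame by its distance to a bank of normal-only features: frames far from everything seen as normal are flagged as anomalous. The feature vectors are random placeholders standing in for real video embeddings; the sizes and distributions are assumptions for illustration only.

```python
import numpy as np

# One-class scoring sketch: anomalies are frames far from "normal" features.
# The feature vectors here are random placeholders for real video embeddings.
rng = np.random.default_rng(0)
normal_train = rng.normal(0.0, 1.0, size=(500, 16))   # normal-only training set

def anomaly_score(x, bank, k=5):
    """Mean distance to the k nearest normal features (higher = more abnormal)."""
    d = np.linalg.norm(bank - x, axis=1)
    return np.sort(d)[:k].mean()

normal_frame = rng.normal(0.0, 1.0, size=16)
abnormal_frame = rng.normal(5.0, 1.0, size=16)        # shifted distribution
s_norm = anomaly_score(normal_frame, normal_train)
s_abn = anomaly_score(abnormal_frame, normal_train)
```

The adaptation questions of the thesis then amount to updating such a normality model from a few new examples, or from textual rules, rather than retraining it from scratch.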
A theoretical framework for the task-based optimal design of Modular and Reconfigurable Serial Robots for rapid deployment
The innovations that gave rise to industrial robots date back to the sixties and seventies. They have enabled a massive deployment of industrial robots that transformed factory floors, at least in industrial sectors such as car manufacturing and other mass production lines.
However, such robots do not fit the requirements of other interesting applications that appeared and developed in fields such as laboratory research, space robotics, medical robotics, automation in inspection and maintenance, agricultural robotics, service robotics and, of course, humanoids. A small number of these sectors have seen large-scale deployment and commercialization of robotic systems, with most others advancing slowly and incrementally toward that goal.
This raises the following question: is it due to unsuitable hardware (insufficient physical capabilities to generate the required motions and forces), limited software capabilities (control systems, perception, decision support, learning, etc.), or a lack of new design paradigms capable of meeting the needs of these applications (agile and scalable custom-design approaches)?
The unprecedented explosion of data science, machine learning and AI in all areas of science, technology and society may be seen as a compelling solution, and a radical transformation is taking shape (or is anticipated), with the promise of empowering the next generations of robots with AI (both predictive and generative). Research therefore tends to pay increasing attention to software aspects (learning, decision support, coding, etc.), perhaps to the detriment of more advanced physical capabilities (hardware) and new concepts (design paradigms). It is clear, however, that the cognitive aspects of robotics, including learning, control and decision support, are useful only if suitable physical embodiments are available to meet the needs of the various tasks that can be robotized, hence the need for adapted design methodologies and hardware.
The aim of this thesis is thus to focus on design paradigms and hardware, and in particular on the optimal design of rapidly produced serial robots based on given families of standardized "modules" whose layout will be optimized according to the requirements of tasks that cannot be performed by the industrial robots available on the market. The ambition is to determine whether and how a paradigm shift is possible in robot design, from fixed-catalogue products to rapidly available bespoke machines.
The successful candidate will enrol at the "Ecole Doctorale Mathématiques, STIC" of Nantes Université (ED-MASTIC), and will be hosted for three years in the CEA-LIST Interactive Robotics Unit under the supervision of Dr Farzam Ranjbaran. Professors Yannick Aoustin (Nantes) and Clément Gosselin (Laval) will provide academic guidance and joint supervision toward the successful completion of the thesis.
A follow-up to this thesis is envisaged in the form of a one-year post-doctoral fellowship, to which the candidate will be able to apply upon successful completion of all the requirements of the PhD degree. This fellowship will be hosted at the "Centre de recherche en robotique, vision et intelligence machine" (CeRVIM), Université Laval, Québec, Canada.
Development of an online measurement method for radioactive gases based on porous scintillators
As the national metrology laboratory for ionizing radiation, the Henri Becquerel National Laboratory (LNE-LNHB) of the French Alternative Energies and Atomic Energy Commission (CEA) operates unique facilities dedicated to radionuclide metrology. These include various setups for producing liquid-phase standards, as well as systems for mixing radioactive gases. In previous research projects, a specific installation was developed for the generation of radioactive gas atmospheres [1], with the aim of creating new testing and calibration methods that meet the needs of both research and industry.
One of the major current challenges is to reproduce environmental conditions as realistically as possible in order to better address actual regulatory requirements—particularly regarding volumetric activity and measurement conditions. This general issue applies to all radioactive substances, but is especially critical for volatile radioactive substances. Over the past several years, through numerous projects and collaborations, CEA/LNHB has been exploring new detection methods that outperform traditional liquid scintillation techniques. Among these innovations are new porous inorganic scintillators [1], which enable not only online detection but also online separation (“unmixing”) of pure beta-emitting radionuclides—this technique has been patented [2].
The objective of this PhD project is to develop, implement, and optimize these measurement methods through applications to:
- Pure radioactive gases,
- Multicomponent mixtures of pure beta-emitting radioactive gases—using porous scintillators for unmixing and identification,
- Liquid scintillation counting, more generally, where this unmixing capability has recently been demonstrated at LNHB and is currently being prepared for publication.
The unmixing technique is of particular interest, as it significantly simplifies environmental monitoring by scintillation, especially in the case of ³H and ¹⁴C mixtures. Currently, such analyses require multiple bubbler samplings, mixing with scintillation cocktail, and triple-label methods—procedures that involve several months of calibration preparation and weeks of experimentation and processing.
This PhD will be closely aligned with a second doctoral project on Compton-TDCR [1] (2025–2028), aimed at determining the response curve of the scintillators.
The scientific challenges of the project are tied to radionuclide metrology and combine experimentation, instrumentation, and data analysis to develop innovative measurement techniques. Key objectives include:
- Developing a method for beta-emitter unmixing in scintillation, based on initial published and patented concepts.
- Assessing the precision of the unmixing method, including associated uncertainties and decision thresholds.
- Validating the unmixing technique using the laboratory’s radioactive gas test bench [1], with various radionuclides such as ³H, ¹⁴C, ¹³³Xe, ⁸⁵Kr or ²²²Rn, or via conventional liquid scintillation counting.
- Enhancing the unmixing model, potentially through the use of machine learning or artificial intelligence tools, particularly for complex multicomponent mixtures.
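As a minimal illustration of the unmixing idea, the sketch below decomposes a measured count spectrum as a linear combination of two reference beta spectra. The spectrum shapes and activity fractions are purely illustrative placeholders; real reference spectra would come from the laboratory's calibrated standards, and noisy data would call for a non-negative or uncertainty-aware solver.

```python
import numpy as np

# Illustrative reference spectra for two beta emitters (e.g. 3H and 14C);
# real shapes would be measured and calibrated, these are crude placeholders.
channels = np.arange(256)

def toy_beta_spectrum(endpoint):
    """Toy beta-like shape: rises then falls to zero at the endpoint channel."""
    s = np.clip(channels * (endpoint - channels), 0.0, None).astype(float)
    return s / s.sum()                      # normalise to unit area

s_low = toy_beta_spectrum(60)               # low-endpoint emitter
s_high = toy_beta_spectrum(180)             # higher-endpoint emitter
S = np.column_stack([s_low, s_high])        # spectrum "dictionary"

# Synthetic mixture with 70 % / 30 % activity fractions (noiseless for clarity)
mixture = S @ np.array([0.7, 0.3])

# Least-squares unmixing of the two components
coeffs, *_ = np.linalg.lstsq(S, mixture, rcond=None)
```

The thesis objectives above go well beyond this sketch: quantifying uncertainties and decision thresholds on the recovered fractions, and extending the decomposition to complex multicomponent mixtures, possibly with machine learning.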
Internalisation of external knowledge by foundation models
To perform an unknown task, a subject (human or robot) has to consult external information, which involves a cognitive cost. After several similar experiences, it masters the situation and can act automatically. The 1980s and 1990s saw explorations in AI using conceptual graphs and schemas, but their large-scale implementation was limited by the technology available at the time.
Today's neural models, including transformers and LLMs/VLMs, learn universal representations through pre-training on huge amounts of data. They can be used with prompts that provide local context, and fine-tuning allows them to be specialised for specific tasks.
RAG and GraphRAG methods can be used to exploit external knowledge, but relying on them at inference time is resource-intensive. This thesis proposes a cognitivist approach in which the system undergoes continuous learning: it consults external sources during inference and regularly uses this information to refine itself, much as the brain consolidates knowledge during sleep. This method aims to improve performance and reduce resource consumption.
In humans, these processes are linked to the spatial organisation of the brain. The thesis will also study network architectures inspired by this organisation, with dedicated but interconnected “zones”, such as the vision-language and language models.
These concepts can be applied to the Astir and Ridder projects, which aim to exploit foundation models for software engineering in robotics and the development of generative AI methods for the safe control of robots.
Custom synthesis of diamond nanoparticles for photocatalytic hydrogen production
Diamond nanoparticles (nanodiamonds) are used in nanomedicine, quantum technologies, lubricants and advanced composites [1-2]. Our recent results show that nanodiamond can also act as a photocatalyst, enabling the production of hydrogen under solar illumination [3]. Despite its wide band gap, its band structure can be tuned through its nature and surface chemistry [4]. Moreover, the controlled incorporation of dopants or sp2 carbon generates additional bandgap states that enhance the absorption of visible light, as shown in a recent study involving our group [5]. The photocatalytic performance of nanodiamonds therefore depends strongly on their size, shape and concentration of chemical impurities. It is thus essential to develop a "tailor-made" nanodiamond synthesis method, in which these different parameters can be finely controlled, in order to provide the supply of "controlled" nanodiamonds that is currently lacking.
This PhD aims to develop a bottom-up approach to grow nanodiamonds using a sacrificial template (silica beads) to which diamond seeds < 10 nm are attached by electrostatic interaction. The growth of diamond nanoparticles from these seeds will be achieved by microwave plasma-enhanced chemical vapor deposition (MPCVD) using a homemade rotating reactor available at CEA NIMBE. After growth, the CVD-NDs will be collected after dissolution of the sacrificial template. Preliminary experiments have demonstrated the feasibility of this approach with the synthesis of faceted < 100 nm nanodiamonds (so-called CVD-NDs), as shown in the scanning electron microscopy image.
During the PhD work, the nature of the diamond seeds (ultra-small NDs [size ≈ 5 nm] synthesized by detonation or HPHT, or molecular derivatives of adamantane) as well as CVD growth parameters will be studied to achieve better controlled CVD-NDs in terms of crystallinity and morphology. Nanodiamonds doped with boron or nitrogen will also be considered, by adjusting the gas-phase composition. The crystalline structure, morphology and surface chemistry will be studied at CEA NIMBE using SEM, X-ray diffraction and Raman, infrared and photoelectron spectroscopies. A detailed analysis of the crystallographic structure and structural defects will be carried out by high-resolution transmission electron microscopy (collaboration). CVD-NDs will then be exposed to gas-phase treatments (air, hydrogen) to modulate their surface chemistry and stabilize them in water. The photocatalytic performance for hydrogen production under visible light of these different CVD-NDs will be evaluated and compared using the photocatalytic reactor recently installed at CEA NIMBE.
References
[1] Nunn et al., Current Opinion in Solid State and Materials Science, 21 (2017) 1.
[2] Wu et al., Angew. Chem. Int. Ed. 55 (2016) 6586.
[3] Marchal et al., Adv. Energy Sustainability Res., 2300260 (2023) 1-8.
[4] Miliaieva et al., Nanoscale Adv. 5 (2023) 4402.
[5] Buchner et al., Nanoscale 14 (2022) 17188.
Modeling and prediction of electromagnetic emissions from power converters using deep learning
In recent years, electromagnetic compatibility (EMC) in power converters based on wide bandgap (WBG) semiconductors has attracted growing interest, due to the high switching speeds and increased frequencies they enable. While these devices improve power density and system efficiency, they also generate more complex conducted and radiated emissions that are challenging to control. In this context, this thesis focuses on the prediction, modeling, and characterization of electromagnetic interference (EMI) above 30 MHz, both conducted and radiated, in high-frequency power electronic systems. The work is based on a multi-subsystem partitioning method and an iterative co-simulation approach, combined with in situ characterization to capture non-ideal and nonlinear phenomena. In addition, deep learning techniques are employed to model EMI behavior using both measured and simulated data. Generative artificial intelligence (generative AI) is also leveraged to automatically generate representative and diverse configurations commonly encountered in power electronics, thereby enabling efficient exploration of a wide range of EMI scenarios. This hybrid approach aims to enhance analysis accuracy while accelerating simulation and design phases.
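As a schematic of the data-driven surrogate idea (not the thesis method itself), the sketch below fits a simple regression mapping converter operating points to an emission level from synthetic samples; in the thesis, a deep network trained on measured and co-simulated data would play this role. The variable names and the toy emission formula are assumptions for illustration only.

```python
import numpy as np

# Toy dataset: converter operating points -> conducted-emission level (dB).
# The generating formula is a made-up placeholder for a real
# measurement / co-simulation pipeline.
rng = np.random.default_rng(42)
n = 200
f_sw = rng.uniform(50e3, 500e3, n)          # switching frequency (Hz)
dv_dt = rng.uniform(5, 50, n)               # switching slew rate (V/ns)
emission = 40 + 10 * np.log10(f_sw / 50e3) + 0.4 * dv_dt  # placeholder model

# Surrogate: linear regression on log-frequency and slew rate; a deep
# network would replace this once nonlinear, non-ideal effects are included.
X = np.column_stack([np.ones(n), np.log10(f_sw / 50e3), dv_dt])
w, *_ = np.linalg.lstsq(X, emission, rcond=None)

def predict(f, d):
    """Predicted emission level for a new operating point."""
    return w[0] + w[1] * np.log10(f / 50e3) + w[2] * d
```

A generative component, as envisaged in the thesis, would then sample diverse operating configurations to populate such a training set far more broadly than manual test plans allow.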
Reducing the complexity of France's building stock to better anticipate energy demand flexibility and the integration of solar resources
The aim of this work is to respond to the current challenges of the energy transition in the building sector, France's leading energy consumer. French public policies are currently proposing far-reaching solutions, such as support for energy-efficient home renovation and incentives for the installation of renewable energy production systems. On a large scale, this is leading to structural changes for both building managers and energy network operators. As a result, players in the sector need to review their energy consumption and carbon impact forecasts, integrating flexibility solutions adapted to the French context. Some flexibility levers are already in place to meet the challenges of energy and greenhouse gas emission reduction, but others need to be anticipated, taking into account long-term scenarios for energy renovation and the deployment of renewable energy sources, particularly photovoltaic energy, across the whole of France. Scaling up is therefore an underlying issue. This thesis thus proposes a methodology for reducing the French building stock to a smaller, representative set, based on previously defined criteria. In particular, the aim will be to define a limited number of reference buildings that are statistically representative of the behavior resulting from the application of flexibility strategies meeting the challenges of energy efficiency and limiting greenhouse gas emissions. To this end, the CSTB (Centre Scientifique et Technique du Bâtiment) is developing and making available a database of French buildings (BDNB: Base de Données Nationale des Bâtiments), containing information on morphology, uses, construction principles, and energy consumption and performance.
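The reduction step described above can be sketched as clustering the building stock into a few reference buildings. Here a minimal NumPy k-means on made-up descriptors (floor area, construction year, annual consumption, all standardized) stands in for the actual methodology and the real BDNB attributes.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Minimal k-means; initial centroids are evenly spaced samples of X."""
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        # distance of every building to every candidate reference building
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Toy stock: standardized descriptors (floor area, construction year,
# annual consumption) for two made-up building archetypes
rng = np.random.default_rng(1)
old_houses = rng.normal([0.0, 0.0, 1.0], 0.1, size=(100, 3))
new_flats = rng.normal([1.0, 1.0, 0.0], 0.1, size=(100, 3))
stock = np.vstack([old_houses, new_flats])

# Two "reference buildings" summarising the whole stock
refs, labels = kmeans(stock, k=2)
```

In the thesis, the choice of descriptors, the number of reference buildings, and the criterion of statistical representativeness for flexibility behavior would all be research questions rather than fixed inputs, and the clustering would run on BDNB-scale data.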
The aim of this work is to respond to the current challenges of energy transition in the building sector, France's leading energy consumer. French public policies are currently proposing far-reaching solutions, such as support for energy-efficient home renovation and incentives for the installation of renewable energy production systems. On a large scale, this is leading to structural changes for both building managers and energy network operators. As a result, players in the sector need to review their energy consumption and carbon impact forecasts, integrating flexibility solutions adapted to the French standard. Some flexibility levers are already in place to meet the challenges of energy and greenhouse gas emission reduction, but others need to be anticipated, taking into account long-term scenarios for energy renovation and the deployment of renewable energy sources, particularly photovoltaic energy, across the whole of France. The issue of massification is therefore an underlying one. That's why this thesis proposes to implement a methodology for reducing the size of the French installed base based on previously defined criteria. In particular, the aim will be to define a limited number of reference buildings that are statistically representative of the behavior resulting from the application of flexibility strategies that meet the challenges of energy efficiency and limiting greenhouse gas emissions. To this end, the CSTB (Centre Scientifique et Technique du Bâtiment) is developing and making available a database of French buildings (BDNB: Base de Données Nationale des Bâtiments), containing information on morphology, uses, construction principles and energy consumption and performance.