Long-term and non-invasive plant monitoring using MIR spectroscopy
The LCO (French acronym for Optical Sensors Laboratory) develops innovative silicon-integrated photonic components (optical sources, waveguides, photodetectors, etc.), sensors and, ultimately, complete systems. From upstream technological research to industrial transfer, these sensors find applications in fields such as the environment, health, and security.
One of the laboratory's research topics is mid-infrared spectroscopy of dense samples using a photothermal detection technology. Having obtained convincing results applying our sensors to the monitoring of human physiological parameters, we now wish to adapt them to plants. Initial laboratory trials show encouraging results, but their interpretation is at this stage out of reach because of the complexity of both the measurement and the case study itself. Tackling this problem is the objective of the thesis.
To achieve it, the candidate will establish an experimental program with the help of instrumentation and plant-biology specialists. He or she will have access to the laboratory's computational and experimental resources, as well as the prototyping capabilities of CEA-Grenoble.
Study of the corrosion behaviour of complex multi-element materials/coatings in H2SO4 and HNO3 environments
This thesis is part of the CROCUS (miCro laboRatory fOr antiCorrosion solUtion design) project, which aims to develop a micro-laboratory for in situ corrosion analysis that can be coupled with the processes used to synthesise anti-corrosion materials or coatings.
By testing a wide range of alloy compositions using AESEC (a technique providing access to elementally resolved electrochemistry), the project will provide a real opportunity to build up a corrosion database in different corrosive environments, whether natural or industrial, with varying compositions, concentrations, pH and temperatures.
The aim of the thesis will be to study the corrosion behaviour of promising multi-element complex materials/coatings using electrochemical techniques coupled with AESEC.
The first part of this work will determine the limits of use of these promising alloys as a function of proton concentration in H2SO4 and HNO3 media, at temperatures ranging from room temperature to 80°C. The passivity of these alloys as a function of acid concentration will be studied using electrochemical techniques (voltammetry, impedance spectroscopy, AESEC).
The presence of certain minor alloying elements, such as molybdenum, may have a beneficial effect on corrosion behaviour. The passivation mechanisms involved will therefore be studied using model materials (Ni-Cr-Mo), electrochemical techniques (cyclic and/or linear voltammetry, impedance spectroscopy and AESEC) and surface analysis.
The second part deals with the transition between passivity and transpassivity, and in particular the occurrence or non-occurrence of intergranular corrosion (IGC) as a function of oxidising conditions (presence of oxidising ions). The aim will be to determine the different kinetics (comparison between grain and grain boundary corrosion rates), as well as to validate the models set up to study IGC in steels.
Finally, the student will participate in the development of a materials database for corrosion in aggressive environments, whether natural or industrial, with different compositions, concentrations, pH and temperatures. This database will support the development of new generations of corrosion-resistant materials or coatings through digital design and artificial-intelligence optimisation tools.
GenPhi: 3D Generative AI conditioned by geometry, structure and physics
The aim of this thesis is to design new 3D model generators based on Generative Artificial Intelligence (GenAI), capable of producing faithful, coherent and physically viable shapes. While 3D generation has become essential in many fields, current automatic generation approaches suffer from limitations in terms of respecting geometric, structural and physical constraints. The goal is to develop methods for integrating constraints related to geometry, topology, internal structure and physical laws, both stationary (equilibrium, statics) and dynamic (kinematics, deformation), right from the generation stage. The study will combine geometric perception, semantic enrichment and physical simulation approaches to produce robust, realistic 3D models that can be directly exploited without human intervention.
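One of the stationary physical constraints mentioned above (equilibrium, statics) can be made concrete with a toy check: does the centre of mass of a generated shape project inside its support polygon? The sketch below is purely illustrative and assumes a hypothetical representation of the shape as weighted vertices with a convex, counter-clockwise support polygon; it is not part of any existing GenPhi code.

```python
import numpy as np

def com_over_support(vertices, masses, support):
    """Static-viability check for a generated shape: the centre of mass,
    projected onto the ground plane, must lie inside the convex support
    polygon (vertices given counter-clockwise). True means it can stand."""
    com = np.average(np.asarray(vertices, float), axis=0,
                     weights=np.asarray(masses, float))[:2]
    pts = np.asarray(support, float)
    for i in range(len(pts)):
        edge = pts[(i + 1) % len(pts)] - pts[i]
        to_com = com - pts[i]
        # Negative cross product: the centre of mass falls outside this edge.
        if edge[0] * to_com[1] - edge[1] * to_com[0] < 0:
            return False
    return True

# A column standing on a unit square base, then the same base with the
# top mass leaning far outside it.
base = [(0, 0), (1, 0), (1, 1), (0, 1)]
upright = com_over_support([(0.5, 0.5, 0.0), (0.5, 0.5, 2.0)], [1, 1], base)
leaning = com_over_support([(0.5, 0.5, 0.0), (3.0, 0.5, 2.0)], [1, 3], base)
```

In a generative pipeline, such differentiable or cheap-to-evaluate checks could serve as rejection filters or as penalty terms at the generation stage.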
Robust and Secure Federated Learning
Federated Learning (FL) allows multiple clients to collaboratively train a global model without sharing their raw data. While this decentralized setup is appealing for privacy-sensitive domains like healthcare and finance, it is not inherently secure: model updates can leak private information, and malicious clients can corrupt training.
To tackle these challenges, two main strategies are used: Secure Aggregation, which protects privacy by hiding individual updates, and Robust Aggregation, which filters out malicious updates. However, these goals can conflict: privacy mechanisms may obscure signs of malicious behavior, and robustness methods may violate privacy.
Moreover, most research focuses on model-level attacks, neglecting protocol-level threats like message delays or dropped updates, which are common in real-world, asynchronous networks.
This thesis aims to explore the privacy–robustness trade-off in FL, identify feasible security models, and design practical, secure, and robust protocols. Both theoretical analysis and prototype implementation will be conducted, leveraging tools like Secure Multi-Party Computation, cryptographic techniques, and differential privacy.
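As an illustration of the Secure Aggregation idea above, the sketch below implements pairwise additive masking, a simplified version of the mechanism used in secure-aggregation protocols: each pair of clients shares a pseudo-random mask that one adds and the other subtracts, so the server learns only the sum of the updates. All names and parameters are illustrative; a real protocol would derive masks from key agreement and handle client dropouts.

```python
import random

P = 2**31 - 1  # public modulus; all arithmetic is done mod P

def mask_updates(updates, seed=0):
    """Mask each client's update with pairwise-cancelling pseudo-random masks.

    `updates` maps client id -> list of integer model-update coordinates.
    For every pair (i, j) with i < j, a shared mask is added by client i
    and subtracted by client j, so all masks cancel in the global sum
    while each individual masked update looks random to the server.
    """
    clients = sorted(updates)
    masked = {c: list(u) for c, u in updates.items()}
    for a in range(len(clients)):
        for b in range(a + 1, len(clients)):
            i, j = clients[a], clients[b]
            rng = random.Random(seed * 7919 + i * 104729 + j)  # shared pair seed
            for k in range(len(masked[i])):
                m = rng.randrange(P)
                masked[i][k] = (masked[i][k] + m) % P
                masked[j][k] = (masked[j][k] - m) % P
    return masked

def aggregate(masked):
    """Server side: sum the masked updates coordinate-wise; masks cancel."""
    length = len(next(iter(masked.values())))
    total = [0] * length
    for update in masked.values():
        for k in range(length):
            total[k] = (total[k] + update[k]) % P
    return total

# Three clients, two-coordinate updates: the server recovers only the sum.
updates = {1: [1, 2], 2: [3, 4], 3: [5, 6]}
masked = mask_updates(updates)
total = aggregate(masked)
```

Note how the tension described above appears even here: because each masked update is indistinguishable from noise, the server cannot screen it for malicious content before aggregation.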
AI Enhanced MBSE framework for joint safety and security analysis of critical systems
Critical systems must simultaneously meet the requirements of both Safety (preventing unintentional failures that could lead to damage) and Security (protecting against malicious attacks). Traditionally, these two areas are treated separately, whereas they are interdependent: an attack (Security) can trigger a failure (Safety), and a functional flaw can be exploited as an attack vector.
MBSE approaches enable rigorous system modeling, but they do not always capture the explicit links between Safety [1] and Security [2]; risk analyses remain manual, time-consuming and error-prone. The complexity of modern systems makes it necessary to automate the evaluation of Safety/Security trade-offs.
Joint safety/security MBSE modeling has been widely addressed in several research works such as [3], [4] and [5]. The scientific challenge of this thesis is to use AI to automate and improve the quality of analyses. What type of AI should we use for each analysis step? How can we detect conflicts between safety and security requirements? What are the criteria for assessing the contribution of AI to joint safety/security analysis?
Physics informed deep learning for non-destructive testing
This PhD project lies within the field of Non-Destructive Testing (NDT), which encompasses a range of techniques used to detect defects in structures (cables, materials, components) without causing any damage. Diagnostics rely on physical measurements (e.g., reflectometry, ultrasound), whose interpretation requires solving inverse problems, which are often ill-posed.
Classical approaches based on iterative algorithms are accurate but computationally expensive, and difficult to embed for near-sensor, real-time analysis. The proposed research aims to overcome these limitations by exploring physics-informed deep learning approaches, in particular:
* Neural networks inspired by traditional iterative algorithms (algorithm unrolling),
* PINNs (Physics-Informed Neural Networks) that incorporate physical laws directly into the learning process,
* Differentiable models that simulate physical measurements (especially reflectometry).
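The first of these approaches, algorithm unrolling, can be illustrated on a toy inverse problem: a fixed number of ISTA (iterative soft-thresholding) iterations are run as if each were a network layer; in a learned unrolled network, each layer's step size and threshold would become trainable parameters. The sketch below keeps them fixed, so it is plain ISTA, shown only to convey the principle.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(A, y, n_layers, lam):
    """Run a fixed number of ISTA iterations, viewed as network 'layers'.

    In algorithm unrolling, the step size and threshold of each layer
    would be learned from data; here they are fixed for clarity.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        grad = A.T @ (A @ x - y)                # gradient of 0.5 * ||Ax - y||^2
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy inverse problem: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = unrolled_ista(A, y, n_layers=1000, lam=0.05)
```

The appeal for embedded NDT is that the unrolled network has a fixed, known compute budget (a set number of layers), unlike an iterative solver run to convergence.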
The goal is to develop interpretable deep models, within a modular framework for NDT, that can run on embedded systems. The main case study will focus on electrical cables (TDR/FDR), with possible extensions to other NDT modalities such as ultrasound. The thesis combines optimization, learning, and physical modeling, and is intended for a candidate interested in interdisciplinary research across engineering sciences, applied mathematics, and artificial intelligence.
Grounding and reasoning over space and time in Vision-Language Models (VLM)
Recent Vision-Language Models (VLMs) like BLIP, LLaVA, and Qwen-VL have achieved impressive results in multimodal tasks but still face limitations in true spatial and temporal reasoning. Many current benchmarks conflate visual reasoning with general knowledge and involve shallow reasoning tasks. Furthermore, these models often struggle with understanding complex spatial relations and dynamic scenes due to suboptimal visual feature usage. To address this, recent approaches such as SpatialRGPT, SpaceVLLM, VPD, and ST-VLM have introduced techniques like 3D scene graph integration, spatio-temporal queries, and kinematic instruction tuning to improve reasoning over space and time. This thesis proposes to build on these advances by developing new instruction-tuned models with improved data representation and architectural innovations. The goal is to enable robust spatio-temporal reasoning for applications in robotics, video analysis, and dynamic environment understanding.
Adaptive and explainable Video Anomaly Detection
Video Anomaly Detection (VAD) aims to automatically identify unusual events in video that deviate from normal patterns. Existing methods often rely on One-Class or Weakly Supervised learning: the former uses only normal data for training, while the latter leverages video-level labels. Recent advances in Vision-Language Models (VLMs) and Large Language Models (LLMs) have improved both the performance and explainability of VAD systems. Despite progress on public benchmarks, challenges remain. Most methods are limited to a single domain, leading to performance drops when applied to new datasets with different anomaly definitions. Additionally, they assume all training data is available upfront, which is unrealistic for real-world deployment where models must adapt to new data over time. Few approaches explore multimodal adaptation using natural language rules to define normal and abnormal events, offering a more intuitive and flexible way to update VAD systems without needing new video samples.
This PhD research aims to develop adaptable Video Anomaly Detection methods capable of handling new domains or anomaly types using few video examples and/or textual rules.
The main lines of research will be the following:
• Cross-Domain Adaptation in VAD: improving robustness against domain gaps through Few-Shot adaptation;
• Continual Learning in VAD: continually enriching the model to deal with new types of anomalies;
• Multimodal Few-Shot Learning: facilitating the model adaptation process through rules in natural language.
A theoretical framework for the task-based optimal design of Modular and Reconfigurable Serial Robots for rapid deployment
The innovations that gave rise to industrial robots date back to the sixties and seventies. They have enabled a massive deployment of industrial robots that transformed factory floors, at least in industrial sectors such as car manufacturing and other mass production lines.
However, such robots do not fit the requirements of other promising applications that have appeared and developed in fields such as laboratory research, space robotics, medical robotics, inspection and maintenance automation, agricultural robotics, service robotics and, of course, humanoids. Only a few of these sectors have seen large-scale deployment and commercialization of robotic systems; most others are advancing slowly and incrementally towards that goal.
This begs the following question: is it due to unsuitable hardware (insufficient physical capabilities to generate the required motions and forces), to limited software capabilities (control systems, perception, decision support, learning, etc.), or to a lack of new design paradigms capable of meeting the needs of these applications (agile and scalable custom-design approaches)?
The unprecedented explosion of data science, machine learning and AI across science, technology and society may be seen as a compelling solution, and a radical transformation is taking shape (or is anticipated), with the promise of empowering the next generations of robots with AI (both predictive and generative). Research therefore tends to pay increasing attention to the software aspects (learning, decision support, coding, etc.), perhaps to the detriment of more advanced physical capabilities (hardware) and new concepts (design paradigms). It is clear, however, that the cognitive aspects of robotics, including learning, control and decision support, are useful only if suitable physical embodiments are available to meet the needs of the various tasks that can be robotized, hence the need for adapted design methodologies and hardware.
The aim of this thesis is thus to focus on design paradigms and hardware, and in particular on the optimal design of rapidly produced serial robots based on given families of standardized "modules", whose layout will be optimized according to the requirements of tasks that cannot be performed by the industrial robots available on the market. The ambition is to determine whether and how a paradigm shift is possible in robot design, from fixed-catalogue products to rapidly available bespoke machines.
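The task-based selection of standardized modules can be caricatured as a tiny combinatorial search: given a hypothetical library of link modules (lengths and masses below are made up), enumerate serial layouts and keep the lightest one whose total length meets a required reach. A real design problem would of course involve kinematics, payload, stiffness and actuation constraints, not just reach.

```python
from itertools import product

# Hypothetical module library: name -> (length in mm, mass in kg).
MODULES = {"S": (150, 0.8), "M": (300, 1.4), "L": (500, 2.3)}

def design_arm(required_reach_mm, max_links=4):
    """Enumerate serial layouts of standard link modules and return the
    lightest (layout, mass, reach) whose total length meets the reach."""
    best = None
    for n in range(1, max_links + 1):
        for layout in product(MODULES, repeat=n):
            reach = sum(MODULES[m][0] for m in layout)
            mass = sum(MODULES[m][1] for m in layout)
            if reach >= required_reach_mm and (best is None or mass < best[1]):
                best = (layout, mass, reach)
    return best

# Lightest layout reaching at least 900 mm with up to four links.
layout, mass, reach = design_arm(900)
```

Even this toy version shows why the problem is combinatorial: the search space grows exponentially with the number of modules, motivating the theoretical optimization framework targeted by the thesis.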
The successful candidate will enrol at the « Ecole Doctorale Mathématiques, STIC » of Nantes Université (ED-MASTIC), and he or she will be hosted for three years in the CEA-LIST Interactive Robotics Unit under the supervision of Dr Farzam Ranjbaran. Professors Yannick Aoustin (Nantes) and Clément Gosselin (Laval) will provide academic guidance and joint supervision for a successful completion of the thesis.
A follow-up to this thesis is strongly considered in the form of a one-year Post-Doctoral fellowship to which the candidate will be able to apply, upon successful completion of all the requirements of the PhD Degree. This Post-Doctoral fellowship will be hosted at the « Centre de recherche en robotique, vision et intelligence machine (CeRVIM) », Université Laval, Québec, Canada.
Development of an online measurement method for radioactive gases based on porous scintillators
As the national metrology laboratory for ionizing radiation, the Henri Becquerel National Laboratory (LNE-LNHB) of the French Alternative Energies and Atomic Energy Commission (CEA) operates unique facilities dedicated to radionuclide metrology. These include various setups for producing liquid-phase standards, as well as systems for mixing radioactive gases. In previous research projects, a specific installation was developed for the generation of radioactive gas atmospheres [1], with the aim of creating new testing and calibration methods that meet the needs of both research and industry.
One of the major current challenges is to reproduce environmental conditions as realistically as possible in order to better address actual regulatory requirements—particularly regarding volumetric activity and measurement conditions. This general issue applies to all radioactive substances, but is especially critical for volatile radioactive substances. Over the past several years, through numerous projects and collaborations, CEA/LNHB has been exploring new detection methods that outperform traditional liquid scintillation techniques. Among these innovations are new porous inorganic scintillators [1], which enable not only online detection but also online separation (“unmixing”) of pure beta-emitting radionuclides—this technique has been patented [2].
The objective of this PhD project is to develop, implement, and optimize these measurement methods through applications to:
- Pure radioactive gases,
- Multicomponent mixtures of pure beta-emitting radioactive gases, using porous scintillators for unmixing and identification,
- Liquid scintillation counting, more generally, where this unmixing capability has recently been demonstrated at LNHB and is currently being prepared for publication.
The unmixing technique is of particular interest, as it significantly simplifies environmental monitoring by scintillation, especially in the case of ³H and ¹⁴C mixtures. Currently, such analyses require multiple bubbler samplings, mixing with scintillation cocktail, and triple-label methods: procedures that involve several months of calibration preparation and weeks of experimentation and processing.
This PhD will be closely aligned with a second doctoral project on Compton-TDCR [1] (2025–2028), aimed at determining the response curve of the scintillators.
The scientific challenges of the project are tied to radionuclide metrology and combine experimentation, instrumentation, and data analysis to develop innovative measurement techniques. Key objectives include:
- Developing a method for beta-emitter unmixing in scintillation, based on initial published and patented concepts.
- Assessing the precision of the unmixing method, including associated uncertainties and decision thresholds.
- Validating the unmixing technique using the laboratory's radioactive gas test bench [1], with various radionuclides such as ³H, ¹⁴C, ¹³³Xe, ⁸⁵Kr or ²²²Rn, or via conventional liquid scintillation counting.
- Enhancing the unmixing model, potentially through the use of machine learning or artificial intelligence tools, particularly for complex multicomponent mixtures.
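The unmixing step itself can be sketched as a non-negative linear inverse problem: the measured spectrum is decomposed on reference spectra of candidate radionuclides with non-negative activity coefficients. The spectra below are synthetic Gaussian stand-ins (real reference spectra would come from the calibration bench), and the solver is a plain projected-gradient non-negative least squares, not the laboratory's patented method.

```python
import numpy as np

def unmix(measured, components, n_iter=5000):
    """Estimate non-negative activities a with components @ a ~ measured,
    via projected gradient descent on the least-squares objective."""
    A = np.asarray(components, dtype=float)   # (n_channels, n_radionuclides)
    y = np.asarray(measured, dtype=float)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    a = np.zeros(A.shape[1])
    for _ in range(n_iter):
        a = np.maximum(a - step * (A.T @ (A @ a - y)), 0.0)  # project onto a >= 0
    return a

# Synthetic stand-ins for reference spectra of a low-energy emitter (3H-like)
# and a higher-energy emitter (14C-like), normalised per energy channel.
channels = np.linspace(0.0, 1.0, 64)
spec_h3 = np.exp(-((channels - 0.1) / 0.08) ** 2)
spec_c14 = np.exp(-((channels - 0.4) / 0.2) ** 2)
spec_h3 /= spec_h3.sum()
spec_c14 /= spec_c14.sum()
refs = np.column_stack([spec_h3, spec_c14])

true_activities = np.array([3.0, 1.5])
measured = refs @ true_activities
est = unmix(measured, refs)
```

In practice the components overlap and the measurement is noisy, which is where uncertainty quantification, decision thresholds and possibly machine-learning refinements, as listed above, come into play.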