Long-term and non-invasive plant monitoring using MIR spectroscopy
The LCO (French acronym for Optical Sensors Laboratory) develops innovative silicon-integrated photonic components (optical sources, waveguides, photodetectors, etc.), sensors, and, ultimately, complete systems. From upstream technological research to industrial transfer, these sensors find applications in fields such as the environment, health, and security.
One of the laboratory's research topics is mid-infrared spectroscopy of dense samples using a photothermal detection technology. Having obtained convincing results applying our sensors to the monitoring of human physiological parameters, we now wish to adapt them to plants. First laboratory trials show encouraging results, but their interpretation is at this stage out of reach because of the complexity of both the measurement and the case study itself. Tackling this problem is the objective of the thesis.
To achieve this, the candidate will establish an experimental program with the help of instrumentation and plant-biology specialists. He or she will have access to the laboratory's computational and experimental resources, as well as to the CEA-Grenoble prototyping capabilities.
GenPhi: 3D Generative AI conditioned on geometry, structure and physics
The aim of this thesis is to design new 3D model generators based on Generative Artificial Intelligence (GenAI), capable of producing faithful, coherent and physically viable shapes. While 3D generation has become essential in many fields, current automatic generation approaches suffer from limitations in terms of respecting geometric, structural and physical constraints. The goal is to develop methods for integrating constraints related to geometry, topology, internal structure and physical laws, both stationary (equilibrium, statics) and dynamic (kinematics, deformation), right from the generation stage. The study will combine geometric perception, semantic enrichment and physical simulation approaches to produce robust, realistic 3D models that can be directly exploited without human intervention.
Towards Reliable and Autonomous Workflow Coordination in Agentic AI-Based Systems
The rise of Large Language Models (LLMs) and agentic AI systems is transforming how complex workflows are designed and managed. Unlike traditional centralized orchestration, modern workflows must support distributed, autonomous agents operating across cloud, edge, and on-premise environments. These agents collaborate with humans and other systems, adapt to evolving goals, and cross organizational and trust boundaries. This paradigm shift is especially relevant in domains like cybersecurity and healthcare emergency response, where workflows must be dynamically constructed and executed under uncertainty. In such settings, rigid automation falls short—agentic workflows require decentralized, secure, and auditable orchestration.
This thesis explores how to enable such systems, asking: How can we achieve secure, distributed orchestration in environments where agentic AI operates autonomously? It will propose a formal modeling framework for distributed agentic workflows, protocols for auditable, privacy-preserving coordination, and a reference architecture with real-world proofs of concept in cybersecurity and healthcare.
Robust and Secure Federated Learning
Federated Learning (FL) allows multiple clients to collaboratively train a global model without sharing their raw data. While this decentralized setup is appealing for privacy-sensitive domains like healthcare and finance, it is not inherently secure: model updates can leak private information, and malicious clients can corrupt training.
To tackle these challenges, two main strategies are used: Secure Aggregation, which protects privacy by hiding individual updates, and Robust Aggregation, which filters out malicious updates. However, these goals can conflict—privacy mechanisms may obscure signs of malicious behavior, and robustness methods may violate privacy.
Moreover, most research focuses on model-level attacks, neglecting protocol-level threats like message delays or dropped updates, which are common in real-world, asynchronous networks.
This thesis aims to explore the privacy–robustness trade-off in FL, identify feasible security models, and design practical, secure, and robust protocols. Both theoretical analysis and prototype implementation will be conducted, leveraging tools like Secure Multi-Party Computation, cryptographic techniques, and differential privacy.
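To make the privacy–robustness tension concrete, the following is a minimal NumPy sketch (an illustration, not part of the thesis proposal itself): plain federated averaging is easily skewed by a single malicious client, while a coordinate-wise median, one standard robust aggregator, bounds that influence. The catch, as noted above, is that a robust aggregator like the median needs to inspect individual updates, which is precisely what secure aggregation is designed to hide.

```python
import numpy as np

def fedavg(updates):
    """Plain federated averaging: the unweighted mean of client updates."""
    return np.mean(np.asarray(updates, dtype=float), axis=0)

def coordinate_median(updates):
    """Robust aggregation: coordinate-wise median, which bounds the
    influence of a minority of Byzantine (malicious) clients."""
    return np.median(np.asarray(updates, dtype=float), axis=0)

# Three honest clients send similar updates; one Byzantine client
# sends an extreme update to poison the global model.
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
byzantine = [np.array([100.0, -100.0])]
all_updates = honest + byzantine

mean_agg = fedavg(all_updates)             # dragged far away by the outlier
median_agg = coordinate_median(all_updates)  # stays near the honest updates
```

Running the sketch, the mean is pulled to roughly (25.75, -24.25) by the single attacker, while the median stays close to (1.05, 0.95), near the honest consensus.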
VHEE Radiotherapy with Electron Beams from a Laser-Plasma Accelerator
The research programs conducted at the Lasers Interactions and Dynamics Laboratory of the French Atomic Energy Commission (CEA) aim to understand the fundamental processes involved in light-matter interactions and their applications. As part of the CEA-LIDYL, the Physics at High Intensity (PHI) group studies laser-matter interactions at extreme intensities, at which matter turns into an ultra-relativistic plasma. Using theory, simulations and experiments, researchers develop and test new concepts to control the laser-plasma interaction, with the aim of producing novel relativistic electron and X-UV attosecond light sources with potential applications to fundamental research, medicine and industry.
In collaboration with the Lawrence Berkeley National Laboratory, the group strongly contributes to the development of the WarpX code, used for high-fidelity modelling of laser-matter interactions. It also pioneered the study and control of remarkable optical components called 'plasma mirrors', obtained by focusing a high-power, high-contrast laser on an initially solid target. In the past five years, the PHI group has developed core concepts exploiting plasma mirrors to manipulate extreme light and push the frontiers of high-field science. One of these concepts uses plasma mirrors as high-charge injectors to raise the charge produced in laser-plasma accelerators (LPAs) and enable their use for medical studies such as very high energy electron (VHEE) radiotherapy. This concept is being implemented at CEA on the UHI100 100 TW laser facility in 2025, to deliver 100-200 MeV electron beams with 100 pC of charge per bunch for the study of high-dose-rate deposition of VHEE beams on biological samples.
In this context, the PhD candidate will use our simulation tool WarpX to optimize the properties of the electron beams produced by LPAs for VHEE studies (beam quality and final energy). He/She will then study how the LPA electron beam deposits its energy in water samples (as a biological medium) using Geant4. This will help assess dose deposition at ultra-high dose rates and develop novel dosimetry techniques for VHEE LPA electron beams. Finally, the production and fate of Reactive Oxygen Species (ROS) in water will be studied using the Geant4-DNA toolkit. This module mainly contains data tabulated at electron energies below 10 MeV, and will therefore require cross-sections of water-ionization processes measured in experiments at 100 MeV. These measurements will be performed on the UHI100 100 TW laser by the DICO group of the CEA-LIDYL, in collaboration with the PHI group.
AI Enhanced MBSE framework for joint safety and security analysis of critical systems
Critical systems must simultaneously meet the requirements of both Safety (preventing unintentional failures that could lead to damage) and Security (protecting against malicious attacks). Traditionally, these two areas are treated separately, whereas they are interdependent: an attack (Security) can trigger a failure (Safety), and a functional flaw can be exploited as an attack vector.
MBSE approaches enable rigorous system modeling, but they do not always capture the explicit links between Safety [1] and Security [2]; risk analyses remain manual, time-consuming and error-prone. The complexity of modern systems makes it necessary to automate the evaluation of Safety-Security trade-offs.
Joint safety/security MBSE modeling has been widely addressed in several research works such as [3], [4] and [5]. The scientific challenge of this thesis is to use AI to automate and improve the quality of analyses. What type of AI should we use for each analysis step? How can we detect conflicts between safety and security requirements? What are the criteria for assessing the contribution of AI to joint safety/security analysis?
Grounding and reasoning over space and time in Vision-Language Models (VLM)
Recent Vision-Language Models (VLMs) like BLIP, LLaVA, and Qwen-VL have achieved impressive results in multimodal tasks but still face limitations in true spatial and temporal reasoning. Many current benchmarks conflate visual reasoning with general knowledge and involve shallow reasoning tasks. Furthermore, these models often struggle with understanding complex spatial relations and dynamic scenes due to suboptimal visual feature usage. To address this, recent approaches such as SpatialRGPT, SpaceVLLM, VPD, and ST-VLM have introduced techniques like 3D scene graph integration, spatio-temporal queries, and kinematic instruction tuning to improve reasoning over space and time. This thesis proposes to build on these advances by developing new instruction-tuned models with improved data representation and architectural innovations. The goal is to enable robust spatio-temporal reasoning for applications in robotics, video analysis, and dynamic environment understanding.
Adaptive and explainable Video Anomaly Detection
Video Anomaly Detection (VAD) aims to automatically identify unusual events in video that deviate from normal patterns. Existing methods often rely on One-Class or Weakly Supervised learning: the former uses only normal data for training, while the latter leverages video-level labels. Recent advances in Vision-Language Models (VLMs) and Large Language Models (LLMs) have improved both the performance and explainability of VAD systems. Despite progress on public benchmarks, challenges remain. Most methods are limited to a single domain, leading to performance drops when applied to new datasets with different anomaly definitions. Additionally, they assume all training data is available upfront, which is unrealistic for real-world deployment where models must adapt to new data over time. Few approaches explore multimodal adaptation using natural language rules to define normal and abnormal events, offering a more intuitive and flexible way to update VAD systems without needing new video samples.
This PhD research aims to develop adaptable Video Anomaly Detection methods capable of handling new domains or anomaly types using few video examples and/or textual rules.
The main lines of research will be the following:
• Cross-Domain Adaptation in VAD: improving robustness against domain gaps through Few-Shot adaptation;
• Continual Learning in VAD: continually enriching the model to deal with new types of anomalies;
• Multimodal Few-Shot Learning: facilitating the model adaptation process through rules in natural language.
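The One-Class setting mentioned above can be illustrated with a minimal sketch (a toy baseline for illustration only, not a proposed method): a Gaussian model is fitted on normal feature vectors alone (stand-ins for frame embeddings here), and the Mahalanobis distance to that model serves as the anomaly score, so events far from the normal cluster score high without any abnormal training data.

```python
import numpy as np

def fit_normal_model(normal_feats):
    """One-class baseline: fit a Gaussian (mean + covariance) on
    features extracted from normal videos only."""
    mu = normal_feats.mean(axis=0)
    cov = np.cov(normal_feats, rowvar=False) + 1e-6 * np.eye(normal_feats.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(feat, mu, cov_inv):
    """Mahalanobis distance to the normal model: larger = more anomalous."""
    d = feat - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-frame embeddings of normal videos.
normal_feats = rng.normal(0.0, 1.0, size=(500, 8))
mu, cov_inv = fit_normal_model(normal_feats)

typical = rng.normal(0.0, 1.0, size=8)   # looks like training data
unusual = np.full(8, 6.0)                # far from the normal cluster
```

As expected, the unusual sample scores far higher than the typical one; thresholding this score yields a detector trained without a single abnormal example, which is exactly the assumption the cross-domain and continual-learning research lines above set out to relax.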
A theoretical framework for the task-based optimal design of Modular and Reconfigurable Serial Robots for rapid deployment
The innovations that gave rise to industrial robots date back to the sixties and seventies. They have enabled a massive deployment of industrial robots that transformed factory floors, at least in industrial sectors such as car manufacturing and other mass production lines.
However, such robots do not fit the requirements of other interesting applications that have appeared and developed in fields such as laboratory research, space robotics, medical robotics, automation in inspection and maintenance, agricultural robotics, service robotics and, of course, humanoids. A small number of these sectors have seen large-scale deployment and commercialization of robotic systems, with most others advancing slowly and incrementally towards that goal.
This raises the following question: is this due to unsuitable hardware (insufficient physical capabilities to generate the required motions and forces), to software capabilities (control systems, perception, decision support, learning, etc.), or to a lack of new design paradigms capable of meeting the needs of these applications (agile and scalable custom-design approaches)?
The unprecedented explosion of data science, machine learning and AI in all areas of science, technology and society may be seen as a compelling solution, and a radical transformation is taking shape (or is anticipated), with the promise of empowering the next generations of robots with AI (both predictive and generative). Research therefore tends to pay increasing attention to the software aspects (learning, decision support, coding, etc.), perhaps to the detriment of more advanced physical capabilities (hardware) and new concepts (design paradigms). It is clear, however, that the cognitive aspects of robotics, including learning, control and decision support, are useful only if suitable physical embodiments are available to meet the needs of the various tasks that can be robotized, hence requiring adapted design methodologies and hardware.
The aim of this thesis is thus to focus on design paradigms and hardware, and in particular on the optimal design of rapidly produced serial robots based on given families of standardized « modules », whose layout will be optimized according to the requirements of tasks that cannot be performed by the industrial robots available on the market. The ambition is to answer the question of whether and how a paradigm shift is possible in robot design, from fixed-catalogue products to rapidly available bespoke designs.
The successful candidate will enrol at the « Ecole Doctorale Mathématiques, STIC » of Nantes Université (ED-MASTIC), and he or she will be hosted for three years in the CEA-LIST Interactive Robotics Unit under supervision of Dr Farzam Ranjbaran. Professors Yannick Aoustin (Nantes) and Clément Gosselin (Laval) will provide academic guidance and joint supervision for a successful completion of the thesis.
A follow-up to this thesis is strongly considered in the form of a one-year Post-Doctoral fellowship to which the candidate will be able to apply, upon successful completion of all the requirements of the PhD Degree. This Post-Doctoral fellowship will be hosted at the « Centre de recherche en robotique, vision et intelligence machine (CeRVIM) », Université Laval, Québec, Canada.
Enabling efficient federated learning and fine-tuning for heterogeneous and resource-constrained devices
The goal of this PhD thesis is to develop methods that enhance resource efficiency in federated learning (FL), with particular attention to the constraints and heterogeneity of client resources. The work will first focus on the classical client-server FL architecture, before extending the investigation to decentralised FL settings. The proposed methods will be studied in the context of both federated model training and distributed fine-tuning of large models, such as large language models (LLMs).