VHEE Radiotherapy with Electron Beams from a Laser-Plasma Accelerator
The research programs conducted at the Lasers, Interactions and Dynamics Laboratory (LIDYL) of the French Atomic Energy Commission (CEA) aim to understand the fundamental processes involved in light-matter interactions and their applications. As part of the CEA-LIDYL, the Physics at High Intensity (PHI) group conducts studies of laser-matter interactions at extreme intensities, for which matter turns into an ultra-relativistic plasma. Using theory, simulations and experiments, researchers develop and test new concepts to control the laser-plasma interaction with the aim of producing novel relativistic electron and X-UV attosecond light sources, with potential applications to fundamental research, medicine and industry.
In collaboration with the Lawrence Berkeley National Laboratory, the group strongly contributes to the development of the code WarpX, used for the high-fidelity modelling of laser-matter interactions. It also pioneered the study and control of remarkable optical components called ‘plasma mirrors’, which can be obtained by focusing a high-contrast, high-power laser on an initially solid target. In the past five years, the PHI group has developed core concepts exploiting plasma mirrors to manipulate extreme light and push the frontiers of high-field science. One of these concepts uses plasma mirrors as high-charge injectors to increase the charge produced in laser-plasma accelerators (LPAs) and enable their use for medical studies such as very high energy electron (VHEE) radiotherapy. This concept is being implemented at CEA on the UHI100 100 TW laser facility in 2025 to deliver 100-200 MeV electron beams with 100 pC charge per bunch for the study of high-dose-rate deposition of VHEE beams on biological samples.
In this context, the PhD candidate will use our simulation tool WarpX to optimize the properties of the electron beams produced by LPAs for VHEE studies (electron beam quality and final energy). He/she will then study how the LPA electron beam deposits its energy in water samples (as a biological medium) using Geant4. This will help assess dose deposition at ultra-high dose rates and develop novel dosimetry techniques for VHEE LPA electron beams. Finally, the production and fate of Reactive Oxygen Species (ROS) in water will be studied using the Geant4-DNA toolkit. This toolkit mainly contains data tabulated at electron energies below 10 MeV, and will therefore require cross-sections of water-ionization processes measured experimentally at 100 MeV. These measurements will be performed on the UHI100 100 TW laser by the DICO group of CEA-LIDYL, in collaboration with the PHI group.
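As an illustration of the dosimetry-analysis step described above (not Geant4 itself: the event list, phantom geometry and numbers below are invented placeholders), simulated energy-deposition events can be tallied into depth bins of a water phantom and converted to absorbed dose in gray:

```python
import numpy as np

# Hypothetical simulated energy-deposition events in a water phantom
# (a real study would read these from Geant4 output).
rng = np.random.default_rng(0)
n_events = 100_000
depth_cm = rng.exponential(scale=10.0, size=n_events)   # depth of each hit
edep_mev = rng.exponential(scale=0.05, size=n_events)   # energy deposited per hit

# Discretise the phantom into 1 cm slabs of 1 cm^2 cross-section
bin_edges = np.arange(0.0, 31.0, 1.0)                   # slab boundaries (cm)
slab_mass_kg = 1.0e-3                                   # 1 cm^3 of water ~ 1 g

# Sum deposited energy per slab, then convert MeV -> J -> Gy (J/kg)
edep_per_slab, _ = np.histogram(depth_cm, bins=bin_edges, weights=edep_mev)
MEV_TO_J = 1.602176634e-13
dose_gy = edep_per_slab * MEV_TO_J / slab_mass_kg

print(dose_gy[:5])  # dose in the first five slabs
```

The same tally, repeated per time slice of the bunch, is the starting point for dose-rate estimates at ultra-high dose rates.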
AI Enhanced MBSE framework for joint safety and security analysis of critical systems
Critical systems must simultaneously meet the requirements of both Safety (preventing unintentional failures that could lead to damage) and Security (protecting against malicious attacks). Traditionally, these two areas are treated separately, whereas they are interdependent: an attack (Security) can trigger a failure (Safety), and a functional flaw can be exploited as an attack vector.
Model-Based Systems Engineering (MBSE) approaches enable rigorous system modeling, but they do not always capture the explicit links between Safety [1] and Security [2]; risk analyses remain manual, time-consuming and error-prone. The complexity of modern systems makes it necessary to automate the evaluation of Safety-Security trade-offs.
Joint safety/security MBSE modeling has been widely addressed in several research works such as [3], [4] and [5]. The scientific challenge of this thesis is to use AI to automate and improve the quality of analyses. What type of AI should we use for each analysis step? How can we detect conflicts between safety and security requirements? What are the criteria for assessing the contribution of AI to joint safety/security analysis?
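The conflict-detection question above can be made concrete with a toy sketch: if each requirement is expressed as a constraint on a shared design attribute, conflicts surface where the two viewpoints demand different values (all attribute names and values here are invented, e.g. the classic fail-safe vs fail-secure tension on a door lock):

```python
# Toy conflict check between safety and security requirements, each expressed
# as a constraint on a shared design attribute (all names are illustrative).
safety_reqs = {
    "door_on_power_loss": "open",       # evacuation must remain possible
    "watchdog_timeout_s": "<= 1",
}
security_reqs = {
    "door_on_power_loss": "locked",     # intrusion must be prevented
    "log_retention_days": ">= 90",
}

# A conflict is any shared attribute constrained differently by the two views
conflicts = [
    (attr, safety_reqs[attr], security_reqs[attr])
    for attr in safety_reqs.keys() & security_reqs.keys()
    if safety_reqs[attr] != security_reqs[attr]
]

print(conflicts)  # [('door_on_power_loss', 'open', 'locked')]
```

A real MBSE tool would of course reason over typed model elements rather than strings; the sketch only shows where an AI-assisted analysis would plug in.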
Grounding and reasoning over space and time in Vision-Language Models (VLM)
Recent Vision-Language Models (VLMs) like BLIP, LLaVA, and Qwen-VL have achieved impressive results in multimodal tasks but still face limitations in true spatial and temporal reasoning. Many current benchmarks conflate visual reasoning with general knowledge and involve shallow reasoning tasks. Furthermore, these models often struggle with understanding complex spatial relations and dynamic scenes due to suboptimal visual feature usage. To address this, recent approaches such as SpatialRGPT, SpaceVLLM, VPD, and ST-VLM have introduced techniques like 3D scene graph integration, spatio-temporal queries, and kinematic instruction tuning to improve reasoning over space and time. This thesis proposes to build on these advances by developing new instruction-tuned models with improved data representation and architectural innovations. The goal is to enable robust spatio-temporal reasoning for applications in robotics, video analysis, and dynamic environment understanding.
Adaptive and explainable Video Anomaly Detection
Video Anomaly Detection (VAD) aims to automatically identify unusual events in video that deviate from normal patterns. Existing methods often rely on One-Class or Weakly Supervised learning: the former uses only normal data for training, while the latter leverages video-level labels. Recent advances in Vision-Language Models (VLMs) and Large Language Models (LLMs) have improved both the performance and explainability of VAD systems. Despite progress on public benchmarks, challenges remain. Most methods are limited to a single domain, leading to performance drops when applied to new datasets with different anomaly definitions. Additionally, they assume all training data is available upfront, which is unrealistic for real-world deployment where models must adapt to new data over time. Few approaches explore multimodal adaptation using natural language rules to define normal and abnormal events, offering a more intuitive and flexible way to update VAD systems without needing new video samples.
This PhD research aims to develop adaptable Video Anomaly Detection methods capable of handling new domains or anomaly types using few video examples and/or textual rules.
The main lines of research will be the following:
• Cross-Domain Adaptation in VAD: improving robustness against domain gaps through Few-Shot adaptation;
• Continual Learning in VAD: continually enriching the model to deal with new types of anomalies;
• Multimodal Few-Shot Learning: facilitating the model adaptation process through rules in natural language.
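The one-class setting described above can be sketched minimally: fit simple statistics of "normal" feature vectors and flag frames whose features lie far from them (the features, data and threshold rule here are all hypothetical stand-ins for a learned video representation):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical per-frame feature vectors extracted from normal training videos
normal_feats = rng.normal(0.0, 1.0, size=(500, 16))

# Fit a simple "normal" model: per-dimension mean and standard deviation
mu = normal_feats.mean(axis=0)
std = normal_feats.std(axis=0) + 1e-8

def anomaly_score(feats: np.ndarray) -> np.ndarray:
    """Mahalanobis-style distance to the normal statistics (diagonal cov)."""
    return np.sqrt((((feats - mu) / std) ** 2).sum(axis=1))

# Threshold set as a high quantile of scores on the normal training data
threshold = np.quantile(anomaly_score(normal_feats), 0.99)

test_feats = np.vstack([rng.normal(0, 1, (5, 16)),      # normal-like frames
                        rng.normal(4, 1, (5, 16))])     # anomalous frames
flags = anomaly_score(test_feats) > threshold
print(flags)
```

Cross-domain adaptation then amounts to updating `mu`, `std` and the threshold from a few target-domain samples, and the multimodal line of research to deriving them from natural-language rules instead.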
A theoretical framework for the task-based optimal design of Modular and Reconfigurable Serial Robots for rapid deployment
The innovations that gave rise to industrial robots date back to the sixties and seventies. They have enabled a massive deployment of industrial robots that transformed factory floors, at least in industrial sectors such as car manufacturing and other mass production lines.
However, such robots do not fit the requirements of other interesting applications that have appeared and developed in fields such as laboratory research, space robotics, medical robotics, automation in inspection and maintenance, agricultural robotics, service robotics and, of course, humanoids. A small number of these sectors have seen large-scale deployment and commercialization of robotic systems, while most others advance slowly and incrementally towards that goal.
This raises the following question: is it due to unsuitable hardware (insufficient physical capabilities to generate the required motions and forces), to insufficient software capabilities (control systems, perception, decision support, learning, etc.), or to a lack of new design paradigms capable of meeting the needs of these applications (agile and scalable custom-design approaches)?
The unprecedented explosion of data science, machine learning and AI in all areas of science, technology and society may be seen as a compelling solution, and a radical transformation is taking shape (or is anticipated), with the promise of empowering the next generations of robots with AI (both predictive and generative). Research therefore tends to pay increasing attention to the software aspects (learning, decision support, coding, etc.), perhaps to the detriment of more advanced physical capabilities (hardware) and new concepts (design paradigms). It is clear, however, that the cognitive aspects of robotics, including learning, control and decision support, are useful only if suitable physical embodiments are available to meet the needs of the various tasks that can be robotized, hence requiring adapted design methodologies and hardware.
The aim of this thesis is thus to focus on design paradigms and hardware, and in particular on the optimal design of rapidly produced serial robots based on given families of standardized « modules » whose layout will be optimized according to the requirements of tasks that cannot be performed by the industrial robots available on the market. The ambition is to answer the question of whether and how a paradigm shift is possible in robot design, from fixed-catalogue products to rapidly available bespoke designs.
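As a deliberately toy illustration of task-based module-layout optimisation (the module catalogue, its link lengths and masses, and the task requirement are all invented), a brute-force search over serial arrangements of standardized link modules can select the lightest layout meeting a required reach:

```python
from itertools import product

# Hypothetical module catalogue: (link length in m, mass in kg)
modules = {"S": (0.15, 1.0), "M": (0.30, 2.2), "L": (0.60, 5.0)}

def evaluate(layout):
    """Total reach and mass of a serial chain of link modules."""
    reach = sum(modules[m][0] for m in layout)
    mass = sum(modules[m][1] for m in layout)
    return reach, mass

required_reach = 1.0  # hypothetical task requirement (m)

# Enumerate all 3- and 4-module layouts; keep the lightest meeting the reach
best = None
for n in (3, 4):
    for layout in product(modules, repeat=n):
        reach, mass = evaluate(layout)
        if reach >= required_reach and (best is None or mass < best[1]):
            best = (layout, mass, reach)

print(best)  # lightest admissible layout, its mass and its reach
```

A real design study would replace `evaluate` with kinematic, force and stiffness models over the full task specification; the exhaustive search then gives way to combinatorial or gradient-based optimisation.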
The successful candidate will enrol at the « Ecole Doctorale Mathématiques, STIC » of Nantes Université (ED-MASTIC) and will be hosted for three years in the CEA-LIST Interactive Robotics Unit under the supervision of Dr Farzam Ranjbaran. Professors Yannick Aoustin (Nantes) and Clément Gosselin (Laval) will provide academic guidance and joint supervision for the successful completion of the thesis.
A follow-up to this thesis is strongly considered in the form of a one-year Post-Doctoral fellowship to which the candidate will be able to apply, upon successful completion of all the requirements of the PhD Degree. This Post-Doctoral fellowship will be hosted at the « Centre de recherche en robotique, vision et intelligence machine (CeRVIM) », Université Laval, Québec, Canada.
Enabling efficient federated learning and fine-tuning for heterogeneous and resource-constrained devices
The goal of this PhD thesis is to develop methods that enhance resource efficiency in federated learning (FL), with particular attention to the constraints and heterogeneity of client resources. The work will first focus on the classical client-server FL architecture, before extending the investigation to decentralised FL settings. The proposed methods will be studied in the context of both federated model training and distributed fine-tuning of large models, such as large language models (LLMs).
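A minimal sketch of the classical client-server round mentioned above (FedAvg-style aggregation weighted by client dataset size; the linear model, clients and data are purely illustrative stand-ins for a real federated workload):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, data_x, data_y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * data_x.T @ (data_x @ w - data_y) / len(data_y)
        w -= lr * grad
    return w

# Heterogeneous clients: different (hypothetical) dataset sizes
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for n in (20, 50, 200):
    x = rng.normal(size=(n, 3))
    y = x @ true_w + 0.01 * rng.normal(size=n)
    clients.append((x, y))

global_w = np.zeros(3)
for _ in range(20):                      # communication rounds
    local_ws, sizes = [], []
    for x, y in clients:
        local_ws.append(local_update(global_w, x, y))
        sizes.append(len(y))
    # FedAvg: aggregate local models weighted by client dataset size
    sizes = np.array(sizes, dtype=float)
    global_w = np.average(local_ws, axis=0, weights=sizes / sizes.sum())

print(global_w)  # approaches true_w
```

Resource-efficiency research of the kind targeted here would act on this loop, e.g. by compressing the exchanged updates, letting constrained clients train smaller sub-models, or replacing the server with decentralised neighbour-to-neighbour averaging.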
Development of an online measurement method for radioactive gases based on porous scintillators
As the national metrology laboratory for ionizing radiation, the Henri Becquerel National Laboratory (LNE-LNHB) of the French Alternative Energies and Atomic Energy Commission (CEA) operates unique facilities dedicated to radionuclide metrology. These include various setups for producing liquid-phase standards, as well as systems for mixing radioactive gases. In previous research projects, a specific installation was developed for the generation of radioactive gas atmospheres [1], with the aim of creating new testing and calibration methods that meet the needs of both research and industry.
One of the major current challenges is to reproduce environmental conditions as realistically as possible in order to better address actual regulatory requirements—particularly regarding volumetric activity and measurement conditions. This general issue applies to all radioactive substances, but is especially critical for volatile radioactive substances. Over the past several years, through numerous projects and collaborations, CEA/LNHB has been exploring new detection methods that outperform traditional liquid scintillation techniques. Among these innovations are new porous inorganic scintillators [1], which enable not only online detection but also online separation (“unmixing”) of pure beta-emitting radionuclides—this technique has been patented [2].
The objective of this PhD project is to develop, implement, and optimize these measurement methods through applications to:
- Pure radioactive gases,
- Multicomponent mixtures of pure beta-emitting radioactive gases—using porous scintillators for unmixing and identification,
- Liquid scintillation counting more generally, where this unmixing capability has recently been demonstrated at LNHB and is currently being prepared for publication.
The unmixing technique is of particular interest, as it significantly simplifies environmental monitoring by scintillation, especially in the case of ³H and ¹⁴C mixtures. Currently, such analyses require multiple bubbler samplings, mixing with scintillation cocktail, and triple-label methods—procedures that involve several months of calibration preparation and weeks of experimentation and processing.
This PhD will be closely aligned with a second doctoral project on Compton-TDCR [1] (2025–2028), aimed at determining the response curve of the scintillators.
The scientific challenges of the project are tied to radionuclide metrology and combine experimentation, instrumentation, and data analysis to develop innovative measurement techniques. Key objectives include:
- Developing a method for beta-emitter unmixing in scintillation, based on initial published and patented concepts.
- Assessing the precision of the unmixing method, including associated uncertainties and decision thresholds.
- Validating the unmixing technique using the laboratory’s radioactive gas test bench [1], with various radionuclides such as ³H, ¹⁴C, ¹³³Xe, ⁸⁵Kr, ²²²Rn, ..., or via conventional liquid scintillation counting.
- Enhancing the unmixing model, potentially through the use of machine learning or artificial intelligence tools, particularly for complex multicomponent mixtures.
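The unmixing objective above can be illustrated with a toy example: given reference spectra of two beta emitters (the spectra below are synthetic placeholders, not real ³H or ¹⁴C data), a measured mixture is decomposed into its components by non-negative least squares:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic unit-area reference spectra for two hypothetical beta emitters
channels = np.arange(256)
ref_a = np.exp(-channels / 30.0)
ref_a /= ref_a.sum()                                   # "low-energy" emitter
ref_b = np.exp(-((channels - 120) / 50.0) ** 2)
ref_b /= ref_b.sum()                                   # "higher-energy" emitter

# Simulated measurement: 70% emitter A + 30% emitter B, with Poisson noise
rng = np.random.default_rng(1)
true_mix = 0.7 * ref_a + 0.3 * ref_b
measured = rng.poisson(true_mix * 100_000).astype(float)

# Unmix: solve measured ~ A @ x with x >= 0, then normalise the contributions
A = np.column_stack([ref_a, ref_b])
coeffs, residual = nnls(A, measured)
fractions = coeffs / coeffs.sum()
print(fractions)  # close to [0.7, 0.3]
```

In the thesis, the reference spectra would come from calibrated measurements with their uncertainties, and the plain least-squares step is exactly where machine-learning tools could improve robustness for complex multicomponent mixtures.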
Internalisation of external knowledge by foundation models
To perform an unknown task, a subject (human or robot) has to consult external information, which involves a cognitive cost. After several similar experiments, it masters the situation and can act automatically. The 1980s and 1990s saw explorations in AI using conceptual graphs and schemas, but their large-scale implementation was limited by the technology available at the time.
Today's neural models, including transformers and LLM/VLMs, learn universal representations through pre-training on huge amounts of data. They can be used with prompts to provide local context. Fine-tuning allows these models to be specialised for specific tasks.
RAG and GraphRAG methods can be used to exploit external knowledge, but relying on them at inference time is resource-intensive. This thesis proposes a cognitivist approach in which the system undergoes continual learning: it consults external sources during inference and periodically uses this information to refine itself, analogous to memory consolidation during sleep. This method aims to improve performance and reduce resource consumption.
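The consult-then-internalise loop can be sketched in miniature: retrieve external passages at inference time, log which ones were actually used, and periodically replay that log for fine-tuning. Only the retrieval and logging half is shown; the documents and the word-overlap scoring are illustrative placeholders for a real embedding-based retriever:

```python
# Toy lexical retriever over an external knowledge base, with a usage log
# that a continual-learning loop could later replay for fine-tuning.
documents = [
    "robots exchange messages over ROS topics",
    "transformers rely on attention layers",
    "plasma mirrors reflect ultra-intense laser pulses",
]

usage_log = []  # passages consulted at inference time, to internalise later

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query and log the hits."""
    q_words = set(query.lower().split())
    scores = [len(q_words & set(d.lower().split())) for d in documents]
    ranked = sorted(range(len(documents)), key=lambda i: -scores[i])
    hits = [documents[i] for i in ranked[:k]]
    usage_log.extend(hits)  # what "sleep-time" fine-tuning would consume
    return hits

print(retrieve("which layers do transformers rely on"))
```

The thesis would replace the lexical scores with learned embeddings and add the offline step that fine-tunes the model on `usage_log`, so that frequently consulted knowledge no longer needs retrieval at all.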
In humans, these processes are linked to the spatial organisation of the brain. The thesis will also study network architectures inspired by this organisation, with dedicated but interconnected “zones”, such as the vision-language and language models.
These concepts can be applied to the Astir and Ridder projects, which aim to exploit foundation models for software engineering in robotics and the development of generative AI methods for the safe control of robots.
New experimental constraints on the weak interaction coupling constants by coincidence measurements of complex decay schemes
Accurate experimental knowledge of forbidden non-unique beta transitions, which constitute about one third of all known beta transitions, is an important and very difficult subject. Only a few reliable studies exist in the literature. Indeed, the continuous energy spectrum of these transitions is difficult to measure precisely for various reasons that accumulate: high diffusivity of electrons in matter and non-linearity of the detection system, unavailability of some radionuclides and presence of impurities, long half-lives and complex decay schemes, etc. Accurate theoretical predictions are equally difficult because of the necessity of coupling different models for the atomic, the nuclear and the weak interaction parts in the same fully relativistic formalism. However, improving our knowledge of forbidden non-unique beta transitions is essential in radioactivity metrology to define the becquerel SI unit in the case of pure beta emitters. This can have a strong impact in nuclear medicine, in the nuclear industry, and in some studies in fundamental physics such as dark matter detection and neutrino physics.
Our recent study, both theoretical and experimental, of the second forbidden non-unique transition in 99Tc decay has highlighted that forbidden non-unique transitions can be particularly sensitive to the effective values of the weak interaction coupling constants. The latter act as multiplicative factors of the nuclear matrix elements. The use of effective values compensates for the approximations used in the nuclear structure models, such as simplified correlations between the nucleons in the valence space, or the absence of core excitation. However, they can only be adjusted by comparing with a high-precision experimental spectrum. The predictability of the theoretical calculations, even the most precise currently available, is thus strongly questioned. While it has already been demonstrated that universal values cannot be fixed, effective values for each type of transition, or for a specific nuclear model, are possible. The aim of this thesis is therefore to establish new experimental constraints on the weak interaction coupling constants by precisely measuring the energy spectra of beta transitions. Ultimately, establishing robust average effective values of these coupling constants will be possible, and a real predictive power for theoretical calculations of beta decay will be obtained.
Most of the transitions of interest for constraining the coupling constants have energies greater than 1 MeV, occur in complex decay schemes and are associated with the emission of multiple gamma photons. In this situation, the best strategy consists in beta-gamma detection in coincidence. The usual detection techniques in nuclear physics are appropriate, but they must be extremely well implemented and controlled. The doctoral student will rely on the results obtained in two previous theses. To minimize self-absorption of the electrons in the source, they will have to adapt a preparation technique for ultra-thin radioactive sources developed at LNHB to the high activities that will be required. They will have to implement a new apparatus, in a dedicated vacuum chamber, combining coincidence detection between two silicon detectors and two gamma detectors. Several studies, both mechanical and by Monte Carlo simulation, will be necessary to optimize the geometric configuration with regard to the different constraints. The optimization of the electronics, acquisition, signal processing, data analysis and spectral deconvolution, as well as the development of a complete and robust uncertainty budget, will all be topics covered. These instrumental developments will make possible the high-precision measurement of the spectra from ³⁶Cl, ⁵⁹Fe, ⁸⁷Rb, ¹⁴¹Ce or ¹⁷⁰Tm decays. This very comprehensive subject will allow the doctoral student to acquire instrumental and analytical skills that will open up many career opportunities. The candidate should have good knowledge of nuclear instrumentation, programming and Monte Carlo simulations, as well as a reasonable knowledge of nuclear decay.
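The beta-gamma coincidence selection at the heart of this strategy can be sketched as a time-window match between two event lists (the timestamps, delay and window below are simulated placeholders; a real acquisition would provide calibrated timestamps per detector):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated timestamps (ns): half the gamma events follow a beta event with a
# ~50 ns delay (true coincidences); the rest are uncorrelated background.
beta_t = np.sort(rng.uniform(0.0, 1e6, 2000))
true_gamma = beta_t[:1000] + rng.normal(50.0, 5.0, 1000)
gamma_t = np.sort(np.concatenate([true_gamma, rng.uniform(0.0, 1e6, 1000)]))

def coincidences(t1, t2, window_ns):
    """Pair each t1 event with every t2 event within +/- window_ns of it."""
    lo = np.searchsorted(t2, t1 - window_ns, side="left")
    hi = np.searchsorted(t2, t1 + window_ns, side="right")
    return [(i, j) for i in range(len(t1)) for j in range(lo[i], hi[i])]

# Shift the beta times by the mean delay before matching; a real setup would
# calibrate this offset from the measured time-difference spectrum.
pairs = coincidences(beta_t + 50.0, gamma_t, window_ns=25.0)
print(len(pairs))  # ~1000 true pairs plus a small accidental background
```

The accidental rate scales with the window width and the singles rates, which is precisely the trade-off the uncertainty budget of the thesis must quantify.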
Development of ultra-high-resolution magnetic microcalorimeters for isotopic analysis of actinides by X-ray and gamma-ray spectrometry
The PhD project focuses on the development of ultra-high-resolution magnetic microcalorimeters (MMCs) to improve the isotopic analysis of actinides (uranium, plutonium) by X- and gamma-ray spectrometry around 100 keV. This type of analysis, which is essential for the nuclear fuel cycle and non-proliferation efforts, traditionally relies on HPGe detectors, whose limited energy resolution constrains measurement accuracy. To overcome these limitations, the project aims to employ cryogenic MMC detectors operating at temperatures below 100 mK, capable of achieving energy resolutions ten times better than that of HPGe detectors. The MMCs will be microfabricated at CNRS/C2N using superconducting and paramagnetic microstructures, and subsequently tested at LNHB. Once calibrated, they will be used to precisely measure the photon spectra of actinides in order to determine the fundamental atomic and nuclear parameters of the isotopes under study with high accuracy. The resulting data will enhance the nuclear and atomic databases used in deconvolution codes, thereby enabling more reliable and precise isotopic analysis of actinides.
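The benefit of the ten-fold resolution gain can be illustrated numerically: two nearby photon lines (the 200 eV spacing and both FWHM values below are hypothetical round numbers, not measured actinide data) merge at an HPGe-like resolution but separate at an MMC-like one:

```python
import numpy as np

def spectrum(energies_ev, centers_ev, fwhm_ev):
    """Sum of unit-height Gaussian peaks broadened by the detector's FWHM."""
    sigma = fwhm_ev / 2.3548  # FWHM -> standard deviation
    return sum(np.exp(-0.5 * ((energies_ev - c) / sigma) ** 2)
               for c in centers_ev)

e = np.linspace(99_500, 100_500, 2001)   # eV grid around 100 keV
lines = [99_900.0, 100_100.0]            # two hypothetical lines 200 eV apart

def n_peaks(y):
    """Count strict local maxima as a crude resolvability check."""
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))

print(n_peaks(spectrum(e, lines, fwhm_ev=500.0)))  # HPGe-like: unresolved
print(n_peaks(spectrum(e, lines, fwhm_ev=50.0)))   # MMC-like: resolved
```

Resolving such doublets directly is what allows the measured spectra to constrain the atomic and nuclear parameters fed into the deconvolution codes.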