Hydrodynamic simulations of porous materials for ductile damage
The mechanical behavior of metallic materials under highly dynamic loading (shock), and especially their damage behavior, is a topic of interest for CEA-DAM. For tantalum, damage is ductile: it proceeds by nucleation, growth, and coalescence of voids within the material. Usual ductile damage models were developed under the simplifying assumption that voids are isolated in the material. However, recent direct-simulation studies that explicitly describe a void population in the material (as well as experimental observations after failure) have shown the importance of void interaction for predicting ductile damage. Yet the microscopic mechanisms of this interaction remain poorly understood.
The objective of the PhD is to study the growth and coalescence phases of ductile damage through direct numerical simulations of a porous material undergoing dynamic loading. Hydrodynamic simulations, in which voids are explicitly meshed within a continuous matrix, will be used to resolve the relevant length and time scales. Monitoring the void population throughout the simulation will provide valuable information on the influence of void interaction during ductile damage. First, the bulk behavior will be compared to that predicted by usual isolated-void models, revealing the macroscopic effect of void interaction. Second, the evolution of the size distribution of the void population will be monitored. The last objective will be to understand microscopic void-to-void interaction. To take advantage of the wealth of simulation results, approaches based on artificial intelligence (neural networks on the graph associated with the pore population) will be used to learn the link between a void's neighborhood and its growth.
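To make the graph-based idea concrete, here is a minimal message-passing sketch in Python; the node features, the single-layer architecture, and the weight matrices are illustrative assumptions, not the thesis's prescribed model.

```python
# Minimal sketch: predict a void's growth rate from its neighbourhood in the
# pore-population graph via one round of message passing (NumPy).
import numpy as np

def gnn_layer(node_feats, adjacency, W_self, W_neigh):
    """node_feats: (n_voids, d) features, e.g. [radius, local stress, nearest-neighbour distance];
    adjacency: (n_voids, n_voids) binary graph of interacting void pairs."""
    deg = np.maximum(adjacency.sum(axis=1, keepdims=True), 1)
    neigh_mean = adjacency @ node_feats / deg                      # aggregate neighbourhood
    return np.maximum(node_feats @ W_self + neigh_mean @ W_neigh, 0)  # ReLU update

# A readout (e.g. a linear head on the updated features) would then be trained
# against the growth rates measured in the hydrodynamic simulations.
```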
The doctoral student will have the opportunity to develop their skills in shock physics and mechanics, numerical simulations (with access to CEA-DAM supercomputers), and data science.
AI-enhanced MBSE framework for joint safety and security analysis of critical systems
Critical systems must simultaneously meet the requirements of both Safety (preventing unintentional failures that could lead to damage) and Security (protecting against malicious attacks). Traditionally, these two areas are treated separately, whereas they are interdependent: an attack (Security) can trigger a failure (Safety), and a functional flaw can be exploited as an attack vector.
Model-Based Systems Engineering (MBSE) approaches enable rigorous system modeling, but they do not always capture the explicit links between Safety [1] and Security [2]; risk analyses remain manual, time-consuming, and error-prone. The complexity of modern systems makes it necessary to automate the evaluation of Safety-Security trade-offs.
Joint safety/security MBSE modeling has been widely addressed in several research works such as [3], [4] and [5]. The scientific challenge of this thesis is to use AI to automate and improve the quality of analyses. What type of AI should we use for each analysis step? How can we detect conflicts between safety and security requirements? What are the criteria for assessing the contribution of AI to joint safety/security analysis?
Designing artificial intelligence tools for tracking fission product release from nuclear fuel
The Laboratory for the Analysis of Radionuclide Migration (LAMIR), part of the Institute for Research on Nuclear Systems (IRESNE) at CEA Cadarache, has developed a set of advanced measurement methods to characterize the release of fission products from nuclear fuel during thermal transients. Among these innovative tools is an operando in situ imaging system that enables real-time observation of these phenomena. The large amount of data generated by these experiments requires dedicated digital processing techniques that account for both the specificities of nuclear instrumentation and the underlying physical mechanisms.
The goal of this PhD project is to develop an optimized data processing approach based on state-of-the-art Artificial Intelligence (AI) methods.
In the first phase, the focus will be on processing thermal sequence images to detect and analyze material movements, aiming to identify an optimal image-processing strategy defined by rigorous quantitative criteria.
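For illustration, one simple baseline for this first phase (a candidate strategy, not the one the thesis will necessarily select) is dense optical flow, which yields a per-pixel motion indicator over a thermal image sequence:

```python
# Hedged sketch: Farneback dense optical flow on a thermal image sequence;
# the flow magnitude serves as a simple per-pixel material-movement indicator.
import cv2
import numpy as np

def motion_maps(frames):
    """frames: list of 2-D uint8 arrays (one single-channel thermal image per time step)."""
    maps = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        maps.append(np.linalg.norm(flow, axis=-1))  # per-pixel displacement magnitude
    return maps  # threshold/aggregate these maps to flag material movement events
```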
In the second phase, the methodology will be extended to all experimental data collected during a thermal sequence. The long-term objective is to create a real-time diagnostic tool capable of supporting experiment monitoring and interpretation.
This PhD will be carried out within a collaborative framework between LAMIR, which has recognized expertise in nuclear fuel behavior analysis and imaging, and the Institut Fresnel in Marseille, known for its strong background in image analysis and artificial intelligence.
The candidate will benefit from a multidisciplinary and stimulating research environment, with opportunities to present and publish their work at national and international conferences and in peer-reviewed journals.
Proximal primal-dual method for joint estimation of the object and of unknown acquisition parameters in Computed Tomography
As part of the sustainable and safe use of nuclear energy in the transition to a carbon-free energy future, the Jules Horowitz research reactor, currently under construction at the CEA Cadarache site, is a key tool for studying the behaviour of materials under irradiation. A tomographic imaging system will be exploited in support of experimental measurements to obtain real-time images of sample degradation. This imaging system has unusual characteristics owing to its geometry and to the size of the objects to be characterized. As a result, some acquisition parameters, which are essential for obtaining sufficient image reconstruction quality, are not known precisely. This can lead to a significant degradation of the final image.
The objective of this PhD thesis is to propose methods for the joint estimation of the object under study and of the unknown acquisition parameters. These methods will be based on modern convex optimization tools. This thesis will also explore machine learning methods in order to automate and optimize the choice of hyperparameters for the problem.
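As an example of the kind of tool involved, below is a minimal Chambolle-Pock primal-dual sketch for a generic linear inverse problem; the least-squares data term, the l1 regularizer, and the dense matrix K are illustrative stand-ins for the actual tomographic operator and priors of the thesis.

```python
# Minimal Chambolle-Pock sketch for  min_x 0.5*||K x - y||^2 + lam*||x||_1
import numpy as np

def chambolle_pock(K, y, lam, n_iter=200):
    sigma = tau = 0.9 / np.linalg.norm(K, 2)      # step sizes: sigma*tau*||K||^2 < 1
    x = np.zeros(K.shape[1]); x_bar = x.copy()
    p = np.zeros(K.shape[0])                       # dual variable
    for _ in range(n_iter):
        # dual step: prox of the conjugate of 0.5*||.-y||^2
        p = (p + sigma * (K @ x_bar - y)) / (1.0 + sigma)
        # primal step: gradient in the dual direction, then soft-thresholding
        x_new = x - tau * (K.T @ p)
        x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
        x_bar = 2.0 * x_new - x                    # extrapolation
        x = x_new
    return x
```

Joint estimation of the acquisition parameters would add an outer (generally non-convex) update around such an inner solver, which is part of what the thesis will have to design.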
The thesis will be carried out in collaboration between the Marseille Institute of Mathematics (I2M CNRS UMR 7373, Aix-Marseille University, Saint Charles campus) and the Nuclear Measurement Laboratory of the IRESNE institute of the French Alternative Energies and Atomic Energy Commission (CEA Cadarache, Saint Paul les Durance). The doctoral student will work in a stimulating research environment focused on strategic questions related to non-destructive testing, and will have the opportunity to promote their research work in France and abroad.
Advancing Health Data Exploitation through Secure Collaborative Learning
Recently, deep learning has been successfully applied in numerous domains and is increasingly being integrated into healthcare and clinical research. The ability to combine diverse data sources such as genomics and imaging enhances medical decision-making. Access to large and heterogeneous datasets is essential for improving model quality and predictive accuracy. Federated learning addresses this requirement by enabling decentralized model training while ensuring that raw data remains stored locally on the client side. Several open-source frameworks integrate secure computation protocols for federated learning, but they remain limited in their applicability to healthcare and raise issues related to data sovereignty. In this context, a French framework currently being developed by CEA-LIST introduces an edge-to-cloud federated learning architecture that incorporates end-to-end encryption, including fully homomorphic encryption (FHE), and resilience against adversarial threats. Building on this framework, the project aims to deliver modular and secure federated learning components that foster further innovation in healthcare AI.
This project will focus on three core themes:
1) Deployment, monitoring and optimization of deep learning models within federated and decentralized learning solutions.
2) Integrating large models in collaborative learning.
3) Developing aggregation methods for non-IID settings (see the sketch after this list).
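As a point of reference for theme 3, the sketch below shows the plain FedAvg server-side aggregation step with sample-count weighting; non-IID settings typically require going beyond it (e.g. proximal terms or clustered/robust aggregation), which is precisely the research question.

```python
# Minimal FedAvg-style aggregation sketch (NumPy): the server averages client
# model weights, weighted by each client's local sample count.
import numpy as np

def fedavg(client_weights, client_sizes):
    """client_weights: list of dicts {layer_name: np.ndarray}; client_sizes: samples per client."""
    total = float(sum(client_sizes))
    aggregated = {}
    for name in client_weights[0]:
        aggregated[name] = sum(w[name] * (n / total)
                               for w, n in zip(client_weights, client_sizes))
    return aggregated
```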
Point Spread Function Modelling for Space Telescopes with a Differentiable Optical Model
Context
Weak gravitational lensing [1] is a powerful probe of the Large Scale Structure of our Universe. Cosmologists use weak lensing to study the nature of dark matter and its spatial distribution. Weak lensing missions require highly accurate shape measurements of galaxy images. The instrumental response of the telescope, called the point spread function (PSF), produces a deformation of the observed images. This deformation can be mistaken for the effects of weak lensing in the galaxy images, making it one of the primary sources of systematic error when doing weak lensing science. Therefore, estimating a reliable and accurate PSF model is crucial for the success of any weak lensing mission [2]. The PSF field can be interpreted as a convolutional kernel, varying spatially, spectrally, and temporally, that affects each of our observations of interest. The PSF model needs to be able to cope with each of these variations. We use specific stars considered point sources in the field of view to constrain our PSF model. These stars, which are unresolved objects, provide us with degraded samples of the PSF field. The observations go through different degradations depending on the properties of the telescope. These degradations include undersampling, integration over the instrument passband, and additive noise. We finally build the PSF model using these degraded observations and then use the model to infer the PSF at the position of galaxies. This procedure constitutes the ill-posed inverse problem of PSF modelling. See [3] for a recent review on PSF modelling.
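To fix ideas, here is a minimal sketch of the degradation model just described, in our own notation; pixel binning as the downsampling operator and Gaussian noise are simplifying assumptions.

```python
# Hedged sketch: a star observation as the SED-weighted sum of monochromatic
# PSFs, followed by undersampling (pixel binning) and additive noise.
import numpy as np

def observe_star(mono_psfs, sed_weights, downsample=2, noise_sigma=1e-3,
                 rng=np.random.default_rng(0)):
    """mono_psfs: (n_lambda, N, N) monochromatic PSFs (N divisible by `downsample`);
    sed_weights: (n_lambda,) normalized SED of the star."""
    poly = np.tensordot(sed_weights, mono_psfs, axes=1)        # passband integration
    n = poly.shape[0] // downsample
    low_res = poly.reshape(n, downsample, n, downsample).sum(axis=(1, 3))  # binning
    return low_res + rng.normal(0.0, noise_sigma, low_res.shape)           # noise
```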
The recently launched Euclid survey represents one of the most complex challenges for PSF modelling. Because of the very broad passband of Euclid's visible imager (VIS), ranging from 550 nm to 900 nm, PSF models need to capture not only the PSF field's spatial variations but also its chromatic variations. Each star observation is integrated with the object's spectral energy distribution (SED) over the whole VIS passband. As the observations are undersampled, a super-resolution step is also required. A recently proposed model, WaveDiff [4], tackles the PSF modelling problem for Euclid using a differentiable optical model. WaveDiff achieved state-of-the-art performance and is currently being tested on recent observations from the Euclid survey.
The James Webb Space Telescope (JWST) was recently launched and is producing outstanding observations. The COSMOS-Web collaboration [5] is a wide-field JWST treasury program that maps a contiguous 0.6 deg² field. The COSMOS-Web observations are available and provide a unique opportunity to test and develop a precise PSF model for JWST. In this context, several science cases, on top of weak gravitational lensing studies, can greatly benefit from a precise PSF model. Examples include strong gravitational lensing [6], where the PSF plays a crucial role in the reconstruction, and exoplanet imaging [7], where PSF speckles can mimic the appearance of exoplanets, so that subtracting an accurate and precise PSF model is essential to improve the imaging and detection of exoplanets.
PhD project
The candidate will aim to develop more accurate and performant PSF models for space-based telescopes by exploiting a differentiable optical framework, focusing the effort on Euclid and JWST.
The WaveDiff model is based on the wavefront space and does not consider pixel-based or detector-level effects. These pixel errors cannot be modelled accurately in the wavefront, as they arise directly on the detectors and are unrelated to the telescope's optical aberrations. Therefore, as a first direction, we will extend the PSF modelling approach to account for detector-level effects by combining a parametric and a data-driven (learned) approach. We will exploit the automatic differentiation capabilities of the machine learning framework underlying the WaveDiff PSF model (e.g. TensorFlow, PyTorch, JAX) to accomplish this objective.
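A minimal sketch of this first direction, assuming a toy circular pupil and a random placeholder wavefront basis rather than WaveDiff's actual parameterization: a parametric optical PSF composed with a learned detector-level kernel, fitted jointly by automatic differentiation (PyTorch here).

```python
# Hedged sketch: parametric wavefront PSF + learned detector convolution kernel,
# optimized end-to-end through automatic differentiation.
import torch
import torch.nn.functional as F

N = 64
yy, xx = torch.meshgrid(torch.linspace(-1, 1, N), torch.linspace(-1, 1, N), indexing="ij")
pupil = ((xx**2 + yy**2) <= 1.0).float()          # toy circular aperture
basis = torch.randn(6, N, N)                       # placeholder for Zernike modes

coeffs = torch.zeros(6, requires_grad=True)        # wavefront (optical) parameters
kernel = torch.zeros(1, 1, 5, 5)                   # detector-level kernel...
kernel[0, 0, 2, 2] = 1.0                           # ...initialized to identity (delta)
kernel.requires_grad_()

def model_psf():
    phase = torch.tensordot(coeffs, basis, dims=1)
    field = pupil * torch.exp(1j * phase)          # pupil-plane complex field
    psf = torch.fft.fftshift(torch.fft.fft2(field)).abs() ** 2
    psf = psf / psf.sum()
    return F.conv2d(psf[None, None], kernel, padding="same")[0, 0]  # detector effect

obs = torch.rand(N, N)                             # placeholder star observation
opt = torch.optim.Adam([coeffs, kernel], lr=1e-2)
loss = ((model_psf() - obs) ** 2).mean()
loss.backward()                                    # gradients flow through optics + detector
opt.step()
```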
As a second direction, we will consider the joint estimation of the PSF field and the stellar spectral energy distributions (SEDs) by exploiting repeated exposures, or dithers. The goal is to improve and calibrate the original SED estimation by exploiting the PSF modelling information. We will rely on the fact that repeated observations of the same object produce different star images (as the star is imaged at different focal-plane positions) while sharing the same SED.
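For illustration, one possible joint objective (our notation, not necessarily the formulation the thesis will adopt) couples the PSF-model parameters with the per-star SEDs across dithers:

```latex
% theta: PSF-model parameters; s_j: SED of star j; y_{j,d}: its d-th dithered image;
% H_theta(u; s): polychromatic PSF predicted at focal-plane position u for SED s;
% D: downsampling operator; R: SED regularizer with weight mu.
\min_{\theta,\,\{s_j\}} \;\sum_{j}\sum_{d}
  \Big\| \mathcal{D}\!\big( H_{\theta}(u_{j,d};\, s_j) \big) - y_{j,d} \Big\|_2^2
  \;+\; \mu \sum_{j} \mathcal{R}(s_j)
```

The shared SED s_j across the dithers d is what couples the exposures and makes the joint estimation better constrained than per-exposure fits.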
Another direction will be to extend WaveDiff to more general astronomical observatories, such as JWST, which have smaller fields of view. We will need to constrain the PSF model with observations from several bands so as to build a unique PSF model constrained by more information. The objective is to develop the next PSF model for JWST, made available for widespread use and validated on real data from the COSMOS-Web JWST program.
A further direction will be to extend the performance of WaveDiff by including a continuous field in the form of an implicit neural representation [8], or neural field [9], to address the spatial variations of the PSF in the wavefront space with a more powerful and flexible model.
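A minimal sketch of such a neural field, assuming Fourier positional encoding and an illustrative MLP mapping focal-plane position to wavefront coefficients (sizes and architecture are assumptions):

```python
# Hedged sketch: a small neural field predicting wavefront (Zernike-like)
# coefficients as a continuous function of focal-plane position.
import torch
import torch.nn as nn

class PSFField(nn.Module):
    def __init__(self, n_freq=8, n_coeffs=15, hidden=128):
        super().__init__()
        self.freqs = 2.0 ** torch.arange(n_freq) * torch.pi  # Fourier frequencies
        self.mlp = nn.Sequential(
            nn.Linear(4 * n_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_coeffs),
        )

    def forward(self, xy):                        # xy: (batch, 2) positions in [-1, 1]
        ang = xy[..., None] * self.freqs          # (batch, 2, n_freq)
        enc = torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1).flatten(1)
        return self.mlp(enc)                      # wavefront coefficients per position

coeffs = PSFField()(torch.rand(32, 2) * 2 - 1)    # (32, 15) coefficients
```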
Finally, throughout the PhD, the candidate will collaborate on Euclid’s data-driven PSF modelling effort, which consists of applying WaveDiff to real Euclid data, and the COSMOS-Web collaboration to exploit JWST observations.
References
[1] R. Mandelbaum. "Weak Lensing for Precision Cosmology". In: Annual Review of Astronomy and Astrophysics 56 (2018), pp. 393-433. doi: 10.1146/annurev-astro-081817-051928. arXiv: 1710.03235.
[2] T. I. Liaudat et al. "Multi-CCD modelling of the point spread function". In: A&A 646 (2021), A27. doi: 10.1051/0004-6361/202039584.
[3] T. I. Liaudat, J.-L. Starck, and M. Kilbinger. "Point spread function modelling for astronomical telescopes: a review focused on weak gravitational lensing studies". In: Frontiers in Astronomy and Space Sciences 10 (2023). doi: 10.3389/fspas.2023.1158213.
[4] T. I. Liaudat, J.-L. Starck, M. Kilbinger, and P.-A. Frugier. "Rethinking data-driven point spread function modeling with a differentiable optical model". In: Inverse Problems 39.3 (Feb. 2023), p. 035008. doi: 10.1088/1361-6420/acb664.
[5] C. M. Casey et al. "COSMOS-Web: An Overview of the JWST Cosmic Origins Survey". In: The Astrophysical Journal 954.1 (Aug. 2023), p. 31. doi: 10.3847/1538-4357/acc2bc.
[6] A. Acebron et al. "The Next Step in Galaxy Cluster Strong Lensing: Modeling the Surface Brightness of Multiply Imaged Sources". In: ApJ 976.1 (Nov. 2024), p. 110. doi: 10.3847/1538-4357/ad8343. arXiv: 2410.01883 [astro-ph.GA].
[7] B. Y. Feng et al. "Exoplanet Imaging via Differentiable Rendering". In: IEEE Transactions on Computational Imaging 11 (2025), pp. 36-51. doi: 10.1109/TCI.2025.3525971.
[8] Y. Xie et al. "Neural Fields in Visual Computing and Beyond". In: arXiv e-prints (Nov. 2021). doi: 10.48550/arXiv.2111.11426. arXiv: 2111.11426 [cs.CV].
[9] B. Mildenhall et al. "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis". In: arXiv e-prints (Mar. 2020). doi: 10.48550/arXiv.2003.08934. arXiv: 2003.08934 [cs.CV].
Development and validation of surface haptics machine learning algorithms for touch and dexterity assessment in neurodevelopmental disorders
The aim of this PhD thesis is to develop new clinical assessment methods that combine surface haptics technologies, developed at CEA List, with machine learning algorithms for testing and monitoring tactile-motor integration. In particular, the thesis will develop and validate a multimodal analytics pipeline that converts surface haptics signals and dexterity-exercise inputs (i.e. tactile stimulation events, finger kinematics, contact forces, and millisecond timing) into reliable, interpretable biomarkers of tactile perception and sensorimotor coupling, and then classifies normative versus atypical integration patterns with clinical fidelity for assessment.
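For illustration, two simple candidate features such a pipeline could start from; the signal names and the feature choices are hypothetical, not the validated biomarkers the thesis aims to deliver.

```python
# Hedged sketch: turning raw tactile/kinematic streams into candidate biomarkers.
import numpy as np

def reaction_time_ms(stim_times_ms, touch_times_ms):
    """Median latency between each tactile stimulation event and the next touch."""
    latencies = [min(t - s for t in touch_times_ms if t >= s)
                 for s in stim_times_ms if any(t >= s for t in touch_times_ms)]
    return float(np.median(latencies))

def path_efficiency(finger_xy):
    """Straight-line over travelled distance for one dexterity trial (1 = perfectly direct)."""
    finger_xy = np.asarray(finger_xy)
    travelled = np.linalg.norm(np.diff(finger_xy, axis=0), axis=1).sum()
    direct = np.linalg.norm(finger_xy[-1] - finger_xy[0])
    return float(direct / travelled) if travelled > 0 else 1.0
```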
Expected results: a novel technology and models for the rapid and feasible measurement of tactile-motor deficits in clinical settings, with an initial validation in different neurodevelopmental disorders (namely first-episode psychosis, autism spectrum disorder, and dyspraxia). The methods developed and data collected will provide:
(1) an open, versioned feature library for tactile–motor assessment;
(2) classifiers with predefined operating points (sensitivity/specificity);
(3) an on-device/edge-ready pipeline, i.e. one able to run locally on typical tablet hardware while meeting constraints on latency, computation, and data privacy. Success will be measured by the reproducibility of features, clinically meaningful effect sizes, and interpretable decision logic that maps back to known neurophysiology rather than artefacts.
Adaptive and explainable Video Anomaly Detection
Video Anomaly Detection (VAD) aims to automatically identify unusual events in video that deviate from normal patterns. Existing methods often rely on One-Class or Weakly Supervised learning: the former uses only normal data for training, while the latter leverages video-level labels. Recent advances in Vision-Language Models (VLMs) and Large Language Models (LLMs) have improved both the performance and explainability of VAD systems. Despite progress on public benchmarks, challenges remain. Most methods are limited to a single domain, leading to performance drops when applied to new datasets with different anomaly definitions. Additionally, they assume all training data is available upfront, which is unrealistic for real-world deployment where models must adapt to new data over time. Few approaches explore multimodal adaptation using natural language rules to define normal and abnormal events, offering a more intuitive and flexible way to update VAD systems without needing new video samples.
This PhD research aims to develop adaptable Video Anomaly Detection methods capable of handling new domains or anomaly types using few video examples and/or textual rules.
The main lines of research will be the following:
• Cross-Domain Adaptation in VAD: improving robustness against domain gaps through Few-Shot adaptation;
• Continual Learning in VAD: continually enriching the model to deal with new types of anomalies;
• Multimodal Few-Shot Learning: facilitating the model adaptation process through rules in natural language (see the sketch after this list).
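As one possible instantiation of the third line, the sketch below scores frames against natural-language rules with a pretrained CLIP model; the rule texts and the scoring heuristic are illustrative assumptions, not the method the thesis will settle on.

```python
# Hedged sketch: rule-based anomaly scoring of video frames with CLIP.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

normal_rules = ["people walking in a corridor"]          # operator-defined rules
abnormal_rules = ["a person falling down", "people fighting"]

def anomaly_score(frame):                                 # frame: a PIL.Image
    inputs = processor(text=normal_rules + abnormal_rules,
                       images=frame, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    probs = out.logits_per_image.softmax(dim=-1)[0]       # distribution over rules
    return probs[len(normal_rules):].sum().item()         # mass on abnormal rules
```

Updating the rule lists is then a video-free way of adapting the detector to a new domain, which is the flexibility this research line targets.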