Representation of Cross Sections based on the Wavelet Expansion Method, and Development of a Dedicated Solver

The deterministic solution of the neutron transport equation traditionally relies on the use of the multigroup approximation to discretize the energy variable. The energy domain is divided using a one-dimensional mesh, where the volume elements are called "groups" in neutronics. Within each group, all physical quantities (neutron flux, cross sections, reaction rates, etc.) are projected using piecewise constant functions. This homogenization of cross sections, which are the input data of the transport equation, becomes particularly challenging in the presence of resonant nuclei, whose cross sections vary rapidly over several decades. Correcting for this requires computationally expensive on-the-fly treatments to improve the accuracy of the transport solution.

The goal of this thesis is to eliminate the need for the multigroup approximation in the resonant energy range by applying a Galerkin projection of the continuous-energy equation onto an orthonormal wavelet basis. The candidate will develop a generic expansion method adapted to mixtures of resonant isotopes, including preprocessing of cross sections, selection of the wavelet basis, and determination of an efficient coefficient truncation strategy. A dedicated neutron transport solver will be developed, with a focus on efficient algorithmic implementation using advanced programming techniques suited to modern architectures (GPU, Kokkos). The results of this research will be disseminated through publications in peer-reviewed international journals and presentations at scientific conferences.
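To make the idea concrete, here is a minimal, self-contained sketch, not the thesis method itself, of expanding a toy resonant cross section on an orthonormal Haar wavelet basis and truncating small coefficients. The Lorentzian "resonance", the threshold, and the Haar choice are all illustrative assumptions; the actual work targets a genuine Galerkin projection of the transport equation with a carefully selected basis.

```python
import numpy as np

def haar_forward(f):
    """Full orthonormal Haar transform of a signal whose length is a power of two."""
    c = f.astype(float)
    n = len(c)
    out = np.empty_like(c)
    while n > 1:
        half = n // 2
        a = (c[0:n:2] + c[1:n:2]) / np.sqrt(2.0)   # scaling (average) coefficients
        d = (c[0:n:2] - c[1:n:2]) / np.sqrt(2.0)   # wavelet (detail) coefficients
        out[half:n] = d
        c[:half] = a
        n = half
    out[0] = c[0]
    return out

def haar_inverse(w):
    c = w.astype(float)
    n = 1
    while n < len(c):
        a = c[:n].copy()
        d = c[n:2*n].copy()
        c[0:2*n:2] = (a + d) / np.sqrt(2.0)
        c[1:2*n:2] = (a - d) / np.sqrt(2.0)
        n *= 2
    return c

# Toy "resonant" cross section: smooth background plus a narrow Lorentzian resonance.
E = np.linspace(1.0, 2.0, 1024)
sigma = 10.0 / np.sqrt(E) + 500.0 / (1.0 + ((E - 1.5) / 0.005) ** 2)

w = haar_forward(sigma)
# Truncation strategy (simplest possible): keep coefficients above a relative threshold.
thr = 1e-3 * np.abs(w).max()
w_trunc = np.where(np.abs(w) >= thr, w, 0.0)
sigma_rec = haar_inverse(w_trunc)

kept = np.count_nonzero(w_trunc)
err = np.abs(sigma_rec - sigma).max() / sigma.max()
print(f"kept {kept}/{len(w)} coefficients, max relative error {err:.2e}")
```

The point of the sketch is the compression effect: the smooth background collapses onto a few coarse-scale coefficients, while the fine-scale coefficients concentrate around the resonance, which is exactly the sparsity the truncation strategy of the thesis would exploit.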

Modeling and characterization of CFET transistors for enhanced electrical performance

Complementary Field Effect Transistors (CFETs) represent a new generation of vertically stacked CMOS devices, offering a promising path to continue transistor miniaturization and to meet the requirements of high-performance computing.

The objective of this PhD work is to study and optimize the strain engineering of the transistor channel in order to enhance carrier mobility and improve the overall electrical performance of CFET devices. The work will combine numerical modeling of technological processes using finite element methods with experimental characterization of crystalline deformation through transmission electron microscopy coupled with precession electron diffraction (TEM-PED).

The modeling activity will focus on predicting strain distributions and their impact on electrical properties, while accurately accounting for the complexity of the technological stacks and critical fabrication steps such as epitaxy. In parallel, the experimental work will aim to quantify strain fields using TEM-PED and to compare these results with simulation outputs.

This research will contribute to the development of dedicated modeling tools and advanced characterization methodologies adapted to CFET architectures, with the goal of improving spatial resolution, measurement reproducibility, and the overall understanding of strain mechanisms in next-generation transistors.

Scaling Up Dislocation Dynamics Simulations for the Study of Nuclear Material Aging

Materials used in nuclear energy production systems are subjected to mechanical, thermal, and irradiation conditions, leading to a progressive evolution of their mechanical properties. Understanding and modeling the underlying physical mechanisms is a significant challenge.

Dislocation Dynamics simulation aims to understand the behavior of the material at the crystal scale by explicitly simulating the interactions between dislocations, microstructure, and crystal defects induced by irradiation. The CEA, CNRS, and INRIA have been developing the NUMODIS calculation code for this purpose since 2007 (Etcheverry 2015, Blanchard 2017, Durocher 2018).

More specific work on zirconium alloys (Drouet 2014, Gaumé 2017, Noirot 2025) has allowed the validation and enhancement of NUMODIS's ability to handle these individual physical mechanisms by directly comparing them with experiments, through in situ tensile tests under a transmission electron microscope. However, these studies are limited by the current inability of the NUMODIS code to handle a sufficiently high and representative number of defects, and thus to obtain the mechanical behavior of the grain (~10 microns).

The objective of the proposed work is to extend the functionality of the code: propose and test new numerical algorithms, parallelize parts of the code still processed sequentially, and ultimately demonstrate the code's ability to simulate the deformation channeling mechanism in an irradiated zirconium grain.

The work will focus primarily on the algorithms for computing velocities, forming junctions, and integrating in time, which requires mastery of both dislocation physics and the corresponding numerical methods. Time-integration algorithms recently proposed by Stanford University and LLNL will be implemented and tested for this purpose.
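As a purely illustrative toy, not NUMODIS code, the basic nodal update at the heart of such time integrators can be sketched as an overdamped mobility law with a displacement-capped explicit step; the mobility value, the forces, and the cap are made up for the example.

```python
import numpy as np

def advance_nodes(x, force, mobility=1.0e4, dt=1.0e-9, dx_max=1.0e-9):
    """One explicit (forward Euler) step for dislocation nodes under an
    overdamped mobility law v = M * F, with the time step capped so that
    no node moves farther than dx_max (a common stability safeguard)."""
    v = mobility * force                      # nodal velocities, shape (n, 3)
    vmax = np.max(np.linalg.norm(v, axis=1))
    if vmax > 0.0:
        dt = min(dt, dx_max / vmax)           # cap the displacement per step
    return x + dt * v, dt

# Toy example: three nodes pulled along +x with different force magnitudes.
x = np.zeros((3, 3))
f = np.array([[1.0e-4, 0, 0], [2.0e-4, 0, 0], [4.0e-4, 0, 0]])
x_new, dt_used = advance_nodes(x, f)
print(dt_used, x_new[:, 0])
```

The fastest node here forces the time step down so its displacement equals the cap; implicit and subcycling schemes, such as those mentioned above, aim precisely at relaxing this kind of global restriction.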

Significant work will also be devoted to adapting the current code (hybrid MPI-OpenMP parallelism) to new computing machines that favor GPU processors, through the adoption of the Kokkos programming model.

Building on both previous experimental and numerical work, this study will conclude with the demonstration of NUMODIS's ability to simulate the channeling mechanism in an irradiated zirconium grain and to identify or even model the main physical and mechanical parameters involved.

Since the project lies at the interface between several fields, the candidate must have a good foundation in physics and/or mechanics, while being comfortable with programming and numerical analysis.

References:
1. Etcheverry, Arnaud, Simulation de la dynamique des dislocations à très grande échelle, Université Bordeaux I (2015).
2. Blanchard, Pierre, Algorithmes hiérarchiques rapides pour l’approximation de rang faible des matrices, applications à la physique des matériaux, la géostatistique et l’analyse de données, Université Bordeaux I (2017).
3. Durocher, Arnaud, Simulations massives de dynamique des dislocations : fiabilité et performances sur architectures parallèles et distribuées (2018).
4. Drouet, Julie, Étude expérimentale et modélisation numérique du comportement plastique des alliages de zirconium sous et après irradiation (2014).
5. Gaumé, Marine, Étude des mécanismes de déformation des alliages de zirconium après et sous irradiation (2017).
6. Noirot, Pascal, Étude expérimentale et simulation numérique, à l'échelle nanométrique et en temps réel, des mécanismes de déformation des alliages de zirconium après irradiation (2025).

Detailed numerical investigations of highly concentrated bubbly flows

To assess the safety of industrial facilities, the CEA develops, validates, and uses thermal-hydraulic simulation tools. Its research focuses on modelling two-phase flows using various approaches, from the most detailed to the largest system scale. To better understand two-phase flows, the Thermal-hydraulics and Fluid Mechanics Service (STMF) is implementing a multi-scale approach in which high-fidelity simulations (DNS, Direct Numerical Simulation of two-phase flows) are used as “numerical experiments” to produce reference data. These data are then averaged and compared with the models used at larger scales. This approach is applied to high-pressure flows, where the bubbly flow regime persists even at very high void fractions. The Laboratory of Development at Local Scales (LDEL), part of STMF, has developed a DNS method (Front-Tracking) implemented in its open-source thermal-hydraulics code TRUST/TrioCFD [1] (an object-oriented C++ code). In several PhD projects, it has been used to perform massively parallel simulations that describe interfaces in detail without resorting to models, for example in groups of bubbles called swarms [2][3][4].
The method has so far been applied to low-concentration bubbly flows (volume fraction below 12%); the objective of this thesis is to evaluate and use it at higher void fractions. Reference HPC simulations of bubble swarms will be conducted on national supercomputers up to gas fractions of 40%. The quality of the results will be assessed before extracting physical models of bubble interactions under these conditions. The goal of these models is to recover the overall dynamics of the bubble swarm at much lower resolutions, thereby enabling the study of larger, out-of-equilibrium systems (externally forced turbulence generation, imposed mean velocity gradient, etc.).
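As a toy illustration of the averaging step that turns DNS fields into reference data (the real post-processing of Front-Tracking simulations is far richer), one can average a phase-indicator field over the domain and over planes to obtain void-fraction statistics. The random spherical "bubbles" and the grid here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy indicator field chi (1 in gas, 0 in liquid) on a 3-D grid, as a
# Front-Tracking DNS would provide; here 20 random spherical "bubbles".
n = 64
x = (np.arange(n) + 0.5) / n
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
chi = np.zeros((n, n, n))
for _ in range(20):
    cx, cy, cz = rng.uniform(0.1, 0.9, 3)    # bubble center
    r = 0.05                                  # bubble radius
    chi[(X - cx) ** 2 + (Y - cy) ** 2 + (Z - cz) ** 2 < r ** 2] = 1.0

alpha = chi.mean()                 # global void fraction
alpha_z = chi.mean(axis=(0, 1))    # plane-averaged void-fraction profile along z
print(f"void fraction: {alpha:.3f}")
```

In the multi-scale approach described above, profiles like `alpha_z` (and analogous averages of velocity and interfacial terms) are the quantities compared against the coarse-scale two-fluid models.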

This work is funded by the French ANR, in collaboration with IMFT and LMFL, in parallel with two other theses with which there will be strong interactions. It will be performed at CEA-Saclay, in the STMF/LDEL laboratory. It includes numerical aspects (validation), computer developments (C++), and a physical analysis of the flows obtained.

Modeling the impact of defects in Steel–Concrete Structures. Identification of critical defects through metamodeling and optimization algorithms

To meet growing constructability challenges, steel–concrete (SC) structures are emerging as a promising alternative to conventional reinforced concrete structures. These elements are composed of infill concrete, two external steel plates, and steel shear studs that ensure composite action. While such structures present a clear interest due to their overall mechanical behavior, the presence of the steel plates prevents visual inspection of the concrete casting quality. It is therefore essential to characterize the impact of possible defects. This is the context of the proposed PhD research.

Building upon recent results obtained in the laboratory, the goal is to develop a numerical framework to account for defects in steel–concrete structures. The thesis will be structured in several stages: validation of a modeling strategy for the mechanical behavior of defect-free SC structures, introduction of defects in the simulations and assessment of the applicability of the numerical approach, development of a metamodel and sensitivity analysis, and identification of critical defect configurations through optimization algorithms.

One of the operational objectives of this doctoral work is to provide a tool capable of identifying critical defect configurations (size, position, and number) with respect to a given target quantity of interest (such as loss of strength or reduction in average stiffness). The research will therefore rely on the use and further development of state-of-the-art numerical tools in the fields of finite element modeling, optimization techniques, sensitivity analysis, and metamodeling. The thesis will be carried out within a rich collaborative environment, notably in partnership with EDF.
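A minimal sketch of the metamodel-plus-optimization loop described above, with an entirely hypothetical closed-form "model" standing in for the expensive finite element computation, a quadratic response surface as the metamodel, and a grid search as the optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the FE model: stiffness loss as a function of a
# normalized defect size s and position p (both in [0, 1]); invented for the demo.
def fe_model(s, p):
    return 0.6 * s ** 2 + 0.3 * s * np.exp(-((p - 0.5) ** 2) / 0.02)

# 1) Design of experiments: a small random sample of defect configurations.
S = rng.uniform(0, 1, 40)
P = rng.uniform(0, 1, 40)
y = fe_model(S, P)

# 2) Metamodel: least-squares quadratic response surface in (s, p).
def features(s, p):
    return np.column_stack([np.ones_like(s), s, p, s * p, s ** 2, p ** 2])

coef, *_ = np.linalg.lstsq(features(S, P), y, rcond=None)
surrogate = lambda s, p: features(s, p) @ coef

# 3) Optimization on the cheap surrogate: grid search for the most
# critical configuration (largest predicted stiffness loss).
g = np.linspace(0, 1, 201)
ss, pp = np.meshgrid(g, g, indexing="ij")
pred = surrogate(ss.ravel(), pp.ravel())
k = np.argmax(pred)
print("critical defect: size=%.2f position=%.2f" % (ss.ravel()[k], pp.ravel()[k]))
```

The thesis would replace each ingredient with its state-of-the-art counterpart (FE simulations, e.g. Gaussian-process or polynomial-chaos metamodels, global sensitivity analysis, and a proper optimization algorithm), but the division of labor between expensive model, cheap surrogate, and optimizer is the same.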

Proximal primal-dual method for joint estimation of the object and of unknown acquisition parameters in Computed Tomography

As part of the sustainable and safe use of nuclear energy in the transition to a carbon-free energy future, the Jules Horowitz research reactor, currently under construction at the CEA Cadarache site, is a key tool for studying the behaviour of materials under irradiation. A tomographic imaging system will be exploited in support of experimental measurements to obtain real-time images of sample degradation. This imaging system has unusual characteristics due to its geometry and to the size of the objects to be characterized. As a result, some acquisition parameters, which are essential for sufficient image reconstruction quality, are not known precisely. This can significantly degrade the final image.
The objective of this PhD thesis is to propose methods for the joint estimation of the object under study and of the unknown acquisition parameters. These methods will be based on modern convex optimization tools. This thesis will also explore machine learning methods in order to automate and optimize the choice of hyperparameters for the problem.
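As a much simplified illustration of joint estimation (alternating a proximal gradient step on the object with an exact update of the unknown parameter, rather than the full primal-dual machinery targeted by the thesis), consider a toy linear model with a hypothetical unknown constant detector offset standing in for an acquisition parameter:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy joint problem: data y = A @ x_true + b_true + noise, where the constant
# detector offset b (a stand-in for an unknown acquisition parameter) must be
# estimated together with the nonnegative object x.
m, n = 80, 50
A = rng.normal(size=(m, n))
x_true = np.clip(rng.normal(size=n), 0.0, None)   # nonnegative object
b_true = 0.3
y = A @ x_true + b_true + 0.01 * rng.normal(size=m)

x = np.zeros(n)
b = 0.0
L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient in x
for _ in range(1000):
    # Proximal gradient step on x: the prox of the nonnegativity
    # indicator is simply the projection onto x >= 0.
    grad = A.T @ (A @ x + b - y)
    x = np.clip(x - grad / L, 0.0, None)
    # Exact minimization of the least-squares cost over the scalar offset b.
    b = np.mean(y - A @ x)

print("estimated offset:", b)
```

The sketch shows the structure of the problem (a convex data-fit term, a nonsmooth constraint handled by a proximal operator, and a parameter updated jointly); the thesis will address the far harder tomographic forward model, genuinely primal-dual iterations, and learned hyperparameter selection.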
The thesis will be carried out in collaboration between the Marseille Institute of Mathematics (I2M CNRS UMR 7373, Aix-Marseille University, Saint Charles campus) and the Nuclear Measurement Laboratory of the IRESNE institute of the French Alternative Energies and Atomic Energy Commission (CEA Cadarache, Saint Paul les Durance). The doctoral student will work in a stimulating research environment focused on strategic questions related to non-destructive testing, and will have the opportunity to present his or her research work in France and abroad.

Modeling of Critical Heat Flux Using Lattice Boltzmann Methods: Application to the Experimental Devices of the RJH

LBM (Lattice Boltzmann Methods) are numerical techniques used to simulate transport phenomena in complex systems. They allow modeling fluid behavior in terms of particles moving on a discrete grid (a "lattice"). Unlike classical methods, which solve the differential equations of fluids directly, LBM simulate the evolution of the fluid particle distribution functions in a discrete space using propagation and collision rules.

The choice of lattice in LBM is a crucial step, as it directly affects the accuracy, efficiency, and stability of the simulations. The lattice determines how fluid particles interact and move through space, as well as how the discretization of space and time is performed.

LBM exhibit natural parallelism because the computations at each grid point are largely independent. Compared to classical CFD methods, LBM can better capture certain complex phenomena (such as multiphase, turbulent, or porous-media flows) because they rely on a mesoscopic modeling of the fluid, directly derived from particle kinetics, rather than on a macroscopic resolution of the Navier–Stokes equations. This approach allows for a finer representation of interfaces, nonlinear effects, and local interactions, which are often difficult to model accurately using classical CFD methods. LBM therefore enable the capture of complex phenomena at a lower computational cost. Recent studies have notably shown that LBM can reproduce the Nukiyama boiling curve (pool boiling) and, consequently, accurately compute the critical heat flux. This flux corresponds to the boiling crisis, which results in a sudden degradation of heat transfer.
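The stream-and-collide cycle described above can be sketched in a few lines for the standard D2Q9 lattice with BGK collision (single-phase, periodic box; the boiling problem of the thesis requires far richer multiphase and thermal models):

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium on the lattice."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux ** 2 + uy ** 2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

def stream_and_collide(f, tau=0.8):
    """One LBM time step: BGK relaxation toward equilibrium (collision),
    then propagation of each population along its lattice velocity
    (periodic boundaries for simplicity)."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau            # collision
    for i in range(9):                                      # streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

# Sanity check: a uniform fluid at rest stays at rest.
nx = ny = 16
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
f = stream_and_collide(f)
rho = f.sum(axis=0)
print(rho.min(), rho.max())
```

Note how the parallelism claimed above is visible in the code: the collision is purely local to each grid point, and the streaming is a fixed-pattern shift, which is what makes LBM map so well onto GPUs.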

The critical heat flux is a crucial issue for the experimental devices (DEX) of the Jules Horowitz Reactor, as they are cooled by water either via natural convection (fuel capsule-type devices) or forced convection (loop-type devices). Thus, to ensure the proper cooling of the DEX and reactor safety, it is essential to verify that the critical heat flux is not reached within the studied parameter range. It must therefore be determined with precision. Previous studies conducted on a fuel-capsule-type DEX using the NEPTUNE-CFD code (classical CFD methods) have shown that modeling is limited to regions far from the critical heat flux. In general, flows with high void fractions (greater than 10%) cannot be easily resolved using classical CFD approaches.

The student will first define a lattice to apply LBM to an RJH device under natural convection. They will then consolidate the results obtained for the critical heat flux in this configuration by comparing them with available data. Finally, exploratory calculations under forced convection (laminar to turbulent regimes) will be conducted.

The student will be hosted at the IRESNE institute.

Design and Optimisation of an innovative process for CO2 capture

A 2023 survey found that two-thirds of young French adults take into account the climate impact of companies’ emissions when looking for a job. But why stop there when you could pick a job whose very goal is to reduce such impacts? The Laboratory for Process Simulation and System Analysis invites you to pursue a PhD aiming at designing and optimizing a process for CO2 capture from industrial waste gas. One of the key novelties of this project is the use of operating conditions different from those commonly used in industry. We believe that under such conditions the process requires less energy to operate. Another innovative aspect is the possibility of thermal coupling with an industrial facility.

The research will be carried out in collaboration with CEA Saclay and the Laboratory of Chemical Engineering (LGC) in Toulouse. First, a numerical study will be conducted using process simulation software (ProSIM). Afterwards, the student will explore and propose different options to minimize the process energy consumption. Simulation results will be validated experimentally at the LGC, where the student will be responsible for devising and running experiments to gather data on the absorption and desorption steps.

If you are passionate about Process Engineering and want to pursue a scientifically stimulating PhD, do apply and join our team!

A macroscale approach to evaluate the long-term degradation of concrete structures under irradiation

In nuclear power plants, the concrete biological shield (CBS) is designed to stand very close to the reactor vessel. It is expected to absorb radiation and to act as a load-bearing structure. It is thus exposed, over the lifetime of the plant, to high levels of radiation that can have long-term consequences, in particular a decrease in the mechanical properties of the material and of the structure. Given its key role, it is necessary to develop tools and models to predict the behavior of such structures at the macroscopic scale.
Based on results obtained at a lower scale (mesoscopic simulations, which provide a better understanding of irradiation effects) and on experimental results expected to feed the simulations (material properties in particular), it is proposed to develop a macroscopic methodology to be applied to the concrete biological shield. This approach will include several phenomena, among which radiation-induced volumetric expansion, irradiation-induced creep, thermal deformations, and mechanical loading.
These physical phenomena will be described within the framework of continuum damage mechanics to evaluate mechanical degradation at the macroscopic scale, in terms of displacements and damage in particular. The main challenges of the numerical developments will be proposing suitable evolution laws, and particularly coupling microstructural damage with damage at the structural level due to the stresses applied to the structure.
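As an elementary illustration of the continuum damage framework (a scalar exponential evolution law under monotonic uniaxial tension, with invented concrete-like parameters; the thesis will require far richer coupled laws):

```python
import numpy as np

E = 30e9          # Young's modulus (Pa), concrete order of magnitude (illustrative)
eps0 = 1.0e-4     # damage threshold strain (illustrative)
eps_c = 5.0e-4    # controls the post-peak softening rate (illustrative)

def damage(kappa):
    """Exponential scalar damage law d(kappa), where kappa is the
    maximum strain reached (the irreversibility/history variable)."""
    k = np.maximum(kappa, eps0)             # no damage below the threshold
    return np.clip(1.0 - (eps0 / k) * np.exp(-(k - eps0) / eps_c), 0.0, 1.0)

# Monotonic uniaxial tension: stress = (1 - d) * E * eps.
eps = np.linspace(0.0, 2e-3, 400)
kappa = np.maximum.accumulate(eps)          # history variable
sigma = (1.0 - damage(kappa)) * E * eps
print("peak stress (MPa):", sigma.max() / 1e6)
```

The structural-scale coupling mentioned above enters when `kappa` is driven not only by the local irradiation-induced expansion but also by the stress redistribution computed on the whole shield, which is what makes the evolution laws the hard part.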

Machine Learning-Based Algorithms for Real-Time Standalone Tracking in the Upstream Pixel Detector at LHCb

This PhD aims to develop and optimize next-generation track reconstruction capabilities for the LHCb experiment at the Large Hadron Collider (LHC) through the exploration of advanced machine learning (ML) algorithms. The newly installed Upstream Pixel (UP) detector, located upstream of the LHCb magnet, will play a crucial role from Run 5 onward by rapidly identifying track candidates and reducing fake tracks at the earliest stages of reconstruction, particularly in high-occupancy environments.

Achieving fast and highly efficient tracking is essential to fulfill LHCb’s rich physics program, which spans rare decays, CP-violation studies in the Standard Model, and the characterization of the quark–gluon plasma in nucleus–nucleus collisions. However, the increasing event rates and data complexity expected for future data-taking phases will impose major constraints on current tracking algorithms, especially in heavy-ion collisions where thousands of charged particles may be produced per event.

In this context, we will investigate modern ML-based approaches for standalone tracking in the UP detector. Successful applications in the LHCb VELO tracking system already demonstrate the potential of such methods. In particular, Graph Neural Networks (GNNs) are a promising solution for exploiting the geometric correlations between detector hits, allowing for improved tracking efficiency and fake-rate suppression, while maintaining scalability at high multiplicity.
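As a toy illustration of the preprocessing such GNN pipelines rely on (with an invented four-plane geometry and straight-line tracks, not the actual UP layout or LHCb units), candidate edges can be built between hits on consecutive planes whose implied slope lies within an acceptance window; the GNN then classifies these edges as genuine or fake:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical toy geometry: 4 parallel detector planes at increasing z,
# each hit stored as (x, y, z, plane_index).
z_planes = np.array([2.0, 2.2, 2.4, 2.6])
n_tracks = 5
slopes = rng.uniform(-0.2, 0.2, size=(n_tracks, 2))     # (dx/dz, dy/dz) per track
hits = []
for iz, z in enumerate(z_planes):
    for t in range(n_tracks):
        x, y = slopes[t] * z + rng.normal(0.0, 1e-3, 2)  # hit position + smearing
        hits.append((x, y, z, iz))
hits = np.array(hits)

# Build candidate edges between hits on consecutive planes whose implied
# slope is inside the acceptance window: this is the hit graph a GNN
# (e.g. for edge classification) would take as input.
slope_max = 0.25
edges = []
for a in range(len(hits)):
    for b in range(len(hits)):
        if hits[b, 3] == hits[a, 3] + 1:                 # next plane only
            dz = hits[b, 2] - hits[a, 2]
            sx = (hits[b, 0] - hits[a, 0]) / dz
            sy = (hits[b, 1] - hits[a, 1]) / dz
            if abs(sx) < slope_max and abs(sy) < slope_max:
                edges.append((a, b))
print(len(edges), "candidate edges")
```

The window trades efficiency against graph size: all same-track pairs survive here, while most combinatorial pairs are rejected before the network ever sees them, which is what keeps GNN tracking scalable at the high multiplicities of heavy-ion events.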

The PhD program will first focus on the development of a realistic GEANT4 simulation of the UP detector to generate ML-suitable datasets and study detector performance. The next phase will consist in designing, training, and benchmarking advanced ML algorithms for standalone tracking, followed by their optimization for real-time GPU-based execution within the Allen trigger and reconstruction framework. The most efficient solutions will be integrated and validated inside the official LHCb software stack, ensuring compatibility with existing data pipelines and direct applicability to Run-5 operation.

Overall, the thesis will provide a major contribution to the real-time reconstruction performance of LHCb, preparing the experiment for the challenges of future high-luminosity and heavy-ion running.
