Design and optimization of an innovative process for CO2 capture
A 2023 survey found that two-thirds of young French adults take the climate impact of companies' emissions into account when looking for a job. But why stop there, when you could pick a job whose very goal is to reduce such impacts? The Laboratory for Process Simulation and System Analysis invites you to pursue a PhD aimed at designing and optimizing a process for CO2 capture from industrial waste gas. One of the key novelties of this project is operating the process under conditions different from those commonly used in industry; we believe that under such conditions the process requires less energy to operate. A further innovative aspect is the possibility of thermal coupling with an industrial facility.
The research will be carried out in collaboration with CEA Saclay and the Laboratory of Chemical Engineering (LGC) in Toulouse. First, a numerical study will be conducted using process simulation software (ProSIM). The student will then explore and propose different options to minimize the energy consumption of the process. Simulation results will be validated experimentally at the LGC, where the student will be responsible for devising and running experiments to gather data on the absorption and desorption steps.
If you are passionate about Process Engineering and want to pursue a scientifically stimulating PhD, do apply and join our team!
Topological optimization of µLEDs' optical performance
The performance of micro-LEDs (µLEDs) is crucial for micro-displays, a field of expertise of the LITE laboratory at CEA-LETI. However, simulating these components is complex and computationally expensive, owing to the incoherent nature of the light sources and the geometries involved. This limits the ability to explore multi-parameter design spaces effectively.
This thesis proposes to develop an innovative finite element method to accelerate simulations and enable the use of topological optimization. The goal is to produce non-intuitive designs that maximize performance while respecting industrial constraints.
The work is divided into three phases:
- Develop a fast and reliable simulation method by incorporating appropriate physical approximations for incoherent sources and significantly reducing computation times.
- Design a robust topological optimization framework that includes fabrication constraints to generate immediately realizable designs (a schematic sketch of such an optimization loop follows this list).
- Fabricate such a metasurface on an existing short loop in the laboratory. This part is optional and will be tackled only if an opportunity arises to finance the prototype, for instance by including the thesis in the "metasurface" topics of European or IPCEI projects in the lab.
The expected results include optimized designs for micro-displays with enhanced performance, together with a methodology that can be applied to other photonic devices and used by other laboratories within DOPT.
Modeling and characterization of CFET transistors for enhanced electrical performance
Complementary Field Effect Transistors (CFETs) represent a new generation of vertically stacked CMOS devices, offering a promising path to continue transistor miniaturization and to meet the requirements of high-performance computing.
The objective of this PhD work is to study and optimize the strain engineering of the transistor channel in order to enhance carrier mobility and improve the overall electrical performance of CFET devices. The work will combine numerical modeling of technological processes using finite element methods with experimental characterization of crystalline deformation through transmission electron microscopy coupled with precession electron diffraction (TEM-PED).
The modeling activity will focus on predicting strain distributions and their impact on electrical properties, while accurately accounting for the complexity of the technological stacks and critical fabrication steps such as epitaxy. In parallel, the experimental work will aim to quantify strain fields using TEM-PED and to compare these results with simulation outputs.
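As a first-order illustration of the quantities at stake (not a project parameter set), the sketch below estimates the biaxial misfit strain of a pseudomorphic SiGe channel grown on silicon, using Vegard's law for the relaxed SiGe lattice parameter; the Ge fraction is an arbitrary example value.

    # Biaxial misfit strain of a pseudomorphic Si(1-x)Ge(x) layer on Si.
    a_Si, a_Ge = 5.431, 5.658          # lattice parameters (angstrom, 300 K)
    x = 0.30                           # illustrative Ge fraction (assumption)

    a_SiGe = (1 - x) * a_Si + x * a_Ge          # Vegard's law
    eps_parallel = (a_Si - a_SiGe) / a_SiGe     # in-plane strain (< 0: compressive)

    print(f"relaxed a(SiGe) = {a_SiGe:.3f} A")
    print(f"in-plane misfit strain = {eps_parallel * 100:.2f} %")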
This research will contribute to the development of dedicated modeling tools and advanced characterization methodologies adapted to CFET architectures, with the goal of improving spatial resolution, measurement reproducibility, and the overall understanding of strain mechanisms in next-generation transistors.
Development of machine learning algorithms to improve image acquisition and processing in radiological imaging
The Nuclear Measurements Laboratory at the LNPA (Laboratory for the Study of Digital Technologies and Advanced Processes) in Marcoule consists of a team specializing in nuclear measurements in the field. Its activities are divided between developing measurement systems and providing technical expertise to CEA facilities and external partners (ORANO, EDF, IAEA).
The LNPA has been developing and using radiological imagers (gamma and alpha) for several years. Some of these developments have resulted in industrial products, while other imagers are still being developed and improved. Alpha imaging, in particular, is a technique that allows alpha contamination zones to be detected remotely. Locating alpha sources in glove boxes is an important step, whether for a cleanup and dismantling project, for maintenance during operation, or for the radiation protection of workers. The alpha camera is the tool that makes alpha mapping accessible remotely, from outside the glove box.
The objective of the thesis is to develop and implement mathematical prediction and denoising solutions to improve the acquisition and post-processing of radiological images, and in particular alpha camera images.
Two main areas of research will be explored in depth:
- The development of real-time or post-processing image denoising algorithms (a classical baseline is sketched below).
- The development of predictive algorithms to generate high-statistics images based on samples of real images.
To do this, an experimental and simulation database will be established to feed the AI algorithms.
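For reference, one classical baseline against which the ML denoisers could be benchmarked is a variance-stabilizing Anscombe transform followed by Gaussian smoothing, which is suited to the Poisson statistics of photon-counting images. The sketch below applies it to a synthetic low-statistics image; the image size and intensities are invented for the example.

    # Classical baseline for denoising photon-counting (Poisson) images:
    # Anscombe transform + Gaussian filter + simple algebraic inverse.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def anscombe(x):
        return 2.0 * np.sqrt(x + 3.0 / 8.0)     # stabilizes Poisson variance

    def inverse_anscombe(y):
        return (y / 2.0) ** 2 - 3.0 / 8.0       # simple (biased) inverse

    rng = np.random.default_rng(1)
    truth = np.zeros((64, 64))
    truth[20:30, 20:30] = 5.0                   # synthetic hot spot (counts/pixel)
    noisy = rng.poisson(truth + 0.2)            # low-statistics acquisition

    smoothed = gaussian_filter(anscombe(noisy.astype(float)), sigma=2.0)
    denoised = inverse_anscombe(smoothed)
    print(f"max counts: noisy={noisy.max()}, denoised={denoised.max():.1f}")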
These two areas of research will be brought to fruition through the creation of a prototype imager incorporating machine learning capabilities and an image acquisition and processing interface, which will be used in an experimental implementation.
Through this thesis, the student will gain solid knowledge of nuclear measurements, radiation/matter interaction, and scientific image processing, and will develop a clear understanding of radiological requirements in the context of remediation/decommissioning projects.
Methods for the Rapid Detection of Gravitational Events from LISA Data
The thesis focuses on the development of rapid analysis methods for the detection and characterization of gravitational waves, particularly in the context of the upcoming LISA (Laser Interferometer Space Antenna) space mission, planned by ESA for around 2035. Data analysis involves several stages, one of the first being the rapid analysis pipeline, whose role is to detect new events and characterize them. The final aspect concerns the rapid estimation of the sky position of gravitational-wave sources and of their characteristic times, such as the coalescence time in the case of black hole mergers. These analysis tools constitute the low-latency analysis pipeline.
Beyond its value for LISA itself, this pipeline also plays a crucial role in enabling the rapid electromagnetic follow-up of detected events by ground- or space-based observatories, from radio waves to gamma rays. While fast analysis methods have been developed for ground-based interferometers, the case of space-borne interferometers such as LISA remains largely unexplored. A tailored data processing method will thus have to take into account the packet-based data transmission mode, which requires detecting events from incomplete data. These methods must also enable the detection, discrimination, and analysis of the various sources from data affected by artifacts such as glitches.
In this thesis, we propose to develop a robust and effective method for the early detection of massive black hole binaries (MBHBs). This method should accommodate the data flow expected for LISA, process potential artifacts (e.g., non-stationary noise and glitches), and allow the generation of alerts, including a detection confidence index and a first estimate of the source parameters (coalescence time, sky position, and binary mass); such a rapid initial estimate is essential for optimally initializing a more accurate and computationally expensive parameter estimation.
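For orientation, the sketch below shows the matched-filter statistic on which such detection methods are classically built, reduced to white noise, a toy chirp-like template, and time-domain correlation; a real LISA pipeline would work in the frequency domain with the instrument noise spectral density, which is omitted here.

    # Toy matched filter: slide a unit-norm template over the data stream
    # and report the peak signal-to-noise ratio (SNR) and its location.
    import numpy as np

    t = np.arange(512)                                   # samples (fs = 1 Hz)
    template = np.sin(2 * np.pi * (0.01 + 0.0004 * t) * t) * np.hanning(t.size)
    template /= np.sqrt(np.sum(template ** 2))           # unit-norm template

    rng = np.random.default_rng(2)
    sigma = 1.0                                          # white-noise level
    data = sigma * rng.standard_normal(4096)
    data[1500:1500 + t.size] += 4.0 * template           # injected "event"

    snr = np.correlate(data, template, mode="valid") / sigma
    peak = int(np.argmax(np.abs(snr)))
    print(f"peak |SNR| = {abs(snr[peak]):.1f} at sample {peak}")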
Multi-criteria Navigation of a Mobile Agent applied to nuclear investigation robotics
Mobile robots are increasingly deployed in hazardous or inaccessible environments to perform inspection, intervention, and data collection tasks. However, navigating such environments is far more complex than simple obstacle avoidance: robots must also deal with communication blackouts, contamination risks, limited onboard energy, and incomplete or evolving maps. A previous PhD project (2023–2026) introduced a multi-criteria navigation framework based on layered environmental mapping and weighted decision aggregation, demonstrating its feasibility in simulated, static scenarios.
The proposed thesis aims to extend this approach to dynamic and partially unknown environments, enabling real-time adaptive decision-making. The work will rely on tools from mobile robotics, data fusion, and autonomous planning, supported by experimental facilities that allow realistic validation. The objective is to bring navigation strategies closer to real operational conditions encountered in nuclear dismantling sites and other industrial environments where human intervention is risky. The doctoral candidate will benefit from an active research environment, multidisciplinary collaborations, and strong career opportunities in autonomous robotics and safety-critical intervention systems.
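As a minimal illustration of the weighted-aggregation idea inherited from the previous project, the sketch below fuses several environment layers into a single traversal cost grid and extracts a minimum-cost path with Dijkstra's algorithm; the layer names, weights, and grid contents are invented placeholders, not the actual framework.

    # Weighted aggregation of environment layers into one cost map,
    # then a minimum-cost path with Dijkstra on the 4-connected grid.
    import heapq
    import numpy as np

    rng = np.random.default_rng(3)
    shape = (20, 20)
    layers = {                               # illustrative criteria
        "distance":  np.ones(shape),         # uniform motion cost
        "radiation": rng.random(shape),      # dose-rate map (a.u.)
        "comm_loss": rng.random(shape),      # risk of losing the radio link
    }
    weights = {"distance": 1.0, "radiation": 5.0, "comm_loss": 2.0}
    cost = sum(w * layers[k] for k, w in weights.items())

    def dijkstra(cost, start, goal):
        dist = np.full(cost.shape, np.inf)
        dist[start] = cost[start]
        pq = [(dist[start], start)]
        while pq:
            d, (i, j) = heapq.heappop(pq)
            if (i, j) == goal:
                return d
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < cost.shape[0] and 0 <= nj < cost.shape[1]:
                    nd = d + cost[ni, nj]
                    if nd < dist[ni, nj]:
                        dist[ni, nj] = nd
                        heapq.heappush(pq, (nd, (ni, nj)))
        return np.inf

    print(f"aggregated path cost: {dijkstra(cost, (0, 0), (19, 19)):.2f}")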
Assisted generation of complex computational kernels in solid mechanics
The behavior laws used in numerical simulations describe the physical characteristics of the simulated materials. As our understanding of these materials evolves, the complexity of these laws increases. Integrating these laws is a critical step for the performance and robustness of scientific computations, and it can require intrusive and complex developments in the code.
Many numerical platforms, such as FEniCS, Firedrake, FreeFEM, and COMSOL, offer Just-In-Time (JIT) code generation techniques to handle various physics. This JIT approach significantly reduces the time required to implement new simulations, giving the user great versatility. It also allows for optimizations specific to the cases being treated and facilitates porting to various architectures (CPU or GPU). Finally, this approach hides implementation details: any changes in these details are invisible to the user and absorbed by the code generation layer.
However, these techniques are generally limited to the assembly steps of the linear systems to be solved and do not include the crucial step of integrating behavior laws.
Inspired by the successful experience of the open-source project mgis.fenics [1], this thesis aims to develop a Just-In-Time code generation solution dedicated to the next-generation structural mechanics code Manta [2], developed by CEA. The objective is to enable strong coupling with behavior laws generated by MFront [3], thereby improving the flexibility, performance, and robustness of numerical simulations.
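To fix ideas, here is the JIT pattern in miniature: the source of an integration kernel for a deliberately trivial behavior law (isotropic linear elasticity) is generated from material parameters and compiled at run time. Platforms like FEniCS, and the solution targeted here for Manta and MFront, generate and compile C or C++; this sketch generates Python purely for portability.

    # JIT code generation in miniature: build kernel source from a material
    # description, compile it at run time, and call it like any function.
    material = {"young": 200e9, "poisson": 0.3}      # illustrative parameters

    def generate_kernel(material):
        e, nu = material["young"], material["poisson"]
        lam = e * nu / ((1 + nu) * (1 - 2 * nu))     # Lame coefficients
        mu = e / (2 * (1 + nu))
        src = f"""
    def integrate(strain):
        # sigma = lambda * tr(eps) * I + 2 * mu * eps  (Voigt; tensorial shear)
        tr = strain[0] + strain[1] + strain[2]
        return [{lam} * tr + 2 * {mu} * e if i < 3 else 2 * {mu} * e
                for i, e in enumerate(strain)]
    """
        namespace = {}
        exec(compile(src, "<generated>", "exec"), namespace)
        return namespace["integrate"]

    integrate = generate_kernel(material)
    print(integrate([1e-3, 0.0, 0.0, 0.0, 0.0, 0.0]))  # uniaxial strain state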
The selected PhD candidate should have a solid background in computational science and a strong interest in numerical simulation and C++ programming. They should be capable of working independently and demonstrate initiative. The doctoral student will benefit from guidance from the developers of MFront and Manta (CEA), as well as from the developers of the A-Set code (a collaboration between Mines ParisTech, Onera, and Safran). This collaboration within a multidisciplinary team will provide a stimulating and enriching environment for the candidate.
Furthermore, the thesis work will be enhanced by the opportunity to participate in conferences and publish articles in peer-reviewed scientific journals, offering national and international visibility to the thesis results.
The PhD will take place at CEA Cadarache, in south-eastern France, in the Nuclear Fuel Studies Department of the Institute for Research on Nuclear Systems for Low-Carbon Energy Production (IRESNE)[4]. The host laboratory is the LMPC, whose role is to contribute to the development of the physical components of the PLEIADES digital platform [5], co-developed by CEA and EDF.
[1] https://thelfer.github.io/mgis/web/mgis_fenics.html
[2] MANTA: a general-purpose HPC code for the simulation of complex mechanical problems (in French). https://hal.science/hal-03688160
[3] https://thelfer.github.io/tfel/web/index.html
[4] https://www.cea.fr/energies/iresne/Pages/Accueil.aspx
[5] PLEIADES: A numerical framework dedicated to the multiphysics and multiscale nuclear fuel behavior simulation https://www.sciencedirect.com/science/article/pii/S0306454924002408
Machine Learning-Based Algorithms for Real-Time Standalone Tracking in the Upstream Pixel Detector at LHCb
This PhD aims to develop and optimize next-generation track reconstruction capabilities for the LHCb experiment at the Large Hadron Collider (LHC) through the exploration of advanced machine learning (ML) algorithms. The newly installed Upstream Pixel (UP) detector, located upstream of the LHCb magnet, will play a crucial role from Run 5 onward by rapidly identifying track candidates and reducing fake tracks at the earliest stages of reconstruction, particularly in high-occupancy environments.
Achieving fast and highly efficient tracking is essential to fulfill LHCb’s rich physics program, which spans rare decays, CP-violation studies in the Standard Model, and the characterization of the quark–gluon plasma in nucleus–nucleus collisions. However, the increasing event rates and data complexity expected for future data-taking phases will impose major constraints on current tracking algorithms, especially in heavy-ion collisions where thousands of charged particles may be produced per event.
In this context, we will investigate modern ML-based approaches for standalone tracking in the UP detector. Successful applications in the LHCb VELO tracking system already demonstrate the potential of such methods. In particular, Graph Neural Networks (GNNs) are a promising solution for exploiting the geometric correlations between detector hits, allowing for improved tracking efficiency and fake-rate suppression, while maintaining scalability at high multiplicity.
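To make this concrete, here is a minimal sketch of only the very first step of a GNN-style tracker: build candidate segments (graph edges) between hits on consecutive detector planes and score each edge with a small neural network. The network is untrained, and the geometry, hit counts, and selection window are invented for the example; they bear no relation to the real UP layout. PyTorch is assumed.

    # First step of GNN-style tracking: build a hit graph between consecutive
    # planes and score candidate segments with a tiny edge classifier.
    import torch

    torch.manual_seed(0)
    planes = [torch.rand(50, 2) for _ in range(4)]   # (x, y) hits on 4 planes

    edges, feats = [], []
    for p, (a, b) in enumerate(zip(planes[:-1], planes[1:])):
        d = a[:, None, :] - b[None, :, :]            # pairwise displacements
        keep = d.norm(dim=-1) < 0.2                  # loose geometric window
        for i, j in keep.nonzero():
            edges.append((p, int(i), int(j)))
            feats.append(torch.cat([a[i], b[j], d[i, j]]))

    edge_mlp = torch.nn.Sequential(                  # tiny, untrained classifier
        torch.nn.Linear(6, 16), torch.nn.ReLU(),
        torch.nn.Linear(16, 1), torch.nn.Sigmoid())
    scores = edge_mlp(torch.stack(feats)).squeeze(-1)
    print(f"{len(edges)} candidate segments, "
          f"mean score {scores.mean().item():.2f}")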
The PhD program will first focus on the development of a realistic GEANT4 simulation of the UP detector to generate ML-suitable datasets and study detector performance. The next phase will consist of designing, training, and benchmarking advanced ML algorithms for standalone tracking, followed by their optimization for real-time GPU-based execution within the Allen trigger and reconstruction framework. The most efficient solutions will be integrated and validated inside the official LHCb software stack, ensuring compatibility with existing data pipelines and direct applicability to Run 5 operation.
Overall, the thesis will provide a major contribution to the real-time reconstruction performance of LHCb, preparing the experiment for the challenges of future high-luminosity and heavy-ion running.
Euclid Weak Lensing Cluster Cosmology Inference
Galaxy clusters, which form at the intersection of matter filaments, are excellent tracers of the large-scale matter distribution in the Universe and are a valuable source of information for cosmology.
The sensitivity of the Euclid space mission (launched in 2023) allows the blind detection of galaxy clusters through gravitational lensing (i.e., a signal directly linked to the projected total mass). Combined with its wide survey area (14,000 deg²), Euclid should allow the construction of a galaxy cluster catalogue that is unique in both its size and its selection properties.
In contrast to existing cluster catalogues, which are typically based on baryonic content (e.g., X-ray emission from intra-cluster gas, the Sunyaev-Zel’dovich effect in the millimeter regime, or optical emission from galaxies), a catalogue derived from gravitational lensing is directly sensitive to the total mass of the clusters. This makes it truly representative of the underlying cluster population, a significant advantage for both galaxy cluster studies and cosmology.
In this context, we have developed a multi-scale detection method specifically designed to identify galaxy clusters through their gravitational lensing signal alone; this method has been selected to produce the Euclid cluster catalogue.
The goal of this PhD project is to build and characterize the galaxy cluster catalogue identified via weak lensing in the data collected during the first year of Euclid observations (DR1), based on this detection method. The candidate will derive cosmological constraints from the modelling of the cluster abundance, using the classical Bayesian framework, and will also investigate the potential of Simulation-Based Inference (SBI) methods for cosmological inference.
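To give a flavor of the classical route, the sketch below evaluates the binned Poisson likelihood at the heart of cluster-abundance inference, scanning a single parameter; the predicted counts use a toy stand-in in place of the halo mass function, mass-observable relation, and lensing selection function, so the numbers are purely illustrative.

    # Core of a classical abundance analysis: Poisson log-likelihood of the
    # observed cluster counts in (mass, redshift) bins given model predictions.
    import numpy as np
    from scipy.special import gammaln

    def log_likelihood(n_obs, n_pred):
        return np.sum(n_obs * np.log(n_pred) - n_pred - gammaln(n_obs + 1.0))

    def predicted_counts(sigma8, shape=(5, 4)):
        """Toy stand-in for the mass-function + selection-function integral."""
        base = np.outer(np.linspace(50, 5, shape[0]),     # mass bins
                        np.linspace(1.0, 0.4, shape[1]))  # redshift bins
        return base * (sigma8 / 0.8) ** 3                 # steep growth with sigma8

    rng = np.random.default_rng(4)
    n_obs = rng.poisson(predicted_counts(0.80))           # mock observed catalogue

    grid = np.linspace(0.7, 0.9, 41)
    logL = [log_likelihood(n_obs, predicted_counts(s8)) for s8 in grid]
    print(f"max-likelihood sigma8 = {grid[int(np.argmax(logL))]:.3f}")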