Modeling of Critical Heat Flux Using Lattice Boltzmann Methods: Application to the Experimental Devices of the RJH
Lattice Boltzmann Methods (LBM) are numerical techniques used to simulate transport phenomena in complex systems. They model fluid behavior in terms of particles moving on a discrete grid (a "lattice"). Unlike classical methods, which solve the differential equations of fluid mechanics directly, LBM simulate the evolution of the particle distribution functions of the fluid on a discrete lattice using propagation and collision rules.
The choice of lattice in LBM is a crucial step, as it directly affects the accuracy, efficiency, and stability of the simulations. The lattice determines how fluid particles interact and move through space, as well as how the discretization of space and time is performed.
LBM exhibit natural parallelism because the computations at each grid point are largely independent. Compared with classical CFD methods, LBM can better capture certain complex phenomena (such as multiphase, turbulent, or porous-media flows) because they rely on a mesoscopic description of the fluid, derived directly from particle kinetics, rather than on a macroscopic resolution of the Navier–Stokes equations. This approach allows a finer representation of interfaces, nonlinear effects, and local interactions, which are often difficult to model accurately with classical CFD. LBM can therefore capture complex phenomena at a lower computational cost. Recent studies have notably shown that LBM can reproduce the Nukiyama (pool) boiling curve and, consequently, accurately calculate the critical heat flux. This flux corresponds to the transition known as the boiling crisis, which results in a sudden degradation of heat transfer.
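To give a flavour of what the propagation and collision rules look like in practice, the sketch below advances a single-phase D2Q9 lattice with the common BGK collision operator. This is a minimal illustration only: the boiling studies mentioned above rely on multiphase LBM extensions with phase-change models, which this toy does not include.

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Maxwell-Boltzmann equilibrium truncated to second order in u."""
    cu = np.einsum('id,xyd->ixy', c, u)                  # c_i . u at each node
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One BGK collision + streaming step; returns updated distributions."""
    rho = f.sum(axis=0)                                  # density = zeroth moment
    u = np.einsum('id,ixy->xyd', c, f) / rho[..., None]  # velocity = momentum / density
    f = f - (f - equilibrium(rho, u)) / tau              # BGK collision: relax towards feq
    for i, (cx, cy) in enumerate(c):                     # streaming along lattice links
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

# tiny periodic run: uniform fluid plus a small density bump
nx = ny = 32
rho0 = np.ones((nx, ny)); rho0[16, 16] += 0.1
f = equilibrium(rho0, np.zeros((nx, ny, 2)))
mass0 = f.sum()
for _ in range(50):
    f = lbm_step(f, tau=0.8)
print(abs(f.sum() - mass0) < 1e-10)   # collision and streaming conserve mass
```

Note how each node only reads its own populations during collision and its direct neighbours during streaming; this locality is what gives LBM its natural parallelism.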
The critical heat flux is a crucial issue for the experimental devices (DEX) of the Jules Horowitz Reactor, as they are cooled by water either via natural convection (fuel capsule-type devices) or forced convection (loop-type devices). Thus, to ensure the proper cooling of the DEX and reactor safety, it is essential to verify that the critical heat flux is not reached within the studied parameter range. It must therefore be determined with precision. Previous studies conducted on a fuel-capsule-type DEX using the NEPTUNE-CFD code (classical CFD methods) have shown that modeling is limited to regions far from the critical heat flux. In general, flows with high void fractions (greater than 10%) cannot be easily resolved using classical CFD approaches.
The student will first define a lattice to apply LBM to an RJH device under natural convection. They will then consolidate the results obtained for the critical heat flux in this configuration by comparing them with available data. Finally, exploratory calculations under forced convection (laminar to turbulent regimes) will be conducted.
The student will be hosted at the IRESNE institute.
A theoretical framework for the task-based optimal design of Modular and Reconfigurable Serial Robots for rapid deployment
The innovations that gave rise to industrial robots date back to the sixties and seventies. They have enabled a massive deployment of industrial robots that transformed factory floors, at least in industrial sectors such as car manufacturing and other mass production lines.
However, such robots do not meet the requirements of other compelling applications that have appeared and developed in fields such as laboratory research, space robotics, medical robotics, inspection and maintenance automation, agricultural robotics, service robotics and, of course, humanoids. A small number of these sectors have seen large-scale deployment and commercialization of robotic systems, while most others advance slowly and incrementally toward that goal.
This begs the following question: is it due to unsuitable hardware (insufficient physical capabilities to generate the required motions and forces), to software capabilities (control systems, perception, decision support, learning, etc.), or to a lack of new design paradigms capable of meeting the needs of these applications (agile and scalable custom-design approaches)?
The unprecedented explosion of data science, machine learning and AI across science, technology and society may be seen as a compelling solution, and a radical transformation is taking shape (or is anticipated), with the promise of empowering the next generations of robots with AI (both predictive and generative). Research therefore tends to pay increasing attention to the software aspects (learning, decision support, coding, etc.), perhaps to the detriment of more advanced physical capabilities (hardware) and new concepts (design paradigms). It is clear, however, that the cognitive aspects of robotics, including learning, control and decision support, are useful if and only if suitable physical embodiments are available to meet the needs of the various tasks that can be robotized, which in turn requires adapted design methodologies and hardware.
The aim of this thesis is thus to focus on design paradigms and hardware, and in particular on the optimal design of rapidly-produced serial robots based on given families of standardized « modules », whose layout will be optimized according to the requirements of tasks that cannot be performed by the industrial robots available on the market. The ambition is to answer the question of whether and how a paradigm shift is possible in robot design, from fixed-catalogue offerings to rapidly available bespoke machines.
The successful candidate will enrol at the « Ecole Doctorale Mathématiques, STIC » of Nantes Université (ED-MASTIC), and he or she will be hosted for three years in the CEA-LIST Interactive Robotics Unit under supervision of Dr Farzam Ranjbaran. Professors Yannick Aoustin (Nantes) and Clément Gosselin (Laval) will provide academic guidance and joint supervision for a successful completion of the thesis.
A follow-up to this thesis is strongly considered in the form of a one-year Post-Doctoral fellowship to which the candidate will be able to apply, upon successful completion of all the requirements of the PhD Degree. This Post-Doctoral fellowship will be hosted at the « Centre de recherche en robotique, vision et intelligence machine (CeRVIM) », Université Laval, Québec, Canada.
Enhanced Quantum-Radiofrequency Sensor
Through the Carnot SpectroRF exploratory project, CEA Leti is working on radio-frequency sensing systems based on atomic optical spectroscopy. The motivation behind this development is that such systems offer exceptional detection performance: high sensitivity (~nV/cm/√Hz), very wide bandwidth (MHz–THz), wavelength-independent size (~cm) and no coupling with the environment. These advantages surpass the capabilities of conventional antenna-based receivers for RF signal detection.
The aim of this thesis is to investigate a hybrid approach to the reception of radio-frequency signals. It combines atomic spectroscopy measurements based on Rydberg atoms with the design of a close environment, made of metal and/or charged materials, that shapes and locally amplifies the field, whether through resonant, non-resonant, or focusing structures.
The main scientific question of this work is to determine the opportunities and limits of this type of approach, by analytically formulating the field limits that can be imposed on Rydberg atoms for a given structure, whether in amplitude, frequency or space. The analytical approach will be complemented by EM simulations to design and model the structure associated with the optical atomic spectroscopy bench. Final characterization will be based on measurements in a controlled electromagnetic environment (anechoic chamber).
The results obtained will enable a model-measurement comparison to be made. Analytical modelling and the resulting theoretical limits will give rise to publications on subjects that have not yet been investigated in the state of the art. The structures developed as part of this thesis may be the subject of patents directly exploitable by CEA.
Design and Optimisation of an innovative process for CO2 capture
A 2023 survey found that two-thirds of young French adults take the climate impact of companies' emissions into account when looking for a job. But why stop there when you could pick a job whose very goal is to reduce such impacts? The Laboratory for Process Simulation and System Analysis invites you to pursue a PhD aimed at designing and optimizing a process for CO2 capture from industrial waste gas. One key novelty of this project is the use of operating conditions different from those commonly used in industry; we believe that under such conditions the process requires less energy to operate. A further innovative aspect is the possibility of thermal coupling with an industrial facility.
The research will be carried out in collaboration with CEA Saclay and the Laboratory of Chemical Engineering (LGC) in Toulouse. First, a numerical study will be conducted using process simulation software (ProSIM). The student will then explore and propose different options to minimize the energy consumption of the process. Simulation results will be validated experimentally at the LGC, where the student will be responsible for devising and running experiments to gather data on the absorption and desorption steps.
If you are passionate about Process Engineering and want to pursue a scientifically stimulating PhD, do apply and join our team!
Kinetics of segregation and precipitation in Fe-Cr-C alloys under irradiation: coupling magnetic, chemical and elastic effects
Ferritic steels are being considered as structural materials in future fission and fusion nuclear reactors. These alloys have highly original properties, due to the coupling between chemical, magnetic and elastic interactions that affect their thermodynamic properties, the diffusion of chemical species and the diffusion of point defects in the crystal. The aim of the thesis will be to model all of these effects at the atomic scale and to integrate them into Monte Carlo simulations in order to model the segregation and precipitation kinetics under irradiation, phenomena that can degrade their properties in use. The atomic approach is essential for these materials, which are subjected to permanent irradiation and for which the laws of equilibrium thermodynamics no longer apply.
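As a flavour of the simulation technique, the sketch below runs a deliberately simplified Metropolis Monte Carlo on a binary lattice alloy, where a single pair interaction favouring like neighbours drives clustering (a caricature of precipitation). The actual work involves atom-vacancy exchange kinetics with magnetic and elastic couplings, which this toy ignores; the interaction strength and temperature are invented.

```python
import random
import math

L, V, kT = 16, 0.1, 0.05   # lattice size, ordering energy (eV), temperature (eV); illustrative values
random.seed(0)
# occupation: 1 = solute (B) atom, 0 = matrix (A) atom, ~20% solute
spins = [[1 if random.random() < 0.2 else 0 for _ in range(L)] for _ in range(L)]

def site_energy(s, i, j):
    """Pair energy of site (i,j): -V per like nearest neighbour (periodic lattice)."""
    e = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        e -= V if s[i][j] == s[(i+di) % L][(j+dj) % L] else 0.0
    return e

def metropolis_swap(s):
    """Attempt to swap two random sites; accept with probability min(1, exp(-dE/kT))."""
    i1, j1 = random.randrange(L), random.randrange(L)
    i2, j2 = random.randrange(L), random.randrange(L)
    if s[i1][j1] == s[i2][j2]:
        return                                         # swapping identical atoms changes nothing
    e_old = site_energy(s, i1, j1) + site_energy(s, i2, j2)
    s[i1][j1], s[i2][j2] = s[i2][j2], s[i1][j1]
    e_new = site_energy(s, i1, j1) + site_energy(s, i2, j2)
    if e_new > e_old and random.random() >= math.exp(-(e_new - e_old) / kT):
        s[i1][j1], s[i2][j2] = s[i2][j2], s[i1][j1]    # reject: swap back

n_b = sum(map(sum, spins))
for _ in range(20000):
    metropolis_swap(spins)
print(sum(map(sum, spins)) == n_b)   # swaps conserve the alloy composition
```

Unlike this equilibrium toy, the kinetic Monte Carlo used in the thesis moves atoms through vacancy jumps with physically computed barriers, which is what gives access to segregation and precipitation kinetics rather than just final states.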
The candidate should have a good background in statistical physics or materials science, and be interested in numerical simulations and computer programming. The thesis will be carried out at CEA Saclay's physical metallurgy laboratory (SRMP), in a research environment with recognised experience in multi-scale modelling of materials, with around fifteen theses and post-doctoral contracts in progress on these topics.
A Master 2 internship on the same subject is proposed for spring 2025 and is highly recommended.
Understanding the mechanisms of oxidative dissolution of (U,Pu)O2 in the presence of Ag(II) generated by ozonation
The recycling of plutonium contained in MOx fuels, composed of mixed uranium and plutonium oxides (U,Pu)O2, relies on a key step: the complete dissolution of plutonium dioxide (PuO2). However, PuO2 is known to dissolve only with great difficulty in the concentrated nitric acid used in industrial processes. The addition of a strongly oxidizing species such as silver(II) significantly enhances this dissolution step—this is the principle of oxidative dissolution. Ozone (O3) is used to continuously regenerate the Ag(II) oxidant in solution.
Although this process has demonstrated its efficiency, the mechanisms involved remain poorly understood and scarcely documented. A deeper understanding of these mechanisms is essential for any potential industrial implementation.
The aim of this PhD work is to gain insight into the interaction mechanisms within the HNO3/Ag/O3/(U,Pu)O2 system. The research will be based on a parametric experimental study of increasing complexity. First, the mechanisms of generation and consumption of Ag(II) will be investigated in the simpler HNO3/Ag/O3 system. In a second phase, the influence of various parameters on the oxidative dissolution of (U,Pu)O2 will be examined. The results will lead to the development of a kinetic model describing the dissolution process as a function of the studied parameters.
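To illustrate the kind of kinetic model targeted, here is a deliberately minimal sketch with hypothetical rate constants: ozonation regenerates Ag(II) from Ag(I), and Ag(II) is consumed while oxidising the solid. The real mechanism, including HNO3 side chemistry and O3 mass transfer, is precisely what the thesis will elucidate; every number below is illustrative.

```python
# Minimal mediated-dissolution kinetics (hypothetical rates):
#   Ag(I) -> Ag(II) by ozone:        r_gen = k_gen * [Ag(I)]
#   Ag(II) + solid -> Ag(I) + ions:  r_dis = k_diss * [Ag(II)] * S
# where S is the remaining undissolved (U,Pu)O2 fraction.

def simulate(k_gen=0.5, k_diss=2.0, ag_tot=0.05, dt=1e-3, t_end=60.0):
    ag2, s = 0.0, 1.0                     # start: all silver as Ag(I), solid intact
    for _ in range(int(t_end / dt)):      # explicit Euler integration
        r_gen = k_gen * (ag_tot - ag2)    # regeneration of the Ag(II) oxidant
        r_dis = k_diss * ag2 * s          # oxidative dissolution consumes Ag(II)
        ag2 += dt * (r_gen - r_dis)
        s -= dt * r_dis
    return ag2, s

ag2, s = simulate()
print(f"remaining solid fraction: {s:.3f}, Ag(II) fraction: {ag2 / 0.05:.3f}")
```

Even this two-equation caricature reproduces a qualitative feature of mediated dissolution: Ag(II) settles at a low quasi-steady concentration set by the balance between ozone regeneration and consumption by the solid. The parametric study of the thesis would constrain the actual rate laws and constants.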
At the end of this PhD, the candidate—preferably with a background in physical chemistry—will have acquired advanced expertise in experimental techniques and kinetic modeling, providing a strong foundation for a career in academic research or industrial R&D, both within and beyond the nuclear sector.
A macroscale approach to evaluate the long-term degradation of concrete structures under irradiation
In nuclear power plants, the concrete biological shield (CBS) is located very close to the reactor vessel. It is expected to absorb radiation and to act as a load-bearing structure. It is thus exposed, over the lifetime of the plant, to high levels of radiation that can have long-term consequences, notably a decrease in the mechanical properties of the material and of the structure. Given its key role, it is necessary to develop tools and models to predict the behavior of such structures at the macroscopic scale.
Building on results obtained at a lower scale (mesoscopic simulations, which provide a better understanding of irradiation effects) and on experimental results expected to feed the simulations (material properties in particular), it is proposed to develop a macroscopic methodology applicable to the concrete biological shield. This approach will include several phenomena, among which radiation-induced volumetric expansion, irradiation-induced creep, thermal deformations and mechanical loading.
These physical phenomena will be modeled within the framework of continuum damage mechanics to evaluate mechanical degradation at the macroscopic scale, in particular in terms of displacements and damage. The main challenges of the numerical developments will be the formulation of suitable evolution laws, and particularly the coupling between microstructural damage and damage arising at the structural level from the stresses applied to the structure.
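As an illustration of the kind of constitutive framework involved (a standard continuum-damage form; the actual evolution laws are the object of the thesis), the total strain can be decomposed into the contributions listed above and coupled to a scalar damage variable D:

```latex
\boldsymbol{\varepsilon} \;=\; \boldsymbol{\varepsilon}^{e}
 + \boldsymbol{\varepsilon}^{th}
 + \boldsymbol{\varepsilon}^{RIVE}
 + \boldsymbol{\varepsilon}^{creep},
\qquad
\boldsymbol{\sigma} \;=\; (1-D)\,\mathbb{C} : \boldsymbol{\varepsilon}^{e},
\qquad 0 \le D \le 1,
```

where RIVE denotes the radiation-induced volumetric expansion, \(\mathbb{C}\) is the elastic stiffness tensor, and the evolution law for D is where microstructural (irradiation-driven) damage must be coupled with structural (stress-driven) damage.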
Machine Learning-Based Algorithms for Real-Time Standalone Tracking in the Upstream Pixel Detector at LHCb
This PhD aims to develop and optimize next-generation track reconstruction capabilities for the LHCb experiment at the Large Hadron Collider (LHC) through the exploration of advanced machine learning (ML) algorithms. The newly installed Upstream Pixel (UP) detector, located upstream of the LHCb magnet, will play a crucial role from Run 5 onward by rapidly identifying track candidates and reducing fake tracks at the earliest stages of reconstruction, particularly in high-occupancy environments.
Achieving fast and highly efficient tracking is essential to fulfill LHCb’s rich physics program, which spans rare decays, CP-violation studies in the Standard Model, and the characterization of the quark–gluon plasma in nucleus–nucleus collisions. However, the increasing event rates and data complexity expected for future data-taking phases will impose major constraints on current tracking algorithms, especially in heavy-ion collisions where thousands of charged particles may be produced per event.
In this context, we will investigate modern ML-based approaches for standalone tracking in the UP detector. Successful applications in the LHCb VELO tracking system already demonstrate the potential of such methods. In particular, Graph Neural Networks (GNNs) are a promising solution for exploiting the geometric correlations between detector hits, allowing for improved tracking efficiency and fake-rate suppression, while maintaining scalability at high multiplicity.
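To illustrate the graph representation such methods operate on (toy 1D geometry and window cut, invented for the example): hits on neighbouring detector layers become nodes, and loosely compatible pairs become candidate edges that a GNN would then classify as genuine track segments or fakes.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy detector: 4 layers, 20 hits per layer, 1D transverse position in [-1, 1]
layers = [rng.uniform(-1, 1, size=20) for _ in range(4)]

def build_edges(layers, window=0.15):
    """Candidate edges (layer, i, j) between hits on neighbouring layers."""
    edges = []
    for l in range(len(layers) - 1):
        for i, xi in enumerate(layers[l]):
            for j, xj in enumerate(layers[l + 1]):
                if abs(xi - xj) < window:      # loose geometric compatibility cut
                    edges.append((l, i, j))
    return edges

edges = build_edges(layers)
print(f"{len(edges)} candidate edges for the classifier")
```

In a realistic setting, the cut would be tuned so that essentially all true segments survive while the fake-edge load stays tractable; the GNN then exploits the geometric correlations across the whole graph, rather than pairwise cuts alone, to suppress fakes at high multiplicity.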
The PhD program will first focus on the development of a realistic GEANT4 simulation of the UP detector to generate ML-suitable datasets and study detector performance. The next phase will consist in designing, training, and benchmarking advanced ML algorithms for standalone tracking, followed by their optimization for real-time GPU-based execution within the Allen trigger and reconstruction framework. The most efficient solutions will be integrated and validated inside the official LHCb software stack, ensuring compatibility with existing data pipelines and direct applicability to Run-5 operation.
Overall, the thesis will provide a major contribution to the real-time reconstruction performance of LHCb, preparing the experiment for the challenges of future high-luminosity and heavy-ion running.
A formal framework for the specification and verification of the communication flows of distributed processes in clouds
Clouds consist of servers interconnected via the Internet, on which systems can be implemented using applications and databases deployed on the servers. Cloud-based computing is gaining in popularity, including in the context of critical systems. As a result, it is useful to define formal frameworks for reasoning about cloud-based systems. One requirement for such a framework is that it enable reasoning about the concepts manipulated in a cloud, which naturally includes the ability to reason about distributed systems: systems composed of subsystems deployed on different machines and interacting through message passing to implement services. In this context, the ability to reason about communication flows is central. The aim of this thesis is to define a formal framework dedicated to the specification and verification of systems deployed on clouds. This framework will capitalize on the formal framework of "interactions". Interactions are models dedicated to the specification of communication flows between the different actors of a system. The thesis will study how to define structuring (enrichment, composition) and refinement operators to enable the implementation of classical software engineering processes based on interactions.
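As an illustration of what an interaction model can look like, the sketch below gives a tiny term algebra with a trace semantics: atomic communication actions composed by sequencing, choice and interleaving operators, where the accepted traces are the communication flows the model allows. The term syntax is hypothetical, loosely inspired by sequence-diagram operators, and not the actual formalism of the thesis.

```python
def traces(term):
    """Enumerate the set of accepted traces (tuples of actions) of a term."""
    op = term[0]
    if op == "act":                       # atomic action, e.g. ("act", "client!m1")
        return {(term[1],)}
    if op == "seq":                       # strict sequencing: t1 then t2
        return {a + b for a in traces(term[1]) for b in traces(term[2])}
    if op == "alt":                       # non-deterministic choice
        return traces(term[1]) | traces(term[2])
    if op == "par":                       # interleaving parallel composition
        return {t for a in traces(term[1]) for b in traces(term[2])
                  for t in interleavings(a, b)}
    raise ValueError(f"unknown operator: {op}")

def interleavings(a, b):
    """All interleavings of two traces, preserving each trace's internal order."""
    if not a: return {b}
    if not b: return {a}
    return {(a[0],) + t for t in interleavings(a[1:], b)} | \
           {(b[0],) + t for t in interleavings(a, b[1:])}

# a client emits m1, then the server answers with either m2 or m3
model = ("seq", ("act", "client!m1"),
                ("alt", ("act", "server!m2"), ("act", "server!m3")))
print(sorted(traces(model)))
# [('client!m1', 'server!m2'), ('client!m1', 'server!m3')]
```

Refinement can then be phrased over this semantics (e.g. trace inclusion between an abstract and a concrete model), which is the kind of operator the thesis would define and verify formally.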
Numerical simulation of turbulence models on distorted meshes
Turbulence plays an important role in many industrial applications (flows, heat transfer, chemical reactions). Since Direct Numerical Simulation (DNS) is often prohibitively expensive in computing time, Reynolds-averaged models (RANS) are used instead in CFD (computational fluid dynamics) codes. The best known, published in the 1970s, is the k-epsilon model.
It adds two nonlinear equations, coupled to the Navier-Stokes equations, describing the transport of the turbulent kinetic energy (k) for one and of its dissipation rate (epsilon) for the other. A very important property to ensure is the positivity of k and epsilon, which is necessary for the system of equations modeling the turbulence to remain stable. It is therefore crucial that the discretization of these models preserve monotonicity. Since the equations are of convection-diffusion type, it is well known that with classical linear schemes (finite elements, finite volumes, etc.) the numerical solutions are likely to oscillate on distorted meshes. Negative values of k or epsilon then cause the simulation to fail.
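For reference, the two transport equations read, in their standard form (with the usual empirical constants \(C_{\varepsilon 1}, C_{\varepsilon 2}, \sigma_k, \sigma_\varepsilon, C_\mu\)):

```latex
\frac{\partial k}{\partial t} + \mathbf{u}\cdot\nabla k
 = \nabla\cdot\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\nabla k\right]
 + P_k - \varepsilon,
\qquad
\frac{\partial \varepsilon}{\partial t} + \mathbf{u}\cdot\nabla \varepsilon
 = \nabla\cdot\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\nabla \varepsilon\right]
 + C_{\varepsilon 1}\,\frac{\varepsilon}{k}\,P_k
 - C_{\varepsilon 2}\,\frac{\varepsilon^2}{k},
\qquad
\nu_t = C_\mu \frac{k^2}{\varepsilon},
```

where \(P_k\) is the turbulent production term. The terms in \(\varepsilon/k\) and \(\varepsilon^2/k\) make the positivity requirement explicit: if the discretization lets k or epsilon go negative, the source terms and the turbulent viscosity lose their meaning.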
We are interested in nonlinear methods that yield compact stencils. For diffusion operators, they rely on nonlinear combinations of the fluxes on either side of each edge. These approaches have proved their efficiency, especially in suppressing oscillations on strongly distorted meshes. We can also draw on ideas proposed in the literature, for example nonlinear corrections applied on top of classical linear schemes. The idea is to apply this type of method to the diffusive operators appearing in the k-epsilon model. In this context it will also be interesting to transform classical gradient-approximation schemes from the literature into nonlinear two-point fluxes. On general meshes, fundamental questions about the consistency and coercivity of the resulting schemes will need to be addressed.
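A 1D caricature of why monotonicity matters, with invented numbers: transporting a sharp k-profile with a linear centred flux produces negative values, while the monotone upwind flux preserves positivity. On distorted multi-dimensional meshes the analogous pathology appears for linear discretizations of the diffusion operators, which is what the nonlinear flux corrections studied in the thesis target.

```python
import numpy as np

n, cfl, steps = 100, 0.4, 60
k0 = np.zeros(n); k0[40:60] = 1.0          # sharp pulse of turbulent energy

def advect(k, scheme):
    """Advance dk/dt + dk/dx = 0 on a periodic grid with explicit Euler."""
    k = k.copy()
    for _ in range(steps):
        if scheme == "centred":            # linear, second order, non-monotone
            flux = 0.5 * (k + np.roll(k, -1))
        else:                              # upwind: first order but monotone
            flux = k
        k = k - cfl * (flux - np.roll(flux, 1))
    return k

print(advect(k0, "centred").min() < 0)     # centred scheme undershoots: k < 0
print(advect(k0, "upwind").min() >= 0)     # upwind keeps k non-negative
```

The upwind update is a convex combination of neighbouring values (for cfl <= 1), which is exactly the discrete monotonicity property one wants to recover, via nonlinear corrections, for compact-stencil diffusion schemes on general meshes.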
During this thesis, we will take the time to address the fundamental issues raised by these methods (first and second years), on both the theoretical aspects and the computer implementation. This can be done in the Castem, TrioCFD or Trust development environments. We will then focus on smooth analytical solutions and on application cases representative of the community.