Modeling of Critical Heat Flux Using Lattice Boltzmann Methods: Application to the Experimental Devices of the RJH
Lattice Boltzmann Methods (LBM) are numerical techniques used to simulate transport phenomena in complex systems. They model fluid behavior in terms of particles that move on a discrete grid (a "lattice"). Unlike classical methods, which directly solve the differential equations of fluid dynamics, LBM simulates the evolution of particle distribution functions in a discrete space, using propagation and collision rules. The choice of lattice is a crucial step in LBM, as it directly affects the accuracy, efficiency, and stability of the simulations: the lattice determines how fluid particles interact and move within space, as well as how space and time are discretized.
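As an illustration of the propagation-and-collision cycle just described, a minimal single-phase sketch on the common D2Q9 lattice with a BGK collision operator might look as follows (illustrative only; the boiling simulations of interest require more elaborate multiphase models):

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities with their quadrature weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Maxwell-Boltzmann equilibrium, truncated to second order in u."""
    cu = np.einsum('qd,xyd->qxy', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[None])

def lbm_step(f, tau):
    """One LBM time step: BGK collision followed by streaming."""
    rho = f.sum(axis=0)                                  # density (0th moment)
    u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]  # velocity (1st moment)
    f = f - (f - equilibrium(rho, u)) / tau              # relax toward equilibrium
    for q in range(9):                                   # propagate along lattice links
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
    return f
```

Note how each node only needs its own populations and those of its immediate neighbors, which is exactly the locality that makes the method naturally parallel.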
LBM exhibits natural parallelism, as the calculations at each grid point are largely independent. Although classical CFD methods based on solving the Navier-Stokes equations can also be parallelized, their nonlinear terms can make parallelism harder to manage, especially for turbulent flows or irregular meshes. LBM can therefore capture complex phenomena at a lower computational cost. Recent work has shown that LBM can reproduce the Nukiyama curve (pool boiling) and thus accurately calculate the critical heat flux. This flux corresponds to the onset of bulk boiling, known as the boiling crisis, which results in a sudden degradation of heat transfer.
The critical heat flux is a crucial issue for the Jules Horowitz Reactor (RJH), as its experimental devices (DEX) are cooled by water in either natural or forced convection. To ensure proper cooling of the DEX and the safety of the reactor, it is therefore essential to verify that the critical heat flux is not reached within the studied parameter range. It must consequently be determined with precision.
In the first part of the study, the student will define a lattice for applying LBM to an RJH device in natural convection. The student will then consolidate the results by comparing them with available data. Finally, exploratory calculations in forced convection (from laminar to turbulent flow) will be conducted.
Portable GPU-based parallel algorithms for nuclear fuel simulation on exascale supercomputers
In a context where the standards of high-performance computing (HPC) keep evolving, supercomputer designs increasingly rely on accelerators or graphics processing units (GPUs), which provide the bulk of the computing power in most supercomputers. Due to their architectural departures from CPUs and still-evolving software environments, GPUs pose profound programming challenges: they rely on massive fine-grained parallelism, so programmers must rewrite their algorithms and code to exploit this compute power effectively.
CEA has developed PLEIADES, a computing platform devoted to simulating nuclear fuel behavior, from manufacture through operation in reactors to storage. PLEIADES relies on MPI distributed-memory parallelization, allowing simulations to run on several hundred cores, and meets the needs of CEA's partners EDF and Framatome. Porting PLEIADES to the most recent computing infrastructures is nevertheless essential. In particular, providing a flexible, portable, and high-performance solution for simulations on GPU-equipped supercomputers is of major interest in order to capture ever more complex physics on ever larger computational domains.
Within this context, the present thesis aims at developing and evaluating different strategies for porting computational kernels to GPUs and at using dynamic load-balancing methods tailored to current and upcoming GPU-based supercomputers. The candidate will rely on tools developed at CEA such as the thermo-mechanical solver MFEM-MGIS [1,2] or MANTA [3]. The software solutions and parallel algorithms proposed in this thesis will eventually enable large 3D multi-physics calculations of fuel-rod behavior on supercomputers comprising thousands of computing cores and GPUs.
The candidate will work at the PLEIADES Fuel Scientific Computing Tools Development Laboratory (LDOP) of the department for fuel studies (DEC - IRESNE, CEA Cadarache), in a multidisciplinary team of mathematicians, physicists, mechanical engineers, and computer scientists. Ultimately, the contributions of the thesis aim at enriching the PLEIADES nuclear fuel simulation platform.
References:
[1] MFEM-MGIS, https://thelfer.github.io/mfem-mgis/
[2] Th. Helfer, G. Latu. "MFEM-MGIS-MFRONT, a HPC mini-application targeting nonlinear thermo-mechanical simulations of nuclear fuels at mesoscale." IAEA Technical Meeting on the Development and Application of Open-Source Modelling and Simulation Tools for Nuclear Reactors, June 2022. https://conferences.iaea.org/event/247/contributions/20551/attachments/10969/16119/Abstract_Latu.docx, https://conferences.iaea.org/event/247/contributions/20551/attachments/10969/19938/Latu_G_ONCORE.pdf
[3] O. Jamond et al. "MANTA : un code HPC généraliste pour la simulation de problèmes complexes en mécanique." https://hal.science/hal-03688160
Design and Optimisation of an innovative process for CO2 capture
A 2023 survey found that two-thirds of young French adults take the climate impact of companies' emissions into account when looking for a job. But why stop there when you could pick a job whose very goal is to reduce such impacts? The Laboratory for Process Simulation and System Analysis invites you to pursue a PhD aiming at designing and optimizing a process for CO2 capture from industrial waste gas. One key novelty of this project is operating the process under conditions different from those commonly used in industry; we believe that under such conditions the process requires less energy to operate. A further innovative aspect is the possibility of thermal coupling with an industrial facility.
The research will be carried out in collaboration with CEA Saclay and the Laboratory of Chemical Engineering (LGC) in Toulouse. First, a numerical study will be conducted using a process simulation software (ProSIM). The student will then explore and propose different options to minimize the process's energy consumption. Simulation results will be validated experimentally at the LGC, where the student will be responsible for devising and running experiments to gather data on the absorption and desorption steps.
If you are passionate about Process Engineering and want to pursue a scientifically stimulating PhD, do apply and join our team!
Seismic analysis of the soil-foundation interface: physical and numerical modelling of global tilting and local detachment
Rocking foundations offer a potential mechanism for improving seismic performance by allowing controlled uplift and settlement, but uncertainties in soil-foundation interactions limit their widespread use. Current models require complex numerical simulations, which lack accurate representation of the soil-foundation interface.
The main objective of this thesis is to model the transition from local effects (friction, uplift) to the global response of the structure (rocking, sliding, and settlement) under seismic loads, using a combined experimental and numerical approach, and thereby to enable reliable numerical modeling of rocking structures. Key goals include:
• Investigating the sensitivity of the seismic response of rocking soil-structure systems to physical parameters, using machine learning and numerical analysis.
• Developing and conducting both monotonic and dynamic experimental tests to measure soil-foundation-structure responses under rocking conditions.
• Implementing numerical simulations that account for local interaction effects and validating them against experimental results.
Finally, this research aims to propose a reliable experimental and numerical framework for enhancing seismic resilience in engineering design. The thesis will provide the student with practical engineering experience, along with expertise in laboratory testing and numerical modeling. The results will be published in international and national journals and presented at conferences, advancing research in soil and structure dynamics.
Validation of a Model-Free Data Driven Identification approach for ductile fracture behavior modeling
This research proposes a shift from traditional constitutive modeling to the recently introduced Data-Driven Computational Mechanics (DDCM) framework [1]. Instead of relying on complex constitutive equations, this approach uses a database of strain-stress states to model material behavior. The algorithm minimizes the distance between computed mechanical states and database entries while enforcing equilibrium and compatibility conditions. This new paradigm aims to overcome the uncertainties and empirical challenges associated with conventional methods.
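The distance-minimization step described above can be sketched in its simplest one-dimensional form (an illustrative sketch: the function name and the scalar weighting modulus C are assumptions, and the full DDCM algorithm couples this projection with equilibrium and compatibility constraints):

```python
import numpy as np

def nearest_material_state(strain, stress, database, C):
    """Return the database (strain, stress) pair closest to a computed
    mechanical state in a DDCM-style energy norm. In 1D the squared
    distance reduces to  C*(d_strain)^2 + (d_stress)^2 / C,  where C
    is a reference modulus weighting the two contributions."""
    de = database[:, 0] - strain   # strain differences to every entry
    ds = database[:, 1] - stress   # stress differences to every entry
    dist2 = C * de**2 + ds**2 / C
    return database[np.argmin(dist2)]
```

In a full solver this nearest-neighbor projection alternates with a constrained equilibrium solve until the database assignments stop changing.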
As a corollary of DDCM, Data-Driven Identification (DDI) has emerged as a powerful standalone method for identifying material stress responses [2, 3]. It operates with minimal assumptions and is model-free, making it particularly suitable for calibrating the complex models commonly used in industry.
Key objectives of this research include adapting DDCM strategies for plasticity [4] and fracture [5], enhancing DDI for high-performance computing, and evaluating constitutive equations. The proposed methodology involves collecting full-field measurement maps from a heterogeneous test, using high-speed cameras and digital image correlation. It will adapt DDCM to ductile fracture scenarios, implement a DDI solver in a high-performance computing framework, and conduct an assessment of a legacy constitutive model without uncertainties. The focus will be on 316L steel, a material widely used in nuclear engineering.
This thesis is the result of a collaboration between several laboratories at CEA and Centrale Nantes that are prominent in computational and experimental mechanics, applied mathematics, software engineering, and signal processing.
[1] Kirchdoerfer, Trenton, and Michael Ortiz. "Data-driven computational mechanics." Computer Methods in Applied Mechanics and Engineering 304 (2016): 81-101.
[2] Leygue, Adrien, et al. "Data-based derivation of material response." Computer Methods in Applied Mechanics and Engineering 331 (2018): 184-196.
[3] Dalémat, Marie, et al. "Measuring stress field without constitutive equation." Mechanics of Materials 136 (2019): 103087.
[4] Pham, D., et al. "Tangent space Data Driven framework for elasto-plastic material behaviors." Finite Elements in Analysis and Design 216 (2023). https://doi.org/10.1016/j.finel.2022.103895
[5] Carrara, P., De Lorenzis, L., Stainier, L., and Ortiz, M. "Data-driven fracture mechanics." Computer Methods in Applied Mechanics and Engineering 372 (2020). https://doi.org/10.1016/j.cma.2020.113390
Exploring High-Frequency Fast-Electron-Driven Instabilities towards Application to WEST
In current tokamaks, the electron distribution is heavily influenced by external heating systems, like Electron Cyclotron Resonance Heating (ECRH) or Lower Hybrid (LH) heating, which generate a large population of fast electrons. This is also expected in next-generation tokamaks, such as ITER, where a substantial part of the input power is deposited on electrons. A significant population of fast electrons can destabilize high-frequency instabilities, including Alfvén Eigenmodes (AEs), as observed in various tokamaks. However, this phenomenon remains understudied, especially regarding the specific resonant electron population triggering these instabilities and the impact of electron-driven AEs on multi-scale turbulence dynamics in the complex plasma environment.
The PhD project aims to explore the physics of high-frequency electron-driven AEs in realistic plasma conditions, applying insights to WEST experiments for in-depth characterization of these instabilities. The candidate will use advanced numerical codes, for which expertise is available at the IRFM laboratory, to analyze realistic plasma conditions with fast-electron-driven AEs from previous experiments and grasp the essential physics at play. Code development will also be necessary to capture key aspects of this physics. Once this knowledge is established, predictive modeling for the WEST environment will guide experiments to observe these instabilities.
Based at CEA Cadarache, the student will collaborate with different teams, from the theory and modeling group to the WEST experimental team, gaining diverse expertise in a stimulating environment. Collaborations with EUROfusion task forces will further provide an enriching international experience.
Elemental characterization by neutron activation for the circular economy
As part of the circular economy, a major objective is to facilitate the recycling of strategic materials needed by industry. This first requires the ability to accurately locate them in industrial components that are no longer in use. Non-destructive nuclear measurement meets this objective through prompt gamma neutron activation analysis (PGNAA). This approach involves interrogating the samples to be analyzed with an electrical generator emitting pulses of fast neutrons that thermalize in a polyethylene and graphite cell; between the pulses, radiative-capture gamma rays are measured. The advantage of this approach is that high-value elements such as dysprosium or neodymium have a high radiative-capture cross-section for thermal neutrons, and that thermal neutrons can probe deep into large volumes of matter (several liters).
A previous thesis demonstrated the feasibility of this technique and opened up promising avenues of research, with two complementary strands to make concrete progress towards practical recycling objectives. The first involves experimental and simulation studies of the performance of gamma cascade measurement on cases representative of industrial needs (size and composition of objects, measurement speed). The second will enrich and improve the exploitation of the vast amount of information available from gamma-ray cascade measurements.
In practice, the work will be carried out as part of a collaboration between CEA and Forschungszentrum Jülich (FZJ) in Germany. The first half of the thesis will take place at the CEA IRESNE Nuclear Measurement Laboratory, and the second half at FZJ (Jülich Centre for Neutron Science, JCNS). The German part of the thesis will involve experiments with the FaNGaS device at the Heinz Maier-Leibnitz Zentrum (MLZ) in Garching.
A macroscale approach to evaluate the long-term degradation of concrete structures under irradiation
In nuclear power plants, the concrete biological shield (CBS) is designed to sit very close to the reactor vessel. It is expected to absorb radiation while acting as a load-bearing structure. It is thus exposed, over the lifetime of the plant, to high levels of radiation that can have long-term consequences, notably a decrease in material and structural mechanical properties. Given its key role, it is necessary to develop tools and models to predict the behavior of such structures at the macroscopic scale.
Building on results obtained at a lower scale - mesoscopic simulations, which provide a better understanding of irradiation effects, and experimental results expected to feed the simulations (material properties in particular) - it is proposed to develop a macroscopic methodology applicable to the concrete biological shield. This approach will include several phenomena, among which radiation-induced volumetric expansion, induced creep, thermal deformations, and mechanical loading.
These physical phenomena will be modeled within the framework of continuum damage mechanics to evaluate mechanical degradation at the macroscopic scale, in terms of displacements and damage in particular. The main challenge of the numerical developments will be proposing suitable evolution laws, particularly for the coupling between microstructural damage and structural-level damage due to the stresses applied to the structure.
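To fix ideas on the continuum-damage ingredients mentioned above, a minimal one-dimensional scalar damage law with linear softening can be written as follows (a textbook-style sketch under assumed parameters, not the model to be developed in the thesis):

```python
def scalar_damage_stress(strain_history, E, eps0, epsf):
    """1D continuum damage: stress = (1 - d) * E * strain, where the
    damage variable d is driven by the largest strain kappa reached so
    far (irreversible), with linear softening between eps0 (damage
    onset) and epsf (full failure)."""
    d, kappa, stresses = 0.0, eps0, []
    for eps in strain_history:
        kappa = max(kappa, abs(eps))       # history variable never decreases
        if kappa > eps0:
            # classical linear-softening evolution law: d=0 at eps0, d=1 at epsf
            d = min(1.0, epsf * (kappa - eps0) / (kappa * (epsf - eps0)))
        stresses.append((1.0 - d) * E * eps)
    return stresses
```

The irreversibility of `kappa` is what distinguishes damage from nonlinear elasticity: after unloading, the degraded stiffness `(1 - d) * E` persists.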
3D ultrasound imaging using orthogonal row and column addressing of the matrix array for ultrasonic NDT
This thesis is part of the activities of the Digital Instrumentation Department (DIN) in Non-Destructive Testing (NDT), and aims to design a new, fast and advanced 3D ultrasound imaging method using matrix arrays. The goal is to produce three-dimensional ultrasound images of the internal volume of a structure that may contain defects (e.g. cracks), as realistically as possible, with improved performance in terms of data acquisition and 3D image computation time. The proposed method will build on an approach developed in medical imaging based on Row and Column Addressed (RCA) arrays. The first part will focus on developing new data-acquisition strategies for matrix arrays and the associated ultrafast 3D imaging using the RCA approach, in order to handle conventional NDT inspection configurations. In the second part, the developed methods will be validated on simulated data and evaluated on experimental data acquired with a conventional 16x16-element matrix array operating in RCA mode. Finally, a real-time proof of concept will be demonstrated by implementing the new 3D imaging methods in a laboratory acquisition system.
Machine Learning-based Algorithms for the Future Upstream Tracker Standalone Tracking Performance of LHCb at the LHC
This proposal focuses on enhancing tracking performance for the LHCb experiment during Run 5 at the Large Hadron Collider (LHC) through the exploration of various machine learning-based algorithms. The Upstream Tracker (UT) sub-detector, a crucial component of the LHCb tracking system, plays a vital role in reducing the fake-track rate by filtering out incorrectly reconstructed tracks early in the reconstruction process. As the LHCb detector investigates rare particle decays, studies CP violation in the Standard Model, and probes the quark-gluon plasma in PbPb collisions, precise tracking becomes increasingly important.
With upcoming upgrades planned for 2035 and the anticipated increase in data rates, traditional tracking methods may struggle to meet the computational demands, especially in nucleus-nucleus collisions where thousands of particles are produced. Our project will investigate a range of machine learning techniques, including those already demonstrated in LHCb's Vertex Locator (VELO), to enhance the tracking performance of the UT. By applying diverse methods, we aim to improve early-stage track reconstruction, increase efficiency, and decrease the fake-track rate. Among these techniques, Graph Neural Networks (GNNs) are a particularly promising option, as they can exploit spatial and temporal correlations in detector hits to improve tracking accuracy and reduce computational burdens.
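To give a flavor of how such GNN pipelines start, the first stage is typically building a graph of candidate edges between hits on consecutive detector layers, which the network then classifies edge by edge. A deliberately simplified sketch (hypothetical geometry and selection window, not LHCb code) could be:

```python
def build_hit_graph(hits, max_slope):
    """Build candidate edges between hits on consecutive layers.
    `hits` is a list of (layer, x, z) tuples; an edge (i, j) is kept
    when the segment's slope |dx/dz| stays below max_slope, i.e. the
    pair is geometrically compatible with a plausible track."""
    edges = []
    for i, (li, xi, zi) in enumerate(hits):
        for j, (lj, xj, zj) in enumerate(hits):
            if lj == li + 1 and abs((xj - xi) / (zj - zi)) < max_slope:
                edges.append((i, j))
    return edges
```

In production pipelines this quadratic pairing is replaced by spatially indexed lookups, and the loose geometric cut is tuned so that true track segments are kept with high efficiency while the GNN prunes the remaining fakes.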
This exploration of new methods will involve development work tailored to the specific hardware selected for deployment, whether GPUs, CPUs, or FPGAs, all part of the future LHCb data architecture. We will benchmark these algorithms against current tracking methods to quantify improvements in performance, scalability, and computational efficiency. Additionally, we plan to integrate the most effective algorithms into the LHCb software framework to ensure compatibility with existing data pipelines.