Numerical simulation of turbulence models on distorted meshes

Turbulence plays an important role in many industrial applications (flows, heat transfer, chemical reactions). Since Direct Numerical Simulation (DNS) is often prohibitively expensive in computing time, Reynolds-Averaged Navier-Stokes (RANS) models are used instead in computational fluid dynamics (CFD) codes. The best known of these, published in the 1970s, is the k-epsilon model.
It adds two nonlinear equations, coupled to the Navier-Stokes equations, describing the transport of the turbulent kinetic energy (k) and of its dissipation rate (epsilon). A very important property to enforce is the positivity of k and epsilon, which is necessary for the system of equations modeling the turbulence to remain stable. It is therefore crucial that the discretization of these models preserve monotonicity. Since the equations are of convection-diffusion type, it is well known that with classical linear schemes (finite elements, finite volumes, etc.) the numerical solutions are likely to oscillate on distorted meshes. Negative values of k and epsilon then cause the simulation to abort.
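For reference, the two transport equations can be sketched in their standard high-Reynolds-number form (usual notation; the exact variant implemented in a given code may differ):

```latex
\frac{\partial k}{\partial t} + \mathbf{u}\cdot\nabla k
  = \nabla\cdot\!\Big[\Big(\nu + \frac{\nu_t}{\sigma_k}\Big)\nabla k\Big] + P_k - \varepsilon,
\qquad
\frac{\partial \varepsilon}{\partial t} + \mathbf{u}\cdot\nabla \varepsilon
  = \nabla\cdot\!\Big[\Big(\nu + \frac{\nu_t}{\sigma_\varepsilon}\Big)\nabla \varepsilon\Big]
  + C_{\varepsilon 1}\,\frac{\varepsilon}{k}\,P_k - C_{\varepsilon 2}\,\frac{\varepsilon^2}{k},
\qquad
\nu_t = C_\mu \frac{k^2}{\varepsilon}.
```

The positivity issue is visible directly in these formulas: both the production and dissipation terms divide by k, so a negative or vanishing discrete value of k or epsilon destabilizes the whole coupled system.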
We are interested in nonlinear methods that yield compact stencils. For diffusion operators, they rely on nonlinear combinations of the fluxes on either side of each edge. These approaches have proved effective, in particular for suppressing oscillations on very distorted meshes. We can also draw on ideas from the literature, where, for example, nonlinear corrections applied to classical linear schemes have been described. The idea would be to apply this type of method to the diffusive operators appearing in the k-epsilon model. In this context it will also be interesting to transform classical gradient-approximating schemes from the literature into nonlinear two-point fluxes. Fundamental questions about the consistency and coercivity of the schemes under study will need to be addressed in the case of general meshes.
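One common construction from the literature can be sketched as follows (generic notation; this illustrates the principle, not necessarily the exact scheme that will be studied). For an edge sigma between cells K and L, two one-sided consistent flux approximations are combined with solution-dependent weights:

```latex
F_\sigma(u) = \mu_K(u)\, F_{K,\sigma}(u) - \mu_L(u)\, F_{L,\sigma}(u),
\qquad \mu_K,\ \mu_L \ge 0,\quad \mu_K + \mu_L = 1,
```

where the weights are chosen so that the resulting flux can be rewritten as a monotone two-point flux of the form tau_sigma(u)(u_K - u_L) with tau_sigma(u) >= 0. Positivity of the discrete solution then follows from an M-matrix argument, at the price of solving a nonlinear system, typically by fixed-point iteration.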
During this thesis, time will be devoted to settling the basic problems of these methods (first and second years), both on the theoretical side and in the computer implementation, which can be carried out in the Castem, TrioCFD or Trust development environments. We will then focus on smooth analytical solutions and on application cases representative of the community.

Assisted generation of complex computational kernels in solid mechanics

The behavior laws used in numerical simulations describe the physical characteristics of the simulated materials. As our understanding of these materials evolves, the complexity of these laws increases. Integrating these laws is a critical step for the performance and robustness of scientific computations, and it can lead to intrusive and complex developments in the code.

Many numerical platforms, such as FEniCS, Firedrake, FreeFEM, and COMSOL, offer Just-In-Time (JIT) code generation techniques to handle various physics. This JIT approach significantly reduces the time required to implement new simulations, providing great versatility to the user. Additionally, it allows for optimizations specific to the cases being treated and facilitates porting to various architectures (CPU or GPU). Finally, this approach hides implementation details; any changes in these details are invisible to the user and absorbed by the code generation layer.
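The JIT pattern these platforms share can be illustrated with a deliberately minimal sketch (illustrative only; the function names are hypothetical, and this is not the actual FEniCS or Firedrake API): a kernel is generated as source code from a high-level description, compiled once, cached, and then reused.

```python
# Minimal illustration of the JIT code-generation pattern (hypothetical names,
# not a real platform API): build source text from a high-level description,
# compile it once, cache it, and reuse the compiled kernel afterwards.
_kernel_cache = {}

def jit_kernel(expression, name="kernel"):
    """Generate, compile and cache a pointwise kernel evaluating `expression` in x."""
    if expression in _kernel_cache:
        return _kernel_cache[expression]          # cache hit: no recompilation
    source = f"def {name}(x):\n    return {expression}\n"
    namespace = {}
    exec(compile(source, f"<jit:{name}>", "exec"), namespace)
    _kernel_cache[expression] = namespace[name]
    return namespace[name]

# Use the generated kernel in a midpoint-rule quadrature over [0, 1].
integrand = jit_kernel("2.0 * x * x + 1.0")
n = 1000
value = sum(integrand((i + 0.5) / n) for i in range(n)) / n   # ~ 5/3
```

Real platforms generate C or C++ rather than Python and compile it to native code, but the structure (description in, cached compiled kernel out) is the same.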

However, these techniques are generally limited to the assembly steps of the linear systems to be solved and do not include the crucial step of integrating behavior laws.

Inspired by the successful experience of the open-source project mgis.fenics [1], this thesis aims to develop a Just-In-Time code generation solution dedicated to the next-generation structural mechanics code Manta [2], developed by CEA. The objective is to enable strong coupling with behavior laws generated by MFront [3], thereby improving the flexibility, performance, and robustness of numerical simulations.

The selected PhD candidate should have a solid background in computational science and a strong interest in numerical simulation and C++ programming. They should be capable of working independently and demonstrate initiative. The doctoral student will benefit from guidance from the developers of MFront and Manta (CEA), as well as the developers of the A-Set code (a collaboration between Mines-Paris Tech, Onera, and Safran). This collaboration within a multidisciplinary team will provide a stimulating and enriching environment for the candidate.

Furthermore, the thesis work will be enhanced by the opportunity to participate in conferences and publish articles in peer-reviewed scientific journals, offering national and international visibility to the thesis results.

The PhD will take place at CEA Cadarache, in south-eastern France, in the Nuclear Fuel Studies Department of the Institute for Research on Nuclear Systems for Low-Carbon Energy Production (IRESNE)[4]. The host laboratory is the LMPC, whose role is to contribute to the development of the physical components of the PLEIADES digital platform [5], co-developed by CEA and EDF.

[1] https://thelfer.github.io/mgis/web/mgis_fenics.html
[2] MANTA: a general-purpose HPC code for the simulation of complex problems in mechanics. https://hal.science/hal-03688160
[3] https://thelfer.github.io/tfel/web/index.html
[4] https://www.cea.fr/energies/iresne/Pages/Accueil.aspx
[5] PLEIADES: A numerical framework dedicated to the multiphysics and multiscale nuclear fuel behavior simulation. https://www.sciencedirect.com/science/article/pii/S0306454924002408

Clean Room Activity Simulation Tool Development

During a previous internship, a tool for simulating batch execution in a clean room was developed. This tool takes into account processing times on equipment, equipment failures, and certain holds related to integration. The batches injected into this simulator come from the actual history of the clean room.
The goal of the PhD is to develop a simulator that can prospectively simulate batch execution based on the POR routes of the main themes present or upcoming in the clean room. Based on these routes, the tool should be able to generate development batches for technology bricks (short loops), as well as functional batches including test plates and pilot plates. A nomenclature and an enrichment of the routes with metadata will need to be defined to enable the tool to generate batches realistically, in terms of both process and project scheduling.
Different simulation engines will be compared in terms of performance and accuracy. Classical solution engines (discrete simulation, event-driven simulation, conjunctive-graph methods) as well as innovative approaches (primarily reinforcement learning, but also supervised learning) will be studied.
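A discrete-event engine of the kind mentioned above reduces to a very small core, sketched here under simplifying assumptions (a single piece of equipment, first-come-first-served batches, no failures or holds; the batch data are hypothetical, not clean-room history):

```python
import heapq

# Minimal discrete-event engine for batch execution on a single equipment.
# Illustrative sketch only: FIFO service, no failures, no integration holds.
def simulate(batches):
    """batches: list of (arrival_time, processing_time); returns {index: completion_time}."""
    events = [(arrival, i) for i, (arrival, _) in enumerate(batches)]
    heapq.heapify(events)                          # event queue ordered by arrival
    equipment_free_at = 0.0
    completion = {}
    while events:
        arrival, i = heapq.heappop(events)
        start = max(arrival, equipment_free_at)    # wait if the equipment is busy
        equipment_free_at = start + batches[i][1]  # occupy the equipment
        completion[i] = equipment_free_at
    return completion

done = simulate([(0.0, 3.0), (1.0, 2.0), (2.0, 1.0)])
# batch 0 finishes at t=3, batch 1 at t=5, batch 2 at t=6
```

A production simulator adds failure events, hold/release events, and multiple equipment queues to the same event loop, which is what makes the event-driven formulation attractive as a baseline against learning-based engines.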
The development and publication of a methodology for creating simulation instances (testbed) will also be carried out during this PhD work.

GPU-accelerated characteristics method for 3D neutron transport combining the linear-surface method and the axial polynomial expansion

This thesis falls within the framework of advancing numerical computation techniques for reactor physics. Specifically, it focuses on the implementation of methods that incorporate higher-order spatial expansions for the neutron flux and cross-sections. The primary objective is to accelerate, through GPU programming, both existing algorithms and those to be developed. By harnessing the computational power of GPUs, this research aims to enhance the efficiency and accuracy of reactor physics simulations, thereby contributing to the broader field of nuclear engineering and safety.

Simulation of nuclear glass gels at the mesoscopic scale using a quaternary system

This research work is part of studies conducted on the long-term behavior of nuclear glass used to immobilize radioactive waste and potentially intended for geological disposal. The challenge lies in understanding the mechanisms of alteration and gel formation (a passivating layer that can slow down the rate of glass alteration) by water and in predicting the kinetics of radionuclide release over the long term.

The proposed simulation approach aims to predict, at a mesoscopic scale, the maturation process of the gel formed during the alteration of glass by water using a ternary “phase field model” composed of silicon, boron, and water (leachate), to which aluminum will be added.

The underlying quaternary mathematical model will consist of a set of coupled nonlinear partial differential equations, based on Allen-Cahn and transport equations. The numerical solution of these equations is performed using the Lattice Boltzmann Method (LBM), programmed in C++ in the massively parallel LBM_saclay code, which runs on several HPC architectures, both multi-CPU and multi-GPU.
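In generic form (a sketch only; the exact free-energy functional and couplings used in LBM_saclay may differ), the phase-field evolution follows an Allen-Cahn equation coupled to transport equations for the composition fields:

```latex
\frac{\partial \phi}{\partial t} = -M_\phi\, \frac{\delta \mathcal{F}}{\delta \phi}
  = M_\phi\left(\kappa\,\nabla^2\phi - f'(\phi)\right),
\qquad
\frac{\partial c_i}{\partial t} + \nabla\cdot(\mathbf{u}\,c_i)
  = \nabla\cdot\big(D_i(\phi)\,\nabla c_i\big),
```

where phi is the phase indicator, F a free-energy functional with double-well density f, and c_i the composition fields (e.g. boron or aluminum) with phase-dependent diffusivities D_i. The strong nonlinear coupling between these equations is what motivates the algorithmic work mentioned below.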

The proposed research requires a solid foundation in applied mathematics and programming in order to develop the algorithms necessary for the correct resolution of the new system of strongly coupled equations.

Artificial Intelligence for the Modeling and Topographic Analysis of Electronic Chips

The inspection of wafer surfaces is critical in microelectronics to detect defects affecting chip quality. Traditional methods, based on physical models, are limited in accuracy and computational efficiency. This thesis proposes using artificial intelligence (AI) to characterize and model wafer topography, leveraging optical interferometry techniques and advanced AI models.

The goal is to develop AI algorithms capable of predicting topographical defects (erosion, dishing) with high precision, using architectures such as convolutional neural networks (CNN), generative models, or hybrid approaches. The work will include optimizing models for fast inference and robust generalization while reducing manufacturing costs.
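The convolutional building block underlying such architectures can be illustrated in a few lines (an illustrative numpy sketch: a real model would be trained in a deep-learning framework, and the height map and edge-detection kernel below are hypothetical stand-ins for wafer topography data):

```python
import numpy as np

# Minimal 2D cross-correlation, the core operation of a convolutional layer.
# Illustrative sketch only: hypothetical height map and hand-picked kernel.
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of `image` with `kernel`."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

height_map = np.zeros((6, 6))
height_map[:, 3:] = 1.0                    # a topographic step (e.g. a dishing edge)
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])     # horizontal-gradient kernel
response = conv2d(height_map, sobel_x)     # strongest response along the step
```

In a trained CNN the kernels are learned rather than hand-picked, but the locality of this operation is what makes such models well suited to detecting spatial defects like erosion and dishing.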

This project aligns with efforts to improve microfabrication processes, with potential applications in the semiconductor industry. The expected results will contribute to a better understanding of surface defects and the optimization of production processes.

Methods for the Rapid Detection of Gravitational Events from LISA Data

The thesis focuses on the development of rapid analysis methods for the detection and characterization of gravitational waves, particularly in the context of the upcoming LISA (Laser Interferometer Space Antenna) space mission, planned by ESA for around 2035. Data analysis involves several stages, one of the first being the rapid analysis pipeline, whose role is to detect new events and to characterize them. A key aspect is the rapid estimation of the sky position of the gravitational-wave source and of its characteristic times, such as the coalescence time in the case of black hole mergers. These analysis tools constitute the low-latency analysis pipeline.

Beyond its value for LISA, this pipeline also plays a crucial role in enabling rapid electromagnetic follow-up of detected events (by ground- or space-based observatories, from radio waves to gamma rays). While fast analysis methods have been developed for ground-based interferometers, the case of space-borne interferometers such as LISA remains largely unexplored. A tailored data processing method will thus have to take into account the packet-based data transmission mode, which requires detecting events from incomplete data, and it must enable the detection, discrimination, and analysis of various sources from data affected by artifacts such as glitches.

In this thesis, we propose to develop a robust and effective method for the early detection of massive black hole binaries (MBHBs). This method should accommodate the data flow expected for LISA, process potential artifacts (e.g., non-stationary noise and glitches), and allow the generation of alerts, including a detection confidence index and a first estimate of the source parameters (coalescence time, sky position, and binary mass); such a rapid initial estimate is essential for optimally initializing a more accurate and computationally expensive parameter estimation.
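One standard building block for such early detection is matched filtering, sketched below on toy data (the waveform, noise level, and lag are illustrative assumptions, not LISA data or the pipeline to be developed):

```python
import numpy as np

# Illustrative matched-filter detection of a known waveform in white noise.
# Toy signal model only: a sine-Gaussian template, not an MBHB waveform.
rng = np.random.default_rng(42)
n = 4096
t = np.arange(n)
template = np.sin(2 * np.pi * 0.01 * t) * np.exp(-(((t - 2048) / 100.0) ** 2))
template /= np.linalg.norm(template)           # unit-norm template

true_lag = 500
data = 2.0 * np.roll(template, true_lag) + rng.normal(0.0, 0.02, n)

# Circular cross-correlation via FFT: corr[k] peaks at the template's lag.
corr = np.fft.ifft(np.fft.fft(data) * np.conj(np.fft.fft(template))).real
lag = int(np.argmax(corr))                     # estimated arrival time (samples)
snr = corr[lag] / corr.std()                   # crude detection statistic
```

The peak location gives a first arrival-time estimate and the peak height a detection statistic, which is exactly the kind of quick, cheap output an alert-generation stage needs before the expensive full parameter estimation is launched.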

Euclid Weak Lensing Cluster Cosmology Inference

Galaxy clusters, which form at the intersection of matter filaments, are excellent tracers of the large-scale matter distribution in the Universe and are a valuable source of information for cosmology.
The sensitivity of the Euclid space mission (launched in 2023) allows the blind detection of galaxy clusters through gravitational lensing (i.e., directly linked to the projected total mass). Combined with its wide survey area (14,000 deg²), Euclid should allow the construction of a galaxy cluster catalogue that is unique in both its size and its selection properties.
In contrast to existing cluster catalogues, which are typically based on baryonic content (e.g., X-ray emission from intra-cluster gas, the Sunyaev-Zel’dovich effect in the millimeter regime, or optical emission from galaxies), a catalogue derived from gravitational lensing is directly sensitive to the total mass of the clusters. This makes it truly representative of the underlying cluster population, a significant advantage for both galaxy cluster studies and cosmology.
In this context, we have developed a multi-scale detection method specifically designed to identify galaxy clusters based only on their gravitational lensing signal; this method has been selected to produce the Euclid cluster catalogue.
The goal of this PhD project is to build and characterize the galaxy cluster catalogue identified via weak lensing in the data collected during the first year of Euclid observations (DR1), based on this detection method. The candidate will derive cosmological constraints from the modelling of the cluster abundance, using the classical Bayesian framework, and will also investigate the potential of Simulation-Based Inference (SBI) methods for cosmological inference.
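The classical Bayesian step can be illustrated by the Poisson likelihood of binned cluster counts, the core ingredient of an abundance analysis (a toy sketch; the predicted counts below are illustrative stand-ins, not an actual Euclid abundance model):

```python
import numpy as np
from math import lgamma

# Toy Poisson log-likelihood for cluster number counts in mass/redshift bins.
# Illustrative only: the "models" are stand-in numbers, not cosmology predictions.
def poisson_loglike(observed, predicted):
    """log L = sum_i [ N_i ln(mu_i) - mu_i - ln(N_i!) ] over bins."""
    obs = np.asarray(observed, dtype=float)
    mu = np.asarray(predicted, dtype=float)
    log_fact = np.array([lgamma(n + 1.0) for n in obs])
    return float(np.sum(obs * np.log(mu) - mu - log_fact))

obs = [12, 8, 3]                                  # observed counts in three bins
good = poisson_loglike(obs, [11.5, 8.2, 3.1])     # model close to the data
bad = poisson_loglike(obs, [30.0, 1.0, 0.1])      # strongly discrepant model
```

In the real analysis the predicted counts come from a halo mass function integrated over the survey selection, and this likelihood is sampled over cosmological parameters; Simulation-Based Inference replaces the explicit likelihood with one learned from simulations.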

Staggered schemes for the Navier-Stokes equations with general meshes

The simulation of the Navier-Stokes equations requires accurate and robust numerical methods that take into account diffusion operators, gradient and convection terms. Operational approaches have shown their effectiveness on simplices. However, in some models or codes (TrioCFD, Flica5), it may be useful to improve the accuracy of the solutions locally using an error estimator, or to handle general meshes. We are interested here in staggered schemes, in which the pressure is computed at the centre of each cell and the velocities on the edges (or faces) of the mesh. This yields methods that are naturally accurate at low Mach numbers. New schemes have recently been presented in this context and have shown their robustness and accuracy. However, these discretisations can be very costly in terms of memory and computation time compared with MAC schemes on regular meshes.
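The staggered placement can be made concrete with a minimal one-dimensional sketch (uniform mesh only; an illustration of the layout, not one of the schemes discussed here): pressure at cell centres, velocity at faces, with discrete gradient and divergence adjoint to each other up to sign.

```python
import numpy as np

# Minimal 1D sketch of a staggered (MAC-type) layout on a uniform mesh:
# pressure p at cell centres, velocity u at cell faces. The discrete gradient
# maps centres to interior faces, the discrete divergence maps faces back to
# centres, and the two operators satisfy a summation-by-parts duality.
n, h = 8, 1.0 / 8
p = np.random.default_rng(1).normal(size=n)    # cell-centre pressures
grad_p = (p[1:] - p[:-1]) / h                  # gradient on interior faces

u = np.zeros(n + 1)                            # face velocities, zero at walls
u[1:-1] = grad_p
div_u = (u[1:] - u[:-1]) / h                   # divergence at cell centres

# Discrete duality <div u, p> = -<u, grad p> with homogeneous wall values.
lhs = h * np.dot(div_u, p)
rhs = -h * np.dot(u[1:-1], grad_p)
```

This duality between the discrete gradient and divergence is what underlies the stability of such schemes, and preserving it on general meshes is one of the fundamental questions mentioned below.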
We are interested in "gradient"-type methods. Some of them are based on a variational formulation with pressure unknowns at the cell centres and velocity vector unknowns on the edges (or faces) of the cells. This approach has been shown to be effective, particularly in terms of robustness. It should also be noted that an algorithm with the same degrees of freedom as the MAC methods has been proposed and gives promising results. The idea would therefore be to combine these two approaches, namely a "gradient"-type method with the same degrees of freedom as the MAC methods. Initially, the focus will be on recovering MAC schemes on regular meshes. Fundamental questions need to be examined in the case of general meshes: stability, consistency, conditioning of the system to be inverted, numerical locking. An attempt may also be made to recover the gains in accuracy obtained with the methods presented in the literature for discretising pressure gradients.
During the course of the thesis, time will be taken to settle the basic problems of this method (first and second years), both on the theoretical aspects and on the computer implementation. It may be carried out in the Castem, TrioCFD, Trust or POLYMAC development environments. The focus will be on application cases that are representative of the community.

Electrical Impedance Tomography for the Study of Two-Phase Liquid Metal/Gas Flows

As part of the sustainable use of nuclear energy within a carbon-free energy mix, in combination with renewable energies, fourth-generation fast neutron reactors are crucial for closing the fuel cycle and managing uranium resources. Ensuring the safety of such sodium-cooled reactors relies to a significant extent on the early detection of gas voids in their circuits. In these opaque, metallic environments, optical imaging methods are ineffective, making it necessary to develop innovative techniques.
This PhD project is part of the development of Electrical Impedance Tomography (EIT) applied to liquid metals, a non-intrusive approach enabling the imaging of local conductivity distributions within a flow.
The work will focus on the study of electromagnetic phenomena in two-phase metal/gas systems, in particular the skin effect and eddy currents generated by oscillating fields.
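The skin effect mentioned above is governed by the classical skin depth, which sets the depth to which an oscillating field penetrates the conductor and hence the spatial sensitivity attainable at a given excitation frequency:

```latex
\delta = \sqrt{\frac{2}{\mu\,\sigma\,\omega}},
```

where mu is the magnetic permeability, sigma the electrical conductivity of the metal, and omega the angular frequency of the oscillating field. For a good conductor such as Galinstan, higher frequencies confine the induced eddy currents to a thinner outer layer, a trade-off that directly constrains the design of the inversion methods.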
Artificial-intelligence approaches, such as Physics-Informed Neural Networks (PINNs), will be explored to combine numerical learning with physical constraints and will be compared with purely numerical simulations.
The objective is to establish refined physical models adapted to metallic environments and to design inversion methods robust against measurement noise.
Experiments on Galinstan will be conducted to validate the models and demonstrate the feasibility of detecting gas inclusions in a liquid metal.
This research, carried out at the IRESNE institute of CEA Cadarache, will open up new perspectives in electromagnetic imaging for opaque, highly conductive media.
