Staggered schemes for the Navier-Stokes equations with general meshes

The simulation of the Navier-Stokes equations requires accurate and robust numerical methods that account for diffusion operators, gradient terms and convection terms. Operational approaches have proven effective on simplexes. However, in some models or codes (TrioCF, Flica5), it may be useful to improve the accuracy of solutions locally using an error estimator, or to handle general meshes. We are interested here in staggered schemes, in which the pressure is computed at the cell centres and the velocities on the edges (or faces) of the cells. This yields methods that are naturally accurate at low Mach numbers.
New schemes have recently been presented in this context and have shown their robustness and accuracy. However, these discretisations can be very costly in memory and computation time compared with MAC schemes on regular meshes.
Here we are interested in "gradient"-type methods. Some of these are based on a variational formulation with pressure unknowns at the cell centres and velocity vector unknowns on the edges (or faces) of the cells. This approach has proven effective, particularly in terms of robustness. It should also be noted that an algorithm with the same degrees of freedom as the MAC methods has been proposed and gives promising results.
The idea is therefore to combine these two approaches, namely a "gradient" method with the same degrees of freedom as the MAC methods. Initially, the focus will be on recovering MAC schemes on regular meshes. Fundamental questions must then be examined in the case of general meshes: stability, consistency, conditioning of the system to be inverted, and numerical locking. An attempt may also be made to recover the accuracy gains of previously presented pressure-gradient discretisation methods.
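On a uniform 1-D mesh, the staggered placement described above can be sketched in a few lines; the mesh size and operators below are purely illustrative, not taken from any of the codes cited.

```python
import numpy as np

# Minimal 1-D staggered (MAC-type) layout: pressures p_0..p_{n-1} at cell
# centres, velocities at the faces, uniform spacing h. This toy only
# illustrates the duality between the discrete pressure gradient (living on
# faces) and the velocity divergence (living on cells).
n, h = 8, 1.0 / 8

# Discrete gradient: maps n cell pressures to the n-1 interior faces.
G = np.zeros((n - 1, n))
for f in range(n - 1):
    G[f, f], G[f, f + 1] = -1.0 / h, 1.0 / h

# Discrete divergence on cells from interior-face velocities (homogeneous
# boundary velocities), built independently.
D = np.zeros((n, n - 1))
for c in range(n):
    if c > 0:
        D[c, c - 1] = -1.0 / h
    if c < n - 1:
        D[c, c] = 1.0 / h

# The staggered placement gives D = -G^T: the discrete integration-by-parts
# identity that the variational ("gradient") formulation preserves.
print(np.allclose(D, -G.T))
```

Preserving this gradient/divergence duality on general (non-Cartesian) meshes is precisely what makes the variational formulation attractive for the combination envisaged here.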
During the thesis, time will be devoted (first and second years) to settling the basic questions raised by this method, on both the theoretical aspects and the computer implementation. The work may be carried out in the Castem, TrioCFD, Trust or POLYMAC development environments. The focus will be on application cases that are representative of the community.

Data assimilation for hypersonic laminar turbulent transition reconstruction

To design a hypersonic vehicle, it is necessary to accurately predict the heat fluxes at the wall. These fluxes are strongly conditioned by the nature of the boundary layer (laminar, transitional or turbulent). The mechanisms behind the laminar-turbulent transition are complex and still poorly understood. Moreover, transitional phenomena depend strongly on the fluctuations of the free stream around the model in wind tunnel testing, or around the vehicle in flight. These fluctuations are very difficult to measure precisely, which makes comparisons between computation and experiment very delicate. A detailed analysis of the flow physics during a test therefore requires the results of high-fidelity computations, and it is crucial to be able to reproduce numerically the upstream disturbances encountered. During the thesis, we will develop data assimilation methods, based on high-fidelity simulation, to invert, i.e. determine, the free-stream fluctuations from observations. The focus will be on ensemble techniques based on Bayesian inference, with emphasis on integrating a priori knowledge of the fluctuations. In addition, we will seek to reduce the computational cost and to quantify the uncertainties of the solution obtained. The approach will be applied in particular to the flow around the CCF12 (cone-cylinder-flare) geometry tested in the R2Ch wind tunnel at ONERA.
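As a toy illustration of the Bayesian-inference ingredient, a conjugate Gaussian update shows how a priori knowledge of a disturbance amplitude is combined with wall observations. The linear sensor model and all numbers below are made up; they merely stand in for a high-fidelity transition computation.

```python
import numpy as np

# Toy Gaussian-Bayesian inversion of an upstream-disturbance amplitude a from
# wall observations. The linear model y = H a is a stand-in for a high-fidelity
# simulation; H, noise levels and the prior are illustrative assumptions.
H = np.array([0.8, 1.2, 0.5])      # sensitivity of 3 wall sensors to a
sigma_obs = 0.1                     # assumed observation noise std
a_prior, var_prior = 1.0, 0.5**2    # a priori knowledge of the fluctuation level

a_true = 1.6
y = H * a_true                      # synthetic noise-free observations

# Conjugate Gaussian update: posterior precision = prior + observation precisions.
prec_post = 1.0 / var_prior + H @ H / sigma_obs**2
var_post = 1.0 / prec_post
a_post = var_post * (a_prior / var_prior + H @ y / sigma_obs**2)

print(a_post, var_post < var_prior)
```

The posterior mean moves from the prior towards the value implied by the observations, and the posterior variance quantifies the remaining uncertainty, which is the quantity the thesis proposes to report alongside the reconstructed fluctuations.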

Conditional generative model for dose calculation in radiotherapy

Simulating particle propagation through matter with the Monte Carlo (MC) method is known for its accuracy, but its applications are sometimes limited by its cost in computing resources and time. This limitation is all the more acute for dose calculation in radiotherapy, since a specific configuration must be simulated for each patient, which hinders its use in clinical routine.

The objective of this thesis is to enable accelerated and frugal dose calculation by training a conditional generative model to replace a set of phase-space files (PSF); the architecture will be chosen according to the specificities of the problem (GAN, VAE, diffusion models, normalizing flows, etc.). Beyond the acceleration, the technique should bring an important gain in efficiency by reducing the number of particles to be simulated, both in the learning phase and when generating particles for the dose calculation (model frugality).

We propose the following method:
- First, for the fixed parts of the linear accelerator, the use of a conditional generative model would replace the storage of the simulated particles in a PSF, whose data volume is particularly large. The compactness of the model would limit the exchanges between the computing units without the need for a specific storage infrastructure.
- In a second step, this approach will be extended to the final collimation whose complexity, due to the multiplicity of possible geometrical configurations, can be overcome using the model of the first step. A second conditional generative model will be trained to estimate the particle distribution for any configuration from a reduced number of simulated particles.
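The compactness argument can be shown in miniature with a deliberately simple stand-in: instead of a deep generative model, per-configuration statistics are condensed and interpolated to generate particle energies for an unseen configuration. All distributions and the configuration parameter are synthetic assumptions, not accelerator data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a phase-space file (PSF): for each configuration c (e.g. a
# hypothetical collimator opening) we have many simulated particle energies.
# A real PSF stores every particle; the "model" below keeps only a handful of
# numbers per configuration (the thesis targets deep generative models: GAN,
# VAE, diffusion, normalizing flows).
configs = np.array([0.5, 1.0, 1.5])
psf = {c: rng.normal(2.0 * c, 0.3, size=50_000) for c in configs}

# "Training": condense each PSF into sufficient statistics.
means = np.array([psf[c].mean() for c in configs])
stds = np.array([psf[c].std() for c in configs])

def generate(c, n):
    """Sample n particle energies for an unseen configuration c."""
    mu = np.interp(c, configs, means)   # conditional mean, interpolated
    sd = np.interp(c, configs, stds)    # conditional spread, interpolated
    return rng.normal(mu, sd, size=n)

sample = generate(1.25, 100_000)
print(sample.mean())   # close to 2 * 1.25 = 2.5
```

A deep conditional model plays the same role for distributions that have no closed form: a few megabytes of weights replace terabytes of stored particles, and generation for a new geometry needs no new full simulation.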

The last part of the thesis will consist in taking advantage of the gain in computational efficiency to tackle the inverse problem, i.e. optimising the treatment plan for a given patient from a contoured CT image of the patient and a dose prescription.

Monte Carlo methods for the adjoint transport equation: application to radiation shielding problems

The Monte Carlo method is the reference approach for simulating the transport of neutrons and photons, particularly in the field of radiation shielding, owing to the very small number of approximations that it introduces. The usual Monte Carlo strategy is based on the sampling of a large number of particle histories, which start from a source, follow the physical laws of collision available in nuclear data libraries and explore the geometry of the system: the contributions of the particles to the response of interest (e.g. a count rate in a detector), averaged over all simulated histories, estimate the value predicted by the Boltzmann equation. If the detector region is "small", statistical convergence of the standard Monte Carlo approach becomes very difficult, because only an extremely limited number of histories will be able to contribute. It then becomes advantageous to use Monte Carlo methods for the solution of the adjoint transport equation: the particle histories are sampled from the detector backwards, and the collection region is the source of the original problem (which is typically assumed to be "large" relative to the detector). This approach, simple in principle, offers the possibility of considerably reducing the statistical uncertainty. However, adjoint Monte Carlo methods present scientific obstacles that are both practical and conceptual: how to sample the physical laws of collision "backwards"? How to control the numerical stability of adjoint simulations? In this thesis, we will explore different strategies to answer these questions, with a view to applying these methods to radiation shielding problems. The practical implications of this work could open up very encouraging perspectives for the new TRIPOLI-5® simulation code.
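The forward/adjoint duality can be demonstrated on a deliberately tiny discrete transport problem, with a hopping kernel standing in for the collision laws. Everything below (lattice size, probabilities, source and detector placement) is an illustrative assumption; the hopping kernel is symmetric here, so the backward walk conveniently uses the same sampling law, which is exactly the convenience that real (non-symmetric) collision physics does not offer.

```python
import numpy as np

# Toy transport on a 1-D lattice: from each cell a particle is absorbed with
# probability 0.2, otherwise hops to a neighbour (leaking at the ends).
# Response R = <d, phi> with phi = (I - P^T)^{-1} s the forward flux, and
# equivalently R = <s, phi+> with phi+ = (I - P)^{-1} d the adjoint flux.
N = 20
P = np.zeros((N, N))
for i in range(N):
    if i > 0:
        P[i, i - 1] = 0.4
    if i < N - 1:
        P[i, i + 1] = 0.4

s = np.zeros(N); s[:15] = 1.0   # "large" source region
d = np.zeros(N); d[17] = 1.0    # "small" detector cell

phi = np.linalg.solve(np.eye(N) - P.T, s)
phi_adj = np.linalg.solve(np.eye(N) - P, d)
R_forward, R_adjoint = d @ phi, s @ phi_adj   # duality: identical values

# Adjoint Monte Carlo: every history starts at the detector and scores the
# (large) source region, so none is wasted. P is symmetric here, so the
# backward walk uses the same hopping kernel as the forward one.
rng = np.random.default_rng(0)
def adjoint_history():
    i, tally = 17, 0.0
    while 0 <= i < N:
        tally += s[i]                            # score the source region
        if rng.random() < 0.2:                   # absorption ends the history
            break
        i += 1 if rng.random() < 0.5 else -1     # symmetric hop (may leak out)
    return tally

R_mc = np.mean([adjoint_history() for _ in range(50_000)])
print(R_forward, R_adjoint, R_mc)
```

A forward estimator on the same problem would waste most histories, since few random walks born in cells 0-14 ever reach cell 17; the adjoint estimator reverses that asymmetry, which is the variance-reduction argument made above.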

Assimilation of transient data and calibration of simulation codes using time series

In the context of scientific simulation, some computational tools (codes) are built as an assembly of (physical) models coupled in a numerical framework. These models and their coupling use data sets fitted to results from experiments or from fine "Direct Numerical Simulation" (DNS)-type computations, in an up-scaling approach. The observables of these codes, like the results of the experiments or fine computations, are mostly time dependent (time series). The objective of this thesis is to set up a methodology to improve the reliability of these codes by adjusting their parameters through data assimilation from these time series.
Work on parameter fitting has already been performed in our laboratory in a previous thesis, but using scalars derived from the temporal results of the codes. The methodology developed in that thesis integrated screening, surrogate models and sensitivity analysis, and can be extended and adapted to the new data format. A preliminary step of transforming the time series will be developed, in order to reduce the data while limiting the loss of information; machine-learning and deep-learning tools could be considered.
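One classical candidate for that preliminary reduction step is a truncated SVD (POD/PCA) of an ensemble of transients, keeping a few modal coefficients per run instead of the full time series. The synthetic exponential transients below merely stand in for code outputs.

```python
import numpy as np

# Sketch of the time-series reduction step: compress an ensemble of code
# output transients with a truncated SVD (POD/PCA). Data are synthetic.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)

# 40 simulated transients driven by 2 underlying physical parameters.
runs = np.stack([a * np.exp(-t / tau)
                 for a, tau in zip(rng.uniform(1, 2, 40), rng.uniform(2, 5, 40))])

mean = runs.mean(axis=0)
U, svals, Vt = np.linalg.svd(runs - mean, full_matrices=False)

k = 3                                  # retained modes
coeffs = (runs - mean) @ Vt[:k].T      # 3 numbers per run instead of 200
recon = mean + coeffs @ Vt[:k]

err = np.linalg.norm(recon - runs) / np.linalg.norm(runs)
print(err)   # small: little information lost in the reduction
```

The retained coefficients then feed the screening, surrogate-modelling and sensitivity-analysis chain developed in the previous thesis, at a fraction of the dimensionality of the raw transients.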
The application of this method will be performed within the framework of nuclear reactor severe accident simulation. During these accidents, the core loses its integrity and corium (molten fuel and structural elements resulting from core melting) is formed; it can relocate and interact with its environment (liquid coolant, vessel steel, concrete of the basemat…). Some severe accident simulation codes describe each step or interaction individually, while others describe the whole accident sequence. They have in common that they are multiphysics, with a large number of models and parameters, and that they describe transient physical phenomena in which the temporal aspect is important.
The thesis will be hosted by the Severe Accident Modeling Laboratory (LMAG) of the IRESNE institute at CEA Cadarache, in a team at the forefront nationally and internationally in the numerical study of corium-related phenomena, from corium generation to its propagation and interaction with the environment. The data assimilation techniques implemented also have a strong generic potential, which opens up significant opportunities for the proposed work, in the nuclear field and beyond.

Multi-block and non-conformal domain decomposition, applied to the 'exact' boundary coupling of the SIMMER-V thermohydraulics code

This thesis is part of the research required for the sustainable use of nuclear energy in a decarbonized, climate-friendly energy mix. Sodium-cooled fourth-generation reactors are candidates of great interest for saving uranium resources and minimizing the volume of final waste.

In the context of the safety of such reactors, it is important to be able to precisely describe the consequences of possible core degradation. A collaboration with its Japanese counterpart JAEA allows the CEA to develop the SIMMER-V code dedicated to simulating core degradation. The code calculates sodium thermohydraulics, structural degradation and core neutronics during the accident phase. The objective is to be able to represent with precision not only the core but also its direct environment (primary circuit). Taking this topology into account requires partitioning the domain and using a boundary coupling method. The limitation of this approach generally lies in the quality and robustness of the coupling method, particularly during fast transients in which pressure and density waves cross the boundaries.

A coupling method was initiated at LMAG (Annals of Nuclear Energy, 2022, "Implementation of multi-domains in SIMMER-V thermohydraulic code"); it consists of merging the different decompositions of each of the domains in order to constitute a single decomposition of the overall calculation. This method was developed in a simplified framework where the (Cartesian) meshes connect conformally at the boundaries. The opportunity now open is to extend this method to non-conforming meshes using the MEDCoupling library. This first step, the feasibility of which has been established, will make it possible to assemble components to constitute a 'loop'-type system. The second step will consist of extending the method so that one computational domain can be completely nested within another. This nesting will then make it possible to constitute a domain by juxtaposition or by nesting, with non-conforming domain meshes and decompositions. After verifying the numerical qualities of the method, the last step will consist of building a simulation of the degradation of a core immersed in its primary tank ('pool' configuration), allowing the approach to be validated.
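The key ingredient of non-conforming coupling is a conservative, intersection-weighted remap of fields between the two meshes, which is the kind of projection MEDCoupling provides in 2-D/3-D. A minimal 1-D sketch with made-up meshes and field values:

```python
import numpy as np

# Toy conservative remap between two non-conforming 1-D meshes. The field is
# stored as cell averages; exchanging its integral exactly across the coupling
# boundary is what keeps mass/energy conserved during fast transients.
src_edges = np.linspace(0.0, 1.0, 6)    # 5 coarse source cells
dst_edges = np.linspace(0.0, 1.0, 9)    # 8 finer, non-matching target cells

f_src = np.array([1.0, 3.0, 2.0, 5.0, 4.0])   # source cell averages

def remap(src_edges, dst_edges, f_src):
    """Intersection-weighted remap of cell averages (conserves the integral)."""
    f_dst = np.zeros(len(dst_edges) - 1)
    for j in range(len(dst_edges) - 1):
        for i in range(len(src_edges) - 1):
            # overlap length of source cell i with target cell j
            lo = max(src_edges[i], dst_edges[j])
            hi = min(src_edges[i + 1], dst_edges[j + 1])
            if hi > lo:
                f_dst[j] += f_src[i] * (hi - lo)
        f_dst[j] /= dst_edges[j + 1] - dst_edges[j]
    return f_dst

f_dst = remap(src_edges, dst_edges, f_src)
int_src = np.sum(f_src * np.diff(src_edges))
int_dst = np.sum(f_dst * np.diff(dst_edges))
print(np.isclose(int_src, int_dst))   # the remap is conservative
```

In the thesis the same idea applies cell-intersection volumes on 2-D/3-D boundary patches, where computing the intersections robustly is precisely what a dedicated library is for.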

This work will enable the student to develop knowledge of numerical techniques and modeling for complex physical systems with flows. He or she will apply techniques ranging from method design to validation, as part of a dynamic, multidisciplinary team at CEA Cadarache.

Deployment strategy for energy infrastructures on a regional scale: an economic and environmental optimisation approach

The general context is "Design and optimisation of multi-vector energy systems on a territorial scale".
More specifically, the aim is to develop new methods for studying trajectories that reduce the overall environmental impact (with underlying life-cycle assessment, LCA) of a territory while controlling costs, in various applications, for example:
- Opportunity to develop infrastructures (e.g. H2 network, or heat network) to enhance decarbonisation, by expanding new uses of energy where these infrastructures exist or will exist, while reducing the overall environmental impact for given uses.
- Based on these studies, study the impact of centralising or decentralising production and consumption resources.
- Taking into account the long-term dynamic of investments, with the compromise of renovating/replacing installations at a given moment, in order to reduce the overall environmental impact for given uses.
Possible applications to hydrogen infrastructures have been identified or are being identified.
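The trajectory question can be caricatured as a tiny combinatorial optimisation: in each period, invest or not in a decarbonised infrastructure, minimising cost under a cumulative impact budget. All costs and impact factors below are made-up numbers; a real study would use LCA-based impact data and territorial demand.

```python
from itertools import product

# Illustrative deployment trajectory: in each period we either keep the
# fossil supply or invest in a decarbonised infrastructure (e.g. an H2 or
# heat network). Numbers are invented for the sketch.
periods = 3
options = {                     # (cost per period, impact per period)
    "fossil": (1.0, 10.0),
    "h2_network": (3.0, 2.0),   # higher cost, much lower impact
}

impact_budget = 18.0            # cap on the cumulative environmental impact

best = None
for plan in product(options, repeat=periods):
    cost = sum(options[o][0] for o in plan)
    impact = sum(options[o][1] for o in plan)
    if impact <= impact_budget and (best is None or cost < best[0]):
        best = (cost, impact, plan)

print(best)   # cheapest trajectory that respects the impact budget
```

Even this caricature exhibits the structure of interest: the optimal plan delays the expensive investment as long as the impact budget allows, which is the cost/impact trade-off the long-term investment dynamics must capture.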

Geometric deep learning applied to medical applications

The PhD subject deals with geometric deep learning and its use in several medical applications.
The merging of these two domains (geometry and artificial intelligence) is at the core of the PhD, with the development of SPDNet neural networks that combine end-to-end training of frequency and spatial parameters with mathematical operations on the manifold of symmetric positive-definite (SPD) matrices.
The design of such methods, from both a mathematical and a software point of view, is part of the PhD's objectives, as is their application to public medical datasets, for instance in electroencephalography-based brain-computer interfaces (BCI).
The expected results consist first in demonstrating the superiority of these geometric approaches over the state-of-the-art methods used in BCI, and second in identifying the best architectures for different medical applications, ranging from multi-array data to medical image processing.
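A small example of the SPD-manifold machinery behind such networks: covariance matrices (e.g. of EEG channels) live on the SPD manifold, so they are averaged through the matrix logarithm rather than entrywise. The "covariance" matrices below are synthetic, and the log-Euclidean mean is only one of several Riemannian means used in practice.

```python
import numpy as np

# Log-Euclidean mean of SPD matrices: map to the tangent space with the
# matrix log, average there, map back with the matrix exp.
rng = np.random.default_rng(2)

def random_spd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)     # SPD by construction

def logm(S):
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T       # V diag(log w) V^T

def expm(S):
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T       # V diag(exp w) V^T

covs = [random_spd(4) for _ in range(10)]
mean = expm(np.mean([logm(C) for C in covs], axis=0))

print(np.linalg.eigvalsh(mean).min() > 0)   # the mean stays on the manifold
```

An entrywise average of SPD matrices is also SPD, but it distorts the geometry (it inflates determinants); respecting the manifold structure in every layer is exactly what distinguishes SPDNet-style architectures from Euclidean networks.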

CFD development and modeling applied to thermal-hydraulics of hydrogen storage in salt caverns

A PhD thesis is available at the LMSF laboratory of CEA, in collaboration with Storengy, a world specialist in natural gas storage in salt caverns. Measurements carried out in a cavity showed that the gas is in convective motion in the upper part of the cavity and is not necessarily in thermodynamic equilibrium with the brine at the bottom, leading to gas stratification phenomena. The flow regime (convective or not) strongly influences, on the one hand, mass exchanges between the gas and the brine, and therefore the evolution of the gas composition (moisture and other components) at the cavity exit, and, on the other hand, thermal exchanges between the gas and the rock mass surrounding the cavity. In this context, CFD-based prediction tools are highly beneficial for understanding these phenomena and will contribute to a better interpretation of the physical measurements made in the cavity, to the design improvement of surface installations and to the monitoring of storage facilities, particularly for hydrogen storage. In this doctoral project, the aim is to develop a thermal-hydraulics model, based on the TrioCFD software, of gas storage in realistically-shaped cavities under cavity operating conditions (injection and withdrawal phases). To this end, the operation of salt storage cavities will be modeled, initially for a real geometry in single-phase flow, then in two-phase flow, taking into account mass exchanges between the brine and the gas in the cavity.
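An order-of-magnitude Rayleigh number estimate illustrates why a convective regime is expected in the gas column. Every property value below is a rough assumption (hydrogen-like gas at storage conditions), not data from the project; only the conclusion Ra >> Ra_critical matters.

```python
# Rayleigh number Ra = g * beta * dT * L^3 / (nu * alpha); values far above
# the critical value (~1e3 for an enclosed fluid layer) indicate natural
# convection. All inputs are illustrative assumptions.
g = 9.81            # m/s^2
beta = 1.0 / 320.0  # 1/K, ideal-gas thermal expansion at ~320 K
dT = 5.0            # K, assumed gas/wall temperature difference
L = 50.0            # m, assumed cavity height scale
nu = 1.0e-6         # m^2/s, assumed kinematic viscosity at high pressure
alpha = 1.5e-6      # m^2/s, assumed thermal diffusivity

Ra = g * beta * dT * L**3 / (nu * alpha)
print(f"Ra = {Ra:.2e}")   # many orders of magnitude above critical
```

Such enormous Rayleigh numbers imply strongly turbulent natural convection in the upper gas, which is what makes the turbulence modelling and the CFD resolution strategy the delicate part of the thermal-hydraulics model.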

High-Performance Computing (HPC) resolution of "saddle-point" problems arising from the mechanics of contact between deformable structures

In the field of structural mechanics, simulated systems often involve deformable structures that may come into contact. In numerical models, this generally translates into kinematic constraints on the unknown of the problem (i.e. the displacement field), which are dealt with by the introduction of so-called dual unknowns that ensure the non-interpenetration of contacting structures. This leads to the resolution of so-called "saddle-point" linear systems, for which the matrix is "indefinite" (it has positive and negative eigenvalues) and "sparse" (the vast majority of terms in this matrix are zero).

In the context of high-performance parallel computing, we turn to "iterative" methods for solving linear systems, which, unlike "direct" methods, can perform well on highly refined numerical models using a very large number of parallel processes. For this to happen, however, they need to be carefully designed and/or adapted to the problem at hand.

While iterative methods for solving "positive definite" linear systems (obtained in the absence of kinematic constraints) are relatively well mastered, solving saddle-point linear systems remains a major difficulty [1]. A relatively abundant literature proposes iterative methods adapted to the treatment of the "Stokes problem", emblematic of incompressible fluid mechanics. But the case of saddle-point problems arising from contact constraints between deformable structures is still a relatively open problem.

The proposed thesis consists in developing iterative methods adapted to the resolution of linear "saddle-point" systems arising from contact problems between deformable structures, in order to handle large-scale numerical models efficiently. The target linear systems have sizes of several hundred million unknowns, distributed over several thousand processes, and cannot currently be solved efficiently, either by direct methods or by "basic" preconditioned iterative methods. In particular, we will validate the approach proposed by Nataf and Tournier [2] and adapt it to cases where the constraints do not act on all the primal unknowns.
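The structure of these systems, and a textbook baseline iteration, can be shown on a tiny synthetic example. The matrices below are random stand-ins for a stiffness block and a few contact constraints, and the Uzawa iteration is only the classical baseline such a thesis would improve upon, not the GenEO method of [2].

```python
import numpy as np

# Tiny saddle-point system of the contact type: K is SPD (elastic stiffness),
# B couples a few primal unknowns (non-interpenetration constraints), and the
# zero multiplier block makes the full matrix indefinite.
rng = np.random.default_rng(3)
n, m = 12, 3
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)          # SPD "stiffness"
B = rng.standard_normal((m, n))      # constraints on a few primal dofs
f, g = rng.standard_normal(n), rng.standard_normal(m)

S = np.block([[K, B.T], [B, np.zeros((m, m))]])
assert (np.linalg.eigvalsh(S) < 0).any()   # indefinite: a saddle point

# Uzawa: iterate on the multipliers lam, solving an SPD system for u each
# time; the step size is bounded by the Schur complement's largest eigenvalue.
lam = np.zeros(m)
rho = 1.0 / np.linalg.norm(B @ np.linalg.solve(K, B.T), 2)
for _ in range(10_000):
    u = np.linalg.solve(K, f - B.T @ lam)
    lam = lam + rho * (B @ u - g)

direct = np.linalg.solve(S, np.concatenate([f, g]))
print(np.allclose(np.concatenate([u, lam]), direct, atol=1e-6))
```

Uzawa's convergence rate degrades with the conditioning of the Schur complement, which is exactly why scalable preconditioners of the GenEO type, and their adaptation to constraints acting on only part of the primal unknowns, are the object of the thesis.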

The work carried out can be applied to numerous industrial problems, particularly in the nuclear industry. One example is the case of fuel pellets, which expand under the effect of temperature and the generation of fission products, and come into contact with the metal cladding of the fuel rod, which can lead to cladding failure [3].

This thesis is in collaboration with the LIP6 laboratory (Sorbonne-université).

An internship can be arranged in preparation for thesis work, depending on the candidate's wishes.

[1] Benzi, M., Golub, G. H., & Liesen, J. (2005). Numerical solution of saddle point problems. Acta Numerica, 14, 1-137.
[2] Nataf, F., & Tournier, P. H. (2023). A GenEO Domain Decomposition method for Saddle Point problems. Comptes Rendus. Mécanique, 351(S1), 1-18.
[3] Michel, B., Nonon, C., Sercombe, J., Michel, F., & Marelle, V. (2013). Simulation of pellet-cladding interaction with the PLEIADES fuel performance software environment. Nuclear Technology, 182(2), 124-137.