Radiative heat transfer: efficient numerical resolution of associated problems in Beerian or non-Beerian media for the validation of simplified models
This research proposal focuses on the study, through modeling and numerical simulation, of heat transfer within a heterogeneous medium composed of opaque solids and a transparent or semi-transparent fluid. The considered modes of transfer are radiation and conduction.
Depending on the scale of interest, the radiance is the solution of the Radiative Transfer Equation (RTE). In its classical form, the RTE describes heat transfer phenomena at the so-called local scale, where solids are explicitly represented in the domain. At the mesoscopic scale of an equivalent homogeneous medium, however, the radiance is governed by a generalized RTE (GRTE) when the medium no longer follows the Beer–Lambert law. In this work, we focus on the numerical resolution of the RTE in both configurations, ultimately coupled with the energy conservation equation for temperature.
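For reference, the classical (Beerian) RTE at the local scale can be written, for a gray absorbing-emitting-scattering medium, in the standard textbook form below (with $\kappa$ the absorption coefficient, $\sigma_s$ the scattering coefficient, $\beta=\kappa+\sigma_s$ the extinction coefficient, $p$ the phase function, and $L_b$ the blackbody radiance); notation may differ from the codes targeted by the thesis:

```latex
\mathbf{\Omega}\cdot\nabla L(\mathbf{x},\mathbf{\Omega})
  = -\beta\, L(\mathbf{x},\mathbf{\Omega})
  + \kappa\, L_b\big(T(\mathbf{x})\big)
  + \frac{\sigma_s}{4\pi}\int_{4\pi} p(\mathbf{\Omega}'\!\cdot\mathbf{\Omega})\,
      L(\mathbf{x},\mathbf{\Omega}')\,\mathrm{d}\Omega'
```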
In deterministic resolution of the RTE, a standard approach for handling the angular variable is the Discrete Ordinates Method (Sn), which relies on quadrature over the unit sphere. For non-Beerian media, solving the GRTE is a very active research topic, with Monte Carlo methods often receiving more attention. Nevertheless, the GRTE can be linked to the generalized transport equation, as formulated in the context of particle transport, and a spectral method can be applied for its deterministic Sn resolution. This is the direction pursued in this PhD project.
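To fix ideas on the Sn quadrature, here is a minimal one-dimensional sketch: a purely absorbing slab with incoming flux on the left, Gauss-Legendre ordinates, and an exact step-characteristics sweep per direction. All numbers (slab width, cross-section, quadrature order) are invented for illustration; this is not the thesis's solver.

```python
import numpy as np

# Minimal 1-D discrete-ordinates (S_N) sweep for a purely absorbing slab.
# Illustrative sketch only: grid, cross-section and boundary flux are invented.
def sn_sweep(sigma_t=1.0, width=2.0, nx=200, n_mu=8, psi_in=1.0):
    mu, w = np.polynomial.legendre.leggauss(n_mu)   # angular quadrature on [-1, 1]
    dx = width / nx
    psi = np.zeros((n_mu, nx + 1))
    # Sweep rightward for mu > 0 (vacuum on the right, unit incoming flux on the left).
    for m in range(n_mu):
        if mu[m] <= 0.0:
            continue
        psi[m, 0] = psi_in
        for i in range(nx):
            # step characteristics / exact attenuation over one cell
            psi[m, i + 1] = psi[m, i] * np.exp(-sigma_t * dx / mu[m])
    # Angular integration: exiting partial current at the right face.
    pos = mu > 0.0
    return np.sum(w[pos] * mu[pos] * psi[pos, -1])

current = sn_sweep()
```

The exiting partial current should approach the analytic half-range integral $\int_0^1 \mu\, e^{-\sigma_t W/\mu}\,\mathrm{d}\mu = E_3(\sigma_t W)$, illustrating how quadrature error enters through the angular discretization.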
The direct application of this work is the numerical simulation of accidents in Light Water Reactors (LWR) with thermal neutrons. Modeling radiative heat transfer is crucial because, in the case of core uncovering and fuel rod drying, radiation becomes a major heat removal mechanism as temperatures rise, alongside gas convection (steam). This topic is also relevant in the context of the nuclear renaissance, with startups developing advanced High Temperature Reactors (HTR) cooled by gas.
The goal of this thesis is the analysis and development of an innovative and efficient numerical method for solving the GRTE (within a high-performance computing environment), coupled with thermal conduction. From an application standpoint, such a method would enable high-fidelity simulations, useful for validating and quantifying the bias of simplified models used in engineering calculations.
Successful completion of this thesis would prepare the student for a research career in high-performance numerical simulation of complex physical problems, beyond nuclear reactor physics alone.
Investigation of polytopal methods applied to CFD and optimized on GPU architectures
This research proposal focuses on the study and implementation of polytopal methods for solving the equations of fluid mechanics. These methods aim to handle the most general meshes possible, overcoming geometric constraints or those inherited from CAD operations such as extrusions or assemblies that introduce non-conformities. This work also falls within the scope of high-performance computing, addressing the increase in computational resources and, in particular, the development of massively parallel computing on GPUs.
The objective of this thesis is to build upon existing polytopal methods already implemented in the TRUST software, specifically the Compatible Discrete Operator (CDO) and Discontinuous Galerkin (DG) methods. The study will be extended to include convection operators and will investigate other methods from the literature, such as Hybrid High Order (HHO), Hybridizable Discontinuous Galerkin (HDG), and Virtual Element Method (VEM).
The main goals are to evaluate:
1. The numerical behavior of these different methods on the Stokes/Navier-Stokes equations;
2. The adaptability of these methods to heterogeneous architectures such as GPUs.
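Whatever the scheme (CDO, DG, HHO, HDG, or VEM), handling arbitrary polytopal cells starts with exact geometric quantities on general, possibly non-conforming polygons. A minimal sketch of that ingredient, with an invented example cell (a square carrying a hanging mid-edge node, the kind of non-conformity mentioned above):

```python
def polygon_area_centroid(pts):
    """Signed area and centroid of a simple 2-D polygon (shoelace formula).
    Such measures are basic geometric ingredients of polytopal schemes."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return a, (cx / (6.0 * a), cy / (6.0 * a))

# A collinear "hanging node" on an edge is harmless for such formulas:
cell = [(0, 0), (2, 0), (2, 1), (2, 2), (0, 2)]  # unit-like square, mid-edge node
area, centroid = polygon_area_centroid(cell)
```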
High-Fidelity Monte Carlo Simulations of Neutron Noise in Nuclear Power Reactors
Operating nuclear reactors are subject to a variety of perturbations. These can include vibrations of the fuel pins and fuel assemblies due to fluid-structure interactions with the moderator, or even vibrations of the core barrel, baffle, and pressure vessel. All of these perturbations can lead to small periodic fluctuations in the reactor power about the stable average power level. These power fluctuations are referred to as “neutron noise”. Being able to simulate different types of in-core perturbations allows reactor designers and operators to predict how the neutron flux could behave in the presence of such perturbations. In recent years, many different research groups have worked to develop computational models to simulate these sources of neutron noise, and their resulting effects on the neutron flux in the reactor. The primary objective of this PhD thesis will be to bring Monte Carlo neutron noise simulations to the scale of real-world industry calculations of nuclear reactor cores, with a high-fidelity continuous-energy physics representation. As part of this process, the student will add novel neutron noise simulation capabilities to TRIPOLI-5, the next-generation production Monte Carlo particle-transport code jointly developed by CEA and ASNR, with the support of EDF.
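The noise problem itself is far beyond a few lines, but the analog transport kernel underneath any such Monte Carlo code can be sketched. Below is a toy estimator of uncollided transmission through a purely absorbing 1-D slab; all parameters are invented and this is in no way TRIPOLI-5's physics.

```python
import math, random

def transmission(sigma_t, width, n_particles, seed=12345):
    """Toy analog Monte Carlo: fraction of neutrons crossing a purely
    absorbing 1-D slab without collision (illustrative sketch only)."""
    rng = random.Random(seed)
    crossed = 0
    for _ in range(n_particles):
        # Sample the free path from the exponential distribution;
        # 1 - random() avoids log(0) since random() lies in [0, 1).
        path = -math.log(1.0 - rng.random()) / sigma_t
        if path > width:
            crossed += 1
    return crossed / n_particles

est = transmission(sigma_t=1.0, width=1.0, n_particles=200_000)
```

The estimate converges to the analytic answer $e^{-\sigma_t W}$; noise simulations add a small periodic perturbation of the material properties on top of exactly this kind of sampling loop.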
GPU-ACCELERATED CHARACTERISTICS METHOD FOR 3D NEUTRON TRANSPORT COMBINING THE LINEAR-SURFACE METHOD AND THE AXIAL POLYNOMIAL EXPANSION
This thesis falls within the framework of advancing numerical computation techniques for reactor physics. Specifically, it focuses on the implementation of methods that incorporate higher-order spatial expansions for neutron flux and cross-sections. The primary objective is to accelerate both existing algorithms and those that will be developed through GPU programming. By harnessing the computational power of GPUs, this research aims to enhance the efficiency and accuracy of simulations in reactor physics, thereby contributing to the broader field of nuclear engineering and safety.
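The elementary operation of the method of characteristics is the transport of the angular flux along one track segment. A minimal flat-source (zeroth-order) sketch is below; the linear-surface and axial polynomial expansions targeted by the thesis generalize exactly this step to spatially varying sources. Numbers are invented.

```python
import math

def moc_segment(psi_in, sigma_t, q, length):
    """Flat-source step of the method of characteristics along one track
    segment: attenuate the incoming angular flux and add the (constant)
    source contribution."""
    tau = sigma_t * length                # optical thickness of the segment
    att = math.exp(-tau)
    return psi_in * att + (q / sigma_t) * (1.0 - att)

# In an infinite homogeneous medium the flux tends to q / sigma_t:
psi = 0.0
for _ in range(50):
    psi = moc_segment(psi, sigma_t=0.5, q=1.0, length=1.0)
```

On a GPU, thousands of such independent segment updates (one per track and angle) run concurrently, which is why the method maps well to that architecture.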
Design of asynchronous algorithms for solving the neutron transport equation on massively parallel and heterogeneous architectures
This PhD thesis work aims at designing an efficient solver for the neutron transport equation in Cartesian and hexagonal geometries on heterogeneous and massively parallel architectures. This goal can be achieved through the design of optimal algorithms built on parallel and asynchronous programming models.
The industrial framework for this work is the solution of the Boltzmann equation associated with the transport of neutrons in a nuclear reactor core. At present, more and more modern simulation codes employ an upwind discontinuous Galerkin finite element scheme for Cartesian and hexagonal meshes of the required domain. This work extends research carried out recently by exploring the solving step on distributed computing architectures, which we have not yet tackled in our context. It will require coupling algorithmic and numerical strategies with a programming model that allows an asynchronous parallelism framework, in order to solve the transport equation efficiently.
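The parallelism that an asynchronous model can exploit comes from the structure of the upwind sweep: for a given direction, each cell depends only on its upwind neighbours. A minimal sketch of those dependency levels on a Cartesian grid (a simplified stand-in for the actual DG solver):

```python
def sweep_wavefronts(nx, ny):
    """Upwind sweep dependency levels on a Cartesian grid, for a direction
    with positive x and y components: cell (i, j) needs (i-1, j) and
    (i, j-1) first, so all cells sharing the same i + j are independent
    and may run concurrently. Asynchronous runtimes exploit exactly this
    slack instead of enforcing a global barrier after each front."""
    fronts = {}
    for i in range(nx):
        for j in range(ny):
            fronts.setdefault(i + j, []).append((i, j))
    return [fronts[k] for k in sorted(fronts)]

levels = sweep_wavefronts(4, 3)
```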
This research work will be part of the numerical simulation of nuclear reactors. These multiphysics computations are very expensive, as they require time-dependent neutron transport calculations, for instance for severe power excursions. The strategy proposed in this research endeavour will decrease the computational burden and time for a given accuracy and, coupled to a massively parallel and asynchronous model, may define an efficient neutronics solver for multiphysics applications.
Through this PhD research work, the candidate will be prepared to apply for research positions in high-performance numerical simulation of complex physical problems.
One-sided communication mechanisms for data decomposition in Monte Carlo particle transport applications
In the context of a Monte Carlo calculation for the evolution of a PWR (pressurized water reactor) core, it is necessary to compute a very large number of neutron-nucleus reaction rates, involving a data volume that can exceed the memory capacity of a compute node on current supercomputers. Within the Tripoli-5 framework, distributed memory architectures have been identified as targets for high-performance computing deployment. To leverage such architectures, data decomposition approaches must be used, particularly for reaction rates. However, with a classical parallelization method, processes have no particular affinity for the rates they host locally; on the contrary, each rate receives contributions uniformly from all processes. Access to decomposed data can be costly when it requires intensive use of communications. Nevertheless, one-sided communication mechanisms, such as MPI RMA (Message Passing Interface, Remote Memory Access), make these accesses easier both in terms of expression and performance.
The objective of this thesis is to propose a method for partial data decomposition relying on one-sided communication mechanisms to access remotely stored data, such as reaction rates. Such an approach will significantly reduce the volume of data stored in memory on each compute node without causing a significant degradation in performance.
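The core of such a decomposition is an owner mapping from a global tally (reaction-rate) index to the rank that hosts it, plus routing of scores to the owner. The sketch below simulates this logic in plain Python with invented sizes; in a real implementation each routed score would be an `MPI_Accumulate` into a remote window (one-sided RMA), so the owner never has to post a matching receive.

```python
def owner_of(tally_index, n_tallies, n_ranks):
    """Block distribution: which rank hosts a given tally (reaction rate)."""
    block = (n_tallies + n_ranks - 1) // n_ranks
    return tally_index // block

def accumulate(contributions, n_tallies, n_ranks):
    """Route each (tally_index, score) contribution into the owning rank's
    local buffer. Here the 'remote' buffers are simulated locally; with
    MPI RMA the loop body would become a one-sided MPI_Accumulate."""
    block = (n_tallies + n_ranks - 1) // n_ranks
    local = [[0.0] * block for _ in range(n_ranks)]
    for idx, score in contributions:
        r = owner_of(idx, n_tallies, n_ranks)
        local[r][idx - r * block] += score
    return local

buffers = accumulate([(0, 1.0), (5, 2.0), (5, 0.5), (9, 3.0)],
                     n_tallies=10, n_ranks=4)
```

Because accumulation is commutative, neither the scoring process nor the owner needs to synchronize per score, which is what makes the one-sided model attractive here.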
AI Enhanced MBSE framework for joint safety and security analysis of critical systems
Critical systems must simultaneously meet the requirements of both Safety (preventing unintentional failures that could lead to damage) and Security (protecting against malicious attacks). Traditionally, these two areas are treated separately, whereas they are interdependent: an attack (Security) can trigger a failure (Safety), and a functional flaw can be exploited as an attack vector.
MBSE approaches enable rigorous system modeling, but they do not always capture the explicit links between Safety [1] and Security [2]; risk analyses remain manual, time-consuming, and error-prone. The complexity of modern systems makes it necessary to automate the evaluation of Safety-Security trade-offs.
Joint safety/security MBSE modeling has been widely addressed in several research works such as [3], [4] and [5]. The scientific challenge of this thesis is to use AI to automate and improve the quality of analyses. What type of AI should we use for each analysis step? How can we detect conflicts between safety and security requirements? What are the criteria for assessing the contribution of AI to joint safety/security analysis?
CORTEX: Container Orchestration for Real-Time, Embedded/edge, miXed-critical applications
This PhD proposal will develop a container orchestration scheme for applications deployed on a continuum of heterogeneous computing resources in the embedded-edge-cloud space, with a specific focus on applications that require real-time guarantees.
Applications such as autonomous vehicles, environment monitoring, or industrial automation traditionally require high predictability with real-time guarantees, but they increasingly call for more runtime flexibility as well as a minimization of their overall environmental footprint.
For these applications, a novel adaptive runtime strategy is required that can optimize dynamically at runtime the deployment of software payloads on hardware nodes, with a mixed-critical objective that combines real-time guarantees with the minimization of the environmental footprint.
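One way to picture such a strategy is a placement decision that combines a schedulability admission test with an energy objective. The sketch below is a deliberately simplified toy (uniprocessor EDF utilization bound, invented node names and power ratings), not the orchestration scheme the thesis will design.

```python
def feasible(node_tasks, new_task):
    """EDF admission test on one node: total utilisation must stay <= 1.
    (Implicit-deadline periodic tasks; a real mixed-criticality test
    would be performed per criticality level.)"""
    u = sum(c / p for c, p in node_tasks) + new_task[0] / new_task[1]
    return u <= 1.0

def place(task, nodes):
    """Greedy energy-aware placement: among nodes passing the admission
    test, pick the one with the lowest (hypothetical) power rating."""
    ok = [n for n in nodes if feasible(n["tasks"], task)]
    if not ok:
        return None
    best = min(ok, key=lambda n: n["watts"])
    best["tasks"].append(task)
    return best["name"]

# Tasks are (execution time, period) pairs; names and watts are invented.
nodes = [{"name": "edge-a", "watts": 6.0, "tasks": [(2, 4)]},   # U = 0.5
         {"name": "edge-b", "watts": 4.0, "tasks": [(3, 4)]}]   # U = 0.75
chosen = place((2, 5), nodes)                                   # adds U = 0.4
```

The greener node is rejected because it would violate the real-time guarantee, illustrating the trade-off the mixed-critical objective must arbitrate at runtime.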
Exploration and optimization of RAID architectures and virtualization technologies for high-performance data servers
Given the ever-increasing demands of numerical simulation, supercomputers must constantly evolve to improve their performance and thus maintain a high quality of service for users. These demands carry over to storage systems which, to be performant, reliable, and high-capacity, must incorporate cutting-edge technologies for optimizing data placement and scheduling I/O accesses. The objective of this thesis is to study such technologies, including GPU-based RAID and I/O virtualization, to evaluate them, and to establish optimizations that can improve the performance of HPC storage systems.
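The redundancy arithmetic that a GPU-based RAID engine offloads is, at its core, XOR parity over the blocks of a stripe. A minimal sketch with toy block sizes (no striping layout, rotation, or device management):

```python
def parity(blocks):
    """RAID-5-style parity: byte-wise XOR of the data blocks in a stripe.
    On real hardware this XOR runs over megabyte-scale stripes, which is
    what makes GPU offload attractive."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def rebuild(surviving, par):
    """Reconstruct the lost block of a stripe: XOR of survivors + parity."""
    return parity(surviving + [par])

stripe = [b"abcd", b"efgh", b"ijkl"]
p = parity(stripe)
lost = stripe.pop(1)                 # simulate a failed device
recovered = rebuild(stripe, p)
```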
Parallel simulation and adaptive mesh refinement for 3D solid mechanics problems
The challenge of this PhD thesis is to implement adaptive mesh refinement methods for non-linear 3D solid mechanics, adapted to parallel computers.
This research topic is proposed as part of the NumPEx (Digital for Exascale) Priority Research Programs and Equipment (PEPR). It is part of the Exa-MA (Methods and Algorithms for Exascale) Targeted Project. The PhD will take place at CEA Cadarache, within the Institute for Research on Nuclear Energy Systems for Low-Carbon Energy Production (IRESNE), as part of the PLEIADES software platform development team, which specializes in fuel behavior simulation and multi-scale numerical methods.
In finite element simulation, adaptive mesh refinement (AMR) has become an essential tool for performing accurate calculations with a controlled number of unknowns. The phenomena to be taken into account, particularly in solid mechanics, are often complex and non-linear: contact between deformable solids, viscoplastic behaviour, cracking, etc. Furthermore, these phenomena require intrinsically 3D modelling. Thus, the number of unknowns involved requires the use of parallel solvers. One of the current computational challenges is therefore to combine adaptive mesh refinement methods and nonlinear solid mechanics for deployment on parallel computers.
The first research topic of this PhD thesis concerns the development of a local mesh refinement method (of block-structured type) for non-linear mechanics, with dynamic mesh adaptation. We will therefore focus on projection operators to obtain an accurate dynamic AMR solution during the evolution of refined areas.
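The flavour of such projection operators can be conveyed in one dimension: when a zone is refined, coarse cell averages must be transferred to the children without losing conservation. A minimal piecewise-linear prolongation sketch (1-D, unlimited central slopes, invented data), far simpler than the 3-D block-structured operators the thesis will develop:

```python
def prolong(coarse):
    """Piecewise-linear prolongation of cell averages onto a 2x refined
    1-D grid. Each coarse cell of average u and slope s yields children
    u - s/4 and u + s/4, so the coarse mean is conserved exactly - the
    kind of projection dynamic AMR needs when refined zones move."""
    fine = []
    n = len(coarse)
    for i, u in enumerate(coarse):
        left = coarse[max(i - 1, 0)]
        right = coarse[min(i + 1, n - 1)]
        s = 0.5 * (right - left)          # central slope estimate
        fine += [u - 0.25 * s, u + 0.25 * s]
    return fine

fine = prolong([0.0, 1.0, 4.0, 9.0])
```

In a non-linear setting the projected fields include internal state variables, which is what makes accurate dynamic AMR projection a research question rather than a routine interpolation.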
The other area of research will focus on the effective treatment of contact between deformable solids in a parallel environment. This will involve extending previous work, which was limited to matching contact meshes, to the case of arbitrary contact geometries (node-to-surface algorithm).
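The elementary kernel of a node-to-surface algorithm is the closest-point projection of a slave node onto a master facet. A 2-D segment version is sketched below with invented coordinates; the thesis targets the 3-D parallel case with arbitrary, non-matching contact geometries.

```python
import math

def node_to_segment(p, a, b):
    """Closest-point projection of a contact node p onto a master segment
    (a, b): returns the clamped parametric coordinate and the gap
    (distance). Contact detection and enforcement both build on this."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))             # clamp to the segment
    cx, cy = ax + t * dx, ay + t * dy     # closest point on the segment
    return t, math.hypot(px - cx, py - cy)

t, gap = node_to_segment((0.5, 0.3), (0.0, 0.0), (1.0, 0.0))
```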
The preferred development environment will be the MFEM tool. Finite element management and dynamic re-evaluation of adaptive meshes require assessing (and probably improving) the efficiency of the data structures involved. Large 3D calculations will be performed on national supercomputers using thousands of computing cores. This will ensure that the solutions implemented can scale up to tens of thousands of cores.