Conditional generative model for dose calculation in radiotherapy

Simulating particle propagation through matter with the Monte Carlo (MC) method is known for its accuracy, but its applications are sometimes limited by its cost in computing resources and time. This limitation is all the more important for dose calculation in radiotherapy, since a specific configuration must be simulated for each patient, which hinders its use in clinical routine.

The objective of this thesis is to enable fast and frugal dose calculation by training a conditional generative model to replace a set of phase space files (PSFs); the architecture will be chosen according to the specificities of the problem (GAN, VAE, diffusion model, normalizing flow, etc.). Beyond the acceleration, the technique should bring a significant gain in efficiency by reducing the number of particles to be simulated, both in the training phase and when generating particles for the dose calculation (the model's frugality).

We propose the following method:
- First, for the fixed parts of the linear accelerator, a conditional generative model would replace the storage of the simulated particles in a PSF, whose data volume is particularly large (see the sketch after this list). The compactness of the model would limit exchanges between computing units without requiring a dedicated storage infrastructure.
- In a second step, this approach will be extended to the final collimation, whose complexity, due to the multiplicity of possible geometrical configurations, can be overcome using the model from the first step. A second conditional generative model will be trained to estimate the particle distribution for any configuration from a reduced number of simulated particles.
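To make the first step concrete, here is a minimal sketch of a conditional generator mapping latent noise and a conditioning variable to phase-space samples. It assumes PyTorch; the layer sizes, the single conditioning variable (a nominal beam energy) and the seven-dimensional output (position, direction, energy) are illustrative assumptions, not project specifications.

    import torch
    import torch.nn as nn

    class ConditionalGenerator(nn.Module):
        """Toy conditional generator: (noise, condition) -> phase-space sample."""
        def __init__(self, noise_dim=16, cond_dim=1, out_dim=7):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(noise_dim + cond_dim, 128),
                nn.ReLU(),
                nn.Linear(128, 128),
                nn.ReLU(),
                # 7 outputs: position (x, y, z), direction (u, v, w), energy
                nn.Linear(128, out_dim),
            )

        def forward(self, z, cond):
            return self.net(torch.cat([z, cond], dim=1))

    # Once trained, sampling replaces reading particles from a multi-gigabyte PSF:
    gen = ConditionalGenerator()
    z = torch.randn(10_000, 16)          # latent noise, one row per particle
    cond = torch.full((10_000, 1), 6.0)  # hypothetical condition, e.g. beam energy in MV
    particles = gen(z, cond)             # 10 000 x 7 tensor of phase-space samples

A diffusion model or normalizing flow would expose the same interface (condition in, particles out); only the training objective changes.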

The last part of the thesis will consist in exploiting the gain in computational efficiency to tackle the inverse problem, i.e. optimising the treatment plan for a given patient from the patient's contoured CT image and a dose prescription.

Numerical simulation of turbulence models on distorted meshes

Turbulence plays an important role in many industrial applications (flows, heat transfer, chemical reactions). Since Direct Numerical Simulation (DNS) often has an excessive cost in computing time, Reynolds-averaged models (RANS) are used instead in computational fluid dynamics (CFD) codes. The best known, published in the 1970s, is the k-epsilon model.
It results in two additional nonlinear equations coupled to the Navier-Stokes equations, describing the transport of the turbulent kinetic energy (k) and of its dissipation rate (epsilon). A very important property to verify is the positivity of k and epsilon, which is necessary for the system of equations modeling the turbulence to remain stable. It is therefore crucial that the discretization of these models preserves monotonicity. The equations being of convection-diffusion type, it is well known that with classical linear schemes (finite elements, finite volumes, etc.), the numerical solutions are likely to oscillate on distorted meshes. Negative values of k and epsilon then cause the simulation to stop.
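For reference, the standard form of the two transport equations is (with \(\mathbf{u}\) the mean velocity, \(P_k\) the production term, and the usual model constants \(C_\mu, C_{\varepsilon 1}, C_{\varepsilon 2}, \sigma_k, \sigma_\varepsilon\)):

\[
\frac{\partial (\rho k)}{\partial t} + \nabla\cdot(\rho k \mathbf{u}) = \nabla\cdot\Big[\Big(\mu + \frac{\mu_t}{\sigma_k}\Big)\nabla k\Big] + P_k - \rho\varepsilon,
\]
\[
\frac{\partial (\rho \varepsilon)}{\partial t} + \nabla\cdot(\rho \varepsilon \mathbf{u}) = \nabla\cdot\Big[\Big(\mu + \frac{\mu_t}{\sigma_\varepsilon}\Big)\nabla \varepsilon\Big] + \frac{\varepsilon}{k}\big(C_{\varepsilon 1} P_k - C_{\varepsilon 2}\,\rho\varepsilon\big),
\qquad
\mu_t = \rho\, C_\mu \frac{k^2}{\varepsilon}.
\]

The division by k in the epsilon equation and the definition of the eddy viscosity \(\mu_t\) make the positivity of both variables essential, not merely desirable.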
We are interested in nonlinear methods that yield compact stencils. For diffusion operators, they rely on nonlinear combinations of the fluxes on either side of each edge. These approaches have proved their efficiency, in particular for suppressing oscillations on very distorted meshes. One can also draw on ideas proposed in the literature, where, for example, nonlinear corrections applied to classical linear schemes are described. The idea is to apply this type of method to the diffusive operators appearing in the k-epsilon model. In this context, it will also be interesting to transform classical gradient-approximation schemes from the literature into nonlinear two-point fluxes (one generic construction is sketched below). Fundamental questions about the consistency and coercivity of the schemes studied must be addressed in the case of general meshes.
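To fix ideas, one classical construction from the nonlinear two-point flux literature (sketched here generically; details vary between schemes) combines two one-sided, consistent approximations \(F_K\) and \(F_L\) of the flux across a face \(\sigma = K|L\):

\[
F_\sigma(u) = \mu_K(u)\, F_K(u) - \mu_L(u)\, F_L(u), \qquad \mu_K(u) + \mu_L(u) = 1,
\]

where the solution-dependent weights \(\mu_K, \mu_L \ge 0\) are chosen so that the combined flux reduces to a two-point form \(F_\sigma = \alpha_K(u)\, u_K - \alpha_L(u)\, u_L\) with nonnegative coefficients \(\alpha_K, \alpha_L\), the key to monotonicity. The price to pay is that the resulting scheme is nonlinear even for a linear diffusion problem, hence the fixed-point or Newton iterations these methods require.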
During this thesis, we will take the time to address the basic difficulties of these methods (first and second year), both on the theoretical side and in the computer implementation. This can be done in the Castem, TrioCFD or Trust development environments. We will then focus on regular analytical solutions and on application cases representative of the community.

New condensation model for stratified flow at CFD and macroscopic scales by two-phase upscaling

In the context of the safety of Pressurized Water Reactors (PWR), the Loss Of Coolant Accident (LOCA) is of great importance. The LOCA is a hypothetical accident caused by a breach in the primary circuit, leading to a pressure decrease and a loss of water inventory in this circuit. This results in heating of the fuel rods, which must remain limited so that fuel damage does not degrade the cooling of the reactor core and lead to a meltdown.

To remedy this situation, safety injection is activated to inject cold water, in the form of a jet, into the horizontal cold leg, which is totally or partially emptied of water and filled with pressurized steam. A stratified flow appears in the cold leg, with significant condensation phenomena in the vicinity of the jet and at the free surface in the stratified zones. Numerous experimental and numerical studies have addressed interfacial transfers at the free surface in rectangular and cylindrical cross-sections. CFD simulations of condensation at the free surface are carried out with the Neptune_CFD code, used by FRAMATOME, EDF and CEA. Currently, three models for heat transfer at the free surface are available in Neptune_CFD. These models were established from a reduced number of simulations (DNS, LES and RANS) on rectangular configurations that remain far from the configuration of interest: flows in a rectangular section tend to be parallel, whereas flows in a cylindrical section are three-dimensional.
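In the two-fluid formalism used by such codes, these models typically enter through a closure of the following generic form (the notation here is for illustration):

\[
\Gamma_{\mathrm{cond}} = \frac{h_i\, a_i\, (T_{\mathrm{sat}} - T_\ell)}{h_{\ell v}},
\]

where \(\Gamma_{\mathrm{cond}}\) is the condensation rate per unit volume, \(h_i\) the interfacial heat transfer coefficient (the quantity on which the available models differ), \(a_i\) the interfacial area density, \(T_\ell\) the liquid temperature, \(T_{\mathrm{sat}}\) the saturation temperature and \(h_{\ell v}\) the latent heat.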

The aim of this thesis is to improve the modeling of free-surface condensation in a cylindrical cross-section configuration. Initially, a bibliographic study will be carried out on free-surface flow regime maps, as well as on experimental work devoted to characterizing the interfacial area, the mean interfacial velocity, the turbulence terms in the vicinity of the free surface, and the heat transfer. In parallel, a new model will be developed based on the various improvement elements identified, and the associated validation will be carried out. Work is also planned to upscale two-phase CFD simulations to the macroscopic CATHARE approach. This upscaling method will build on Tanguy Herry's thesis work.

Electronic structure calculation with deep learning models

Ab initio simulations with Density Functional Theory (DFT) are now routinely employed across scientific disciplines to unravel the intricate electronic characteristics and properties of materials at the atomic level. Over the past decade, deep learning has revolutionized multiple areas such as computer vision, natural language processing, healthcare diagnostics, and autonomous systems. The combination of these two fields presents a promising avenue to enhance the accuracy and efficiency of complex materials property predictions, bridging the gap between quantum-level understanding and data-driven insights for accelerated scientific discovery and innovation. Many efforts have been devoted to building deep learning interatomic potentials that learn the potential energy surface (PES) from DFT simulations and can be employed in large-scale molecular dynamics (MD) simulations. Generalizing such deep learning approaches to predict the electronic structure, instead of just the energy, forces and stress tensor of a system, is an appealing idea, as it would open up new frontiers in materials research, enabling the simulation of electron-related physical properties in large systems that are important for microelectronic applications. The goal of this PhD is to develop new methodologies relying on equivariant neural networks to predict the DFT Hamiltonian (i.e. the most fundamental property) of complex materials (including disorder, defects, interfaces, etc.) or heterostructures.
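The constraint motivating equivariant architectures can be stated compactly: in a localized atomic-orbital basis, the Hamiltonian block coupling orbitals of angular momenta \(l_1\) and \(l_2\) on atoms \(i\) and \(j\) must transform under a rotation \(R\) of the structure as

\[
H^{l_1 l_2}_{ij} \;\longrightarrow\; D^{l_1}(R)\, H^{l_1 l_2}_{ij}\, D^{l_2}(R)^{\dagger},
\]

where \(D^{l}(R)\) are the Wigner rotation matrices. An equivariant network satisfies this constraint by construction, instead of having to learn it approximately from data augmentation.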

Study of pattern filling for nanoimprint use in advanced processes

Since 1960, CEA-LETI (Laboratory of Electronics and Information Technology) has been a driver of French innovation in new technologies. Its diverse entities bridge fundamental research and industrial outcomes. One of them, the DPFT (Department of Technological Platforms), drives the innovation and maturation of new processes for next-generation electronics within a pre-industrial environment that groups together manufacturing and characterization tools. While “standard” patterning technologies such as photolithography are still developing, CEA-Leti, with various partners, explores disruptive patterning technologies such as nanoimprint. This high-resolution printing technique, based on a reusable intermediate mask, could solve dimensional challenges in diverse fields (electronics, optics, photonics, …) with an improved yield (reduction of both cost and time) while offering a more sustainable production method. Nanoimprint could globally reduce the number of process steps and also decrease the use of sacrificial materials thanks to its additive fabrication approach. This new method is an opportunity to review the full integration process, as well as to enable integrations that are currently impossible. However, the limited knowledge of the mask replication mechanics on the resist, and of the potential effects on the following process steps, is still a limitation. A numerical model able to handle such questions would be very valuable for the development of nanoimprint.
The final goal of this thesis is to set up a numerical model of mask filling by the resist that is able to take the mask design into account. In a first step, the model will include the chemical and physical properties of the resist used, as well as the theoretical laws of pattern filling (a minimal example is sketched below). This model will be evaluated on reference patterns, directly with the tools available in the clean room. The key feature of this model is the inclusion of the design, in order to obtain generalizable results. In a second step, the PhD student could modify the test patterns to extend the model's coverage and/or optimize pattern dimensions for a specific end use. Moreover, the candidate could test the influence of different process parameters, such as materials or equipment settings, in the clean-room facilities. More generally, the PhD student will have to develop a precise analysis in order to identify the phenomena at play, potentially construct an experiment map, and also extend current computing tools to new ground such as 3D patterning.
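As a minimal example of such filling laws, the sketch below estimates a capillary filling time with a Washburn-type scaling for a slot-like cavity. All numerical values, and the geometric prefactor, are illustrative assumptions rather than process data.

    import math

    def washburn_fill_time(length, gap, surf_tension, viscosity, contact_angle_deg):
        """Time for a liquid to fill a slot of the given length by capillarity.

        Washburn-type scaling for parallel plates:
        x(t)^2 ~ (gamma * gap * cos(theta) / (3 * eta)) * t.
        The prefactor 3 depends on the exact geometry (assumption).
        """
        cos_theta = math.cos(math.radians(contact_angle_deg))
        return 3.0 * viscosity * length**2 / (surf_tension * gap * cos_theta)

    # Illustrative numbers for a nanoimprint resist (assumed, not measured):
    t_fill = washburn_fill_time(
        length=200e-9,        # cavity length to fill, m
        gap=50e-9,            # gap between mask and substrate, m
        surf_tension=0.03,    # resist surface tension, N/m
        viscosity=0.05,       # resist viscosity, Pa.s
        contact_angle_deg=30.0,
    )
    print(f"estimated fill time: {t_fill:.2e} s")

A design-aware model would evaluate such local filling times over the whole mask layout and couple them with the resist rheology and the squeeze flow under the imprint pressure.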
The thesis contract is similar to a three-year fixed-term contract, with a gross monthly salary of 2043.54€ for the first and second years and 2104.62€ for the final year. The transversal competences developed during this PhD are of great value for continuing in high-technology domains such as nano- and microelectronics or materials chemistry, or more generally in any domain making extensive use of data processing or physical modelling.

Numerical twin for the Flame Spray Pyrolysis process

Our ability to manufacture metal oxide nanoparticles (NPs) with well-defined composition, morphology and properties is a key to accessing new materials that can have a revolutionary technological impact, for example for photocatalysis or energy storage. Among the various nanopowder production technologies, Flame Spray Pyrolysis (FSP) constitutes a promising option for the industrial synthesis of NPs. This synthesis route is based on the rapid evaporation of a solution (solvent plus precursors) atomized in the form of droplets in a pilot flame to obtain nanoparticles. Unfortunately, mastery of the FSP process is currently limited, because of the very wide range of operating conditions to explore for the multitude of target nanoparticles. In this context, the objective of this thesis is to develop the experimental and numerical framework required for the future deployment of artificial intelligence for the control of FSP systems. To do this, the different phenomena taking place in the synthesis flames during the formation of the nanoparticles will be simulated, in particular by means of fluid dynamics calculations. Ultimately, the creation of a digital twin of the process is expected, which will provide a predictive approach for choosing the synthesis parameters to be used to arrive at the desired material. This will drastically reduce the number of experiments to be carried out and, consequently, the time needed to develop new grades of materials.
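One of the most elementary submodels involved is droplet evaporation, which in the simplest quasi-steady description follows the classical d^2-law,

\[
d^2(t) = d_0^2 - K\, t,
\]

where \(d_0\) is the initial droplet diameter and \(K\) an evaporation-rate constant depending on the solvent and on the local flame conditions. The difficulty of the digital twin lies in coupling many such submodels: atomization, evaporation, precursor chemistry, particle nucleation, coagulation and sintering.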

Calculation of the thermal conductivity of UO2 nuclear fuels and the influence of irradiation defects

Atomistic simulations of the behaviour of nuclear fuel under irradiation can give access to its thermal properties and their evolution with temperature and irradiation. The thermal conductivity of the 100% dense oxide can now be obtained at the single-crystal scale by molecular dynamics and from the interatomic force constants [1], but the effects of the defects induced by irradiation (irradiation loops, vacancy clusters) or of the grain boundaries (in the ceramic before irradiation) remain difficult to evaluate in a coupled way.
The ambition is now to include defects in the supercells and to calculate their effect on the force constants. Depending on the size of the defects considered, we will use either DFT (Density Functional Theory) or an empirical or numerical potential to perform the molecular dynamics. AlmaBTE allows the calculation of phonon scattering by point defects, and the calculation of phonon scattering by dislocations, and of their transmission at an interface, has also recently been implemented. Thus, chaining atomistic calculations with AlmaBTE will make it possible to determine the effect of the polycrystalline microstructure and of irradiation defects on the thermal conductivity (the underlying transport expressions are recalled below). At the end of the thesis, the properties obtained will be used in the existing simulation tools in order to estimate the conductivity of a volume element (additional effect of the microstructure, in particular of the porous network, by a Fast Fourier Transform method); these data will finally be integrated into the simulation of the behaviour of the fuel element under irradiation.
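For reference, in the relaxation-time picture underlying such phonon transport calculations, the lattice thermal conductivity reads

\[
\kappa = \frac{1}{3}\sum_{\lambda} C_\lambda\, v_\lambda^2\, \tau_\lambda,
\qquad
\frac{1}{\tau_\lambda} = \frac{1}{\tau_\lambda^{\mathrm{anh}}} + \frac{1}{\tau_\lambda^{\mathrm{def}}} + \frac{1}{\tau_\lambda^{\mathrm{disl}}} + \cdots,
\]

where the sum runs over phonon modes \(\lambda\), with \(C_\lambda\) the modal heat capacity, \(v_\lambda\) the group velocity and \(\tau_\lambda\) the lifetime. Matthiessen's rule (the second relation) is how the additional scattering channels due to irradiation defects enter the calculation.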
The work will be carried out at the Nuclear Fuel Department of the CEA, in a scientific environment characterised by a high level of expertise in materials modelling, in close collaboration with other CEA teams in Grenoble and in the Paris region who are experts in atomistic calculations. The results will be promoted through scientific publications and participation in international congresses.

Large-scale numerical modeling and optimization of a novel injector for laser-driven electron accelerators to enable their use for scientific and technological applications

Ultra-short, high-energy (up to a few GeV) electron beams can be accelerated over just a few centimeters by making an ultra-intense laser interact with a gas jet, using a technique called “Laser Wakefield Acceleration” (LWFA). Thanks to their small size and the ultra-short duration of the accelerated electron beams, these devices are potentially interesting for a variety of scientific and technological applications. However, LWFA accelerators do not usually provide enough charge for most of the envisaged applications, in particular if a high beam quality and a high electron energy are also required.
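The accelerating fields at stake can be estimated from the cold, nonrelativistic wave-breaking field of the plasma,

\[
E_0 = \frac{m_e\, c\, \omega_p}{e} \simeq 96\,\sqrt{n_e\,[\mathrm{cm^{-3}}]}\ \mathrm{V/m},
\]

so a typical gas-jet density of \(n_e \sim 10^{18}\,\mathrm{cm^{-3}}\) gives fields of the order of 100 GV/m, several orders of magnitude beyond conventional radio-frequency accelerators. This is what allows GeV energies to be reached over centimeters.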

The first goal of this thesis is to understand the basic physics of a novel LWFA injector concept recently conceived in our group. This injector consists of a solid target coupled with a gas jet, and should be able to accelerate a substantially higher amount of charge than conventional strategies, while preserving the quality of the beam. Large-scale numerical simulation campaigns and machine learning techniques will be used to optimize the properties of the accelerated electrons. The interaction of these electron beams with various samples will be simulated with a Monte Carlo code to assess their potential for applications such as muon tomography and radiobiology/radiotherapy. The proposed activity is essentially numerical, but with the possibility of being involved in the experimental activities of the team.

The PhD student will have the opportunity to be part of a dynamic team with strong national and international collaborations. They will also acquire the necessary skills to participate in laser-plasma interaction experiments at international facilities. Finally, they will acquire the skills required to contribute to the development of complex software written in modern C++ and designed to run efficiently on the most powerful supercomputers in the world: the state-of-the-art Particle-In-Cell code WarpX (2022 Gordon Bell Prize). The development activity will be carried out in collaboration with the team led by Dr. J.-L. Vay at LBNL (US), where the candidate could have the opportunity to spend a few months during the thesis.

Cosmological Simulations of Galaxy Formation with Exascale Supercomputing

This project aims to enhance the synergy between astronomical observations, numerical cosmological simulations and galaxy modelling. Upcoming instruments such as Euclid, DESI and Rubin LSST, among others, will carry out wide-field galaxy surveys with extremely precise measurements. This enhanced precision, however, will require robust theoretical predictions from galaxy formation models to achieve a profound understanding of the fundamental physics underlying the cosmological measurements.

To achieve this, exascale supercomputers will play a key role. Unlike most current supercomputers, which typically consist of thousands of CPUs for state-of-the-art simulation production, exascale supercomputers employ a hybrid configuration of CPU hosts with GPU accelerators. This configuration enables up to 10^18 operations per second. Exascale supercomputers will revolutionise our ability to simulate cosmological volumes spanning 4 Gigaparsecs (Gpc) with 25 trillion particles, the minimum volume and resolution requirements for making predictions for Euclid data.
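A back-of-envelope check of what these figures imply for the mass resolution (the cosmological parameter values below are indicative assumptions):

    # Implied particle mass for a 4 Gpc box sampled with 25 trillion particles.
    OMEGA_M = 0.31        # assumed matter density parameter
    RHO_CRIT = 1.26e11    # critical density in Msun / Mpc^3 (for h ~ 0.67)
    box_mpc = 4000.0      # box side in Mpc
    n_particles = 25e12

    m_particle = OMEGA_M * RHO_CRIT * box_mpc**3 / n_particles
    print(f"~{m_particle:.1e} Msun per particle")  # ~1e8 Msun

A particle mass of order 10^8 solar masses is roughly the resolution needed to identify the dark matter halos hosting the galaxies targeted by such surveys.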

However, the challenge to date lies in the fact that the cosmological simulation software designed for exascale supercomputers lacks galaxy formation modelling. Examples include the HACC-CRKSPH code (Habib et al. 2016, Emberson et al. 2019) and PKDGRAV3 (Potter, Stadel & Teyssier 2017), which have produced the largest simulations to date: FarPoint (Frontiere et al. 2022), encompassing 1.86 trillion particles within a 1 Gpc volume, and Euclid Flagship (Potter, Stadel & Teyssier 2017), featuring 2 trillion particles in a 3 Gpc volume, respectively. While HACC-CRKSPH and PKDGRAV3 were developed to run on modern GPU-accelerated supercomputers, they lack the complex physics of galaxy formation and can therefore only produce gravity-only cosmological boxes.

The SWIFT code (Schaller et al. 2023) is a parallel effort that has produced Flamingo (Schaye et al. 2023), the largest simulation integrating gravity, hydrodynamics and galaxy formation physics, encompassing 0.3 trillion particles. However, the caveat of SWIFT is that it was primarily designed for CPUs; adapting it to run on modern GPUs would require an entire redevelopment of the code. Another example is the current large galaxy formation simulations performed at Irfu, such as Extreme Horizon (Chabanier et al. 2020), which have also reached their limit, as they rely on CPU-based codes that hamper their scalability.

Understanding the intricacies of galaxy formation is paramount for interpreting astronomical observations. In this pursuit, CEA DRF/Irfu stands uniquely positioned to lead the advances in astrophysics in the emerging exascale era. Researchers at DAp and DPhP have already embarked on the analysis of high-quality data from the Euclid mission and DESI. Simultaneously, a team at DEDIP is developing DYABLO (Durocher & Delorme, in preparation), a robust gravity + hydrodynamics code tailored explicitly for exascale supercomputing.

In recent years, significant investments have been channeled into the advancement of DYABLO. Numerous researchers at DAp and DEDIP have contributed to various aspects (from the hydrodynamics of solar physics to the refinement of Input/Output processes), thanks to collaborative grants such as the PTC-CEA grant and the European FETHPC project IO-SEA. Additionally, DYABLO has benefited from interactions with the CEA research unit Maison de la Simulation (CEA & CNRS).

This ambitious project aims to extend DYABLO's capabilities by integrating galaxy formation modules in collaboration with Maxime Delorme. These modules will encompass radiative gas cooling and heating, star formation, chemical enrichment, stellar mass loss, energy feedback, black holes, and active galactic nuclei feedback. The ultimate objective is to enhance the analysis of Euclid and DESI data by generating simulation predictions of galaxy formation and evolution with DYABLO. The initial analyses will involve a comprehensive examination of matter clustering and galaxy clustering, in partnership with researchers at DAp/LCEG and DAp/CosmoStat.

This thesis will create the first version of a galaxy formation code optimised for exascale supercomputing. Ongoing developments will not only expand its capabilities but also unlock new opportunities for in-depth research, enhancing the synergy between astronomical observations, numerical cosmological simulations, and galaxy modelling.

References:
Habib, S., et al., 2016, New Astronomy, Volume 42, p. 49-65.
Emberson, J.D., et al., 2019, The Astrophysical Journal, Volume 877, Issue 2, article id. 85, 17 pp.
Potter, D., Stadel, J., & Teyssier, R., 2017, Computational Astrophysics and Cosmology, Volume 4, Issue 1, 13 pp.
Frontiere, N., et al., 2023, The Astrophysical Journal Supplement Series, Volume 264, Issue 2, 24 pp.
Schaller, M., et al., 2023, eprint arXiv:2305.13380
Schaye, J., et al., 2023, eprint arXiv:2306.04024
Chabanier, S., et al., 2020, Astronomy & Astrophysics, Volume 643, id. L8, 12 pp.
