Clean Room Activity Simulation Tool Development

A tool for simulating batch execution in a clean room was developed during a previous internship. This tool takes into account processing times on equipment, equipment failures, and certain integration-related holds. The batches injected into this simulator come from the actual history of the clean room.
The goal of the PhD is to develop a simulator that can prospectively simulate batch execution based on the POR routes of the main themes present or upcoming in the clean room. From the POR routes, the tool should be able to generate development batches for technology bricks (short loops), as well as functional batches including test plates and pilot plates. The routes will need to be given a nomenclature and enriched with metadata so that the tool can generate batches realistically, both in terms of process and of project scheduling.
Different simulation engines will be compared in terms of performance and accuracy. Classical solution engines (discrete event-driven simulation, conjunctive-graph-based methods) as well as innovative approaches (primarily reinforcement learning, but also supervised learning) will be studied.
A methodology for creating simulation instances (a testbed) will also be developed and published during this PhD work.
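As an illustration of the first family of engines, a discrete-event view of batch execution can be sketched in a few lines. The single-tool-group model and all names below are illustrative assumptions, not the structure of the existing simulator:

```python
import heapq

def simulate_batches(batches, capacity):
    """Minimal discrete-event sketch: batches queue for one tool group.

    `batches` is a list of (release_time, processing_time) tuples and
    `capacity` is the number of identical tools.  Returns each batch's
    completion time, in release order.
    """
    # One heap entry per tool: the time at which that tool becomes free.
    free_at = [0.0] * capacity
    heapq.heapify(free_at)
    done = []
    for release, proc in sorted(batches):
        tool_free = heapq.heappop(free_at)
        start = max(release, tool_free)   # wait for release AND a free tool
        end = start + proc
        heapq.heappush(free_at, end)
        done.append(end)
    return done
```

A real engine would add failure events, holds, and route steps to the event queue, but the release/seize/complete loop above is the core mechanism.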

Modeling of Wall Condensation Phenomena and Liquid Film Interactions

In this thesis, we focus on modeling mass and energy transfer associated with wall condensation in a turbulent flow of a vapor–noncondensable gas mixture. The flow is two-phase and turbulent, where forced, mixed, and natural convection modes may occur. The framework of this work relies on the RANS approach applied to the compressible Navier–Stokes equations, in which wall condensation is described using semi-analytical wall functions developed in a previous doctoral study (Iziquel, 2023). These functions account for the different convection modes as well as suction and species interdiffusion effects, but neglect the presence of a liquid film.
In the literature, the influence of film formation and flow on mass and heat transfer is often neglected, since it is generally assumed that, in the presence of noncondensable gases, the resistance of the gaseous layer to vapor diffusion is much greater than the thermal resistance of the liquid film.
The objective of this thesis is to improve the prediction of heat and mass transfer by investigating, beyond the thermal resistance of the condensate, the dynamic effect of the liquid and its interaction with the gaseous diffusion layer during wall condensation. The study will first consider laminar film flow, and then attempt to extend the analysis to the turbulent regime.
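As a reference point for the laminar regime, the classical Nusselt solution gives the condensate film thickness and the film's local heat-transfer coefficient on a vertical wall. This textbook baseline (shown here with illustrative water properties) is precisely what the thesis model is meant to go beyond:

```python
def nusselt_film_thickness(x, k_l, mu_l, rho_l, rho_v, h_fg, dT, g=9.81):
    """Classical Nusselt solution for a laminar condensate film on a
    vertical wall: local film thickness delta(x) and heat-transfer
    coefficient h(x) = k_l / delta(x), for a wall subcooling dT.
    Textbook baseline only; it ignores the gas-side diffusion layer
    that the coupled model must resolve."""
    delta = (4.0 * k_l * mu_l * dT * x
             / (g * rho_l * (rho_l - rho_v) * h_fg)) ** 0.25
    return delta, k_l / delta

# Illustrative saturated-water properties near atmospheric pressure.
delta, h = nusselt_film_thickness(x=0.1, k_l=0.68, mu_l=2.8e-4,
                                  rho_l=958.0, rho_v=0.6,
                                  h_fg=2.257e6, dT=5.0)
```

The film thermal resistance per unit area is then delta / k_l, the quantity usually argued to be small compared with the gas-side diffusion resistance.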
In the gas phase, the wall-function model developed by Iziquel (2023) for a binary mixture of vapor and a single noncondensable gas will be extended to mixtures of vapor and $n>1$ noncondensable gases (N$_2$, H$_2$, …), in order to address hydrogen-risk issues.
The validation of the implemented models will be carried out using results from separate-effect (SET) and coupled-effect (CET) experiments available in the literature (Huhtiniemi (1989), COPAIN, ISP47-MISTRA, ISP47-TOSQAN, RIVA). Comparisons at the CFD scale, using condensation wall functions that neglect the film, will be performed on benchmark cases from the literature and on condensation experiments (COPAIN) to assess the impact of this assumption, as well as the improvement provided by the new model in terms of accuracy and computational cost.

Modelling of Thermo-Fluid Phenomena in the Plasma Nozzle of the ELIPSE Process

The ELIPSE process (Elimination of Liquids by Plasma Under Water) is an innovative technology dedicated to the mineralization of organic effluents. It is based on the generation of a thermal plasma fully immersed in a water-filled reactor vessel, enabling extremely high temperatures and reactive conditions that promote the complete decomposition of organic compounds.
The proposed PhD research aims to develop a multiphysics numerical model describing the behavior of the process, particularly within the plasma nozzle, a key zone where the high-temperature gas jet from the torch interacts with the injected liquids.
The approach will rely on coupled thermo-aerodynamic modeling, integrating fluid dynamics, heat transfer, phase change phenomena, and turbulence effects. Using Computational Fluid Dynamics (CFD) tools, the study will characterize plasma–liquid interaction mechanisms and optimize the geometry and operating conditions of the process. This modeling will be compared and validated against complementary experimental data obtained from the ELIPSE setup, providing the necessary input for model calibration and validation.
This work will build upon previous research that has led to the development of thermal and hydraulic models of both the plasma torch and the reactor vessel. Integrating the new model within this framework will yield a comprehensive and coherent representation of the ELIPSE process. Such an approach represents a decisive step toward process optimization and industrial scale-up.
The ideal candidate will be a Master’s or final-year engineering student with a background in process engineering and/or numerical simulation, demonstrating a strong interest in physical modeling and computational approaches.
During this PhD, the candidate will develop and strengthen skills in multiphysics numerical modeling, advanced CFD simulation, and thermo-aerodynamic analysis of complex processes. They will also acquire solid experience in waste treatment, a rapidly expanding field with significant industrial and environmental relevance. These skills will provide strong career opportunities in applied research, process engineering, energy, and environmental sectors.

Magnetar formation: from amplification to relaxation of the most extreme magnetic fields

Magnetars are neutron stars with the strongest magnetic fields known in the Universe, observed as high-energy galactic sources. The formation of these objects is one of the most studied scenarios to explain some of the most violent explosions: superluminous supernovae, hypernovae, and gamma-ray bursts. In recent years, our team has succeeded in numerically reproducing magnetic fields of magnetar-like intensities by simulating dynamo amplification mechanisms that develop in the proto-neutron star during the first seconds after the collapse of the progenitor core. However, most observational manifestations of magnetars require the magnetic field to survive over much longer timescales (from a few weeks for superluminous supernovae to thousands of years for Galactic magnetars). This thesis will consist of developing 3D numerical simulations of magnetic field relaxation initialized from different dynamo states previously calculated by the team, extending them to later stages after the birth of the neutron star when the dynamo is no longer active. The student will thus determine how the turbulent magnetic field generated in the first few seconds will evolve to eventually reach a stable equilibrium state, whose topology will be characterized and compared with observations.

Design of asynchronous algorithms for solving the neutron transport equation on massively parallel and heterogeneous architectures

This PhD thesis work aims to design an efficient solver for the neutron transport equation in Cartesian and hexagonal geometries on heterogeneous and massively parallel architectures. This goal can be achieved through the design of optimal algorithms built on parallel and asynchronous programming models.
The industrial framework for this work is the solution of the Boltzmann equation associated with the transport of neutrons in a nuclear reactor core. More and more modern simulation codes employ an upwind discontinuous Galerkin finite element scheme on Cartesian and hexagonal meshes of the domain of interest. This work extends recent research by exploring the solving step on distributed computing architectures, which we have not yet tackled in our context. It will require the coupling of algorithmic and numerical strategies with a programming model that allows an asynchronous parallelism framework, in order to solve the transport equation efficiently.
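The directional, upwind structure of such discretisations is what sweep-based solvers exploit. A deliberately minimal 1-D, single-direction sweep (step differencing rather than DG, purely didactic) illustrates the upwind dependency chain that parallel and asynchronous strategies must pipeline across subdomains:

```python
def upwind_sweep(psi_in, mu, sigma_t, q, dx):
    """One upwind transport sweep along a 1-D slab for a single positive
    direction mu, solving mu dpsi/dx + sigma_t psi = q with step
    differencing: each cell depends only on its upwind neighbour.
    This serial dependency is what distributed sweep algorithms must
    pipeline; a didactic sketch, not the DG kernel of the actual solver."""
    psi = []
    upwind = psi_in                       # incoming boundary flux
    for qi, si in zip(q, sigma_t):
        # Step scheme: mu*(psi_i - psi_{i-1})/dx + si*psi_i = qi
        cell = (qi * dx + mu * upwind) / (mu + si * dx)
        psi.append(cell)
        upwind = cell
    return psi
```

In a pure absorber with no source, the sweep reproduces the expected exponential-like attenuation of the incoming flux.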
This research work will be part of the numerical simulation of nuclear reactors. These multiphysics computations are very expensive, as they require time-dependent neutron transport calculations, for instance for severe power excursions. The strategy proposed in this research endeavour will decrease the computational burden and time for a given accuracy and, coupled with a massively parallel and asynchronous model, may define an efficient neutronics solver for multiphysics applications.
Through this PhD research work, the candidate will be able to apply for research positions in high-performance numerical simulation for complex physical problems.

Magneto-convection of solar-type stars: flux emergence and origin of starspots

The Sun and solar-type stars possess rich and variable magnetism. In our recent work on turbulent convective dynamos in this type of star, we have been able to trace a magneto-rotational history of their secular evolution. Stars are born active with short magnetic cycles; they then slow down due to braking by their magnetized particle wind, their magnetic cycle lengthens to become commensurate with that of the Sun (11 years), and finally, for stars that live long enough, they end up losing their cycle and acquiring so-called anti-solar rotation (slow equator / fast poles). The agreement with observations is excellent, but we are missing an essential element to conclude: what role do sunspots/starspots play in the organization of the magnetism of these stars, and are they necessary for the appearance of a stellar magnetic cycle (the so-called "paradox of spotty dynamos")? Indeed, our HPC simulations of solar dynamos do not yet have the angular resolution to resolve the spots, and yet we do observe cycles in our simulations of stellar dynamos for Rossby numbers < 1. So, are the spots simply a surface manifestation of an internal self-organization of the cyclic magnetism of these stars, or do they play a decisive role? Furthermore, how do the latitudinal flux emergence and the size and intensity of the spots forming on the surface evolve during the magneto-rotational evolution of these stars? To answer these key questions in stellar and solar magnetism, in support of the ESA space missions Solar Orbiter and PLATO in which we are involved, new HPC simulations of stellar dynamos must be developed, allowing us to get closer to the surface and thus better describe the process of magnetic flux emergence and the possible formation of sun/starspots.
Recent tests showing that magnetic concentrations inhibiting local surface convection form in simulations with a higher magnetic Reynolds number and smaller-scale surface convection strongly encourage us to continue this project beyond the ERC Whole Sun project (ending in April 2026). Thanks to the Dyablo-Whole Sun code that we are co-developing with IRFU/Dedip, we wish to study in detail the convective dynamo, the emergence of magnetic flux, and the self-consistent formation of resolved spots, using its adaptive mesh refinement capability while varying global stellar parameters such as rotation rate, convective zone thickness, and surface convection intensity, to assess how the spots' number, morphology, and latitude of emergence change, and whether or not they contribute to the closing of the cyclic dynamo loop.

Staggered schemes for the Navier-Stokes equations with general meshes

The simulation of the Navier-Stokes equations requires accurate and robust numerical methods that take into account diffusion operators, gradient terms, and convection terms. Operational approaches have shown their effectiveness on simplexes. However, in some models or codes (TrioCFD, Flica5), it may be useful to improve the accuracy of solutions locally using an error estimator, or to take general meshes into account. We are interested here in staggered schemes, in which the pressure is calculated at the centre of the mesh cells and the velocities on the edges (or faces) of the cells. This results in methods that are naturally accurate at low Mach numbers. New schemes have recently been presented in this context and have shown their robustness and accuracy. However, these discretisations can be very costly in terms of memory and computation time compared with MAC schemes on regular meshes.
We are interested in "gradient"-type methods. Some of them are based on a variational formulation with pressure unknowns at the mesh centres and velocity vector unknowns on the edges (or faces) of the cells. This approach has been shown to be effective, particularly in terms of robustness. It should also be noted that an algorithm with the same degrees of freedom as the MAC methods has been proposed and gives promising results. The idea would therefore be to combine these two approaches, namely a "gradient"-type method with the same degrees of freedom as MAC schemes. Initially, the focus will be on recovering MAC schemes on regular meshes. Fundamental questions need to be examined in the case of general meshes: stability, consistency, conditioning of the system to be inverted, and numerical locking. An attempt may also be made to recover the gains in accuracy obtained with the methods presented in the literature for discretising pressure gradients.
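The staggered placement can be made concrete with a minimal 2-D MAC sketch: pressure at cell centres, normal velocities on faces, with the discrete gradient the negative adjoint of the discrete divergence, a duality underlying the stability of these schemes. The operators below are a didactic sketch on a uniform grid, not the schemes of the works cited above:

```python
import numpy as np

def div(u, v, dx, dy):
    """Discrete divergence at cell centres from face-normal velocities:
    u lives on x-faces (shape (n+1, n)), v on y-faces (shape (n, n+1))."""
    return (u[1:, :] - u[:-1, :]) / dx + (v[:, 1:] - v[:, :-1]) / dy

def grad(p, dx, dy):
    """Discrete pressure gradient on the interior faces of a MAC grid:
    gx on interior x-faces (shape (n-1, n)), gy on interior y-faces."""
    gx = (p[1:, :] - p[:-1, :]) / dx
    gy = (p[:, 1:] - p[:, :-1]) / dy
    return gx, gy
```

With zero normal velocity on boundary faces, summation by parts gives exactly sum(div(u,v) * p) * dx * dy = -sum(grad(p) . u) * dx * dy, the discrete duality that the "gradient" methods generalise to arbitrary cells.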
During the course of the thesis, time will be taken to settle the basic problems of this method (first and second years), both on the theoretical aspects and on the computer implementation. The work may be carried out in the Castem, TrioCFD, TRUST or PolyMAC development environments. The focus will be on application cases that are representative of the community.

Cosmological parameter inference using theoretical Wavelet statistics predictions

Launched in 2023, the Euclid satellite is surveying the sky in optical and infrared wavelengths to create an unprecedented map of the Universe's large-scale structure. A cornerstone of its mission is the measurement of weak gravitational lensing—subtle distortions in the shapes of distant galaxies. This phenomenon is a powerful cosmological probe, capable of tracing the evolution of dark matter and helping to distinguish between dark energy and modified gravity theories.
Traditionally, cosmologists have analyzed weak lensing data using second-order statistics (like the power spectrum) paired with a Gaussian likelihood model. This established approach, however, faces significant challenges:
- Loss of Information: Second-order statistics fully capture information only if the underlying matter distribution is Gaussian. In reality, the cosmic web is highly structured, with clusters, filaments, and voids, making this approach inherently lossy.
- Complex Covariance: The method requires estimating a covariance matrix, which is both cosmology-dependent and non-Gaussian. This necessitates running thousands of computationally intensive N-body simulations for each model, a massive and often impractical undertaking.
- Systematic Errors: Incorporating real-world complications—such as survey masks, intrinsic galaxy alignments, and baryonic feedback—into this framework is notoriously difficult.

In response to these limitations, a new paradigm has emerged: likelihood-free inference via forward modelling. This technique bypasses the need for a covariance matrix by directly comparing real data to synthetic observables generated from a forward model. Its advantages are profound: it eliminates the storage and computational burden of massive simulation sets, naturally incorporates high-order statistical information, and can seamlessly integrate systematic effects. However, this new method has its own hurdles: it demands immense GPU resources to process Euclid-sized surveys, and its conclusions are only as reliable as the simulations it uses, potentially leading to circular debates if simulations and observations disagree.
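The forward-modelling logic can be illustrated with the simplest likelihood-free scheme, rejection ABC; production analyses use far more efficient neural simulation-based inference, and everything below (names, the toy simulator) is illustrative:

```python
import numpy as np

def rejection_abc(observed_stat, simulator, prior_draw, n_sims, eps, rng):
    """Toy likelihood-free inference by rejection ABC: draw parameters
    from the prior, forward-simulate a summary statistic, and keep the
    parameters whose synthetic statistic lands within eps of the observed
    one.  No covariance matrix or explicit likelihood is ever needed."""
    kept = []
    for _ in range(n_sims):
        theta = prior_draw(rng)
        if abs(simulator(theta, rng) - observed_stat) < eps:
            kept.append(theta)
    return np.array(kept)
```

On a toy problem (inferring the mean of a Gaussian from its sample mean), the accepted parameters concentrate around the true value, which is the behaviour the Euclid-scale pipeline must reproduce for cosmological parameters.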

A recent breakthrough (Tinnaneri Sreekanth et al., 2024) offers a compelling path forward. This work provides the first theoretical framework to directly predict key wavelet statistics of weak lensing convergence maps—exactly the kind Euclid will produce—for any given set of cosmological parameters. It has been shown in Ajani et al. (2021) that the wavelet-coefficient L1-norm is extremely powerful for constraining cosmological parameters. This innovation promises to harness the power of advanced, non-Gaussian statistics without the traditional computational overhead, potentially unlocking a new era of precision cosmology. We have demonstrated that this theoretical prediction can be used to build a highly efficient emulator (Tinnaneri Sreekanth et al., 2025), dramatically accelerating the computation of these non-Gaussian statistics. However, it is crucial to note that this emulator, in its current stage, provides only the mean statistic and does not include cosmic variance. As such, it cannot yet be used for full statistical inference on its own.
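The statistic in question can be sketched concretely: an undecimated (à trous) starlet transform separates a map into scales, and the L1-norm of the wavelet coefficients is accumulated per scale. The 1-D version below is a didactic sketch (real convergence maps are 2-D, and the Euclid pipelines differ in detail):

```python
import numpy as np

def starlet_l1norm(signal, n_scales):
    """Undecimated (a trous) starlet transform of a 1-D signal with the
    B3-spline kernel, returning the L1-norm of the wavelet coefficients
    at each scale.  Convergence maps are 2-D; the separable 2-D version
    works the same way, scale by scale."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    c = np.asarray(signal, dtype=float)
    l1 = []
    for j in range(n_scales):
        # Dilate the kernel by inserting 2**j - 1 zeros between taps.
        dilated = np.zeros(4 * 2**j + 1)
        dilated[::2**j] = kernel
        smooth = np.convolve(c, dilated, mode="same")
        l1.append(np.abs(c - smooth).sum())   # wavelet coefficients, scale j
        c = smooth
    return l1
```

In practice the L1-norm is binned by signal-to-noise rather than summed globally, but the scale-by-scale structure is the same.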

This PhD thesis aims to revolutionize the analysis of weak lensing data by constructing a complete, end-to-end framework for likelihood-free cosmological inference. The project begins by addressing the core challenge of stochasticity: we will first calculate the theoretical covariance of wavelet statistics, providing a rigorous mathematical description of their uncertainty. This model will then be embedded into a stochastic map generator, creating realistic mock data that captures the inherent variability of the Universe.
To ensure our results are robust, we will integrate a comprehensive suite of systematic effects—such as noise, masks, intrinsic alignments, and baryonic physics—into the forward model. The complete pipeline will be integrated and validated within a simulation-based inference framework, rigorously testing its power to recover unbiased cosmological parameters. The culmination of this work will be the application of our validated tool to the Euclid weak lensing data, where we will leverage non-Gaussian information to place competitive constraints on dark energy and modified gravity.

References
V. Ajani, J.-L. Starck and V. Pettorino, "Starlet l1-norm for weak lensing cosmology", Astronomy and Astrophysics, 645, L11, 2021.
V. Tinnaneri Sreekanth, S. Codis, A. Barthelemy, and J.-L. Starck, "Theoretical wavelet l1-norm from one-point PDF prediction", Astronomy and Astrophysics, 691, id.A80, 2024.
V. Tinnaneri Sreekanth, J.-L. Starck and S. Codis, "Generative modeling of convergence maps based on LDT theoretical prediction", Astronomy and Astrophysics, 701, id.A170, 2025.

Modeling of a magnonic diode based on spin-wave non-reciprocity in nanowires and nanotubes

This PhD project focuses on the emerging phenomenon of spin-wave non-reciprocity in cylindrical magnetic wires, from its fundamental properties to its exploitation towards realizing magnonic-diode-based devices. Preliminary experiments conducted in our laboratory SPINTEC on cylindrical wires, with axial magnetization in the core and azimuthal magnetization on the wire surface, revealed a giant non-reciprocal effect (non-symmetrical dispersion curves with different speeds and periods for left- and right-propagating waves), up to the point of creating a band gap for a given direction of propagation, related to the circulation of magnetization (right or left). This particular situation has not yet been described theoretically or modeled, which sets an unexplored and promising ground for this PhD project. To model spin-wave propagation and derive dispersion curves for a given material, we plan to use different numerical tools: our in-house 3D finite-element micromagnetic software feeLLGood and the open-source 2D TetraX package dedicated to eigenmode spectrum calculations. This work will be conducted in close collaboration with experimentalists, with a view both to explaining experimental results and to guiding further experiments and research directions.
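On the modeling side, a standard way to obtain dispersion curves from time-domain micromagnetic output is a two-dimensional Fourier transform of the dynamic magnetization sampled along the wire axis: intensity peaks in the (k, f) plane trace the dispersion branches, and any left/right asymmetry in k signals non-reciprocity. A minimal sketch, independent of the actual feeLLGood or TetraX output formats:

```python
import numpy as np

def dispersion_map(m, dt, dx):
    """Space-time FFT of m(t, x), the dynamic magnetization component
    sampled at time step dt along the wire axis with spacing dx.
    Returns the power in the (f, k) plane plus the frequency and
    wave-number axes; asymmetric peaks in +k vs -k reveal
    non-reciprocal propagation."""
    spec = np.fft.fftshift(np.fft.fft2(m))
    power = np.abs(spec) ** 2
    freqs = np.fft.fftshift(np.fft.fftfreq(m.shape[0], dt))
    ks = np.fft.fftshift(np.fft.fftfreq(m.shape[1], dx))
    return power, freqs, ks
```

Feeding a synthetic plane wave cos(2*pi*(f0*t - k0*x)) into this map recovers peaks at the injected (f0, k0), a useful sanity check before applying it to simulation data.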

One-sided communication mechanisms for data decomposition in Monte Carlo particle transport applications

In the context of a Monte Carlo calculation for the evolution of a PWR (pressurized water reactor) core, it is necessary to compute a very large number of neutron-nucleus reaction rates, involving a data volume that can exceed the memory capacity of a compute node on current supercomputers. Within the Tripoli-5 framework, distributed memory architectures have been identified as targets for high-performance computing deployment. To leverage such architectures, data decomposition approaches must be used, particularly for reaction rates. However, with a classical parallelization method, processes have no particular affinity for the rates they host locally; on the contrary, each rate receives contributions uniformly from all processes. Access to decomposed data can be costly when it requires intensive use of communications. Nevertheless, one-sided communication mechanisms, such as MPI RMA (Message Passing Interface, Remote Memory Access), make these accesses easier both in terms of expression and performance.
The objective of this thesis is to propose a method for partial data decomposition relying on one-sided communication mechanisms to access remotely stored data, such as reaction rates. Such an approach will significantly reduce the volume of data stored in memory on each compute node without causing a significant degradation in performance.
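The affinity problem can be made concrete with the index arithmetic of a block decomposition. In the real solver the remote update would be a one-sided MPI_Accumulate into the owning rank's exposed memory window; here only that index arithmetic is sketched, and the names are illustrative, not the Tripoli-5 API:

```python
def owner_of(rate_index, n_rates, n_ranks):
    """Block decomposition of a global reaction-rate array: return the
    MPI rank that stores a given rate and its local offset on that rank.
    In a one-sided scheme, any rank scoring a collision would then issue
    an MPI_Accumulate(contribution, target_rank=rank, target_disp=local)
    without involving the owner in the communication."""
    base, extra = divmod(n_rates, n_ranks)
    # The first `extra` ranks hold base+1 rates, the remaining ranks base.
    cut = extra * (base + 1)
    if rate_index < cut:
        rank = rate_index // (base + 1)
        local = rate_index % (base + 1)
    else:
        rank = extra + (rate_index - cut) // base
        local = (rate_index - cut) % base
    return rank, local
```

Because every rank scores into every rate, this mapping (rather than any locality heuristic) is what decides where each contribution must land, which is why cheap one-sided accumulation matters.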
