The nonresonant streaming instability in turbulent plasmas

The magnetic turbulence prevalent in many astrophysical systems, such as the solar wind and supernova remnants, plays a crucial role in accelerating high-energy particles, particularly within collisionless shock waves. By trapping particles near the shock front, this turbulence facilitates their energy gain through repeated crossings between the upstream and downstream regions, a process known as Fermi acceleration and believed to be the origin of cosmic rays.
The turbulence surrounding supernova remnants is itself likely generated by the cosmic rays, via plasma instabilities, as they stream ahead of the shock. In the specific case of a shock wave propagating parallel to the ambient magnetic field, the dominant instability is thought to be the non-resonant streaming instability, or Bell's instability, which acts to amplify the preexisting turbulence.
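The linear theory behind this instability can be made concrete with a back-of-the-envelope estimate. The sketch below evaluates the standard maximum growth rate of Bell's instability, gamma_max = k_max * v_A / 2 with k_max = 4*pi*j_cr / (c*B0) in Gaussian units; the cosmic-ray current and ambient-medium parameters are illustrative assumptions, not values from this project.

```python
import numpy as np

def bell_peak_growth(B0, n_i, j_cr, m_i=1.6726e-24):
    """Peak growth rate of Bell's non-resonant instability (Gaussian/cgs units).

    Linear-theory result: k_max = 4*pi*j_cr/(c*B0), gamma_max = 0.5*k_max*v_A,
    with v_A the Alfven speed of the background plasma.
    """
    c = 2.9979e10                                  # speed of light [cm/s]
    v_A = B0 / np.sqrt(4.0 * np.pi * n_i * m_i)    # Alfven speed [cm/s]
    k_max = 4.0 * np.pi * j_cr / (c * B0)          # fastest-growing wavenumber [1/cm]
    return 0.5 * k_max * v_A, k_max

# Illustrative shock-precursor numbers (assumed): B0 = 3 uG, n_i = 1 cm^-3,
# cosmic-ray current j_cr = n_cr * e * v_sh with n_cr = 1e-9 cm^-3, v_sh = 1e9 cm/s.
j_cr = 1e-9 * 4.8032e-10 * 1e9
gamma_max, k_max = bell_peak_growth(B0=3e-6, n_i=1.0, j_cr=j_cr)
```

For these assumed parameters the growth time 1/gamma_max is of order a year, short compared with the age of a young supernova remnant, which is why this instability is a strong candidate for amplifying the upstream turbulence.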
The objective of this PhD is to build a comprehensive analytical model of this instability within a turbulent plasma, and to validate its predictions against advanced numerical simulations.

Description of collective phenomena in atomic nuclei beyond Time-Dependent Density Functional Theory

Context:
Predicting the organization and dynamics of neutrons and protons within atomic nuclei is a significant
scientific challenge, crucial for designing future nuclear technologies and addressing fundamental questions
such as the origin of heavy atoms in our universe. In this context, CEA, DAM, DIF develops theoretical
approaches to simulate the dynamics of the elementary constituents of atomic nuclei. The equations of
motion, derived within the framework of quantum mechanics, are solved on our supercomputers. The 2010s
saw the rise of the time-dependent density functional theory (TDDFT) approach for tackling this problem.
While TDDFT has provided groundbreaking insights into phenomena such as giant resonances observed in
atomic nuclei and nuclear fission, this approximation has intrinsic limitations.

Objectives:
This PhD project aims to develop and explore a novel theoretical approach to describe the collective motion
of protons and neutrons within the atomic nucleus. The goal is to generalize the TDDFT framework to
improve the prediction of certain nuclear reaction properties, such as the energy distribution among the
fragments resulting from nuclear fission. Building on initial work in this direction, the PhD candidate will
derive the equations of motion for this new approach and implement them as an optimized C++ library
designed to leverage the computational power of CEA's supercomputers. The final objective will be to assess
how this new framework enhances predictions of phenomena such as the damping of giant resonances in
atomic nuclei and the formation of fragments during nuclear fission.

Microscopic description of fission fragment properties at scission

Fission is one of the most difficult nuclear reactions to describe, reflecting the diversity of dynamic aspects of the N-body problem. During this process, the nucleus explores extreme deformation states leading to the formation of two fragments. While the number of degrees of freedom (DOF) involved is extremely large, the mean-field approximation is a good starting point that drastically reduces the DOF, among which elongation and asymmetry are unavoidable. This reduction introduces discontinuities in the successive generation of states through which the nucleus transits, since continuity in energy does not ensure the continuity of states resulting from a variational principle. Recently, a new method based on constraints associated with wave-function overlaps has been implemented to ensure this continuity up to and beyond scission (the Coulomb valley). This continuity is crucial for describing the dynamics of the process.

The objective of the proposed thesis is to carry out for the first time a two-dimensional implementation of this new approach in order to take into account the whole collectivity generated by elongation and asymmetry DOF. The theoretical and numerical developments will be done within the framework of the time-dependent generator coordinate method. This type of approach contains a first static step, which consists of generating potential energy surfaces (PES) obtained by constrained Hartree-Fock-Bogoliubov calculations, and a second dynamic step, which describes the dynamic propagation of a wave packet on these surfaces by solving the time-dependent Schrödinger equation. It is from this second step that the observables are generally extracted.
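As a toy illustration of the second, dynamic step, the sketch below propagates a two-dimensional wave packet on a potential energy surface by solving the time-dependent Schrödinger equation with a split-operator (Strang) scheme. The harmonic surface, grid, and units (hbar = mass = 1) are simplifying assumptions; an actual TDGCM calculation propagates the wave packet with a collective Hamiltonian, including coordinate-dependent inertia, built from the constrained HFB states.

```python
import numpy as np

# Toy 2D collective space (q1 ~ elongation, q2 ~ asymmetry); hbar = mass = 1.
n = 64
x = np.linspace(-8.0, 8.0, n, endpoint=False)
dx = x[1] - x[0]
Q1, Q2 = np.meshgrid(x, x, indexing="ij")
V = 0.5 * (0.2 * Q1**2 + 0.5 * Q2**2)        # toy harmonic PES (assumption)

k = 2.0 * np.pi * np.fft.fftfreq(n, dx)
K1, K2 = np.meshgrid(k, k, indexing="ij")
T = 0.5 * (K1**2 + K2**2)                    # kinetic operator in Fourier space

# Gaussian wave packet displaced along the "elongation" coordinate
psi = np.exp(-((Q1 + 3.0)**2 + Q2**2) / 2.0).astype(complex)
psi /= np.sqrt((np.abs(psi)**2).sum() * dx * dx)

dt = 0.01
expV = np.exp(-0.5j * dt * V)                # half-step potential propagator
expT = np.exp(-1.0j * dt * T)                # full-step kinetic propagator
for _ in range(500):                         # Strang-split time propagation
    psi = expV * np.fft.ifft2(expT * np.fft.fft2(expV * psi))

norm = (np.abs(psi)**2).sum() * dx * dx      # unitary evolution: norm stays ~1
```

The observables mentioned in the text (yields, energy balance, fragment deformation) are then extracted from the flux of the propagated wave packet through the scission frontier, a step not sketched here.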

As part of this thesis, the PhD student will:
- as a first step, construct continuous two-dimensional PESs for the adiabatic and excited states. This will involve the three algorithms Link, Drop, and Deflation;
- secondly, extract the observables accessible with this type of approach: yields, the energy balance at scission, fragment deformation, and the average number of emitted neutrons. In particular, we want to study the impact of intrinsic excitations on the fission observables, which manifest essentially during the descent from the saddle point to scission;
- finally, compare these results with experimental data for actinides and pre-actinides of interest. In particular, the recent very precise measurements obtained by the SOFIA experiments, for moderately to very exotic nuclei, should help to test the precision and predictivity of our approaches and guide future developments of N-body approaches and of the nuclear interaction in fission.

Point Spread Function Modelling for Space Telescopes with a Differentiable Optical Model

Context

Weak gravitational lensing [1] is a powerful probe of the Large Scale Structure of our Universe. Cosmologists use weak lensing to study the nature of dark matter and its spatial distribution. Weak lensing missions require highly accurate shape measurements of galaxy images. The instrumental response of the telescope, called the point spread function (PSF), produces a deformation of the observed images. This deformation can be mistaken for the effects of weak lensing in the galaxy images, making the PSF one of the primary sources of systematic error in weak lensing science. Therefore, estimating a reliable and accurate PSF model is crucial for the success of any weak lensing mission [2]. The PSF field can be interpreted as a convolutional kernel, varying spatially, spectrally, and temporally, that affects each of our observations of interest. The PSF model needs to be able to cope with each of these variations. We use specific stars, considered point sources, in the field of view to constrain our PSF model. These stars, which are unresolved objects, provide us with degraded samples of the PSF field. The observations go through different degradations depending on the properties of the telescope, including undersampling, integration over the instrument passband, and additive noise. We build the PSF model from these degraded observations and then use it to infer the PSF at the positions of galaxies. This procedure constitutes the ill-posed inverse problem of PSF modelling. See [3] for a recent review of PSF modelling.
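The degradations listed above can be written as a simple forward model: a monochromatic PSF is integrated against the star's SED over the passband, undersampled onto detector pixels, and corrupted with noise. The sketch below illustrates this with a Gaussian stand-in for the monochromatic PSF and a hypothetical three-bin SED; real PSFs are diffraction patterns and real passbands are finely sampled, so this is only a schematic of the inverse problem's forward operator.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_psf(size, fwhm):
    """Gaussian stand-in for a monochromatic PSF (real PSFs are diffraction patterns)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2
    sigma = fwhm / 2.355
    p = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return p / p.sum()

def observe_star(sed_weights, fwhms, size=32, factor=2, noise_sigma=1e-3):
    """Forward model: passband integration, undersampling, additive noise."""
    # 1) integrate over the passband, weighting each wavelength bin by the SED
    fine = sum(w * gaussian_psf(size, f) for w, f in zip(sed_weights, fwhms))
    # 2) undersampling: sum fine pixels into coarse detector pixels
    down = fine.reshape(size // factor, factor, size // factor, factor).sum(axis=(1, 3))
    # 3) additive readout noise
    return down + noise_sigma * rng.normal(size=down.shape)

# Hypothetical 3-bin SED; the FWHM grows with wavelength (diffraction ~ lambda/D).
obs = observe_star(sed_weights=[0.2, 0.5, 0.3], fwhms=[3.0, 3.5, 4.0])
```

PSF modelling inverts this map: from many such degraded, noisy star stamps, recover the underlying PSF field at any position and wavelength.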

The recently launched Euclid survey represents one of the most complex challenges for PSF modelling. Because of the very broad passband of Euclid’s visible imager (VIS) ranging from 550nm to 900nm, PSF models need to capture not only the PSF field spatial variations but also its chromatic variations. Each star observation is integrated with the object’s spectral energy distribution (SED) over the whole VIS passband. As the observations are undersampled, a super-resolution step is also required. A recent model coined WaveDiff [4] was proposed to tackle the PSF modelling problem for Euclid and is based on a differentiable optical model. WaveDiff achieved state-of-the-art performance and is currently being tested with recent observations from the Euclid survey.

The James Webb Space Telescope (JWST) was recently launched and is producing outstanding observations. The COSMOS-Web collaboration [5] is a wide-field JWST treasury program that maps a contiguous 0.6 deg² field. The COSMOS-Web observations are available and provide a unique opportunity to test and develop a precise PSF model for JWST. In this context, several science cases, on top of weak gravitational lensing studies, can vastly profit from a precise PSF model. Examples include strong gravitational lensing [6], where the PSF plays a crucial role in reconstruction, and exoplanet imaging [7], where the PSF speckles can mimic the appearance of exoplanets; subtracting an accurate and precise PSF model is therefore essential to improve the imaging and detection of exoplanets.

PhD project

The candidate will aim to develop more accurate and performant PSF models for space-based telescopes by exploiting a differentiable optical framework, focusing the effort on Euclid and JWST.

The WaveDiff model is based on the wavefront space and does not consider pixel-based or detector-level effects. These pixel errors cannot be modelled accurately in the wavefront, as they arise directly on the detectors and are unrelated to the telescope's optical aberrations. Therefore, as a first direction, we will extend the PSF modelling approach to take detector-level effects into account by combining a parametric and a data-driven (learned) approach. We will exploit the automatic differentiation capabilities of the machine learning frameworks (e.g. TensorFlow, PyTorch, JAX) underlying the WaveDiff PSF model to accomplish this objective.
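The role of differentiability can be illustrated with a one-parameter toy problem: fitting a pixel-level PSF stamp by gradient descent on a reconstruction loss. Here the gradient is written out analytically for a Gaussian stand-in model; in WaveDiff-like models the same gradient flow, through a far more complex optical forward model, is obtained automatically from frameworks such as TensorFlow, PyTorch, or JAX. All values below are illustrative assumptions.

```python
import numpy as np

# Toy gradient-based PSF fitting: recover the width of a Gaussian stamp.
y, x = np.mgrid[:16, :16] - 7.5
r2 = x**2 + y**2

def model(sigma):
    """One-parameter stand-in for a differentiable PSF forward model."""
    return np.exp(-r2 / (2.0 * sigma**2))

true = model(2.0)       # "observed" star stamp (noise-free for simplicity)
sigma = 3.0             # deliberately wrong starting guess
lr = 2.0                # gradient-descent step size
for _ in range(400):
    m = model(sigma)
    resid = m - true
    dm_dsigma = m * r2 / sigma**3            # analytic derivative of the model
    grad = 2.0 * (resid * dm_dsigma).mean()  # gradient of the mean-squared error
    sigma -= lr * grad

# sigma has converged toward the true width, 2.0
```

In an automatic-differentiation framework, the `dm_dsigma` line disappears: the framework differentiates the forward model itself, which is what makes rich parametric plus data-driven models tractable.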

As a second direction, we will consider the joint estimation of the PSF field and the stellar spectral energy distributions (SEDs) by exploiting repeated exposures, or dithers. The goal is to improve and calibrate the original SED estimation by exploiting the PSF modelling information. We will rely on our PSF model and on repeated observations of the same object, which change the star image (as it is imaged at different focal-plane positions) while sharing the same SED.

Another direction will be to extend WaveDiff to more general astronomical observatories, like JWST, with smaller fields of view. We will need to constrain the PSF model with observations from several bands to build a unique PSF model constrained by more information. The objective is to develop the next PSF model for JWST, available for widespread use, which we will validate with the available real data from the COSMOS-Web JWST program.

The following direction will be to extend the performance of WaveDiff by including a continuous field in the form of an implicit neural representation [8], or neural field (e.g. NeRF [9]), to address the spatial variations of the PSF in the wavefront space with a more powerful and flexible model.
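As a minimal illustration of such a coordinate-based representation, the sketch below maps focal-plane position to a spatially varying coefficient using random Fourier features with a linear head fitted by least squares, a simplified stand-in for a trained MLP; the target field, sizes, and frequency scale are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(xy, B):
    """Random Fourier-feature positional encoding of 2D coordinates."""
    proj = 2.0 * np.pi * xy @ B.T
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

B = rng.normal(scale=2.0, size=(64, 2))       # random encoding frequencies
xy = rng.uniform(-1, 1, size=(500, 2))        # toy star positions in the field
target = np.sin(3.0 * xy[:, 0]) * xy[:, 1]    # toy spatially varying coefficient

# Fit a linear head on the encoded coordinates (stand-in for MLP training).
feats = encode(xy, B)
w, *_ = np.linalg.lstsq(feats, target, rcond=None)

pred = encode(xy, B) @ w
err = np.sqrt(np.mean((pred - target)**2))    # small: the field is captured
```

The appeal for PSF modelling is that such a field is continuous in position, queryable at any galaxy location, and trainable end-to-end alongside the differentiable optical model.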

Finally, throughout the PhD, the candidate will collaborate on Euclid’s data-driven PSF modelling effort, which consists of applying WaveDiff to real Euclid data, and the COSMOS-Web collaboration to exploit JWST observations.

References
[1] R. Mandelbaum. “Weak Lensing for Precision Cosmology”. In: Annual Review of Astronomy and Astrophysics 56 (2018), pp. 393–433. doi: 10.1146/annurev-astro-081817-051928. arXiv: 1710.03235.
[2] T. I. Liaudat et al. “Multi-CCD modelling of the point spread function”. In: A&A 646 (2021), A27. doi: 10.1051/0004-6361/202039584.
[3] T. I. Liaudat, J.-L. Starck, and M. Kilbinger. “Point spread function modelling for astronomical telescopes: a review focused on weak gravitational lensing studies”. In: Frontiers in Astronomy and Space Sciences 10 (2023). doi: 10.3389/fspas.2023.1158213.
[4] T. I. Liaudat, J.-L. Starck, M. Kilbinger, and P.-A. Frugier. “Rethinking data-driven point spread function modeling with a differentiable optical model”. In: Inverse Problems 39.3 (Feb. 2023), p. 035008. doi: 10.1088/1361-6420/acb664.
[5] C. M. Casey et al. “COSMOS-Web: An Overview of the JWST Cosmic Origins Survey”. In: The Astrophysical Journal 954.1 (Aug. 2023), p. 31. doi: 10.3847/1538-4357/acc2bc.
[6] A. Acebron et al. “The Next Step in Galaxy Cluster Strong Lensing: Modeling the Surface Brightness of Multiply Imaged Sources”. In: ApJ 976.1, 110 (Nov. 2024), p. 110. doi: 10.3847/1538-4357/ad8343. arXiv: 2410.01883 [astro-ph.GA].
[7] B. Y. Feng et al. “Exoplanet Imaging via Differentiable Rendering”. In: IEEE Transactions on Computational Imaging 11 (2025), pp. 36–51. doi: 10.1109/TCI.2025.3525971.
[8] Y. Xie et al. “Neural Fields in Visual Computing and Beyond”. In: arXiv e-prints (Nov. 2021). doi: 10.48550/arXiv.2111.11426. arXiv: 2111.11426 [cs.CV].
[9] B. Mildenhall et al. “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis”. In: arXiv e-prints (Mar. 2020). doi: 10.48550/arXiv.2003.08934. arXiv: 2003.08934 [cs.CV].

Validation of new APOLLO3 neutron transport calculation models for Light Water Reactors using multigroup Monte Carlo simulations combined with a perturbative approach

For the past twelve years, CEA has been developing a deterministic multi-purpose neutron transport code, APOLLO3, which is starting to be used for reactor studies. A classical two-step APOLLO3 calculation scheme is based on a first stage of two-dimensional infinite-lattice fine transport calculations, generating multi-parameter cross-section libraries used in the second stage of 3D core calculations. In the case of a large power reactor, the core calculation requires approximations whose accuracy can differ depending on the type of application.
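The step linking the two stages is the flux-weighted condensation of fine-group cross sections onto the coarser energy mesh used in the core calculation, chosen so that reaction rates are preserved. The sketch below shows this standard condensation rule on a 4-group toy example; the numbers are illustrative, not APOLLO3 data.

```python
import numpy as np

def collapse(sigma_fine, flux_fine, coarse_bounds):
    """Flux-weighted condensation: sigma_G = sum_g sigma_g*phi_g / sum_g phi_g
    over each coarse group G, which preserves the reaction rate sigma*phi."""
    out = []
    for lo, hi in zip(coarse_bounds[:-1], coarse_bounds[1:]):
        phi = flux_fine[lo:hi]
        out.append(np.dot(sigma_fine[lo:hi], phi) / phi.sum())
    return np.array(out)

sigma = np.array([1.0, 2.0, 10.0, 50.0])   # fine-group cross sections [barn]
phi   = np.array([4.0, 1.0, 0.5, 0.1])     # fine-group flux (arbitrary units)
two_group = collapse(sigma, phi, [0, 2, 4])
# Reaction rate is preserved: sum(sigma*phi) equals sum(two_group * coarse flux).
```

In practice the weighting flux comes from the lattice transport solution itself (e.g. on the 383-group fine mesh mentioned below), which is why the quality of the lattice scheme directly conditions the accuracy of the core calculation.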

The reference calculation schemes of the SHEM-MOC type and the industrial schemes of the REL2005 type, still in use at the lattice stage by CEA and its industrial partners, EDF and Framatome, were developed in the mid-2000s, based on the methods available in the APOLLO2.8 code. Since then, new methods have been implemented in the APOLLO3 code and individually verified and validated, demonstrating their ability to improve the quality of results at the lattice stage. These include new self-shielding methods (subgroup and Tone methods), the use of surface line sources in flux calculations with the method of characteristics, flux reconstruction for burnup calculations, and a new 383-group fine energy mesh.

The aim of this thesis is to define and validate two new lattice calculation schemes for LWR applications, to be used in future calculation tools at CEA and its partners. The goal is to integrate all or part of the new calculation methods, while aiming for reasonable calculation times for the reference scheme and times compatible with fast-running routine use for the industrial scheme. The calculation schemes will be validated in 2D on geometries taken from the VERA benchmark, using an innovative approach that combines continuous-energy or multigroup Monte Carlo calculations with a perturbation analysis.

Designing a fast reactor burnup credit validation experiment in the JHR reactor

The primary mission of the Jules Horowitz experimental nuclear Reactor (JHR) is to meet the irradiation needs of materials and fuels for the current nuclear industry and future generations. The reactor is expected to start operating around 2032. The design of the first wave of experimental devices for the JHR already includes specifications for GEN2 and GEN3 industrial constraints. On the other hand, the field of experiments essential to GEN4 fast breeder reactors (FBRs) remains quite open in the longer term, while no fast-spectrum irradiation facility is currently available.
The objective of this thesis is to study the feasibility of integral experiments in the JHR or another light water reactor, for validation of the reactivity loss with innovative FBR fuels.

In the first part of this thesis, fission products (FPs) that contribute to the loss of reactivity in a typical FBR will be identified and ranked by importance. The second part concerns the activation measurement and evaluation of the capture cross sections of stable FPs in a fast spectrum. It involves the design, specification, implementation and realization of a “stable” FBR-FP target in the ILL reactor or in the CABRI reactor fuel recovery station (potentially with thermal neutron shields). The third and final part is the design of an experiment in the JHR to generate and characterize FBR FPs. This experiment should be sufficiently representative of fuel irradiation conditions in an FBR. The goal is to access the FP inventory by underwater spectrometry in the JHR and integral reactivity weighing before/after irradiation in CABRI or another available facility.

The thesis will be carried out in a team experienced in the physics and thermal-hydraulics characterization of the JHR. The candidate will be advised by several experts based in the department, and will have the opportunity to present his/her results to the nuclear industry partners (CEA, EDF, Framatome, Orano, Technicatome, etc.).

From Combustion to Astrophysics: Exascale Simulations of Fluid/Particle Flows

This thesis focuses on the development of advanced numerical methods to simulate fluid-particle interactions in complex environments. These methods, initially used in industrial applications such as combustion and multiphase flows, will be enhanced for integration into simulation codes for exascale supercomputers and adapted to meet the needs of astrophysics. The objective is to enable the study of astrophysical phenomena such as the dynamics of dust in protoplanetary disks and the structuring of dust in protostars and the interstellar medium. The expected outcomes include a better understanding of planetary formation mechanisms and disk structuring, as well as advancements in numerical methods that will benefit both industrial and astrophysical sciences.
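A minimal example of the fluid/particle coupling at the heart of these methods is a one-way-coupled particle relaxing toward a prescribed fluid velocity under Stokes drag, integrated with a semi-implicit update that remains stable even for small particle response times. The fluid field, parameters, and units below are illustrative assumptions, not from the thesis description.

```python
import numpy as np

tau = 0.1                       # particle response (Stokes) time [s] (assumed)
dt = 0.01                       # time step [s]

def u_fluid(x, t):
    """Prescribed toy fluid velocity field (stand-in for a flow solver)."""
    return np.array([np.sin(x[1]), 0.0])

x = np.zeros(2)                 # particle position
v = np.array([1.0, 0.0])        # initial particle velocity
for step in range(1000):
    t = step * dt
    # Semi-implicit Stokes drag: v_new = (v + (dt/tau)*u) / (1 + dt/tau),
    # unconditionally stable as tau -> 0 (where explicit Euler would blow up).
    v = (v + (dt / tau) * u_fluid(x, t)) / (1.0 + dt / tau)
    x = x + dt * v

# The particle decelerates into the local fluid velocity (here zero along y=0),
# drifting a stopping distance of roughly v0 * tau before coming to rest.
```

Exascale production codes solve the same drag-relaxation problem for billions of particles coupled to a turbulent flow, which is where the numerical-methods work of this thesis comes in.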

First observations of the TeV gamma-ray sky with the NectarCAM camera for the CTA observatory

Very high energy gamma-ray astronomy is a relatively young branch of astronomy (about 30 years old), looking at the sky above 50 GeV. After the success of the H.E.S.S. array in the 2000s, an international observatory, the Cherenkov Telescope Array (CTA), should start operating by 2026. This observatory will include a total of 50 telescopes, distributed over two sites. IRFU is involved in the construction of the NectarCAM, a camera intended to equip the "medium" telescopes (MST) of CTA. The first NectarCAM (of the nine planned) is being integrated at IRFU and will be shipped on site in 2025. Once the camera is installed, the first astronomical observations will take place, allowing the functioning of the camera to be fully validated. The thesis aims to finalize the darkroom tests at IRFU, prepare the installation, and validate the operation of the camera on the CTA site with the first astronomical observations. The student is also expected to participate in H.E.S.S. data analysis on astroparticle topics (search for primordial black holes, constraints on Lorentz invariance using distant AGN).

Towards a multimodal photon irradiation platform: foundations and conceptualization

Photon irradiation techniques exploit the interactions between a beam of high-energy photons and matter to carry out non-destructive measurements. By inducing photonuclear reactions such as photon activation, nuclear resonance fluorescence (NRF) and photofission, these irradiation techniques enable deep probing of matter. Combining these different nuclear measurement techniques within a single irradiation platform would enable precise, quantitative identification of a wide variety of elements, probing the volume of the materials or objects under study. The high-energy photon beam is generally produced by Bremsstrahlung within the conversion target of a linear electron accelerator. An innovative alternative is to exploit the high-energy electrons delivered by a laser-plasma source, converted via Bremsstrahlung or inverse Compton scattering. A platform based on such a source would open up new possibilities, as laser-plasma sources can reach significantly higher energies, enabling access to new advanced imaging techniques and applications. The aim of this thesis is to establish the foundations of, and conceptualize, a multimodal photon irradiation platform. Such a device would be based on a laser-plasma source and would combine the photon activation, nuclear resonance fluorescence (NRF) and photofission techniques. By pushing back the limits of non-destructive nuclear measurements, this platform would offer innovative solutions to major challenges in strategic sectors such as security and border control, radioactive waste package management, and the recycling industry.

Artificial intelligence to simulate big data and search for the Higgs boson decay to a pair of muons with the ATLAS experiment at the Large Hadron Collider

There is growing interest in new artificial intelligence techniques to manage the massive volume of data collected by particle physics experiments, particularly at the LHC collider. This thesis proposes to study these new techniques for simulating the background to the rare decay of the Higgs boson into two muons, as well as to implement a new artificial intelligence method for simulating the resolution of the muon spectrometer, which is crucial for this analysis.
