New experimental constraints on the weak interaction coupling constants by coincidence measurements of complex decay schemes

Forbidden non-unique beta transitions constitute about one third of all known beta transitions, yet accurate experimental knowledge of them remains an important and very difficult subject, and only a few reliable studies exist in the literature. Indeed, the continuous energy spectrum of these transitions is difficult to measure precisely for several reasons that compound one another: high diffusivity of electrons in matter and non-linearity of the detection system, unavailability of some radionuclides and presence of impurities, long half-lives and complex decay schemes, etc. Accurate theoretical predictions are equally difficult because of the need to couple different models for the atomic, nuclear and weak-interaction parts within the same fully relativistic formalism. However, improving our knowledge of forbidden non-unique beta transitions is essential in radioactivity metrology for defining the SI unit of activity, the becquerel, in the case of pure beta emitters. This can have a strong impact in nuclear medicine, in the nuclear industry, and in fundamental-physics studies such as dark matter detection and neutrino physics.
Our recent study, both theoretical and experimental, of the second forbidden non-unique transition in 99Tc decay has highlighted that forbidden non-unique transitions can be particularly sensitive to the effective values of the weak interaction coupling constants. The latter act as multiplicative factors of the nuclear matrix elements. The use of effective values compensates for the approximations made in the nuclear structure models, such as simplified correlations between the nucleons in the valence space or the absence of core excitations. However, they can only be adjusted by comparison with a high-precision experimental spectrum. The predictive power of theoretical calculations, even the most precise currently available, is thus strongly called into question. While it has already been demonstrated that universal values cannot be fixed, effective values for each type of transition, or for a specific nuclear model, remain possible. The aim of this thesis is therefore to establish new experimental constraints on the weak interaction coupling constants by precisely measuring the energy spectra of beta transitions. Ultimately, it will become possible to establish robust average effective values of these coupling constants and thereby give theoretical calculations of beta decay real predictive power.
Most of the transitions of interest for constraining the coupling constants have energies greater than 1 MeV, occur in complex decay schemes and are associated with the emission of multiple gamma photons. In this situation, the best strategy is beta-gamma coincidence detection. The usual detection techniques of nuclear physics are appropriate, but they must be extremely well implemented and controlled. The doctoral student will build on the results obtained in two previous theses. To minimize self-absorption of the electrons in the source, they will have to adapt a technique for preparing ultra-thin radioactive sources, developed at LNHB, to the high activities that will be required. They will also have to implement a new apparatus, in a dedicated vacuum chamber, combining two silicon detectors and two gamma detectors operated in coincidence. Several studies, both mechanical and by Monte Carlo simulation, will be necessary to optimize the geometric configuration with respect to the different constraints. The optimization of the electronics, acquisition, signal processing, data analysis, spectral deconvolution and the development of a complete and robust uncertainty budget will all be topics covered. These instrumental developments will make it possible to measure with great precision the spectra from 36Cl, 59Fe, 87Rb, 141Ce, or 170Tm decays. This very comprehensive subject will allow the doctoral student to acquire instrumental and analytical skills that will open up many career opportunities. The candidate should have a good knowledge of nuclear instrumentation, programming and Monte Carlo simulations, as well as a reasonable knowledge of radioactive decay.
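As a purely illustrative sketch of the off-line beta-gamma coincidence step (hypothetical, already calibrated timestamp lists; the variable names and the 1 microsecond window below are assumptions, not project specifications):

import numpy as np

def coincidences(t_beta, t_gamma, window_ns=1000.0):
    """Return (i_beta, i_gamma) index pairs whose timestamps differ by less than window_ns.

    t_beta, t_gamma: sorted 1D arrays of event timestamps in nanoseconds.
    """
    lo = np.searchsorted(t_gamma, t_beta - window_ns, side="left")
    hi = np.searchsorted(t_gamma, t_beta + window_ns, side="right")
    return [(i, j) for i, (a, b) in enumerate(zip(lo, hi)) for j in range(a, b)]

# Toy usage with random timestamps; a delayed window (same code, shifted t_gamma)
# would give an estimate of the accidental-coincidence background.
rng = np.random.default_rng(0)
t_b = np.sort(rng.uniform(0.0, 1e9, 20000))
t_g = np.sort(rng.uniform(0.0, 1e9, 15000))
print(len(coincidences(t_b, t_g)), "coincidence candidates")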

Quantum simulation of atomic nuclei

Atomic nuclei constitute strongly correlated quantum many-body systems governed by the strong interaction of QCD. The nuclear shell model, which diagonalizes the Hamiltonian in a basis whose dimension grows exponentially with the number of nucleons, represents a well-established approach for describing their structure. However, this combinatorial explosion confines classical high-performance computing to a restricted fraction of the nuclear chart.
Quantum computers offer a promising alternative through their natural ability to manipulate exponentially large Hilbert spaces. Although we remain in the NISQ era with its noisy qubits, quantum computers could revolutionize shell-model applications.
This thesis aims to develop a comprehensive approach for quantum simulation of complex nuclear systems. A crucial first milestone involves creating a software interface that integrates nuclear structure data (nucleonic orbitals, nuclear interactions) with quantum computing platforms, thereby facilitating future applications in nuclear physics.
The project explores two classes of algorithms: variational and non-variational approaches. For the former, the expressivity of quantum ansätze will be systematically analyzed, particularly in the context of symmetry breaking and restoration. Variational Quantum Eigensolvers (VQE), especially promising for Hamiltonian-based systems, will be implemented with emphasis on the ADAPT-VQE technique tailored to the nuclear many-body problem.
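To fix ideas, here is a deliberately minimal, classically emulated sketch of the variational principle behind VQE (a toy two-level Hamiltonian with made-up matrix elements and a single-parameter ansatz; a real application would evaluate the energy on a quantum device over a nuclear valence space):

import numpy as np
from scipy.optimize import minimize

# Toy Hamiltonian (hypothetical numbers), playing the role of a shell-model matrix.
H = np.array([[-1.0, 0.5],
              [ 0.5,  0.3]])

def ansatz(theta):
    """Single-parameter trial state |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def energy(theta):
    """Variational energy <psi(theta)|H|psi(theta)>, the quantity VQE minimizes."""
    psi = ansatz(theta[0])
    return float(psi @ H @ psi)

result = minimize(energy, x0=[0.1], method="COBYLA")
print("variational estimate:", result.fun)
print("exact ground state  :", np.linalg.eigvalsh(H)[0])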
A major challenge lies in accessing excited states, which are as crucial as the ground state in nuclear structure, while VQE primarily focuses on the latter. The thesis will therefore develop quantum algorithms dedicated to excited states, testing various methods: Hilbert space expansion (Quantum Krylov), response function techniques (quantum equations of motion), and phase estimation-based methods. The ultimate objective is to identify the most suitable approaches in terms of scalability and noise resilience for applications with realistic nuclear Hamiltonians.
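As an illustration of the Hilbert-space-expansion route, the sketch below classically emulates a quantum Krylov diagonalization on a small, made-up Hermitian matrix: real-time evolved states span a subspace in which a generalized eigenvalue problem is solved (on hardware, the overlap and Hamiltonian matrix elements would instead be estimated from measurements):

import numpy as np
from scipy.linalg import expm, eigh

rng = np.random.default_rng(1)
dim, n_krylov, dt = 6, 4, 0.3

# Toy Hermitian "Hamiltonian" and reference state.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2.0
psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0

# Krylov basis from real-time evolution: |phi_k> = exp(-i H k dt) |psi0>.
B = np.array([expm(-1j * H * k * dt) @ psi0 for k in range(n_krylov)]).T

# Projected problem  Htilde c = E S c  (S may need regularization in practice).
S = B.conj().T @ B
Htilde = B.conj().T @ H @ B
E = eigh(Htilde, S, eigvals_only=True)

print("lowest Krylov estimate:", E.min())
print("exact ground state    :", np.linalg.eigvalsh(H)[0])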

Runaway electron impact asymmetry in tokamaks: characterization and modelling for ITER

Disruptions are sudden interruptions of plasma discharges in tokamaks. They are due to instabilities leading to the loss of the plasma's thermal and magnetic energy over periods of the order of a few tens of milliseconds. Disruptions can generate beams of relativistic, so-called runaway electrons carrying a large part of the initial plasma energy and likely to damage plasma-facing components. The proposed PhD focuses on the characterization and modelling of runaway electron impact asymmetries on the wall. Runaway electrons will likely be generated during the lifetime of future machines, even though preventing their generation or suppressing them is highly desirable. Unfortunately, the geometry and physical processes at work during impacts are still poorly understood. In particular, asymmetries in the toroidal direction have been observed on many tokamaks, concentrating the heat flux with patterns that are reproducible over time and despite varied experimental conditions. Few controlled experiments have been performed to study these phenomena. It is therefore proposed to start by building a statistical review of recent experimental impact data from the JET and WEST tokamaks: deposition surface, peaking factors, heat flux, ejecta characterization. Simple heat propagation codes will be used. The characteristics of the runaway electrons just before impact should also be part of the study, using indirect measurements (hard X-ray spectra, post-mortem measurements) or interpretative codes. In a second step, runaway beam impact simulations will be carried out to test the two main hypotheses proposed to explain the asymmetries: misalignment of the wall elements, or an intrinsically three-dimensional structure of the beam, potentially created by error fields. The 3D MHD code JOREK will be used, in particular for the second hypothesis. The goal will be to reproduce the experimental observations. Finally, once the correct hypothesis has been validated and the model developed, the simulations will be extended to ITER, where the thermal loads and asymmetries of the beam impact will be calculated from possible values of misalignments and/or error fields.

The nonresonant streaming instability in turbulent plasmas

The magnetic turbulence prevalent in many astrophysical systems, such as the solar wind and supernova remnants, plays a crucial role in accelerating high-energy particles, particularly within collisionless shock waves. By trapping particles near the shock front, this turbulence facilitates their energy gain through repeated crossings between the upstream and downstream regions – a process known as Fermi acceleration, believed to be the origin of cosmic rays.
The turbulence surrounding supernova remnants is likely generated by the cosmic rays themselves, via plasma instabilities, as they stream ahead of the shock. In the specific case of a shock wave propagating parallel to the ambient magnetic field, the dominant instability is thought to be the non-resonant streaming instability, or Bell's instability, which acts to amplify the pre-existing turbulence.
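For reference, under the standard simplifying assumptions (Gaussian units, a cold MHD background, and a rigid cosmic-ray current density j_cr), the fastest-growing non-resonant mode is usually quoted as

    \gamma_{max} \simeq k_{max} v_A , \qquad k_{max} = \frac{2\pi\, j_{cr}}{c\, B_0} ,

where v_A is the Alfvén speed and B_0 the ambient magnetic field. One aim of the analytical model developed in this thesis is to establish how a pre-existing turbulent spectrum modifies this idealized estimate.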
The objective of this PhD is to build a comprehensive analytical model of this instability within a turbulent plasma, and to validate its predictions against advanced numerical simulations.

Description of collective phenomena in atomic nuclei beyond Time-Dependent Density Functional Theory

Context:
Predicting the organization and dynamics of neutrons and protons within atomic nuclei is a significant
scientific challenge, crucial for designing future nuclear technologies and addressing fundamental questions
such as the origin of heavy atoms in our universe. In this context, CEA, DAM, DIF develops theoretical
approaches to simulate the dynamics of the elementary constituents of atomic nuclei. The equations of
motion, derived within the framework of quantum mechanics, are solved on our supercomputers. The 2010s
saw the rise of the time-dependent density functional theory (TDDFT) approach for tackling this problem.
While TDDFT has provided groundbreaking insights into phenomena such as giant resonances observed in
atomic nuclei and nuclear fission, this approximation has intrinsic limitations.
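Schematically, TDDFT propagates the one-body density matrix \rho(t) with a mean-field Hamiltonian h[\rho] that depends on the density itself,

    i\hbar \frac{\partial \rho}{\partial t} = \big[\, h[\rho], \rho \,\big] ,

so that correlations beyond this self-consistent one-body picture are by construction absent; capturing part of them is precisely the purpose of the extension developed in this project.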

Objectives:
This PhD project aims to develop and explore a novel theoretical approach to describe the collective motion
of protons and neutrons within the atomic nucleus. The goal is to generalize the TDDFT framework to
improve the prediction of certain nuclear reaction properties, such as the energy distribution among the
fragments resulting from nuclear fission. Building on initial work in this direction, the PhD candidate will
derive the equations of motion for this new approach and implement them as an optimized C++ library
designed to leverage the computational power of CEA's supercomputers. The final objective will be to assess
how this new framework enhances predictions of phenomena such as the damping of giant resonances in
atomic nuclei and the formation of fragments during nuclear fission.

Microscopic description of fission fragment properties at scission

Fission is one of the most difficult nuclear reactions to describe, reflecting the diversity of dynamical aspects of the N-body problem. During this process, the nucleus explores extreme deformation states leading to the formation of two fragments. While the number of degrees of freedom (DOF) involved is extremely large, the mean-field approximation is a good starting point that drastically reduces this number, elongation and asymmetry being the unavoidable DOF. This reduction introduces discontinuities in the successive generation of states through which the nucleus transits, since continuity in energy does not ensure the continuity of states resulting from a variational principle. Recently, a new method based on constraints associated with wave-function overlaps has been implemented to ensure this continuity up to and beyond scission (the Coulomb valley). This continuity is crucial for describing the dynamics of the process.

The objective of the proposed thesis is to carry out, for the first time, a two-dimensional implementation of this new approach in order to take into account the full collectivity generated by the elongation and asymmetry DOF. The theoretical and numerical developments will be done within the framework of the time-dependent generator coordinate method. This type of approach comprises a first, static step, which consists of generating potential energy surfaces (PES) obtained by constrained Hartree-Fock-Bogoliubov calculations, and a second, dynamic step, which describes the propagation of a wave packet on these surfaces by solving the time-dependent Schrödinger equation. It is from this second step that the observables are generally extracted.
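For reference, in the commonly used Gaussian overlap approximation this second step reduces to a local, collective Schrödinger equation of the form

    i\hbar \frac{\partial g(q,t)}{\partial t} = \left[ -\frac{\hbar^2}{2} \sum_{ij} \frac{\partial}{\partial q_i} B_{ij}(q) \frac{\partial}{\partial q_j} + V(q) \right] g(q,t) ,

where q = (elongation, asymmetry) are the collective coordinates, V(q) is the potential energy surface from the constrained HFB calculations, and B_{ij}(q) derives from the collective inertia tensor.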

As part of this thesis, the PhD student will:
- as a first step, construct continuous two-dimensional PESs for the adiabatic and excited states. This will involve the three algorithms Link, Drop and Deflation;
- secondly, extract the observables that are accessible with this type of approach: yields, the energy balance at scission, fragment deformations and the average number of emitted neutrons. In particular, we want to study the impact of intrinsic excitations on the fission observables, which manifests itself essentially during the descent from the saddle point to scission.
Finally, these results will be compared with experimental data for actinides and pre-actinides of interest. In particular, the recent, very precise measurements obtained by the SOFIA experiments for moderately to very exotic nuclei should help to test the precision and predictive power of our approaches, and guide future developments of N-body approaches and of the nuclear interaction used in fission.

Point Spread Function Modelling for Space Telescopes with a Differentiable Optical Model

Context

Weak gravitational lensing [1] is a powerful probe of the Large Scale Structure of our Universe. Cosmologists use weak lensing to study the nature of dark matter and its spatial distribution. Weak lensing missions require highly accurate shape measurements of galaxy images. The instrumental response of the telescope, called the point spread function (PSF), produces a deformation of the observed images. This deformation can be mistaken for the effects of weak lensing in the galaxy images, making it one of the primary sources of systematic error in weak lensing science. Therefore, estimating a reliable and accurate PSF model is crucial for the success of any weak lensing mission [2]. The PSF field can be interpreted as a convolutional kernel, varying spatially, spectrally, and temporally, that affects each of our observations of interest. The PSF model needs to be able to cope with each of these variations. We use specific stars, considered point sources in the field of view, to constrain our PSF model. These stars, which are unresolved objects, provide us with degraded samples of the PSF field. The observations go through different degradations depending on the properties of the telescope. These degradations include undersampling, integration over the instrument passband, and additive noise. We finally build the PSF model using these degraded observations and then use the model to infer the PSF at the position of galaxies. This procedure constitutes the ill-posed inverse problem of PSF modelling. See [3] for a recent review on PSF modelling.
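Schematically (with notation introduced here for illustration, not tied to a specific pipeline), each star observation can be written with a forward model of the form

    I_{obs}(x) = F_d\!\left[ \int_{passband} PSF(x, \lambda; \bar{x})\, SED(\lambda)\, d\lambda \right] + n(x) ,

where \bar{x} is the star's focal-plane position, F_d collects the detector degradations (in particular the undersampling), and n is additive noise. PSF modelling is then the ill-posed inverse problem of recovering PSF(x, \lambda; \bar{x}) at any field position and wavelength from many such degraded observations.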

The recently launched Euclid survey represents one of the most complex challenges for PSF modelling. Because of the very broad passband of Euclid’s visible imager (VIS), ranging from 550 nm to 900 nm, PSF models need to capture not only the PSF field spatial variations but also its chromatic variations. Each star observation is integrated with the object’s spectral energy distribution (SED) over the whole VIS passband. As the observations are undersampled, a super-resolution step is also required. A recent model coined WaveDiff [4] was proposed to tackle the PSF modelling problem for Euclid and is based on a differentiable optical model. WaveDiff achieved state-of-the-art performance and is currently being tested with recent observations from the Euclid survey.

The James Webb Space Telescope (JWST) was recently launched and is producing outstanding observations. The COSMOS-Web collaboration [5] is a wide-field JWST treasury program that maps a contiguous 0.6 deg² field. The COSMOS-Web observations are available and provide a unique opportunity to test and develop a precise PSF model for JWST. In this context, several science cases, on top of weak gravitational lensing studies, can vastly profit from a precise PSF model. For example, strong gravitational lensing [6], where the PSF plays a crucial role in reconstruction, and exoplanet imaging [7], where the PSF speckles can mimic the appearance of exoplanets, so that subtracting an accurate and precise PSF model is essential to improve their imaging and detection.

PhD project

The candidate will aim to develop more accurate and better-performing PSF models for space-based telescopes by exploiting a differentiable optical framework, focusing the effort on Euclid and JWST.

The WaveDiff model is based on the wavefront space and does not consider pixel-based or detector-level effects. These pixel errors cannot be modelled accurately in the wavefront, as they arise directly on the detectors and are unrelated to the telescope’s optical aberrations. Therefore, as a first direction, we will extend the PSF modelling approach to account for detector-level effects by combining a parametric and a data-driven (learned) approach. To accomplish this objective, we will exploit the automatic differentiation capabilities of the machine learning frameworks (e.g. TensorFlow, PyTorch, JAX) on which the WaveDiff PSF model is built.
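To make this first direction concrete, here is a minimal, self-contained toy of the "parametric + learned" idea using JAX automatic differentiation (a Gaussian stand-in for the wavefront-side PSF and a 3x3 learned detector kernel, jointly fitted to a synthetic star; this illustrates the principle only and is not the WaveDiff code or its API):

import jax
import jax.numpy as jnp
import optax

def optical_psf(sigma, size=32):
    """Toy parametric, 'wavefront-side' PSF: an isotropic Gaussian of width sigma."""
    r = jnp.arange(size) - size / 2.0
    xx, yy = jnp.meshgrid(r, r)
    psf = jnp.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def detector_effect(kernel, psf):
    """Toy data-driven detector-level correction: a learned, flux-preserving 3x3 kernel."""
    k = jax.nn.softmax(kernel.ravel()).reshape(3, 3)
    return jax.scipy.signal.convolve2d(psf, k, mode="same")

def model(params):
    return detector_effect(params["kernel"], optical_psf(params["sigma"]))

def loss(params, observed):
    return jnp.mean((model(params) - observed) ** 2)

# Synthetic "observation" produced by a hidden truth, then joint gradient-based fit
# of the parametric width and the learned kernel through automatic differentiation.
truth = {"sigma": 2.5, "kernel": jnp.array([[0.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 0.0]])}
observed = model(truth)

params = {"sigma": 4.0, "kernel": jnp.zeros((3, 3))}
optimizer = optax.adam(1e-1)
opt_state = optimizer.init(params)
grad_fn = jax.jit(jax.grad(loss))
for _ in range(300):
    grads = grad_fn(params, observed)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)

print("fitted sigma:", float(params["sigma"]))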

As a second direction, we will consider the joint estimation of the PSF field and the stellar spectral energy distributions (SEDs) by exploiting repeated exposures, or dithers. The goal is to improve and calibrate the original SED estimation by exploiting the PSF modelling information. We will rely on our PSF model and on the fact that repeated observations of the same object change the star image (as it is imaged at different focal-plane positions) but share the same SED.

Another direction will be to extend WaveDiff to more general astronomical observatories, such as JWST, with smaller fields of view. We will need to constrain the PSF model with observations from several bands so as to build a single PSF model constrained by more information. The objective is to develop the next PSF model for JWST, available for widespread use, which we will validate with the real data available from the COSMOS-Web JWST program.

A further direction will be to extend the performance of WaveDiff by including a continuous field in the form of an implicit neural representation [8], or neural field [9], to address the spatial variations of the PSF in the wavefront space with a more powerful and flexible model.

Finally, throughout the PhD, the candidate will collaborate on Euclid’s data-driven PSF modelling effort, which consists of applying WaveDiff to real Euclid data, and with the COSMOS-Web collaboration to exploit JWST observations.

References
[1] R. Mandelbaum. “Weak Lensing for Precision Cosmology”. In: Annual Review of Astronomy and Astrophysics 56 (2018), pp. 393–433. doi: 10.1146/annurev-astro-081817-051928. arXiv: 1710.03235.
[2] T. I. Liaudat et al. “Multi-CCD modelling of the point spread function”. In: A&A 646 (2021), A27. doi: 10.1051/0004-6361/202039584.
[3] T. I. Liaudat, J.-L. Starck, and M. Kilbinger. “Point spread function modelling for astronomical telescopes: a review focused on weak gravitational lensing studies”. In: Frontiers in Astronomy and Space Sciences 10 (2023). doi: 10.3389/fspas.2023.1158213.
[4] T. I. Liaudat, J.-L. Starck, M. Kilbinger, and P.-A. Frugier. “Rethinking data-driven point spread function modeling with a differentiable optical model”. In: Inverse Problems 39.3 (Feb. 2023), p. 035008. doi: 10.1088/1361-6420/acb664.
[5] C. M. Casey et al. “COSMOS-Web: An Overview of the JWST Cosmic Origins Survey”. In: The Astrophysical Journal 954.1 (Aug. 2023), p. 31. doi: 10.3847/1538-4357/acc2bc.
[6] A. Acebron et al. “The Next Step in Galaxy Cluster Strong Lensing: Modeling the Surface Brightness of Multiply Imaged Sources”. In: ApJ 976.1 (Nov. 2024), p. 110. doi: 10.3847/1538-4357/ad8343. arXiv: 2410.01883 [astro-ph.GA].
[7] B. Y. Feng et al. “Exoplanet Imaging via Differentiable Rendering”. In: IEEE Transactions on Computational Imaging 11 (2025), pp. 36–51. doi: 10.1109/TCI.2025.3525971.
[8] Y. Xie et al. “Neural Fields in Visual Computing and Beyond”. In: arXiv e-prints (Nov. 2021), arXiv:2111.11426. doi: 10.48550/arXiv.2111.11426. arXiv: 2111.11426 [cs.CV].
[9] B. Mildenhall et al. “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis”. In: arXiv e-prints (Mar. 2020), arXiv:2003.08934. doi: 10.48550/arXiv.2003.08934. arXiv: 2003.08934 [cs.CV].

Validation of new APOLLO3 neutron transport calculation models for Light Water Reactors using multigroup Monte Carlo simulations combined with a perturbative approach

For the past twelve years, CEA has been developing a deterministic multi-purpose neutron transport code, APOLLO3, which is starting to be used for reactor studies. A classical two-step APOLLO3 calculation scheme is based on a first stage of two-dimensional infinite lattice calculations in fine transport, generating multi-parameter cross-section libraries used in the second stage of 3D core calculations. In the case of a large power reactor, the core calculation requires approximations that can differ in accuracy, depending on the type of application.
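As a reminder of what the first stage produces, the lattice flux \varphi(E) is used to collapse the fine-energy cross sections into the broad-group constants used by the core solver,

    \Sigma_{x,g} = \frac{\int_{g} \Sigma_x(E)\, \varphi(E)\, dE}{\int_{g} \varphi(E)\, dE} ,

tabulated against burnup and the other state parameters, so that the quality of the whole two-step scheme rests directly on the lattice-stage methods.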

The reference calculation schemes of the SHEM-MOC type and the industrial schemes of the REL2005 type, still in use at the lattice stage by CEA and its industrial partners, EDF and Framatome, were developed in the mid-2000s, based on the methods available in the APOLLO2.8 code. Since then, new methods have been implemented in the APOLLO3 code and individually verified and validated, demonstrating their ability to improve the quality of results at the lattice stage. These include new self-shielding methods (subgroups and Tone), the use of surface line sources in flux calculations with the method of characteristics, flux reconstruction for burnup calculations, and a new 383-group fine energy mesh.

The aim of this thesis is to define and validate two new lattice calculation schemes for LWR applications, to be used in future calculation tools at CEA and its partners. The goal is to integrate all or part of the new calculation methods, while aiming for calculation times that remain reasonable for the reference scheme and compatible with fast-running routine use for the industrial scheme. The calculation schemes implemented will be validated in 2D on geometries taken from the VERA benchmark. Validation will be carried out using an innovative approach combining continuous-energy or multigroup Monte Carlo calculations with a perturbation analysis.

Designing a fast reactor burnup credit validation experiment in the JHR reactor

The primary mission of the Jules Horowitz experimental nuclear Reactor (JHR) is to meet the irradiation needs of materials and fuels for the current nuclear industry and future generations. It is expected to start operation around 2032. The design of the first wave of experimental devices for the JHR already includes specifications addressing GEN2 and GEN3 industrial constraints. On the other hand, the field of experiments essential to GEN4 fast breeder reactors remains quite open in the longer term, while no fast-spectrum irradiation facility is currently available.
The objective of this thesis is to study the feasibility of integral experiments in the JHR, or in another light water reactor, for the validation of the reactivity loss of innovative FBR fuels.

In the first part of this thesis, the fission products (FPs) that contribute to the loss of reactivity in a typical FBR will be identified and ranked by importance. The second part concerns the activation measurement and evaluation of the capture cross sections of stable FPs in a fast spectrum. It involves the design, specification, implementation and realization of a “stable” FBR-FP target in the ILL reactor or in the CABRI reactor fuel recovery station (potentially with thermal neutron shields). The third and final part is the design of an experiment in the JHR to generate and characterize FBR FPs. This experiment should be sufficiently representative of fuel irradiation conditions in an FBR. The goal is to access the FP inventory by underwater spectrometry in the JHR and by integral reactivity weighing before/after irradiation in CABRI or another available facility.

The thesis will be carried out in a team experienced in the physics and thermal-hydraulics characterization of the JHR. The candidate will be advised by several experts from the department and will have the opportunity to present his/her results to the nuclear industry partners (CEA, EDF, Framatome, Orano, Technicatome, etc.).

From Combustion to Astrophysics: Exascale Simulations of Fluid/Particle Flows

This thesis focuses on the development of advanced numerical methods to simulate fluid-particle interactions in complex environments. These methods, initially used in industrial applications such as combustion and multiphase flows, will be enhanced for integration into simulation codes for exascale supercomputers and adapted to meet the needs of astrophysics. The objective is to enable the study of astrophysical phenomena such as the dynamics of dust in protoplanetary disks and the structuring of dust in protostars and the interstellar medium. The expected outcomes include a better understanding of planetary formation mechanisms and disk structuring, as well as advancements in numerical methods that will benefit both industrial and astrophysical sciences.
