Building a new effective nuclear interaction model and propagating statistical errors

At the heart of any many-body method used to describe the fundamental properties of an atomic nucleus lies the effective nucleon-nucleon interaction. Such an interaction must be capable of accounting for nuclear medium effects. It is obtained through a specific fitting protocol that takes into account a variety of nuclear observables, such as radii, masses, the centroids of the giant resonances, or the properties of the nuclear equation of state around saturation density.
A well-known model of this effective interaction is the Gogny model: a linear combination of operators weighted by coupling constants, with a radial form factor of Gaussian type [1]. The coupling constants are determined via a fitting protocol that typically uses the properties of spherical nuclei such as 40-48Ca, 56Ni, 120Sn and 208Pb.
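Schematically, keeping only the finite-range central part (the full interaction also contains density-dependent and spin-orbit terms), a Gogny-type interaction can be written as

    V(\mathbf{r}_1,\mathbf{r}_2) = \sum_{j=1}^{2} \left( W_j + B_j P_\sigma - H_j P_\tau - M_j P_\sigma P_\tau \right) e^{-(\mathbf{r}_1-\mathbf{r}_2)^2/\mu_j^2},

where the coupling constants W_j, B_j, H_j, M_j and the ranges \mu_j are among the parameters adjusted by the fit, and P_\sigma, P_\tau are the spin- and isospin-exchange operators.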
The primary goal of this thesis is to develop a consistent fitting protocol for a generic Gogny interaction, giving access to basic statistical information such as the covariance matrix and the uncertainties on the coupling constants, so that a full statistical error propagation can be performed on selected nuclear observables calculated with such an interaction [2].
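As a minimal illustration of the intended error propagation (a Python sketch with made-up numbers, not the actual fitting code), the covariance matrix returned by the fit can be propagated to any observable through its gradient with respect to the parameters:

    import numpy as np

    # Illustrative only: propagate fit uncertainties to an observable O(p).
    # C is the covariance matrix of the coupling constants returned by the fit
    # (e.g. the inverse of the chi^2 Hessian at the minimum); values are made up.
    C = np.array([[4.0e-4, 1.0e-4],
                  [1.0e-4, 9.0e-4]])
    p_best = np.array([1.20, 0.75])  # hypothetical best-fit coupling constants

    def observable(p):
        """Toy observable, stand-in for e.g. a radius or a binding energy."""
        return 3.0 * p[0] ** 2 + 0.5 * p[1]

    # Numerical gradient dO/dp at the best-fit point.
    eps = 1.0e-6
    grad = np.array([(observable(p_best + eps * np.eye(2)[i])
                      - observable(p_best - eps * np.eye(2)[i])) / (2.0 * eps)
                     for i in range(2)])

    # Linear ("sandwich") error propagation: sigma_O^2 = grad^T C grad.
    sigma_O = np.sqrt(grad @ C @ grad)
    print(f"O = {observable(p_best):.3f} +/- {sigma_O:.3f}")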
After analysing the relations between the model parameters and identifying their relative importance for how well observables are reproduced, the PhD candidate will explore the possibility of modifying some terms of the interaction itself, such as the inclusion of a genuine three-body term or of beyond-mean-field effects.
The PhD candidate will work within a nuclear physics group at CEA/IRESNE Cadarache. The work will be done in close collaboration with CEA/DIF. Employment perspectives are in academic research and nuclear R&D labs.

[1] D. Davesne et al. "Infinite matter properties and zero-range limit of non-relativistic finite-range interactions." Annals of Physics 375 (2016): 288-312.
[2] T. Haverinen and M. Kortelainen. "Uncertainty propagation within the UNEDF models." Journal of Physics G: Nuclear and Particle Physics 44.4 (2017): 044008.

Microscopic nuclear structure models to study the de-excitation process in nuclear fission

The FIFRELIN code is being developed at CEA/IRESNE Cadarache in order to provide a detailed description of the fission process and to calculate all relevant fission observables accurately. The code relies heavily on detailed knowledge of the underlying structure of the nuclei involved in the post-fission de-excitation process. When possible, the code draws on nuclear structure databases such as RIPL-3, which provide valuable information on nuclear level schemes, branching ratios and other critical nuclear properties. Unfortunately, not all of these quantities have been measured; nuclear models are therefore used instead.

The development of state-of-the-art nuclear models is the task of the newly-formed nuclear theory group at Cadarache, whose main expertise is the implementation of nuclear many-body solvers based on effective nucleon-nucleon interactions.

The goal of this thesis is to quantify the impact of the E1/M1 and E2/M2 strength functions on fission observables. Currently, these quantities are estimated using simple models such as the generalized Lorentzian. The doctoral student will be tasked with replacing these models by fully microscopic ones, based on an effective nucleon-nucleon interaction, via QRPA-type techniques. A preliminary study shows that the choice between macroscopic (generalized Lorentzian) and microscopic (QRPA) strength functions has a non-negligible impact on fission observables.
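For context, here is a sketch of the kind of phenomenological model to be replaced: the standard Lorentzian E1 photon strength function, of which the generalized Lorentzian is a temperature-dependent refinement (the resonance parameters below are purely illustrative, not evaluated values):

    import numpy as np

    def slo_strength(e_gamma, e0, gamma0, sigma0):
        """Standard Lorentzian E1 photon strength function (RIPL-type form).

        e_gamma : gamma-ray energy (MeV)
        e0, gamma0, sigma0 : giant-resonance centroid (MeV), width (MeV),
                             peak cross section (mb)
        Returns f_E1 in MeV^-3 (8.674e-8 is the usual mb^-1 MeV^-2 constant).
        """
        return (8.674e-8 * sigma0 * gamma0**2 * e_gamma
                / ((e_gamma**2 - e0**2)**2 + (e_gamma * gamma0)**2))

    # Illustrative giant-dipole-resonance parameters.
    e = np.linspace(0.1, 20.0, 200)
    f_e1 = slo_strength(e, e0=15.0, gamma0=5.0, sigma0=300.0)
    print(f_e1[:5])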

Professional perspectives for the student include academic research as well as theoretical and applied nuclear R&D.

Measurement and evaluation of the energy dependence of delayed neutron data from 239Pu

This PhD proposal aims to measure and characterize the delayed neutron emission from the fission of 239Pu. This actinide is involved in various reactor concepts, and the available nuclear data remain insufficient, particularly for fast neutrons. The project has a strong experimental focus, with multiple measurement campaigns at the MONNET electrostatic accelerator at JRC Geel, in which the candidate will actively participate.
The first phase focuses on the intercomparison of neutron flux measurement methods (dosimetry, fission chamber, long-counter detector and recoil-proton scintillator), which will be compared with Monte Carlo simulations of neutron emission from charged-particle interactions (D+T, D+D, p+T). This work will ensure proper neutron flux characterization, a crucial step for the project.
Next, the candidate will replicate the delayed neutron measurements for 238U using an existing target in order to verify the results from a 2023 experimental campaign.
Finally, the candidate will measure the delayed neutron yields and group abundances for 239Pu in a neutron energy range from 1 to 8 MeV. The objective is to produce an energy-dependent evaluation, integrated into an ENDF file, to be tested in reactor calculations (beta-eff, power transients, absorber efficiency calibration, etc.). These measurements will complement a thermal-spectrum study conducted at ILL in 2022, forming a coherent model for 239Pu from 0 to 8 MeV.
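For context, the group abundances refer to the usual few-group representation of delayed-neutron emission, in which the precursor decay is modelled as a sum of exponentials; a minimal sketch with made-up constants (not evaluated 239Pu data):

    import numpy as np

    # Illustrative 6-group delayed-neutron representation (NOT evaluated data):
    # a_i are relative group abundances (summing to 1), lambda_i decay constants (1/s).
    abundances = np.array([0.035, 0.25, 0.20, 0.35, 0.12, 0.045])
    lambdas = np.array([0.0125, 0.030, 0.115, 0.30, 0.85, 2.9])

    def delayed_neutron_rate(t, nu_d=0.0065):
        """Delayed-neutron emission rate per fission after an instantaneous
        fission burst at t = 0, for an illustrative total delayed yield nu_d."""
        return nu_d * np.sum(abundances * lambdas * np.exp(-lambdas * t))

    for t in (0.1, 1.0, 10.0, 100.0):
        print(f"t = {t:6.1f} s  rate = {delayed_neutron_rate(t):.3e} n/s per fission")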
This project will contribute to the OECD/NEA's JEFF-4 nuclear data file, addressing a strong demand from the nuclear industry (highlighted by the IAEA) to improve the precision of multiplicity measurements and delayed neutron kinetic parameters, thus enhancing reactor safety and reducing safety margins.

Exploring high-frequency fast-electron-driven instabilities towards application to WEST

In current tokamaks, the electron distribution is heavily influenced by external heating systems, such as Electron Cyclotron Resonance Heating (ECRH) or Lower Hybrid (LH) heating, which generate a large population of fast electrons. This is also expected in next-generation tokamaks, such as ITER, where a substantial part of the input power is deposited on electrons. A significant population of fast electrons can destabilize high-frequency instabilities, including Alfvén Eigenmodes (AEs), as observed in various tokamaks. However, this phenomenon remains understudied, especially regarding the specific resonant electron population triggering these instabilities and the impact of electron-driven AEs on the multi-scale turbulence dynamics in the complex plasma environment.
The PhD project aims to explore the physics of high-frequency electron-driven AEs in realistic plasma conditions, applying the insights gained to WEST experiments for an in-depth characterization of these instabilities. The candidate will make use of advanced numerical codes, for which expertise is available at the IRFM laboratory, to analyze realistic plasma conditions with fast-electron-driven AEs observed in previous experiments and to grasp the essential physics at play. Code development will also be necessary to capture key aspects of this physics. Once this knowledge is established, predictive modeling for the WEST environment will guide experiments aimed at observing these instabilities.
Based at CEA Cadarache, the student will collaborate with different teams, from the theory and modeling group to the WEST experimental team, gaining diverse expertise in a stimulating environment. Collaborations with EUROfusion task forces will further provide an enriching international experience.

Generative AI for Robust Uncertainty Quantification in Astrophysical Inverse Problems

Context
Inverse problems, i.e. estimating underlying signals from corrupted observations, are ubiquitous in astrophysics, and our ability to solve them accurately is critical to the scientific interpretation of the data. Examples of such problems include inferring the distribution of dark matter in the Universe from gravitational lensing effects [1], or component separation in radio interferometric imaging [2].

Thanks to recent advances in deep learning, and in particular in deep generative modeling techniques (e.g. diffusion models), it has become possible not only to obtain an estimate of the solution of these inverse problems, but also to perform uncertainty quantification by estimating the full Bayesian posterior of the problem, i.e. to access all solutions that are allowed by the data and plausible under prior knowledge.

Our team has in particular been pioneering such Bayesian methods to combine our knowledge of the physics of the problem, in the form of an explicit likelihood term, with data-driven priors implemented as generative models. This physics-constrained approach ensures that solutions remain compatible with the data and prevents “hallucinations” that typically plague most generative AI applications.
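In score-based terms, this combination is simply Bayes' rule applied to the gradients used during sampling:

    \nabla_x \log p(x \mid y) = \nabla_x \log p(y \mid x) + \nabla_x \log p(x),

where the first term is the explicit physical likelihood (data fidelity) and the second is the score provided by the data-driven generative prior.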

However, despite remarkable progress over the last years, several challenges remain in this framework, most notably:

[Imperfect or distributionally shifted prior data] Building data-driven priors typically requires access to examples of uncorrupted data, which in many cases do not exist (e.g. all astronomical images are observed with noise and some amount of blurring), or which may exist but exhibit distribution shifts with respect to the problems we would like to apply the prior to.
This mismatch can bias the estimates and lead to incorrect scientific conclusions. The adaptation, or calibration, of data-driven priors from incomplete and noisy observations therefore becomes crucial for working with real data in astrophysical applications.

[Efficient sampling of high-dimensional posteriors] Even when the likelihood and the data-driven prior are available, efficiently and correctly sampling non-convex, multimodal probability distributions in such high dimensions remains a challenging problem. The most effective methods to date rely on diffusion models, but they involve approximations and can be expensive at inference time if accurate estimates of the desired posteriors are to be reached.

The stringent requirements of scientific applications are a powerful driver for improved methodologies, but beyond the astrophysical context motivating this research, these tools also find broad applicability in many other domains, including medical imaging [3].

PhD project
The candidate will address these limitations of current methodologies, with the overall goal of making uncertainty quantification for large-scale inverse problems faster and more accurate.
As a first direction of research, we will extend recent methodology developed concurrently by our team and our Ciela collaborators [4,5], based on Expectation-Maximization, to iteratively learn (or adapt) diffusion-based priors from data observed under some amount of corruption. This strategy has been shown to be effective at correcting distribution shifts in the prior (and therefore at yielding well-calibrated posteriors). However, the approach is still expensive, as it requires iteratively solving inverse problems and retraining the diffusion models, and it depends critically on the quality of the inverse problem solver. We will explore several strategies, including variational inference and improved inverse-problem sampling strategies, to address these issues.
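To make the Expectation-Maximization idea concrete, here is a minimal, fully analytic toy version in which a Gaussian prior is adapted from noisy observations; the actual method of [4,5] replaces the Gaussian prior with a diffusion model and the analytic E-step with posterior sampling:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setting: clean signals x ~ N(mu_true, tau_true^2), observed as y = x + noise.
    mu_true, tau_true, sigma_noise = 2.0, 1.5, 1.0
    x_clean = rng.normal(mu_true, tau_true, size=5000)
    y_obs = x_clean + rng.normal(0.0, sigma_noise, size=x_clean.size)

    # EM: adapt the prior p(x) = N(mu, tau^2) using only the corrupted observations.
    mu, tau2 = 0.0, 1.0  # deliberately misspecified initial prior
    for _ in range(50):
        # E-step: the posterior p(x | y) is Gaussian; compute its mean and variance.
        post_var = 1.0 / (1.0 / tau2 + 1.0 / sigma_noise**2)
        post_mean = post_var * (mu / tau2 + y_obs / sigma_noise**2)
        # M-step: refit the prior to the expected sufficient statistics.
        mu = post_mean.mean()
        tau2 = (post_var + (post_mean - mu) ** 2).mean()

    print(f"adapted prior: mu = {mu:.2f}, tau = {np.sqrt(tau2):.2f} "
          f"(true values: {mu_true}, {tau_true})")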
As a second (but connected) direction, we will focus on the development of general methodologies for sampling complex posteriors (multimodal distributions, complex geometries) of non-linear inverse problems. Specifically, we will investigate strategies based on posterior annealing, inspired by diffusion model sampling, applicable in situations with explicit likelihoods and priors.
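A minimal sketch of what annealed posterior sampling can look like with an explicit likelihood and prior (toy one-dimensional densities and Langevin dynamics, not the method to be developed):

    import numpy as np

    rng = np.random.default_rng(1)

    def prior_score(x):
        """Score of a toy bimodal prior: 0.5 N(-2, 1) + 0.5 N(+2, 1)."""
        w = 1.0 / (1.0 + np.exp(-4.0 * x))          # responsibility of the +2 mode
        return -(x - 2.0) * w - (x + 2.0) * (1.0 - w)

    def likelihood_score(x, y, sigma=0.8):
        """Score of a Gaussian likelihood y ~ N(x, sigma^2)."""
        return (y - x) / sigma**2

    def annealed_langevin(y, n_levels=20, n_steps=50, step=1e-2):
        """Sample p(x | y) by annealing: start from a heated (flattened) posterior
        and progressively sharpen it, running Langevin updates at each level."""
        x = rng.normal(0.0, 3.0, size=2000)              # broad initialisation
        for beta in np.linspace(0.05, 1.0, n_levels):    # inverse-temperature schedule
            for _ in range(n_steps):
                grad = beta * (prior_score(x) + likelihood_score(x, y))
                x = x + step * grad + np.sqrt(2.0 * step) * rng.normal(size=x.shape)
        return x

    samples = annealed_langevin(y=1.0)
    print(f"posterior mean ~ {samples.mean():.2f}, std ~ {samples.std():.2f}")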
Finally, we will apply these methodologies to challenging, high-impact inverse problems in astrophysics; in particular, in collaboration with our colleagues from the Ciela institute, we will aim to improve source and lens reconstruction for strong gravitational lensing systems.
Publications in top machine learning conferences (NeurIPS, ICML) are expected, as well as publications on the astrophysical applications of these methodologies in astronomy journals.

References
[1] Benjamin Remy, Francois Lanusse, Niall Jeffrey, Jia Liu, Jean-Luc Starck, Ken Osato, Tim Schrabback, Probabilistic Mass Mapping with Neural Score Estimation, https://www.aanda.org/articles/aa/abs/2023/04/aa43054-22/aa43054-22.html

[2] Tobías I Liaudat, Matthijs Mars, Matthew A Price, Marcelo Pereyra, Marta M Betcke, Jason D McEwen, Scalable Bayesian uncertainty quantification with data-driven priors for radio interferometric imaging, RAS Techniques and Instruments, Volume 3, Issue 1, January 2024, Pages 505–534, https://doi.org/10.1093/rasti/rzae030

[3] Zaccharie Ramzi, Benjamin Remy, Francois Lanusse, Jean-Luc Starck, Philippe Ciuciu, Denoising Score-Matching for Uncertainty Quantification in Inverse Problems, https://arxiv.org/abs/2011.08698

[4] François Rozet, Gérôme Andry, François Lanusse, Gilles Louppe, Learning Diffusion Priors from Observations by Expectation Maximization, NeurIPS 2024, https://arxiv.org/abs/2405.13712

[5] Gabriel Missael Barco, Alexandre Adam, Connor Stone, Yashar Hezaveh, Laurence Perreault-Levasseur, Tackling the Problem of Distributional Shifts: Correcting Misspecified, High-Dimensional Data-Driven Priors for Inverse Problems, https://arxiv.org/abs/2407.17667

Machine Learning-based Algorithms for the Future Upstream Tracker Standalone Tracking Performance of LHCb at the LHC

This proposal focuses on enhancing the tracking performance of the LHCb experiment during Run 5 at the Large Hadron Collider (LHC) through the exploration of various machine learning-based algorithms. The Upstream Tracker (UT) sub-detector, a crucial component of the LHCb tracking system, plays a vital role in reducing the fake-track rate by filtering out incorrectly reconstructed tracks early in the reconstruction process. As LHCb investigates rare particle decays, studies CP violation in the Standard Model, and probes the quark-gluon plasma in PbPb collisions, precise tracking becomes increasingly important.

With upcoming upgrades planned for 2035 and the anticipated increase in data rates, traditional tracking methods may struggle to meet the computational demands, especially in nucleus-nucleus collisions where thousands of particles are produced. Our project will investigate a range of machine learning techniques, including those already demonstrated in the LHCb Vertex Locator (VELO), to enhance the tracking performance of the UT. By applying diverse methods, we aim to improve early-stage track reconstruction, increase efficiency, and decrease the fake-track rate. Among these techniques, Graph Neural Networks (GNNs) are a particularly promising option, as they can exploit spatial and temporal correlations in detector hits to improve tracking accuracy and reduce computational burdens.
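As an illustration of the graph-based approach (a generic sketch, not the LHCb implementation; PyTorch is assumed to be available), detector hits can be turned into a graph whose edges are candidate track segments, which a small network then classifies:

    import torch
    import torch.nn as nn

    # Toy hits: (layer, x, y) for a handful of detector hits (illustrative values).
    hits = torch.tensor([[0, 0.0, 0.0], [1, 0.1, 0.2], [1, 1.5, -0.3],
                         [2, 0.2, 0.4], [2, 3.0, 1.0]])

    # Graph construction: candidate edges link hits on adjacent layers.
    edges = [(i, j) for i in range(len(hits)) for j in range(len(hits))
             if hits[j, 0] == hits[i, 0] + 1]
    edge_index = torch.tensor(edges).t()

    # Edge features: coordinate differences along each candidate segment.
    edge_feat = hits[edge_index[1], 1:] - hits[edge_index[0], 1:]

    # Minimal edge classifier: predicts whether a segment belongs to a true track.
    edge_mlp = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
    scores = torch.sigmoid(edge_mlp(edge_feat)).squeeze(-1)
    print(scores)  # untrained scores; in practice trained on simulated tracks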

This exploration of new methods will involve development work tailored to the specific hardware selected for deployment, whether GPUs, CPUs, or FPGAs, all part of the future LHCb data-processing architecture. We will benchmark these algorithms against current tracking methods to quantify improvements in performance, scalability, and computational efficiency. Additionally, we plan to integrate the most effective algorithms into the LHCb software framework to ensure compatibility with existing data pipelines.

Caliste-3D CZT: development of a miniature, monolithic and hybrid gamma-ray imaging spectrometer with improved efficiency in the 100 keV to 1 MeV range and optimised for detection of the Compton effect and sub-pixel localisation

Multi-wavelength observation of astrophysical sources is the key to a global understanding of the physical processes involved. Due to instrumental constraints, the spectral band from 0.1 to 1 MeV is the one that suffers most from insufficient detection sensitivity in existing observatories. This band allows us to observe the deepest and most distant active galactic nuclei, to better understand the formation and evolution of galaxies on cosmological scales. It reveals the processes of nucleosynthesis of the heavy elements in our Universe and the origin of the cosmic rays that are omnipresent in the Universe. The intrinsic difficulty of detection in this spectral range lies in the absorption of these very energetic photons after multiple interactions in the material. This requires good detection efficiency, but also good localisation of all the interactions in order to deduce the direction and energy of the incident photon. These detection challenges are the same for other applications with a strong societal and environmental impact, such as the dismantling of nuclear facilities, air quality monitoring and radiotherapy dosimetry.

The aim of this instrumentation thesis is to develop a versatile '3D' detector that can be used in the fields of astrophysics and nuclear physics, with improved detection efficiency in the 100 keV to 1 MeV range and for Compton events, as well as the possibility of locating interactions in the detector to better than the pixel size.

Several groups around the world, including our own, have developed hard X-ray imaging spectrometers based on high-density pixelated semiconductors for astrophysics (CZT for NuSTAR, CdTe for Solar Orbiter and Hitomi), for synchrotron applications (Hexitec, RAL, UK) or for industrial applications (Timepix, ADVACAM). However, their energy range remains limited to around 200 keV (except for Timepix) because of the thinness of the crystals and their intrinsic operating limitations. To extend the energy range towards 1 MeV and beyond, thicker crystals with good charge-carrier transport properties are needed. This is currently possible with CZT, but several challenges need to be overcome.

The first challenge was the ability of manufacturers to produce thick homogeneous CZT crystals. Advances in this field over the last 20 years mean that we can now foresee detectors up to at least 10 mm thick (Redlen, Kromek).

The main remaining technical challenge is the precise estimation of the charge generated by the interaction of a photon in the semiconductor. In a pixelated detector where only the X and Y coordinates of the interaction are recorded, increasing the thickness of the crystal degrades the spectral performance. Obtaining the Z (interaction depth) information in a monolithic crystal theoretically makes it possible to overcome this limitation. This requires the deployment of experimental methods, physical simulations, the design of readout microelectronics circuits and original data analysis methods. In addition, the ability to localise interactions in the detector to better than the size of a pixel will help to meet this challenge.
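One common way of recovering the interaction depth in a planar pixelated CZT detector is the cathode-to-anode signal ratio, which varies nearly linearly with depth; a schematic illustration (idealised small-pixel behaviour, illustrative numbers only, not this project's readout scheme):

    import numpy as np

    THICKNESS_MM = 10.0  # assumed crystal thickness

    def depth_from_signals(cathode_amp, anode_amp):
        """Estimate interaction depth from the cathode/anode amplitude ratio.

        In an idealised planar CZT detector with small pixels, the anode signal
        is nearly depth-independent (small-pixel effect) while the cathode signal
        grows roughly linearly with the electron drift distance, so the ratio C/A
        maps approximately linearly onto the interaction depth."""
        ratio = np.clip(cathode_amp / anode_amp, 0.0, 1.0)
        return ratio * THICKNESS_MM  # depth measured from the anode plane

    # Example: three interactions with decreasing cathode/anode ratios.
    print(depth_from_signals(np.array([0.95, 0.5, 0.1]), np.array([1.0, 1.0, 1.0])))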

Multi-messenger analysis of core-collapse supernovae

Core-collapse supernovae play a crucial role in the stellar evolution of massive stars, the birth of neutron stars and black holes, and the chemical enrichment of galaxies. How do they explode? The explosion mechanism can be revealed by the analysis of multi-messenger signals: the production of neutrinos and gravitational waves is modulated by hydrodynamic instabilities during the second following the formation of a proto-neutron star.
This thesis proposes to exploit the complementarity of multi-messenger signals, using numerical simulations of stellar core collapse and perturbative analysis, in order to extract physical information on the explosion mechanism.
The project will particularly focus on the multi-messenger properties of the standing accretion shock instability ("SASI") and the corotational instability ("low T/W") for a rotating progenitor. For each of these instabilities, the signals from different neutrino species and the gravitational waves with different polarizations will be exploited, as well as the correlations between them.
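As a minimal illustration of the kind of correlation analysis envisaged (entirely synthetic signals, not simulation outputs), a common modulation imprinted on both the neutrino rate and the gravitational-wave strain can be quantified as follows:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic toy signals: a common ~100 Hz modulation (mimicking a SASI-like
    # oscillation) buried in independent noise, imprinted on both the neutrino
    # detection rate and the gravitational-wave strain.
    t = np.arange(0.0, 0.5, 1e-4)                     # 0.5 s sampled at 10 kHz
    common = np.sin(2 * np.pi * 100.0 * t)
    nu_rate = common + 0.7 * rng.normal(size=t.size)
    gw_strain = common + 0.7 * rng.normal(size=t.size)

    # Zero-lag correlation between the two messengers.
    coeff = np.corrcoef(nu_rate, gw_strain)[0, 1]
    print(f"correlation coefficient between the toy signals: {coeff:.2f}")

    # Shared spectral content: both power spectra peak at the common frequency.
    freqs = np.fft.rfftfreq(t.size, d=1e-4)
    peak_nu = freqs[np.argmax(np.abs(np.fft.rfft(nu_rate))[1:]) + 1]
    peak_gw = freqs[np.argmax(np.abs(np.fft.rfft(gw_strain))[1:]) + 1]
    print(f"dominant frequencies: {peak_nu:.0f} Hz (neutrinos), {peak_gw:.0f} Hz (GW)")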

Relativistic laboratory astrophysics

This PhD project is concerned with the numerical and theoretical modeling of the ultra-relativistic plasmas encountered in a variety of astrophysical environments such as gamma-ray bursts or pulsar wind nebulae, as well as in future laboratory experiments on extreme laser-plasma, beam-plasma or gamma-plasma interactions. The latter experiments are envisioned at the multi-petawatt laser facilities currently under development worldwide (e.g. the European ELI project), or at next-generation high-energy particle accelerators (e.g. the SLAC/FACET-II facility).
The plasma systems under scrutiny have in common a strong coupling between energetic particles, photons and quantum electrodynamic effects. They will be simulated numerically using a particle-in-cell (PIC) code developed at CEA/DAM over the past years. Besides the collective effects characteristic of plasmas, this code describes a number of gamma-ray photon emission and electron-positron pair creation processes. The purpose of this PhD project is to treat additional photon-particle and photon-photon interaction processes, and then to examine thoroughly their impact and interplay in various experimental and astrophysical configurations.
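As a simple example of the photon-photon processes mentioned above, the kinematic threshold for linear Breit-Wheeler pair production (gamma + gamma -> e+ e-) can be checked directly; a minimal sketch:

    import numpy as np

    M_E_C2_MEV = 0.511  # electron rest energy in MeV

    def breit_wheeler_allowed(e1_mev, e2_mev, theta_rad):
        """Return True if two photons of energies e1, e2 colliding at angle theta
        can produce an e+e- pair: s = 2 e1 e2 (1 - cos theta) >= (2 m_e c^2)^2."""
        s = 2.0 * e1_mev * e2_mev * (1.0 - np.cos(theta_rad))
        return s >= (2.0 * M_E_C2_MEV) ** 2

    # A 10 MeV gamma ray against a head-on 0.1 MeV photon: s = 4.0 MeV^2, above threshold.
    print(breit_wheeler_allowed(10.0, 0.1, np.pi))       # True
    # The same photon pair at a 30-degree crossing angle is below threshold.
    print(breit_wheeler_allowed(10.0, 0.1, np.pi / 6))   # False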

Investigation of the nuclear two-photon decay

The nuclear two-photon, or double-gamma, decay is a rare decay mode in atomic nuclei whereby a nucleus in an excited state emits two gamma rays simultaneously. Even-even nuclei with a first excited 0+ state are favorable cases to search for a double-gamma decay branch, since the emission of a single gamma ray is strictly forbidden for 0+ to 0+ transitions by angular momentum conservation. The double-gamma decay nevertheless remains a very small decay branch (<1E-4), competing with the dominant (first-order) decay modes of atomic internal-conversion electron (ICE) emission and internal electron-positron (e+e-) pair creation (IPC).

The thesis project has two distinct experimental parts. First, we store bare (fully stripped) ions in their excited 0+ state in the heavy-ion storage ring (ESR) at the GSI facility to search for the double-gamma decay in several nuclides. For neutral atoms the excited 0+ state is a rather short-lived isomeric state, with a lifetime of the order of a few tens to hundreds of nanoseconds. At the relativistic energies available at GSI, however, all ions are fully stripped of their atomic electrons, and decay by ICE emission is hence not possible. If the state of interest is located below the pair creation threshold, the IPC process is not possible either. Consequently, bare nuclei are trapped in a long-lived isomeric state, which can only decay by double-gamma emission to the ground state. The decay of the isomers is identified by so-called time-resolved Schottky Mass Spectroscopy. This method makes it possible to distinguish the isomer and the ground state by their (very slightly) different revolution times in the ESR, and to observe the disappearance of the isomer peak in the mass spectrum with a characteristic decay time. Successful experiments establishing the double-gamma decay in several nuclides (72Ge, 98Mo, 98Zr) have already been performed, and a new experiment has been accepted by the GSI Programme Committee; its realization is planned for 2025.
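Schematically, the decay constant follows from the decrease of the isomer-peak intensity in successive Schottky spectra; a minimal illustration with synthetic numbers (in a real measurement the observed rate must also be corrected for beam losses and relativistic time dilation before being interpreted as the double-gamma rate):

    import numpy as np

    # Synthetic Schottky peak intensities of the isomer at successive storage times.
    t_s = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])             # storage time (s)
    counts = np.array([1000.0, 780.0, 610.0, 370.0, 140.0, 20.0])  # isomer peak area

    # Fit ln(counts) = ln(N0) - lambda * t to extract the apparent decay constant.
    slope, intercept = np.polyfit(t_s, np.log(counts), 1)
    lam = -slope
    print(f"apparent decay constant: {lam:.3f} 1/s -> lifetime ~ {1.0 / lam:.1f} s")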

The second part concerns the direct observation of the emitted photons using gamma-ray spectroscopy. While the storage-ring experiments allow the partial lifetime of the double-gamma decay to be measured, further information on the nuclear properties can only be obtained by measuring the photons themselves. A test experiment has been performed to study the feasibility of this approach, and plans for a more detailed study will be developed within the PhD project.
