Bayesian Inference with Differentiable Simulators for the Joint Analysis of Galaxy Clustering and CMB Lensing
The goal of this PhD project is to develop a novel joint analysis for the DESI galaxy clustering
and Planck PR4/ACT CMB lensing data, based on numerical simulations of the surveys and
state-of-the-art machine learning and statistical inference techniques. The aim is to overcome
many of the limitations of the traditional approaches and improve the recovery of cosmological
parameters. The joint galaxy clustering - CMB lensing inference will significantly tighten constraints on the growth of structure relative to DESI-only analyses and further refine tests of general relativity.
Source clustering impact on Euclid weak lensing high-order statistics
In the coming years, the Euclid mission will provide measurements of the shapes and positions of billions of galaxies with unprecedented precision. As the light from the background galaxies travels through the Universe, it is deflected by the gravity of cosmic structures, distorting the apparent shapes of galaxies. This effect, known as weak lensing, is the most powerful cosmological probe of the next decade, and it can answer some of the biggest questions in cosmology: What are dark matter and dark energy, and how do cosmic structures form?
The standard approach to weak lensing analysis is to fit the two-point statistics of the data, such as the correlation function of the observed galaxy shapes. However, this data compression is sub-optimal and discards large amounts of information. This has led to the development of several approaches based on high-order statistics, such as third moments, wavelet phase harmonics and field-level analyses. These techniques provide more precise constraints on the parameters of the cosmological model (Ajani et al. 2023). However, with their increasing precision, these methods become sensitive to systematic effects that were negligible in standard two-point statistics analyses.
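As a toy illustration of the information that two-point statistics discard, the sketch below (hypothetical fields, not survey data) builds two white-noise fields with the same variance — and hence, being white noise, the same flat power spectrum — that only a higher-order statistic such as the third moment can tell apart:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian white-noise field: fully described by its two-point statistics.
g = rng.normal(size=200_000)

# Non-Gaussian field with the same variance (and, as white noise, the same
# flat power spectrum) but a strongly skewed one-point distribution.
ng = (g**2 - 1.0) / np.sqrt(2.0)

for name, f in [("Gaussian", g), ("non-Gaussian", ng)]:
    print(f"{name:13s} variance = {f.var():.3f}  third moment = {np.mean(f**3):+.3f}")
```

Both fields have unit variance, but the third moment of the second is large (analytically 8/2^{3/2} ≈ 2.83) while that of the first is consistent with zero: information invisible to any two-point estimator.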
One of these systematics is source clustering, which refers to the non-uniform distribution of the galaxies observed in weak lensing surveys. Rather than being uniformly distributed, the observed galaxies trace the underlying matter density. This clustering causes a correlation between the lensing signal and the galaxy number density, leading to two effects: (1) it modulates the effective redshift distribution of the galaxies, and (2) it correlates the galaxy shape noise with the lensing signal. Although this effect is negligible for two-point statistics (Krause et al. 2021, Linke et al. 2024), it significantly impacts the results of high-order statistics (Gatti et al. 2023). Therefore, accurate modelling of source clustering is critical to applying these new techniques to Euclid’s weak lensing data.
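A minimal sketch of the first effect, under toy assumptions (a 1D Gaussian overdensity field, a hypothetical linear galaxy bias, Poisson sampling): because sources trace the matter field, quantities averaged over galaxies are re-weighted toward overdense regions, which is precisely the modulation that high-order statistics pick up.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 1D matter overdensity field delta (zero mean); a stand-in for a
# smoothed density field along the line of sight.
n_cells = 10_000
delta = rng.normal(0.0, 0.3, n_cells)
delta -= delta.mean()

bias = 1.5   # hypothetical linear galaxy bias
nbar = 2.0   # mean galaxies per cell

# Source clustering: observed galaxy counts follow the biased matter field
# rather than a uniform distribution (rate clipped so it stays >= 0).
rate = nbar * np.clip(1.0 + bias * delta, 0.0, None)
counts = rng.poisson(rate)

# Consequence: overdense cells host more sources, so any quantity averaged
# over galaxies (e.g. the lensing signal) is biased toward delta > 0 regions.
mean_delta_at_sources = np.average(delta, weights=counts)
print(f"<delta> over cells:  {delta.mean():+.4f}")
print(f"<delta> at galaxies: {mean_delta_at_sources:+.4f}")
```

The galaxy-weighted mean of delta comes out positive (of order bias times the field variance) even though the field itself has zero mean, which is the source of both the redshift-distribution modulation and the noise-signal correlation described above.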
In this project, we will develop an inference framework to model source clustering and assess its impact on cosmological constraints from high-order statistics. The objectives of the project are:
1. Develop an inference framework that populates dark matter fields with galaxies, accurately modelling the non-uniform distribution of background galaxies in weak lensing surveys.
2. Quantify the source clustering impact on the cosmological parameters from wavelet transforms and field-level analyses.
3. Incorporate source clustering in emulators of the matter distribution to enable accurate data modelling in the high-order statistics analyses.
With these developments, this project will improve the accuracy of cosmological analyses and the realism of the data modelling, making high-order statistics analyses possible for Euclid data.
Detecting the first clusters of galaxies in the Universe in the maps of the cosmic microwave background
Galaxy clusters, located at the nodes of the cosmic web, are the largest gravitationally bound structures in the Universe. Their abundance and spatial distribution are very sensitive to cosmological parameters, such as the matter density of the Universe. Galaxy clusters thus constitute a powerful cosmological probe. They have proven to be an efficient probe in recent years (Planck, South Pole Telescope, XXL, etc.), and they are expected to make great progress in the coming years (Euclid, Vera Rubin Observatory, Simons Observatory, CMB-S4, etc.).
The cosmological power of galaxy clusters increases with the size of the redshift (z) range covered by the catalogue. Planck detected the most massive clusters in the Universe in the redshift range 0<z<1. SPT and ACT are more sensitive but covered less sky: they detected tens of clusters between z=1 and z=1.5, and a few clusters between z=1.5 and z=2. The next generation of instruments (Simons Observatory starting in 2025 and CMB-S4 starting in 2034) will routinely detect clusters in 1<z<2 and will observe the first clusters formed in the Universe in 2<z<3.
Only experiments studying the cosmic microwave background will be able to observe the hot gas in these first clusters at 2<z<3, thanks to the SZ effect, named after its discoverers Sunyaev and Zel'dovich. This effect is due to the highly energetic electrons of the gas, which distort the frequency spectrum of the cosmic microwave background, and it is detectable with current experiments. But the gas is not the only component emitting in galaxy clusters: galaxies inside the clusters can also emit in the radio or infrared, contaminating the SZ signal. This contamination is weak at z<1 but increases drastically with redshift. One expects the emission from radio and infrared galaxies in clusters to be of the same order of magnitude as the SZ signal at 2<z<3.
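For reference, the spectral distortion in question — in its standard non-relativistic form, textbook material rather than anything specific to this project — can be written as:

```latex
% Thermal SZ distortion of the CMB temperature (non-relativistic limit):
\frac{\Delta T}{T_{\mathrm{CMB}}}(\nu) \;=\; y\left[x\coth\!\left(\tfrac{x}{2}\right)-4\right],
\qquad x \equiv \frac{h\nu}{k_{\mathrm{B}}T_{\mathrm{CMB}}},
\qquad y = \frac{\sigma_{\mathrm{T}}}{m_{\mathrm{e}}c^{2}}\int P_{\mathrm{e}}\,\mathrm{d}l .
```

The distortion vanishes at x ≈ 3.83 (ν ≈ 217 GHz), appearing as a temperature decrement below that frequency and an increment above it — the multifrequency signature that cluster-detection filters exploit.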
One thus needs to understand and model the emission of the gas as a function of redshift, but also the emission of radio and infrared galaxies inside the clusters to be ready to detect the first clusters in the Universe. Irfu/DPhP developed the first tools for detecting clusters of galaxies in cosmic microwave background data in the 2000s. These tools have been used successfully on Planck data and on ground-based data, such as the data from the SPT experiment. They are efficient at detecting clusters of galaxies whose emission is dominated by the gas, but their performance is unknown when the emission from radio and infrared galaxies is significant.
This thesis will first study and model the radio and infrared emission from galaxies in the clusters detected in the cosmic microwave background data (Planck, SPT and ACT) as a function of redshift.
Second, the impact of these emissions on existing cluster detection tools will be quantified, first in the redshift range currently probed (0<z<2) and then in the future range (2<z<3).
Finally, based on our knowledge of these radio and infrared emissions from galaxies in clusters, we will develop a new cluster extraction tool for high-redshift clusters (2<z<3) to maximize the detection efficiency and control selection effects, that is, the ratio of detected clusters to the total number of clusters.
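One common way to characterize such a selection function is by injecting mock clusters and counting the fraction recovered per bin; the sketch below uses entirely hypothetical numbers (uniform true S/N, unit Gaussian noise, a detection threshold of 5) purely to illustrate the bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(1)

# Inject mock clusters with known true S/N and add measurement noise.
snr_true = rng.uniform(2.0, 10.0, 50_000)
snr_obs = snr_true + rng.normal(0.0, 1.0, snr_true.size)
detected = snr_obs > 5.0   # hypothetical detection threshold

# Completeness = N_detected / N_injected, per bin of true S/N.
bins = np.arange(2.0, 10.5, 1.0)
idx = np.digitize(snr_true, bins) - 1
for i in range(len(bins) - 1):
    comp = detected[idx == i].mean()
    print(f"S/N in [{bins[i]:.0f},{bins[i+1]:.0f}): completeness = {comp:.2f}")
```

The completeness rises from near zero well below the threshold to near unity well above it, with the transition width set by the measurement noise — exactly the kind of curve the new extraction tool would need to calibrate.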
The PhD student will join the Simons Observatory and CMB-S4 collaborations.
The biased Cosmic web, from theoretical modelling to observations
The study of the filamentary cosmic web is a paramount aspect of modern research in cosmology. With the advent of extremely large and precise cosmological datasets, now arriving (or due within months) notably from the Euclid space mission, it becomes feasible to study in detail the formation of cosmic structures through gravitational instability. In particular, fine non-linear aspects of this dynamics can be studied from a theoretical point of view, with the hope of detecting their signatures in real observations. One of the major difficulties in this regard is to link the observed distribution of galaxies along filaments to the underlying matter distribution, for which first-principles models are known. Building on recent, state-of-the-art theoretical developments in gravitational perturbation theory and constrained random field theory, the successful candidate will develop first-principles predictions for statistical observables of the cosmic web (extrema counts, topological estimators, extrema correlation functions; e.g. Pogosyan et al. 2009, MNRAS 396, or Ayçoberry, Barthelemy & Codis 2024, A&A 686), applied to the actual discrete galaxy field, which traces the total matter only in a biased manner. This model will then be applied to the analysis of Euclid data.
Fission yield measurements for decay heat evaluation of used nuclear fuel
The fission process involves the violent splitting of a heavy nucleus into two fission fragments, resulting in over 300 different isotopes. Understanding the distribution and production of these fragments, known as fission yields, is essential for grasping the underlying mechanisms of fission, which are influenced by nuclear structure and dynamics. Accurate measurements of fission yields are crucial for advancing nuclear energy applications, particularly in developing Generation IV reactors and recycling spent nuclear fuel. The VAMOS magnetic spectrometer enables precise fission yield measurements thanks to its large acceptance and its isotopic identification capabilities. An experimental campaign at VAMOS in 2024 used beams of 238U and 232Th on a carbon target to produce fissioning actinides. The combination of VAMOS with a new silicon telescope (PISTA) significantly enhances the data quality. The candidate will analyze the VAMOS data to produce high-resolution fission yields and study their uncertainties.
High Harmonic Generation in cavity for an attosecond quantum source
Attophysics is at the forefront of time-resolved spectroscopy. Indeed, it harnesses the shortest light pulses that can be produced experimentally, thanks to the high harmonic generation (HHG) process. A standard way to trigger HHG is to submit an atomic system to an oscillating electromagnetic field whose strength is comparable to the Coulomb potential binding electrons to their nuclei. This non-linear, non-perturbative optical effect produces broadband coherent radiation in the extreme ultraviolet (XUV) frequency range, which forms attosecond pulses (10^-18 s). Since its discovery in the late 1980s, continuous experimental and theoretical efforts have been dedicated to reaching a complete understanding of this complex phenomenon. Despite the tremendous success of attosecond science, there is still no consensus on a quantum description of the process. We foresee that such a description of HHG would push forward our understanding of non-linear optics and open up new perspectives for attosecond science.
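The semiclassical three-step model already captures one key quantitative feature of HHG: the cutoff law E_cut = I_p + 3.17 U_p, with the ponderomotive energy U_p [eV] ≈ 9.33e-14 I [W/cm²] (λ [µm])². The snippet below applies it to an illustrative (not project-specific) case, argon driven at 800 nm:

```python
# Three-step-model HHG cutoff: E_cut = I_p + 3.17 * U_p.
def ponderomotive_eV(intensity_w_cm2, wavelength_um):
    """Ponderomotive energy of a free electron in the laser field (eV)."""
    return 9.33e-14 * intensity_w_cm2 * wavelength_um**2

def cutoff_eV(ip_eV, intensity_w_cm2, wavelength_um):
    """Maximum harmonic photon energy predicted by the three-step model."""
    return ip_eV + 3.17 * ponderomotive_eV(intensity_w_cm2, wavelength_um)

# Illustrative numbers: argon (I_p = 15.76 eV), 800 nm drive at 1e14 W/cm^2.
ip_ar = 15.76
e_cut = cutoff_eV(ip_ar, 1e14, 0.8)
photon_eV = 1.2398 / 0.8   # drive photon energy (eV) at 800 nm
print(f"cutoff = {e_cut:.1f} eV (~harmonic {e_cut / photon_eV:.0f})")
```

Already at modest intensity the cutoff lands in the XUV, a few tens of eV, i.e. harmonic orders in the twenties — the regime where attosecond pulse trains are formed.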
Systematic study of the neutron scattering reactions on structural materials of interest for nuclear reactor applications
Elastic and inelastic scattering reactions on structural materials have a significant impact on the simulation of neutron transport. The nuclear data of structural materials of interest for nuclear reactors and criticality studies must be known with good precision over a wide incident neutron energy range, from a few tens of meV to several MeV. This thesis proposal aims to carry out a systematic study of the scattering reactions above the resolved resonance range and up to 5 MeV. In this energy range, neither the R-matrix formalism nor the statistical Hauser-Feshbach model is valid for structural materials. A new formalism will be developed using high-resolution measurements of the scattering angular distributions. This work will focus primarily on measurements already performed at the JRC-Geel facility (sodium [1], iron [2]) and will be extended to other elements studied within the framework of the IAEA/INDEN project, such as copper, chromium and nickel. As part of this thesis, the experimental database will be complemented by new measurements on the copper isotopes (63Cu and 65Cu), carried out at the JRC Geel GELINA facility with the ELISA detector. Concerning the copper isotopes, integral benchmarks from the ICSBEP database have revealed several issues in the nuclear data libraries, which provide contradictory integral feedback on the nuclear data of 235U. For example, the ZEUS benchmarks, which are routinely used to study the capture cross-section of 235U in the fast neutron energy range, are very sensitive to the nuclear data of copper. This type of benchmark will provide an ideal framework for quantifying the impact of any new formalism developed to evaluate the nuclear data of structural materials.
This study will allow the PhD student to develop skills in experimental and theoretical nuclear physics, as well as in neutron physics. The results will be communicated to the JEFF working group of the Nuclear Energy Agency (OECD/NEA).
[1] P. Archier, Contribution à l'amélioration des données nucléaires neutroniques du sodium pour le calcul des réacteurs de génération IV, PhD thesis, Université de Grenoble, 2011.
[2] G. Gkatis, Study of neutron induced reaction cross sections on Fe isotopes at the GELINA facility relevant to reactor applications, PhD thesis, Université Aix-Marseille, 2024.
Integral measurement of fission products capture cross-section using a combination of oscillation and activation techniques
This thesis is proposed as part of the POSEIDON (Fission Product Oscillation Experiments for Improving Depletion Calculations) project, which deals with the integral measurement of the neutron capture and scattering cross-sections of the main fission products contributing to the reactivity loss in irradiated fuel. It consists of measuring the reactivity effect of separated-isotope samples using a pile oscillation device, coupled with neutron activation measurements, in three different core spectral configurations: thermal, PWR and epithermal.
Part of the work will be done at CEA IRESNE in Cadarache and part at the Research Center of the Czech Republic, CV Rez. The PhD student will be involved in testing and optimizing the oscillation device that is currently being designed, as well as in performing the measurements in the LR-0 Czech experimental reactor. The work at Cadarache will focus on the analysis of the measurements with Monte Carlo simulation tools. Functionalities needed for the data analysis will require additional code development by the student.
The expected impact is a better prediction of the reactivity loss in reactor cores as a function of burn-up. Indeed, even with the most recent international nuclear data libraries, there remains a significant bias in the estimation of this reactivity loss.
The PhD student will develop competences in experimental and theoretical neutronics. Subsequent job opportunities include R&D laboratories and the nuclear industry.
Modeling of nuclear charge polarization as part of fission yield evaluation: applications to actinides of interest to the nuclear fuel cycle
Nuclear data are crucial for civil nuclear energy applications, being the bridge between the microscopic properties of nuclei and the "macroscopic good values" needed for cycle and reactor physics studies. The laboratory of physics studies at CEA/IRESNE Cadarache is involved in the evaluation of these nuclear physics observables, in the framework of the JEFF Group and the Coordinated Research Project (CRP) of the IAEA. The recent development of a new methodology for thermal-neutron-induced fission product yield evaluation (fission product yields after prompt neutron emission) has improved the accuracy of the evaluations proposed for the JEFF-4.0 library, together with their covariance matrices. To extend the assessment of fission yields induced by thermal neutrons to the fast neutron spectrum, it is necessary to couple the current evaluation tools with fission fragment yield models (before prompt neutron emission). This coupling is essential to extrapolate the current studies on thermal fission of 235U and 239Pu to less experimentally known nuclei (241Pu, 241Am, 245Cm), or to study the dependence of fission yields on incident neutron energy. One of the essential missing components is the description of the nuclear charge distribution (Z) as a function of the fission fragment mass and the incident neutron energy. These distributions are characterized by a key parameter: the charge polarization. This polarization reflects an excess (respectively, a deficiency) of protons in the light (respectively, heavy) fission fragments compared to the average charge density of the fissioning nucleus. While this quantity has been measured for the 235U(nth,f) reaction, the data are incomplete for other neutron energies and other fissioning systems. The perspectives of this subject concern both the impact of these new evaluations on key quantities for electronuclear applications and the validation of the fission mechanisms described by microscopic fission models.
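The key quantity can be made concrete: relative to the unchanged-charge-density (UCD) baseline Z_UCD(A) = (Z_f/A_f) A, the polarization is ΔZ(A) = Z_p(A) - Z_UCD(A). A minimal sketch for the 235U(nth,f) system follows; the most-probable charges Z_p used here are hypothetical round numbers for illustration, not evaluated data:

```python
# Unchanged-charge-density (UCD) baseline and charge polarization Delta Z
# for the 235U(n_th, f) system (fissioning nucleus 236U: Z_f = 92, A_f = 236).
Z_F, A_F = 92, 236

def z_ucd(a_fragment):
    """Fragment charge if it kept the charge density of the fissioning nucleus."""
    return Z_F / A_F * a_fragment

# Hypothetical most-probable charges Z_p for one complementary light/heavy
# fragment pair (A_light + A_heavy = 236); real evaluations fit Z_p(A) to data.
pairs = {96: 38.0, 140: 54.0}   # A: assumed Z_p
for a, zp in pairs.items():
    dz = zp - z_ucd(a)
    print(f"A={a}: Z_UCD={z_ucd(a):.2f}, Delta Z={dz:+.2f}")
```

With these illustrative numbers the light fragment shows a proton excess (ΔZ > 0) and its heavy partner the complementary deficiency (ΔZ < 0), the pattern the paragraph above describes; the evaluation task is to model how ΔZ(A) varies with fissioning system and incident neutron energy.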
Optimization of gamma radiation detectors for medical imaging. Time-of-flight positron emission tomography
Positron emission tomography (PET) is a nuclear medical imaging technique widely used in oncology and neurobiology.
We invite you to contribute to the development of an ambitious, patented technology: ClearMind. This gamma-photon detector uses a monolithic PbWO4 crystal, in which Cherenkov and scintillation photons are produced. These optical photons are converted into electrons by a photoelectric layer and multiplied in a microchannel plate. The induced electrical signals are amplified by gigahertz amplifiers and digitized by SAMPIC fast acquisition modules. The opposite side of the crystal will be fitted with a matrix of silicon photomultipliers (SiPMs).
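Why this emphasis on fast detection chains can be quantified with the standard time-of-flight relation: the arrival-time difference of the two back-to-back 511 keV photons localizes the annihilation point along the line of response as x = c Δt / 2. The snippet below (with illustrative timing resolutions, not measured ClearMind figures) converts coincidence-time resolution into localization uncertainty:

```python
# Time-of-flight PET: coincidence-time resolution -> position uncertainty
# along the line of response, sigma_x = c * sigma_t / 2.
C = 299_792_458.0  # speed of light, m/s

def localization_sigma_mm(sigma_t_ps):
    """Localization uncertainty (mm) for a given timing resolution (ps)."""
    return C * sigma_t_ps * 1e-12 / 2.0 * 1e3

for ctr in (500.0, 100.0, 10.0):   # illustrative timing resolutions, ps
    print(f"CTR {ctr:6.1f} ps -> sigma_x = {localization_sigma_mm(ctr):6.2f} mm")
```

Going from hundreds of picoseconds to tens of picoseconds shrinks the uncertainty from several centimetres toward the millimetre scale, which is what drives the push for Cherenkov-based, gigahertz-amplified detection chains.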
You will work in an advanced instrumentation laboratory in a particle physics environment.
The first step will be to optimize the "components" of the ClearMind detectors in order to achieve nominal performance. We will work on the scintillating crystals, the optical interfaces, the photoelectric layers and associated fast photodetectors, and the readout electronics.
We will then characterize the performance of the prototype detectors on our measurement benches.
The data acquired will be interpreted using in-house analysis software written in C++ and/or Python.
Finally, we will compare the physical behavior of our detectors to Monte Carlo simulations (Geant4/Gate).
A particular effort will be devoted to the development of ultra-fast scintillating crystals in the context of a European collaboration.