Machine-learning methods for the cosmological analysis of weak-gravitational-lensing images from the Euclid satellite

Weak gravitational lensing, the distortion of the images of high-redshift galaxies by foreground matter structures, is one of the most promising cosmological tools to probe the dark sector of the Universe. The statistical analysis of lensing distortions can reveal the dark-matter distribution on large scales, and the European space satellite Euclid will use it to measure cosmological parameters to unprecedented accuracy. To achieve this ambitious goal, a number of sources of systematic error have to be quantified and understood. One of the main origins of bias is the detection of galaxies, which depends strongly on the local number density and on whether a galaxy's light emission overlaps with nearby objects. If not handled correctly, such "blended" galaxies will strongly bias any subsequent measurement of weak-lensing image distortions.
The goal of this PhD is to quantify and correct weak-lensing detection biases, in particular those due to blending. To that end, modern machine- and deep-learning algorithms, including auto-differentiation techniques, will be used. These techniques allow the sensitivity of the biases to galaxy and survey properties to be estimated very efficiently, without the need to create a vast number of simulations. The student will carry out cosmological parameter inference from Euclid weak-lensing data. The bias corrections developed during this thesis will be included either a priori, in the galaxy shape measurements, or a posteriori, as nuisance parameters. This will lead to measurements of cosmological parameters with the reliability and robustness required for precision cosmology.
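To illustrate the idea behind the auto-differentiation approach (a toy sketch only, not the project's actual measurement pipeline; the shape estimator, its response to blending, and the parameter names are illustrative assumptions), a single gradient call, here with the jax library in Python, yields the sensitivity of a bias estimate to a survey property, where a purely simulation-based approach would require re-running the estimate on many perturbed image sets:

    # Toy sketch: automatic differentiation of a *toy* multiplicative shear bias
    # with respect to a survey property (here, a "blend fraction").
    import jax
    import jax.numpy as jnp

    def measured_ellipticity(g, blend_fraction, noise):
        """Toy shape estimator: blending suppresses the measured response to shear."""
        response = 1.0 - 0.4 * blend_fraction          # assumed response loss from blending
        return response * g + noise                    # noisy, biased ellipticity estimate

    def multiplicative_bias(blend_fraction, key, g=0.02, n_gal=100_000):
        noise = 0.25 * jax.random.normal(key, (n_gal,))  # shape noise per galaxy
        e_obs = measured_ellipticity(g, blend_fraction, noise)
        return jnp.mean(e_obs) / g - 1.0               # m such that <e_obs> = (1 + m) g

    key = jax.random.PRNGKey(0)
    m = multiplicative_bias(0.15, key)
    dm_dblend = jax.grad(multiplicative_bias)(0.15, key)  # sensitivity dm/d(blend fraction)
    print("toy bias m =", float(m), " dm/d(blend fraction) =", float(dm_dblend))

In the thesis, the same mechanism would be applied to realistic, differentiable models of galaxy detection and shape measurement rather than to this analytic toy.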

Cosmological parameter inference using theoretical predictions of wavelet statistics

Launched in 2023, the Euclid satellite is surveying the sky in optical and infrared wavelengths to create an unprecedented map of the Universe's large-scale structure. A cornerstone of its mission is the measurement of weak gravitational lensing—subtle distortions in the shapes of distant galaxies. This phenomenon is a powerful cosmological probe, capable of tracing the evolution of dark matter and helping to distinguish between dark energy and modified gravity theories.
Traditionally, cosmologists have analyzed weak lensing data using second-order statistics (like the power spectrum) paired with a Gaussian likelihood model. This established approach, however, faces significant challenges:
- Loss of Information: Second-order statistics fully capture information only if the underlying matter distribution is Gaussian. In reality, the cosmic web is highly structured, with clusters, filaments, and voids, making this approach inherently lossy.
- Complex Covariance: The method requires estimating a covariance matrix, which is both cosmology-dependent and non-Gaussian. This necessitates running thousands of computationally intensive N-body simulations for each model, a massive and often impractical undertaking (a minimal illustration is sketched after this list).
- Systematic Errors: Incorporating real-world complications—such as survey masks, intrinsic galaxy alignments, and baryonic feedback—into this framework is notoriously difficult.
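As a concrete illustration of the covariance point above (a minimal sketch; the synthetic band powers, the 5% scatter, and the bin and mock counts are arbitrary stand-ins for power spectra measured on expensive N-body simulations), the standard Gaussian likelihood needs an inverse covariance estimated from many independent mock realisations, including the Hartlap correction for the noise in that estimate:

    # Minimal sketch of the standard second-order analysis: a Gaussian likelihood
    # whose covariance must be estimated from many mock realisations.
    import numpy as np

    rng = np.random.default_rng(42)
    n_bins, n_mocks = 20, 1000                        # power-spectrum bins, number of mocks

    true_cl = 1.0 / (1.0 + np.arange(n_bins))         # assumed fiducial band powers
    mocks = true_cl + 0.05 * true_cl * rng.standard_normal((n_mocks, n_bins))

    mean_cl = mocks.mean(axis=0)
    cov = np.cov(mocks, rowvar=False)                 # needs n_mocks >> n_bins to be invertible
    hartlap = (n_mocks - n_bins - 2) / (n_mocks - 1)  # debias the inverse-covariance estimate
    icov = hartlap * np.linalg.inv(cov)

    def gaussian_loglike(data, model):
        r = data - model
        return -0.5 * r @ icov @ r

    data = true_cl + 0.05 * true_cl * rng.standard_normal(n_bins)   # one "observation"
    print("log-likelihood at the fiducial model:", gaussian_loglike(data, mean_cl))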

In response to these limitations, a new paradigm has emerged: likelihood-free inference via forward modelling. This technique bypasses the need for a covariance matrix by directly comparing real data to synthetic observables generated from a forward model. Its advantages are profound: it eliminates the storage and computational burden of massive simulation sets, naturally incorporates high-order statistical information, and can seamlessly integrate systematic effects. However, this new method has its own hurdles: it demands immense GPU resources to process Euclid-sized surveys, and its conclusions are only as reliable as the simulations it uses, potentially leading to circular debates if simulations and observations disagree.

A recent breakthrough (Tinnaneri Sreekanth et al., 2024) offers a compelling path forward. This work provides the first theoretical framework to directly predict key wavelet statistics of weak-lensing convergence maps, exactly the kind Euclid will produce, for any given set of cosmological parameters. Ajani et al. (2021) have shown that the wavelet-coefficient L1-norm is extremely powerful for constraining cosmological parameters. This innovation promises to harness the power of advanced, non-Gaussian statistics without the traditional computational overhead, potentially unlocking a new era of precision cosmology. We have demonstrated that this theoretical prediction can be used to build a highly efficient emulator (Tinnaneri Sreekanth et al., 2025), dramatically accelerating the computation of these non-Gaussian statistics. However, it is crucial to note that this emulator, at its current stage, provides only the mean statistic and does not include cosmic variance. As such, it cannot yet be used for full statistical inference on its own.
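For readers unfamiliar with the statistic, the following toy sketch in Python shows what a per-scale wavelet L1-norm summary looks like. The Gaussian random map and the difference-of-Gaussians filters are illustrative stand-ins for a simulated convergence map and the starlet transform; in Ajani et al. (2021) the L1-norm is additionally binned in signal-to-noise per scale, which is omitted here for brevity:

    # Toy multi-scale l1-norm summary of a convergence map, in the spirit of the
    # starlet l1-norm of Ajani et al. (2021).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    kappa = gaussian_filter(rng.standard_normal((256, 256)), 2.0)   # toy convergence map

    scales = [1, 2, 4, 8]                                 # smoothing scales in pixels
    smoothed = [kappa] + [gaussian_filter(kappa, s) for s in scales]

    l1_per_scale = []
    for j in range(len(scales)):
        w_j = smoothed[j] - smoothed[j + 1]               # band-pass coefficients at scale j
        l1_per_scale.append(np.abs(w_j).sum())            # l1-norm of the coefficients

    print("l1-norm per scale:", np.round(l1_per_scale, 2))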

This PhD thesis aims to revolutionize the analysis of weak lensing data by constructing a complete, end-to-end framework for likelihood-free cosmological inference. The project begins by addressing the core challenge of stochasticity: we will first calculate the theoretical covariance of wavelet statistics, providing a rigorous mathematical description of their uncertainty. This model will then be embedded into a stochastic map generator, creating realistic mock data that captures the inherent variability of the Universe.
To ensure our results are robust, we will integrate a comprehensive suite of systematic effects—such as noise, masks, intrinsic alignments, and baryonic physics—into the forward model. The complete pipeline will be integrated and validated within a simulation-based inference framework, rigorously testing its power to recover unbiased cosmological parameters. The culmination of this work will be the application of our validated tool to the Euclid weak lensing data, where we will leverage non-Gaussian information to place competitive constraints on dark energy and modified gravity.

References
V. Ajani, J.-L. Starck and V. Pettorino, "Starlet l1-norm for weak lensing cosmology", Astronomy and Astrophysics, 645, L11, 2021.
V. Tinnaneri Sreekanth, S. Codis, A. Barthelemy, and J.-L. Starck, "Theoretical wavelet l1-norm from one-point PDF prediction", Astronomy and Astrophysics, 691, id.A80, 2024.
V. Tinnaneri Sreekanth, J.-L. Starck and S. Codis, "Generative modeling of convergence maps based on LDT theoretical prediction", Astronomy and Astrophysics, 701, id.A170, 2025.

Characterization and calibration of cryogenic detectors at the 100 eV scale for the detection of coherent neutrino scattering (CEvNS)

DESCRIPTION:

The NUCLEUS experiment [1] aims to detect reactor neutrinos via coherent elastic neutrino–nucleus scattering (CEvNS). Predicted in 1974 and first observed in 2017, this process provides a unique opportunity to test the Standard Model at low energies. Because the scattering is coherent over the entire nucleus, the cross section is enhanced by several orders of magnitude, making CEvNS also promising for reactor monitoring using neutrinos.
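For orientation, a textbook estimate of the coherent enhancement (not taken from the NUCLEUS references): for neutrino energies well below the nuclear mass, the total coherent cross section is approximately sigma ~ (G_F^2 E_nu^2 / 4 pi) Q_W^2, with the weak charge Q_W = N - (1 - 4 sin^2 theta_W) Z ~ N. The rate therefore scales roughly as the square of the neutron number; for a heavy nucleus such as tungsten (N ~ 110), this corresponds to an enhancement of several orders of magnitude over scattering on a single nucleon.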

The NUCLEUS experimental setup is currently being installed near the EDF nuclear reactors in Chooz (Ardennes, France), which constitute an intense neutrino source. The only physical signal of a CEvNS event is the tiny recoil of the target nucleus, with an energy below 1 keV. To detect this, NUCLEUS uses CaWO4 crystals of about 1 g, placed in a cryostat cooled to 15 mK. The nuclear recoil produces vibrations in the crystal lattice, equivalent to a temperature rise of about 100 µK, measured with a Transition Edge Sensor (TES) deposited on the crystal. These detectors achieve excellent energy resolutions of only a few eV and detection thresholds of order 10 eV [2]. The NUCLEUS setup was successfully tested and validated in 2024 at TU Munich [3], and data taking at Chooz is scheduled to start in summer 2026, simultaneously with the beginning of the PhD. An initial contribution will involve data acquisition and analysis at the reactor site. More specifically, the PhD student will be responsible for the characterization of the deployed cryogenic CaWO4 detectors: stability, energy resolution, calibration, and intrinsic background of the crystal.

Calibration at the sub-keV scale is a crucial challenge for CEvNS (and dark matter) experiments. Until recently, it was extremely difficult to generate nuclear recoils of known energy to characterize detector responses. The CRAB method [4, 5] addresses this issue by using thermal neutron capture (25 meV) on nuclei that constitute the cryogenic detector. The resulting compound nucleus has a well-known excitation energy — the neutron separation energy — between 5 and 8 MeV, depending on the isotope. When it de-excites by emitting a single gamma photon, the nucleus recoils with a precisely determined energy given by two-body kinematics. A calibration peak in the desired energy range of a few hundred eV then appears in the detector's energy spectrum. A first measurement in 2022, using a NUCLEUS CaWO4 detector and a commercial 252Cf neutron source, validated this method [6].
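For orientation, the recoil energy follows directly from two-body kinematics: when the compound nucleus of mass M de-excites by emitting a single gamma of energy E_gamma, the nucleus recoils with E_R = E_gamma^2 / (2 M c^2). Taking neutron capture on 182W in CaWO4 as an illustration (E_gamma ~ S_n ~ 6.19 MeV, M c^2 ~ 183 x 931.5 MeV ~ 1.7 x 10^5 MeV), one finds E_R ~ 38.3 MeV^2 / (3.4 x 10^5 MeV) ~ 112 eV, i.e. a calibration peak at the 100 eV scale, as observed in [6].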

The second part of the PhD will take place within the “high-precision” phase of the project, which consists in performing measurements with a pure thermal neutron beam from the TRIGA-Mark-II reactor in Vienna (TU Wien, Austria). The calibration setup was successfully installed and characterized in 2025 [7]. It consists of a cryostat housing the cryogenic detectors to be characterized, surrounded by large BaF2 crystals for coincidence detection of the nuclear recoil and the gamma ray that induced it. The whole setup is placed directly on the neutron beam axis, which provides a flux of about 450 n/cm²/s. This coincidence technique will significantly reduce background and extend the CRAB method to a wider energy range and to materials used in most cryogenic detectors. These measurements are expected to provide a unique characterization of the response of cryogenic detectors in the energy region of interest for light dark matter searches and coherent neutrino scattering. In parallel with the measurement of nuclear recoils, the installation of a low-energy X-ray source in the cryostat will generate electronic recoils, enabling a direct comparison between the detector responses to sub-keV energy deposits produced by nuclear and electronic recoils.

The arrival of the PhD student will coincide with the completion of the measurement program on CaWO4 and Al2O3 detectors of NUCLEUS and with the start of the measurement programs on Ge (TESSERACT project) and Si (BULLKID project) detectors.
The high-precision measurements will also open a new sensitivity window to subtle effects coupling nuclear physics (nuclear de-excitation times) and solid-state physics (nuclear recoil times in matter, and the creation of crystal defects induced by nuclear recoils) [8].

The PhD student will be deeply involved in all aspects of the experiment: simulation, data analysis, and interpretation of the obtained results.

WORK PLAN:

The PhD student will actively participate in data taking and in the analysis of the first results from the NUCLEUS cryogenic CaWO4 detectors at Chooz. This work will be carried out in collaboration with the Nuclear Physics Department (DPhN), the Particle Physics Department (DPhP) of CEA-Saclay, and the TU Munich team. It will begin with familiarization with the CAIT analysis framework used for cryogenic detectors. The student will focus in particular on detector calibration, studying the detector response to electronic recoils induced by optical photon pulses injected through fibers and by X-ray fluorescence generated by cosmic rays. Once this calibration is established, two types of backgrounds will be investigated: nuclear recoils in the keV range induced by cosmogenic fast neutrons, and a low-energy background, known as the Low Energy Excess (LEE), intrinsic to the detector.
The comparison between the experimental and simulated fast neutron background spectra will be analyzed in light of the differences between nuclear and electronic recoil responses measured in the CRAB project. The long data-taking periods at the Chooz site will also be used to study the time evolution of the LEE background. This work will be conducted in collaboration with solid-state physics experts from the Institute for Applied Sciences and Simulation (CEA/ISAS) to better understand the origin of the LEE, which remains a major open question in the cryogenic detector community.
The analysis skills acquired on NUCLEUS will then be applied to the high-precision CRAB measurement campaigns planned for 2027 at the TRIGA reactor (TU Wien) with Ge and Si detectors. The student will be deeply involved in the setup, data acquisition, and analysis of results. The planned measurements on germanium, using both phonon and ionization channels, have the potential to resolve the current ambiguity in the ionization yield of low-energy nuclear recoils, a key factor for the sensitivity of future experiments.
The high calibration precision will also be exploited to study fine effects in nuclear and solid-state physics, such as timing effects and crystal defect formation induced by nuclear recoils in the detector. This study will be conducted in synergy with teams from CEA/IRESNE and CEA/ISAS, who provide detailed simulations of nuclear de-excitation gamma cascades and molecular dynamics simulations of nuclear recoil propagation in matter.

Through this work, the student will receive comprehensive training as an experimental physicist, including strong components in simulation and data analysis, as well as hands-on experience with cryogenic techniques during the commissioning of the NUCLEUS and CRAB detectors. The proposed contributions are expected to lead to several publications during the PhD, with high visibility in the CEvNS and dark matter communities. Within the CEA, the student will also benefit from the exceptionally cross-disciplinary nature of this project, which already fosters regular interaction among the communities of nuclear physics, particle physics and condensed matter physics.

COLLABORATIONS:

NUCLEUS: Germany (TU-Munich, MPP), Austria (HEPHY, TU-Wien), Italy (INFN), France (CEA-Saclay).
CRAB: Germany (TU-Munich, MPP), Austria (HEPHY, TU-Wien), Italy (INFN), France (CEA-Saclay, CNRS-IJCLab, CNRS-IP2I, CNRS-LPSC).

BIBLIOGRAPHY:

[1] NUCLEUS Collaboration, Exploring CEvNS with NUCLEUS at the Chooz nuclear power plant, The European Physical Journal C 79 (2019) 1018.
[2] R. Strauss et al., Gram-scale cryogenic calorimeters for rare-event searches, Phys. Rev. D 96 (2017) 022009.
[3] H. Abele et al., Particle background characterization and prediction for the NUCLEUS reactor CEvNS experiment, https://arxiv.org/abs/2509.03559
[4] L. Thulliez, D. Lhuillier et al., Calibration of nuclear recoils at the 100 eV scale using neutron capture, JINST 16 (2021) P07032 (https://arxiv.org/abs/2011.13803)
[5] https://irfu.cea.fr/dphp/Phocea/Vie_des_labos/Ast/ast.php?id_ast=4970
[6] H. Abele et al., Observation of a nuclear recoil peak at the 100 eV scale induced by neutron capture, Phys. Rev. Lett. 130, 211802 (2023) (https://arxiv.org/abs/2211.03631)
[7] H. Abele et al., The CRAB facility at the TU Wien TRIGA reactor: status and related physics program (https://arxiv.org/abs/2505.15227)
[8] G. Soum-Sidikov et al., Study of collision and γ-cascade times following neutron-capture processes in cryogenic detectors, Phys. Rev. D 108, 072009 (2023) (https://arxiv.org/abs/2305.10139)

Axion searches in the SuperDAWA experiment with superconducting magnets and microwave radiometry

Axions are hypothetical particles that could both explain a fundamental problem in strong interactions (the conservation of CP symmetry in QCD) and account for a significant fraction of dark matter. Their direct detection is therefore a key challenge in both particle physics and cosmology.

The SuperDAWA experiment, currently under construction at CEA Saclay, uses superconducting magnets and a microwave radiometer housed in a cryostat. This setup aims to convert potential axions into measurable radio waves, with frequencies directly linked to the axion mass.

The proposed PhD will combine numerical modeling with hands-on experimental work. The student will develop a detailed model of the experiment, including magnetic fields, radio signal propagation, and detector electronics, validated step by step with real measurements. Once the experiment is running, the PhD candidate will participate in data-taking campaigns and their analysis.

This project provides a unique opportunity to contribute to a state-of-the-art experiment in experimental physics, with direct implications for the global search for dark matter.

Testing the Standard Model in the Higgs-top sector in a new inclusive way with multiple leptons using the ATLAS detector at the LHC

The LHC collides protons at 13.6 TeV, producing a massive dataset to study rare processes and search for new physics. The production of a Higgs boson in association with a single top quark (tH) in the multi-lepton final state (2 same-sign leptons or 3 charged leptons) is particularly promising, but challenging to analyze due to undetected neutrinos and fake leptons. The tH process is especially interesting because its small Standard Model cross section originates from a subtle destructive interference between diagrams including the Higgs coupling to the W boson and the Higgs coupling to the top quark. This makes tH uniquely sensitive: even small deviations from the Standard Model can strongly enhance its production rate. The measurement of the tH cross section is delicate because the ttH and ttW processes have similar topologies and much larger cross sections, requiring a simultaneous extraction to obtain a reliable result and properly account for correlations between signals. ATLAS observed a moderate excess of tH using the Run 2 dataset (2.8 σ), making the analysis of Run 3 data including these correlations crucial. The thesis will first exploit AI algorithms based on Transformer architectures to reconstruct event kinematics and extract observables sensitive to the CP nature of the Higgs-top coupling. In a second phase, a global approach will be adopted to analyze simultaneously the ttW, ttZ, ttH, tH, and 4-top processes, searching for anomalous couplings, including those violating CP symmetry, within the framework of the Standard Model Effective Field Theory (SMEFT). This study will provide the first complete measurement of tH in the multi-lepton channel with Run 3 data and will pave the way for a global analysis of rare processes and anomalous couplings at the LHC in this channel.

Precision measurements of neutrino oscillations and search for CP violation with the T2K and Hyper-Kamiokande experiments

The study of neutrino oscillations has entered a precision era, driven by long-baseline experiments like T2K, which compare neutrino signals at near and far detectors to probe key parameters, including possible Charge-Parity Violation (CPV). Detecting CPV in neutrinos could help explain the Universe's matter–antimatter asymmetry. T2K's 2020 results gave first hints of CPV but remain limited by statistics. To improve sensitivity, T2K has undergone major upgrades: the most upstream part of its near detector has been replaced with a new target, and the accelerator power is being increased (up to 800 kW by 2025, aiming for 1.3 MW by 2030). The next-generation Hyper-Kamiokande (Hyper-K) experiment, starting in 2028, will reuse the T2K beam and near detector but with a new far detector 8.4 times larger than Super-Kamiokande, greatly boosting the statistics. The IRFU group has played a key role in the near-detector upgrade and is now focusing on analysis, which is essential for controlling the systematic uncertainties that will dominate in the high-statistics Hyper-K era. The proposed PhD work centers on analyzing the new near-detector data: designing new sample selections that take into account the low-momentum protons and neutrons produced in neutrino interactions, and refining neutrino–nucleus interaction models to improve energy reconstruction. The second goal is to propagate these improvements to Hyper-K, guiding future oscillation analyses. The student will also contribute to Hyper-K construction and calibration (electronics testing at CERN, installation in Japan).

Multi-Probe Cosmological Mega-Analysis of the DESI Survey: Standard and Field-Level Bayesian Inference

The large-scale structure (LSS) of the Universe is probed through multiple observables: the distribution of galaxies, weak lensing of galaxies, and the cosmic microwave background (CMB). Each probe tests gravity on large scales and the effects of dark energy, but their joint analysis provides the best control over nuisance parameters and yields the most precise cosmological constraints.

The DESI spectroscopic survey maps the 3D distribution of galaxies. By the end of its 5-year nominal survey this year, it will have observed 40 million galaxies and quasars — ten times more than previous surveys — over one third of the sky, up to a redshift of z = 4.2. Combining DESI data with CMB and supernova measurements, the collaboration has revealed a potential deviation of dark energy from a cosmological constant.

To fully exploit these data, DESI has launched a “mega-analysis” combining galaxies, weak lensing of galaxies (Euclid, UNIONS, DES, HSC, KIDS) and the CMB (Planck, ACT, SPT), aiming to deliver the most precise constraints ever obtained on dark energy and gravity. The student will play a key role in developing and implementing this multi-probe analysis pipeline.

The standard analysis compresses observations into a power spectrum for cosmological inference, but this approach remains suboptimal. The student will develop an alternative, called field-level analysis, which directly fits the observed density and lensing field, simulated from the initial conditions of the Universe. This constitutes a very high-dimensional Bayesian inference problem, which will be tackled using recent gradient-based samplers and GPU libraries with automatic differentiation. This state-of-the-art method will be validated alongside the standard approach, paving the way for a maximal exploitation of DESI data.
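As a toy illustration of why automatic differentiation is central here (a sketch in Python with jax; the one-dimensional field, the assumed prior power spectrum, the Gaussian noise model and the plain gradient-ascent loop are all simplifications, whereas the real analysis uses a full forward model of the survey), the gradient of a field-level log-posterior with respect to every pixel of the field is obtained in a single call:

    # Toy field-level inference: differentiate a log-posterior over a whole field.
    import jax
    import jax.numpy as jnp

    n = 512                                           # pixels of a toy 1-D "density field"
    k = jnp.fft.rfftfreq(n) + 1e-3                    # Fourier frequencies (regularised at k = 0)
    prior_pk = 1.0 / k**2                             # assumed prior power spectrum (smoothness prior)
    noise_sigma = 0.5

    def log_posterior(field, data):
        fk = jnp.fft.rfft(field)
        log_prior = -0.5 * jnp.sum(jnp.abs(fk) ** 2 / (n * prior_pk))    # Gaussian prior on the field
        log_like = -0.5 * jnp.sum((data - field) ** 2) / noise_sigma**2  # Gaussian noise model
        return log_prior + log_like

    truth = jnp.sin(jnp.linspace(0.0, 4.0 * jnp.pi, n))                  # smooth toy "true field"
    data = truth + noise_sigma * jax.random.normal(jax.random.PRNGKey(0), (n,))

    grad_logp = jax.jit(jax.grad(log_posterior))      # d(log posterior)/d(field), all 512 values at once
    field = jnp.zeros(n)
    for _ in range(300):                              # plain gradient ascent towards the MAP field
        field = field + 0.05 * grad_logp(field, data)

    print("rms of (MAP field - truth): ", float(jnp.sqrt(jnp.mean((field - truth) ** 2))))
    print("rms of (noisy data - truth):", float(jnp.sqrt(jnp.mean((data - truth) ** 2))))

The same gradients are what gradient-based samplers such as HMC or MALA would use to explore the posterior over millions of field values, rather than merely finding its maximum as in this sketch.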

Search for di-Higgs production in the multilepton channel with the ATLAS detector using 13.6 TeV data

In the Standard Model (SM), the Higgs field is responsible for the breaking of the electroweak symmetry, thereby giving mass to the W and Z bosons. The discovery of the Higgs boson in 2012 at the LHC provided experimental confirmation of the existence of this field. Despite extensive studies, the self-coupling of the Higgs boson remains unmeasured, yet it is crucial for understanding the shape of the Higgs potential and the stability of the universe's vacuum. Studying Higgs pair production (di-Higgs) is the only direct way to access this parameter, providing key insights into the electroweak phase transition after the Big Bang. Di-Higgs production is extremely rare (cross-section ~40 fb for proton-proton collisions at a centre-of-mass energy of 13.6 TeV), and among its possible final states, the multilepton channel is promising due to its distinctive kinematics, though complex due to diverse topologies and backgrounds. Recent advances in artificial intelligence, particularly transformer-based architectures respecting physical symmetries, have significantly improved event reconstruction in complex Higgs channels such as HH→4b or HH→bbττ. Applying these techniques to the multilepton channel offers strong potential to enhance sensitivity. This PhD project will focus on searching for di-Higgs production in the multilepton final state with the full ATLAS Run 3 dataset at 13.6 TeV, leveraging the group's ongoing ttH multilepton work to develop advanced AI-based reconstruction and analysis methods. The project aims to approach SM sensitivity for the Higgs self-coupling.

Higgs boson decay into a Z boson and a photon and time resolution of the CMS electromagnetic calorimeter

The thesis focuses on Higgs boson physics, specifically one of its rare and as-yet unobserved decay channels: the decay into a Z boson and a photon (Zgamma channel). This decay not only complements our understanding of the Higgs boson but also uniquely involves all currently known neutral bosons (Higgs, Z, photon) and is sensitive to potential processes beyond the Standard Model. The final state of the analysis consists of the two leptons from the Z boson decay (muons or electrons for this study) and a photon. The background consists of other Standard Model processes that produce two leptons and a photon (or misidentified particles). With all data gathered during LHC Run 2 (2015-2018) and Run 3 (2021-2026), it should be possible to obtain evidence for this decay, that is, to observe it with a statistical significance exceeding three standard deviations.

In addition, the thesis includes an instrumental part focused on optimizing the time resolution of the CMS electromagnetic calorimeter (ECAL). Although designed for precise energy measurements, the ECAL also shows excellent timing resolution for photons and electrons (approximately 150 ps in collisions, 70 ps in test-beam conditions). In a final state populated by photons from multiple overlapping events (pileup), the arrival time of a photon helps to verify its compatibility with the Higgs boson decay vertex. This will be crucial during the high-luminosity phase of the LHC (2029 onward), when the number of overlapping events is expected to be about three times greater than today. New readout electronics for the ECAL are being developed and will be installed in CMS during the thesis. The new electronics achieve a timing resolution of 30 ps for high-energy photons and electrons. This performance was, however, measured in ideal beam conditions (no magnetic field, no tracker material in front of the ECAL, no pileup). The thesis aims to develop algorithms to maintain this performance within CMS.
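To set the scale (a back-of-the-envelope estimate, not a result from the CMS timing studies): a time resolution sigma_t corresponds to a path-length uncertainty of roughly c x sigma_t, i.e. about 4.5 cm for 150 ps and about 9 mm for 30 ps, to be compared with the few-centimetre spread of the collision vertices along the beam line. This is why improved timing makes the photon arrival time a useful handle for checking its compatibility with a given vertex.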

The thesis work is a continuation of the ongoing Zgamma analysis within the CMS group at CEA Saclay and of the timing-performance analysis of the ECAL, in which the Saclay group is a leader. Simple, robust, and efficient analysis tools written in modern C++ and leveraging the ROOT analysis framework make it possible to understand and contribute to every stage of the analysis, from raw data to published results. The CMS Saclay group has held leading responsibilities in CMS since its construction, including deep expertise in Higgs physics, electron and photon reconstruction, detector simulation, and machine-learning and artificial-intelligence techniques.

Regular trips to CERN are foreseen for presenting the results of this work to the CMS collaboration, for participating in the laboratory tests planned for the new ECAL electronics, and for taking part in its installation.

The MINI-BINGO demonstrator: advancing the quest to unveil the neutrino nature

BINGO is an innovative neutrino physics project designed to lay the groundwork for a large-scale bolometric experiment dedicated to the search for neutrinoless double beta decay. The goal is to achieve an extremely low background index—on the order of 10^-5 counts/(keV·kg·yr)—while delivering excellent energy resolution in the region of interest. These performance levels will enable the exploration of lepton number violation with unprecedented sensitivity.

The project relies on scintillating bolometers, which are particularly effective at rejecting the dominant background caused by surface alpha particles. It focuses on two highly promising isotopes, 100Mo and 130Te, whose complementary properties make them both strong candidates for future large-scale investigations.

BINGO introduces three major innovations to the well-established heat-light hybrid bolometer technology. First, the sensitivity of the light detectors will be enhanced by an order of magnitude through the use of Neganov-Luke amplification. Second, a novel detector assembly design will reduce surface radioactivity contributions by at least an order of magnitude. Third, and for the first time in a macrobolometer array, an internal active shield made of ultrapure BGO scintillators with bolometric light readout will be implemented to suppress external gamma background.

As part of this thesis work, the student will take part in the assembly and installation of the MINI-BINGO demonstrator within the cryostat recently installed at the Modane Underground Laboratory. He/she will be involved in data acquisition and analysis, and will contribute to evaluating the background rejection achieved by the detector in its final configuration.
