Point Spread Function Modelling for Space Telescopes with a Differentiable Optical Model
Context
Weak gravitational lensing [1] is a powerful probe of the Large Scale Structure of our Universe. Cosmologists use weak lensing to study the nature of dark matter and its spatial distribution. Weak lensing missions require highly accurate shape measurements of galaxy images. The instrumental response of the telescope, called the point spread function (PSF), deforms the observed images. This deformation can be mistaken for a weak lensing effect in the galaxy images, making the PSF one of the primary sources of systematic error in weak lensing science. Estimating a reliable and accurate PSF model is therefore crucial for the success of any weak lensing mission [2]. The PSF field can be interpreted as a convolutional kernel that affects each of our observations of interest and that varies spatially, spectrally, and temporally. The PSF model needs to cope with each of these variations. We constrain the PSF model using specific stars in the field of view that can be considered point sources. These unresolved stars provide degraded samples of the PSF field. The observations go through different degradations depending on the properties of the telescope, including undersampling, integration over the instrument passband, and additive noise. We build the PSF model from these degraded observations and then use it to infer the PSF at the positions of galaxies. This procedure constitutes the ill-posed inverse problem of PSF modelling. See [3] for a recent review of PSF modelling.
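The degradations listed above can be sketched as a toy forward model. A hypothetical Gaussian stands in for the monochromatic PSF (a real optical PSF would come from a wavefront model), and the image sizes, passband sampling, and noise level are illustrative assumptions, not Euclid values:

```python
import numpy as np

def monochromatic_psf(wavelength_nm, size=32):
    """Gaussian stand-in for the PSF at one wavelength (width grows with wavelength)."""
    y, x = np.mgrid[:size, :size] - size // 2
    sigma = 1.5 * wavelength_nm / 550.0
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()

def observe_star(sed, wavelengths_nm, downsample=2, noise_sigma=1e-3, seed=0):
    """Star observation: SED-weighted passband integration, undersampling, noise."""
    weights = sed / sed.sum()
    poly = sum(w * monochromatic_psf(lam) for w, lam in zip(weights, wavelengths_nm))
    n = poly.shape[0] // downsample
    under = poly.reshape(n, downsample, n, downsample).sum(axis=(1, 3))
    rng = np.random.default_rng(seed)
    return under + rng.normal(0.0, noise_sigma, under.shape)

wavelengths = np.linspace(550, 900, 8)   # VIS-like passband samples
flat_sed = np.ones_like(wavelengths)     # hypothetical flat SED
obs = observe_star(flat_sed, wavelengths)
print(obs.shape)                         # (16, 16): the undersampled observation
```

Inverting this forward process, i.e. recovering the well-sampled, noise-free PSF field from such observations, is the ill-posed inverse problem described above.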
The recently launched Euclid survey represents one of the most complex challenges for PSF modelling. Because of the very broad passband of Euclid's visible imager (VIS), ranging from 550 nm to 900 nm, PSF models need to capture not only the spatial variations of the PSF field but also its chromatic variations. Each star observation is integrated with the object's spectral energy distribution (SED) over the whole VIS passband. As the observations are undersampled, a super-resolution step is also required. A recent model coined WaveDiff [4], based on a differentiable optical model, was proposed to tackle the PSF modelling problem for Euclid. WaveDiff achieved state-of-the-art performance and is currently being tested with recent observations from the Euclid survey.
The James Webb Space Telescope (JWST) was recently launched and is producing outstanding observations. The COSMOS-Web collaboration [5] is a wide-field JWST treasury program that maps a contiguous 0.6 deg² field. The COSMOS-Web observations are available and provide a unique opportunity to test and develop a precise PSF model for JWST. In this context, several science cases beyond weak gravitational lensing can profit greatly from a precise PSF model. Examples include strong gravitational lensing [6], where the PSF plays a crucial role in the reconstruction, and exoplanet imaging [7], where PSF speckles can mimic the appearance of exoplanets; subtracting an accurate and precise PSF model is therefore essential to improve the imaging and detection of exoplanets.
PhD project
The candidate will aim to develop more accurate and better-performing PSF models for space-based telescopes by exploiting a differentiable optical framework, focusing the effort on Euclid and JWST.
The WaveDiff model is built in wavefront space and does not consider pixel-based or detector-level effects. These pixel errors cannot be modelled accurately in the wavefront, as they arise directly on the detectors and are unrelated to the aberrations of the telescope's optics. Therefore, as a first direction, we will extend the PSF modelling approach to include detector-level effects by combining a parametric with a data-driven (learned) approach. To accomplish this objective, we will exploit the automatic differentiation capabilities of the machine learning frameworks (e.g. TensorFlow, PyTorch, JAX) underlying the WaveDiff PSF model.
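The hybrid parametric-plus-learned idea can be illustrated with a minimal sketch: a one-parameter "optical" term plus a free pixel-level "detector" term, fitted jointly by gradient descent. The model, data, and step sizes are hypothetical, and the mean-squared-error gradients are written out by hand; an autodiff framework such as TensorFlow, PyTorch, or JAX would return them automatically for the actual WaveDiff model.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 64)
template = np.exp(-x**2)                        # stand-in "optical" profile
true_det = 0.1 * np.sin(np.linspace(0, 6, 64))  # stand-in detector signature
data = 2.0 * template + true_det + rng.normal(0, 0.01, 64)

amp = 1.0                                       # parametric part (one parameter)
det = np.zeros(64)                              # data-driven part (one value per pixel)
lr_amp, lr_det = 0.1, 10.0
for _ in range(500):
    resid = amp * template + det - data         # model minus observation
    # Gradients of the mean-squared error (what autodiff would return):
    amp -= lr_amp * 2 * np.mean(resid * template)
    det -= lr_det * 2 * resid / resid.size
# Both parts now jointly fit the simulated observation.
```

In WaveDiff itself the parametric part lives in wavefront space while the learned detector part would act at the pixel level; differentiability is what lets both be optimized against the same star observations.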
As a second direction, we will consider the joint estimation of the PSF field and the stellar spectral energy distributions (SEDs) by exploiting repeated exposures, or dithers. The goal is to improve and calibrate the original SED estimation by exploiting the PSF modelling information. We will rely on the fact that repeated observations of the same object change the star image, as it is imaged at different focal-plane positions, while sharing the same SED.
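A toy linear version of this idea, under assumed (hypothetical) position- and wavelength-dependent Gaussian PSFs: because the dithered exposures share one SED, stacking them into a single least-squares system constrains that SED better than either exposure alone.

```python
import numpy as np

def psf(pos, lam, size=16):
    """Hypothetical monochromatic PSF: width depends on position and wavelength."""
    y, x = np.mgrid[:size, :size] - size // 2
    sigma = 1.0 + 0.5 * pos + lam / 900.0
    p = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return (p / p.sum()).ravel()

lams = [550.0, 725.0, 900.0]
true_sed = np.array([0.5, 0.3, 0.2])       # hypothetical normalized SED
positions = [0, 1]                         # two dither positions

# Each exposure is the SED-weighted sum of that position's monochromatic PSFs.
exposures = [sum(s * psf(p, l) for s, l in zip(true_sed, lams)) for p in positions]

# The SED is shared across dithers, so both exposures go into one linear system.
A = np.vstack([np.column_stack([psf(p, l) for l in lams]) for p in positions])
b = np.concatenate(exposures)
sed_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(sed_hat, 3))                # recovers the shared SED [0.5, 0.3, 0.2]
```

Each additional dither adds equations while the unknown SED stays the same, improving the conditioning of the joint estimate.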
Another direction will be to extend WaveDiff to more general astronomical observatories, such as JWST, that have smaller fields of view. We will need to constrain the PSF model with observations from several bands so as to build a unique PSF model constrained by more information. The objective is to develop the next PSF model for JWST, make it available for widespread use, and validate it with real data from the COSMOS-Web JWST program.
A further direction will be to extend the performance of WaveDiff by including a continuous field in the form of an implicit neural representation [8], or neural field [9], to address the spatial variations of the PSF in wavefront space with a more powerful and flexible model.
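The flavour of such a continuous field can be sketched with random Fourier features, a key ingredient of neural-field models: focal-plane position is encoded with sines and cosines and mapped to a toy wavefront coefficient. Here the encoding is fixed and the head is linear (fitted by least squares); a full implicit neural representation would instead train an MLP end-to-end with an autodiff framework. All names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, (200, 2))          # star positions on the focal plane
# Toy ground-truth spatial field for one wavefront coefficient:
coeff = np.sin(2 * pos[:, 0]) * np.cos(3 * pos[:, 1])

freqs = rng.normal(0.0, 2.0, (2, 32))       # random Fourier frequencies

def encode(xy):
    """Continuous positional encoding of focal-plane coordinates."""
    proj = xy @ freqs
    return np.hstack([np.sin(proj), np.cos(proj)])

# Linear head fitted by least squares on the encoded star positions.
w, *_ = np.linalg.lstsq(encode(pos), coeff, rcond=None)

# The field is continuous: it can be queried at any focal-plane position.
pred = encode(np.array([[0.3, -0.4]])) @ w
```

Unlike a polynomial interpolation on a fixed grid of stars, such a representation gives smooth spatial variations that can be queried at any galaxy position.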
Finally, throughout the PhD, the candidate will collaborate on Euclid’s data-driven PSF modelling effort, which consists of applying WaveDiff to real Euclid data, and the COSMOS-Web collaboration to exploit JWST observations.
References
[1] R. Mandelbaum. “Weak Lensing for Precision Cosmology”. In: Annual Review of Astronomy and Astrophysics 56 (2018), pp. 393–433. doi: 10.1146/annurev-astro-081817-051928. arXiv: 1710.03235.
[2] T. I. Liaudat et al. “Multi-CCD modelling of the point spread function”. In: A&A 646 (2021), A27. doi: 10.1051/0004-6361/202039584.
[3] T. I. Liaudat, J.-L. Starck, and M. Kilbinger. “Point spread function modelling for astronomical telescopes: a review focused on weak gravitational lensing studies”. In: Frontiers in Astronomy and Space Sciences 10 (2023). doi: 10.3389/fspas.2023.1158213.
[4] T. I. Liaudat, J.-L. Starck, M. Kilbinger, and P.-A. Frugier. “Rethinking data-driven point spread function modeling with a differentiable optical model”. In: Inverse Problems 39.3 (Feb. 2023), p. 035008. doi: 10.1088/1361-6420/acb664.
[5] C. M. Casey et al. “COSMOS-Web: An Overview of the JWST Cosmic Origins Survey”. In: The Astrophysical Journal 954.1 (Aug. 2023), p. 31. doi: 10.3847/1538-4357/acc2bc.
[6] A. Acebron et al. “The Next Step in Galaxy Cluster Strong Lensing: Modeling the Surface Brightness of Multiply Imaged Sources”. In: ApJ 976.1 (Nov. 2024), p. 110. doi: 10.3847/1538-4357/ad8343. arXiv: 2410.01883 [astro-ph.GA].
[7] B. Y. Feng et al. “Exoplanet Imaging via Differentiable Rendering”. In: IEEE Transactions on Computational Imaging 11 (2025), pp. 36–51. doi: 10.1109/TCI.2025.3525971.
[8] Y. Xie et al. “Neural Fields in Visual Computing and Beyond”. In: arXiv e-prints (Nov. 2021). doi: 10.48550/arXiv.2111.11426. arXiv: 2111.11426 [cs.CV].
[9] B. Mildenhall et al. “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis”. In: arXiv e-prints (Mar. 2020). doi: 10.48550/arXiv.2003.08934. arXiv: 2003.08934 [cs.CV].
Validation of new APOLLO3 neutron transport calculation models for Light Water Reactors using multigroup Monte Carlo simulations combined with a perturbative approach
For the past twelve years, CEA has been developing APOLLO3, a deterministic multi-purpose neutron transport code that is starting to be used for reactor studies. A classical two-step APOLLO3 calculation scheme is based on a first stage of two-dimensional infinite-lattice calculations in fine transport, generating multi-parameter cross-section libraries that are used in the second stage of 3D core calculations. In the case of a large power reactor, the core calculation requires approximations whose accuracy can differ depending on the type of application.
The reference calculation schemes of the SHEM-MOC type and the industrial schemes of the REL2005 type, still in use at the lattice stage by CEA and its industrial partners EDF and Framatome, were developed in the mid-2000s based on the methods available in the APOLLO2.8 code. Since then, new methods have been implemented in the APOLLO3 code and individually verified and validated, demonstrating their ability to improve the quality of results at the lattice stage. These include new self-shielding methods (subgroup and Tone methods), the use of surface line sources in flux calculations with the method of characteristics, flux reconstruction for burnup calculations, and a new 383-group fine energy mesh.
The aim of this thesis is to define and validate two new lattice calculation schemes for LWR applications, to be used in the future calculation tools of CEA and its partners. The goal is to integrate all or part of the new calculation methods while keeping calculation times reasonable for the reference scheme and compatible with fast-running routine usage for the industrial scheme. The calculation schemes will be validated in 2D on geometries taken from the VERA benchmark. Validation will be carried out using an innovative approach combining continuous-energy or multigroup Monte Carlo calculations with a perturbation analysis.
Designing a fast reactor burnup credit validation experiment in the JHR reactor
The primary mission of the Jules Horowitz experimental nuclear Reactor (JHR) is to meet the irradiation needs of materials and fuels for the current nuclear industry and future generations. It is expected to start operating around 2032. The design of the first wave of experimental devices for the JHR already includes specifications for GEN2 and GEN3 industrial constraints. On the other hand, the field of experiments essential to GEN4 Fast Breeder Reactors remains quite open in the longer term, as no fast-spectrum irradiation facility is currently available.
The objective of this thesis is to study the feasibility of integral experiments in the JHR, or another light water reactor, for validating the reactivity loss with innovative FBR fuels.
In the first part of this thesis, the fission products (FPs) that contribute to the loss of reactivity in a typical FBR will be identified and ranked by importance. The second part concerns the activation measurement and evaluation of the capture cross sections of stable FPs in a fast spectrum. It involves the design, specification, and realization of a "stable" FBR-FP target in the ILL reactor or in the CABRI reactor fuel recovery station (potentially with thermal neutron shields). The third and final part is the design of an experiment in the JHR to generate and characterize FBR FPs. This experiment should be sufficiently representative of fuel irradiation conditions in an FBR. The goal is to access the FP inventory by underwater spectrometry in the JHR and by integral reactivity weighing before and after irradiation in CABRI or another available facility.
The thesis will be carried out in a team experienced in the physics and thermal-hydraulics characterization of the JHR. The candidate will be advised by several experts based in the department and will have the opportunity to present his/her results to the nuclear industry partners (CEA, EDF, Framatome, Orano, Technicatome, etc.).
From Combustion to Astrophysics: Exascale Simulations of Fluid/Particle Flows
This thesis focuses on the development of advanced numerical methods to simulate fluid-particle interactions in complex environments. These methods, initially used in industrial applications such as combustion and multiphase flows, will be enhanced for integration into simulation codes for exascale supercomputers and adapted to meet the needs of astrophysics. The objective is to enable the study of astrophysical phenomena such as the dynamics of dust in protoplanetary disks and the structuring of dust in protostars and the interstellar medium. The expected outcomes include a better understanding of planetary formation mechanisms and disk structuring, as well as advancements in numerical methods that will benefit both industrial and astrophysical sciences.
First observations of the TeV gamma-ray sky with the NectarCAM camera for the CTA observatory
Very high energy gamma-ray astronomy is a relatively young branch of astronomy (about 30 years old) that looks at the sky above 50 GeV. After the success of the H.E.S.S. array in the 2000s, an international observatory, the Cherenkov Telescope Array (CTA), should start operating by 2026. This observatory will include a total of 50 telescopes, distributed over two sites. IRFU is involved in the construction of NectarCAM, a camera intended to equip the "medium-sized" telescopes (MSTs) of CTA. The first NectarCAM (of the nine planned) is being integrated at IRFU and will be shipped on site in 2025. Once the camera is installed, the first astronomical observations will take place, allowing the camera's operation to be fully validated. The thesis aims to finalize the darkroom tests at IRFU, prepare the installation, and validate the operation of the camera on the CTA site with the first astronomical observations. The student is also expected to participate in H.E.S.S. data analysis on astroparticle topics (search for primordial black holes, constraints on Lorentz invariance using distant AGN).
Towards a multimodal photon irradiation platform: foundations and conceptualization
Photonic irradiation techniques exploit the interactions between a beam of high-energy photons and matter to carry out non-destructive measurements. By inducing photonuclear reactions such as photon activation, nuclear resonance fluorescence (NRF) and photofission, these techniques enable deep probing of matter. Combining these different nuclear measurement techniques within a single irradiation platform would enable precise, quantitative identification of a wide variety of elements, probing the volume of the materials or objects under study. The high-energy photon beam is generally produced by Bremsstrahlung within a conversion target of a linear electron accelerator. An innovative alternative is to exploit the high-energy electrons delivered by a laser-plasma source, converted to photons via Bremsstrahlung or inverse Compton scattering. A platform based on such a source would open up new possibilities, as laser-plasma sources can reach significantly higher energies, enabling access to new advanced imaging techniques and applications. The aim of this thesis is to establish the foundations of, and conceptualize, a multimodal photonic irradiation platform. Such a device would be based on a laser-plasma source and would combine photon activation, nuclear resonance fluorescence (NRF) and photofission techniques. By pushing back the limits of non-destructive nuclear measurements, this platform would offer innovative solutions to major challenges in strategic sectors such as security and border control, radioactive waste package management, and the recycling industry.
ARTIFICIAL INTELLIGENCE TO SIMULATE BIG DATA AND SEARCH FOR THE HIGGS BOSON DECAY TO A PAIR OF MUONS WITH THE ATLAS EXPERIMENT AT THE LARGE HADRON COLLIDER
There is growing interest in new artificial intelligence techniques to manage the massive volume of data collected by particle physics experiments, particularly at the LHC collider. This thesis proposes to study these new techniques for simulating the background to the rare Higgs boson decay into two muons, as well as to implement a new artificial intelligence method for simulating the resolution of the muon spectrometer detector, which is crucial for this analysis.
SEARCHES FOR DIFFUSE EMISSIONS IN VERY-HIGH-ENERGY GAMMA RAYS AND FUNDAMENTAL PHYSICS WITH H.E.S.S. AND CTAO
Observations in very-high-energy (VHE, E > 100 GeV) gamma rays are crucial for understanding the most violent non-thermal phenomena at work in the Universe. The central region of the Milky Way is a complex region that is active in VHE gamma rays. Among the VHE gamma-ray sources are the supermassive black hole Sagittarius A* at the heart of the Galaxy, supernova remnants, and even star-forming regions. The Galactic Center (GC) hosts a cosmic-ray accelerator reaching PeV energies, diffuse emissions from GeV to TeV including the “Galactic Center Excess” (GCE) whose origin is still unknown, potential variable TeV sources, as well as possible populations of as-yet-unresolved sources (millisecond pulsars, intermediate-mass black holes). The GC should be the brightest source of annihilations of massive dark matter particles of the WIMP type. Lighter dark matter candidates, axion-like particles (ALPs), could convert into photons, and vice versa, in magnetic fields, leaving an oscillation imprint in the gamma-ray spectra of active galactic nuclei (AGN).
The H.E.S.S. observatory, located in Namibia, is composed of five imaging atmospheric Cherenkov telescopes. It is designed to detect gamma rays from a few tens of GeV to several tens of TeV. The Galactic Center region has been observed by H.E.S.S. for twenty years. These observations made it possible to detect the first Galactic PeVatron and to place the strongest constraints to date on the annihilation cross section of dark matter particles in the TeV mass range. The future CTA observatory will be deployed on two sites, one in La Palma and the other in Chile. The latter, composed of more than 50 telescopes, will provide an unprecedented survey of the Galactic Center region.
The proposed work will focus on the analysis and interpretation of H.E.S.S. observations carried out in the Galactic Center region in the search for diffuse emissions (populations of unresolved sources, massive dark matter), as well as observations carried out towards a selection of active galactic nuclei in the search for ALPs constituting dark matter. These new analysis frameworks will be implemented for the future CTA analyses. Involvement in H.E.S.S. data taking is expected.
STUDY OF THE MULTI-SCALE VARIABILITY OF THE VERY HIGH ENERGY GAMMA-RAY SKY
Very high energy gamma-ray astronomy observes the sky above a few tens of GeV. This emerging field of astronomy has been in constant expansion since the early 1990s, in particular since the commissioning of the H.E.S.S. array in 2004 in Namibia. IRFU/CEA Paris-Saclay has been a particularly active member of this collaboration from the start. It is also involved in the preparation of the future CTAO (Cherenkov Telescope Array Observatory), which is now being installed. The detection of gamma rays above a few tens of GeV makes it possible to study the processes of charged-particle acceleration within objects as diverse as supernova remnants or active galactic nuclei. Through this, H.E.S.S. aims in particular to answer the century-old question of the origin of cosmic rays.
H.E.S.S. measures the direction, the energy and the arrival time of each detected photon. The time measurement makes it possible to identify sources that present significant temporal or periodic flux variations. The study of these variable
emissions (transient or periodic), either towards the Galactic Center or towards active galactic nuclei (AGN) at cosmological distances, allows for a better understanding of the emission processes at work in these sources. It also helps characterize the medium in which the photons propagate and test the validity of some fundamental physical laws, such as Lorentz invariance. It is possible to probe a wide range of variability time scales in the flux of astrophysical sources, ranging from a few seconds (gamma-ray bursts, primordial black holes) to a few years (high-mass binary systems, active galactic nuclei).
One of the major successes of H.E.S.S.'s two decades of data taking was to conduct surveys of the Galactic and extragalactic skies in the very-high-energy range. These surveys combine observations dedicated to certain sources, such as the Galactic Center or certain supernova remnants, with blind observations for the discovery of new sources. The thesis subject proposed here concerns an aspect of the study of sources that remains to be explored: the search for, and study of, the variability of very-high-energy sources. For variable sources, it is also interesting to correlate the variability with other wavelength ranges. Finally, the source model can help predict its behaviour, for example its “high states” or its bursts.
Disequilibrium chemistry of exoplanets’ high-metallicity atmospheres in JWST times
In little more than two years of scientific operations, JWST has revolutionized our understanding of exoplanets and their atmospheres. The ARIEL space mission, to be launched in 2029, will soon contribute to this revolution. A main finding enabled by the exquisite quality of the JWST data is that exoplanet atmospheres are in chemical disequilibrium. A full treatment of disequilibrium is complex, especially when the atmospheres are metal-rich, i.e. when they contain significant abundances of elements other than hydrogen and helium. In a first step, our project will numerically investigate the extent of chemical disequilibrium in the atmospheres of JWST targets suspected to have metal-rich atmospheres. To that end, we will use an in-house photochemical model. In a second step, our project will explore the effect of super-thermal chemistry as a driver of chemical disequilibrium. This will offer previously unexplored insight into the chemistry of metal-rich atmospheres, with the potential to shed new light on the chemical and evolutionary paths of low-mass exoplanets.