Point Spread Function Modelling for Space Telescopes with a Differentiable Optical Model

Context

Weak gravitational lensing [1] is a powerful probe of the large-scale structure of our Universe. Cosmologists use weak lensing to study the nature of dark matter and its spatial distribution. Weak lensing missions require highly accurate shape measurements of galaxy images. The instrumental response of the telescope, called the point spread function (PSF), deforms the observed images. This deformation can be mistaken for the effects of weak lensing in the galaxy images, making it one of the primary sources of systematic error in weak lensing science. Estimating a reliable and accurate PSF model is therefore crucial for the success of any weak lensing mission [2]. The PSF field can be interpreted as a convolutional kernel, varying spatially, spectrally, and temporally, that affects each of our observations of interest; the PSF model needs to cope with each of these variations. To constrain the model, we use stars in the field of view, which can be considered point sources. These unresolved objects provide degraded samples of the PSF field. The observations undergo different degradations depending on the properties of the telescope, including undersampling, integration over the instrument passband, and additive noise. We build the PSF model from these degraded observations and then use it to infer the PSF at the positions of galaxies. This procedure constitutes the ill-posed inverse problem of PSF modelling. See [3] for a recent review of PSF modelling.
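
To make this forward model concrete, here is a minimal sketch, assuming a hypothetical per-wavelength PSF simulator `psf_at` and a tabulated stellar SED; it only illustrates the degradations listed above (passband integration, undersampling, noise) and is not the pipeline of any specific mission.

```python
# Hedged sketch of the star observation model: the stamp is the SED-weighted
# integral of the monochromatic PSF over the passband, then undersampled and
# corrupted by additive noise. `psf_at` is a hypothetical simulator.
import numpy as np

def observe_star(psf_at, wavelengths, sed, downsample=3, noise_sigma=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    weights = sed / sed.sum()
    # Integrate the chromatic PSF against the star's normalized SED
    stamp = sum(w * psf_at(lam) for lam, w in zip(wavelengths, weights))
    # Undersampling: block-average over `downsample` x `downsample` pixels
    n = stamp.shape[0] // downsample * downsample
    stamp = stamp[:n, :n].reshape(n // downsample, downsample,
                                  n // downsample, downsample).mean(axis=(1, 3))
    return stamp + noise_sigma * rng.normal(size=stamp.shape)  # additive noise
```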

The recently launched Euclid mission represents one of the most complex challenges for PSF modelling. Because of the very broad passband of Euclid's visible imager (VIS), ranging from 550 nm to 900 nm, PSF models need to capture not only the spatial variations of the PSF field but also its chromatic variations. Each star observation is integrated with the object's spectral energy distribution (SED) over the whole VIS passband. As the observations are undersampled, a super-resolution step is also required. A recent model coined WaveDiff [4], based on a differentiable optical model, was proposed to tackle the PSF modelling problem for Euclid. WaveDiff achieved state-of-the-art performance and is currently being tested on recent observations from the Euclid survey.

The James Webb Space Telescope (JWST) was recently launched and is producing outstanding observations. The COSMOS-Web collaboration [5] is a wide-field JWST treasury program mapping a contiguous 0.6 deg² field. The COSMOS-Web observations are available and provide a unique opportunity to test and develop a precise PSF model for JWST. In this context, several science cases beyond weak gravitational lensing can profit greatly from a precise PSF model: for example, strong gravitational lensing [6], where the PSF plays a crucial role in the reconstruction, and exoplanet imaging [7], where PSF speckles can mimic the appearance of exoplanets, so that subtracting an accurate and precise PSF model is essential to improve their imaging and detection.

PhD project

The candidate will aim to develop more accurate and better-performing PSF models for space-based telescopes by exploiting a differentiable optical framework, focusing the effort on Euclid and JWST.

The WaveDiff model is built in wavefront space and does not consider pixel-based or detector-level effects. These pixel errors cannot be modelled accurately in the wavefront, as they arise directly on the detectors and are unrelated to the telescope's optical aberrations. Therefore, as a first direction, we will extend the PSF modelling approach to include detector-level effects by combining a parametric and a data-driven (learned) approach. To accomplish this objective, we will exploit the automatic differentiation capabilities of the machine learning frameworks (e.g. TensorFlow, PyTorch, JAX) underlying the WaveDiff PSF model.
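
To illustrate how automatic differentiation ties the optical and detector levels together, the sketch below (in JAX) combines a parametric Zernike wavefront with a learned additive pixel-space correction. The parameter names (`z_coeffs`, `pixel_correction`) are simplifying assumptions for this example, not WaveDiff's actual API.

```python
# Minimal differentiable PSF sketch: parametric wavefront + detector-level term.
import jax
import jax.numpy as jnp

def psf_model(params, zernike_basis):
    # Wavefront as a linear combination of Zernike maps of shape (K, N, N)
    wavefront = jnp.tensordot(params["z_coeffs"], zernike_basis, axes=1)
    pupil = jnp.exp(1j * wavefront)            # aperture mask omitted for brevity
    psf = jnp.abs(jnp.fft.fftshift(jnp.fft.fft2(pupil))) ** 2
    psf = psf / psf.sum()
    return psf + params["pixel_correction"]    # additive detector-level effect

def loss(params, zernike_basis, star_stamp):
    return jnp.mean((psf_model(params, zernike_basis) - star_stamp) ** 2)

# Gradients flow through both the optical and the detector parameters at once.
grad_fn = jax.grad(loss)
```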

As a second direction, we will consider the joint estimation of the PSF field and the stellar spectral energy distributions (SEDs) by exploiting repeated exposures, or dithers. The goal is to improve and calibrate the original SED estimation by exploiting the PSF modelling information. We will rely on our PSF model and on the fact that repeated observations of the same object produce different star images (as the star falls on different focal-plane positions) while sharing the same SED.
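
A minimal sketch of such a joint objective is given below; `forward_psf` is a hypothetical (here stubbed) differentiable chromatic PSF model. The key point is that every exposure shares the same SED parameters while the focal-plane position changes.

```python
# Hedged sketch of a joint PSF+SED fit across dithered exposures.
import jax.numpy as jnp

def forward_psf(psf_params, focal_pos, sed_params):
    # Placeholder stub standing in for a differentiable chromatic PSF model
    return jnp.zeros((32, 32))

def joint_loss(psf_params, sed_params, exposures):
    total = 0.0
    for obs, focal_pos in exposures:          # (stamp, position) pairs
        pred = forward_psf(psf_params, focal_pos, sed_params)
        total += jnp.sum((pred - obs) ** 2)   # one shared SED ties all terms
    return total
```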

Another direction will be to extend WaveDiff to more general astronomical observatories, such as JWST, that have smaller fields of view. We will need to constrain the PSF model with observations from several bands, so that a single PSF model is constrained by more information. The objective is to develop the next PSF model for JWST, make it available for widespread use, and validate it with the available real data from the COSMOS-Web JWST program.

A further direction will be to extend the performance of WaveDiff by including a continuous field in the form of implicit neural representations [8], also known as neural fields (e.g. NeRF [9]), to address the spatial variations of the PSF in wavefront space with a more powerful and flexible model.
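
As a rough sketch of this idea, a coordinate MLP can map focal-plane position to wavefront (e.g. Zernike) coefficients, yielding a continuous field; the tiny architecture below is an assumption for illustration, not a proposed design.

```python
# Toy neural-field sketch: (x, y) focal-plane position -> Zernike coefficients.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(2, 64, 64, 15)):      # 15 coefficients, assumed
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def wavefront_field(params, xy):               # xy in [0, 1]^2
    h = xy
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b                           # continuous coefficient field
```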

Finally, throughout the PhD, the candidate will take part in Euclid's data-driven PSF modelling effort, which consists of applying WaveDiff to real Euclid data, and in the COSMOS-Web collaboration to exploit JWST observations.

References
[1] R. Mandelbaum. "Weak Lensing for Precision Cosmology". In: Annual Review of Astronomy and Astrophysics 56 (2018), pp. 393–433. doi: 10.1146/annurev-astro-081817-051928. arXiv: 1710.03235.
[2] T. I. Liaudat et al. "Multi-CCD modelling of the point spread function". In: A&A 646 (2021), A27. doi: 10.1051/0004-6361/202039584.
[3] T. I. Liaudat, J.-L. Starck, and M. Kilbinger. "Point spread function modelling for astronomical telescopes: a review focused on weak gravitational lensing studies". In: Frontiers in Astronomy and Space Sciences 10 (2023). doi: 10.3389/fspas.2023.1158213.
[4] T. I. Liaudat, J.-L. Starck, M. Kilbinger, and P.-A. Frugier. "Rethinking data-driven point spread function modeling with a differentiable optical model". In: Inverse Problems 39.3 (Feb. 2023), p. 035008. doi: 10.1088/1361-6420/acb664.
[5] C. M. Casey et al. "COSMOS-Web: An Overview of the JWST Cosmic Origins Survey". In: The Astrophysical Journal 954.1 (Aug. 2023), p. 31. doi: 10.3847/1538-4357/acc2bc.
[6] A. Acebron et al. "The Next Step in Galaxy Cluster Strong Lensing: Modeling the Surface Brightness of Multiply Imaged Sources". In: ApJ 976.1 (Nov. 2024), p. 110. doi: 10.3847/1538-4357/ad8343. arXiv: 2410.01883 [astro-ph.GA].
[7] B. Y. Feng et al. "Exoplanet Imaging via Differentiable Rendering". In: IEEE Transactions on Computational Imaging 11 (2025), pp. 36–51. doi: 10.1109/TCI.2025.3525971.
[8] Y. Xie et al. "Neural Fields in Visual Computing and Beyond". In: arXiv e-prints (Nov. 2021). doi: 10.48550/arXiv.2111.11426. arXiv: 2111.11426 [cs.CV].
[9] B. Mildenhall et al. "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis". In: arXiv e-prints (Mar. 2020). doi: 10.48550/arXiv.2003.08934. arXiv: 2003.08934 [cs.CV].

From Combustion to Astrophysics: Exascale Simulations of Fluid/Particle Flows

This thesis focuses on the development of advanced numerical methods to simulate fluid-particle interactions in complex environments. These methods, initially used in industrial applications such as combustion and multiphase flows, will be enhanced for integration into simulation codes for exascale supercomputers and adapted to meet the needs of astrophysics. The objective is to enable the study of astrophysical phenomena such as the dynamics of dust in protoplanetary disks and the structuring of dust in protostars and the interstellar medium. The expected outcomes include a better understanding of planetary formation mechanisms and disk structuring, as well as advancements in numerical methods that will benefit both industrial and astrophysical sciences.

Study of the multi-scale variability of the very high energy gamma-ray sky

Very-high-energy gamma-ray astronomy observes the sky above a few tens of GeV. This emerging field of astronomy has been in constant expansion since the early 1990s, in particular since the commissioning of the H.E.S.S. array in Namibia in 2004. IRFU/CEA Paris-Saclay has been a particularly active member of this collaboration from the start. It is also involved in the preparation of the future Cherenkov Telescope Array Observatory (CTAO), which is now being installed. The detection of gamma rays above a few tens of GeV makes it possible to study charged-particle acceleration processes within objects as diverse as supernova remnants and active galactic nuclei. Through this, H.E.S.S. aims in particular at answering the century-old question of the origin of cosmic rays.
H.E.S.S. measures the direction, energy, and arrival time of each detected photon. The time measurement makes it possible to identify sources that present significant temporal or periodic flux variations. The study of these variable emissions (transient or periodic), whether towards the Galactic Centre or active galactic nuclei (AGN) at cosmological distances, allows for a better understanding of the emission processes at work in these sources. It also helps characterize the medium in which the photons propagate and test the validity of fundamental physical laws such as Lorentz invariance. It is possible to probe a wide range of variability time scales in the flux of astrophysical sources, from a few seconds (gamma-ray bursts, primordial black holes) to a few years (high-mass binary systems, active galactic nuclei).
One of the major successes of H.E.S.S.'s two decades of data-taking was to conduct surveys of the Galactic and extragalactic skies in the very-high-energy range. These surveys combine observations dedicated to specific sources, such as the Galactic Centre or certain supernova remnants, with blind observations aimed at discovering new sources. The thesis subject proposed here concerns an aspect of the study of sources that remains to be explored: the search for and study of variability in very-high-energy sources. For variable sources, it is also interesting to correlate the variability with that observed in other wavelength ranges. Finally, a model of the source can help predict its behavior, for example its "high states" or its bursts.
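
As an illustration of one standard variability-search technique (not necessarily the method this thesis will adopt), the Bayesian Blocks algorithm segments a photon arrival-time series into blocks of constant rate, so that change points flag flares or state changes:

```python
# Toy example with synthetic photon arrival times and a simulated 20 s flare.
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(0)
quiet = rng.uniform(0.0, 1000.0, 200)          # steady source/background
flare = rng.uniform(400.0, 420.0, 80)          # excess photons during a flare
t = np.sort(np.concatenate([quiet, flare]))    # arrival times in seconds

edges = bayesian_blocks(t, fitness='events', p0=0.01)
print(edges)  # change points should bracket the simulated flare near 400-420 s
```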

Disequilibrium chemistry of exoplanets’ high-metallicity atmospheres in JWST times

In little more than two years of scientific operations, JWST has revolutionized our understanding of exoplanets and their atmospheres. The ARIEL space mission, to be launched in 2029, will soon contribute to this revolution. A main finding enabled by the exquisite quality of the JWST data is that exoplanet atmospheres are in chemical disequilibrium. A full treatment of disequilibrium is complex, especially when the atmospheres are metal-rich, i.e. when they contain elements other than hydrogen and helium in significant abundances. In a first step, our project will numerically investigate the extent of chemical disequilibrium in the atmospheres of JWST targets suspected to have metal-rich atmospheres. To that end, we will use an in-house photochemical model. In a second step, the project will explore the effect of super-thermal chemistry as a driver of chemical disequilibrium. This will offer previously unexplored insight into the chemistry of metal-rich atmospheres, with the potential to shed new light on the chemical and evolutionary paths of low-mass exoplanets.

Investigating the nature of Gamma-Ray Bursts with SVOM

Gamma-ray bursts (GRBs) are short-lived (0.1-100 s) gamma-ray transient sources that appear randomly over the entire sky. Although they were discovered at the end of the 1960s, their nature remained mysterious until the end of the 1990s. It is only thanks to the observations of the BeppoSAX satellite at the end of the last century, and especially to those of the Swift satellite from 2004 onwards, that the mysterious nature of GRBs started to be elucidated.
These emissions are related to the final stages of very massive stars (30-50 times the mass of the Sun) for long GRBs (>2 s), or to the merger of two compact objects (typically two neutron stars) for short GRBs (<2 s). In either case, a powerful relativistic jet is created, which is at the origin of the electromagnetic emission measured in gamma rays and in other energy bands. If this jet points towards the Earth, GRBs can be detected out to very large distances (z~9.1), corresponding to a young age of the Universe (~500 Myr).
SVOM is a Sino-French space mission dedicated to GRBs, successfully launched on June 22nd, 2024, in which CEA/Irfu/DAp is deeply involved. The PhD subject aims at exploiting the multi-wavelength data of SVOM and its partner telescopes to investigate the nature of GRBs, and in particular at using X-ray data from the MXT telescope to constrain the nature of the compact object at the origin of the relativistic jets.

The dawn of planet formation

Planet formation is a key topic of modern astrophysics, with implications for existential questions such as the origin of life in the Universe. Quite surprisingly, we do not precisely know when and where planets form in protoplanetary disks. Recent observations indicate, however, that this might happen sooner than we previously believed, while the physical conditions in young disks remain poorly constrained. During this thesis we propose to test the hypothesis that planets form early. We will perform 3D simulations of protoplanetary disk formation with gas and dust, including the mechanisms of planetesimal formation. In addition to determining whether planets form early, we will be able to predict the architectures of exoplanet systems and compare them to real ones. This work, beyond the current state of the art, is timely, as many efforts are currently being made by our community to better understand exoplanets as well as our origins.

Generative AI for Robust Uncertainty Quantification in Astrophysical Inverse Problems

Context
Inverse problems, i.e. estimating underlying signals from corrupted observations, are ubiquitous in astrophysics, and our ability to solve them accurately is critical to the scientific interpretation of the data. Examples of such problems include inferring the distribution of dark matter in the Universe from gravitational lensing effects [1], or component separation in radio interferometric imaging [2].

Thanks to recent advances in deep learning, and in particular in deep generative modeling techniques (e.g. diffusion models), it is now possible not only to obtain an estimate of the solution of these inverse problems, but also to perform uncertainty quantification by estimating the full Bayesian posterior of the problem, i.e. to access all solutions that are allowed by the data and plausible under prior knowledge.

Our team has in particular been pioneering such Bayesian methods to combine our knowledge of the physics of the problem, in the form of an explicit likelihood term, with data-driven priors implemented as generative models. This physics-constrained approach ensures that solutions remain compatible with the data and prevents “hallucinations” that typically plague most generative AI applications.
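
Schematically, and under the simplifying assumption of a Gaussian likelihood, this combination is just a sum of two score terms. The sketch below, with a hypothetical trained `prior_score` network, illustrates the idea using unadjusted Langevin dynamics; it is a didactic sketch, not the team's actual pipeline.

```python
# Score-based posterior sampling sketch: explicit likelihood + learned prior.
import jax
import jax.numpy as jnp

def posterior_score(x, y, forward_op, sigma, prior_score):
    # Bayes in score form: grad log p(x|y) = grad log p(y|x) + grad log p(x)
    log_lik = lambda u: -0.5 * jnp.sum((y - forward_op(u)) ** 2) / sigma**2
    return jax.grad(log_lik)(x) + prior_score(x)

def langevin_step(key, x, y, forward_op, sigma, prior_score, eps=1e-4):
    s = posterior_score(x, y, forward_op, sigma, prior_score)
    return x + eps * s + jnp.sqrt(2 * eps) * jax.random.normal(key, x.shape)
```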

However, despite remarkable progress in recent years, several challenges remain in this framework, most notably:

[Imperfect or distributionally shifted prior data] Building data-driven priors typically requires access to examples of uncorrupted data, which in many cases do not exist (e.g. all astronomical images are observed with noise and some amount of blurring), or may exist but exhibit distribution shifts relative to the problems to which we would like to apply the prior.
This mismatch can bias estimations and lead to incorrect scientific conclusions. The adaptation, or calibration, of data-driven priors from incomplete and noisy observations is therefore crucial for working with real data in astrophysical applications.

[Efficient sampling of high-dimensional posteriors] Even when the likelihood and the data-driven prior are available, correctly and efficiently sampling non-convex, multimodal probability distributions in such high dimensions remains a challenging problem. The most effective methods to date are based on diffusion models, but they rely on approximations and can be expensive at inference time if accurate estimates of the desired posteriors are to be reached.

The stringent requirements of scientific applications are a powerful driver for improved methodologies, but beyond the astrophysical context motivating this research, these tools also find broad applicability in many other domains, including medical imaging [3].

PhD project
The candidate will address these limitations of current methodologies, with the overall aim of making uncertainty quantification for large-scale inverse problems faster and more accurate.
As a first direction of research, we will extend recent methodology concurrently developed by our team and our Ciela collaborators [4,5], based on Expectation-Maximization, to iteratively learn (or adapt) diffusion-based priors from data observed under some amount of corruption. This strategy has been shown to be effective at correcting distribution shifts in the prior (and therefore at producing well-calibrated posteriors). However, the approach is still expensive, as it requires iteratively solving inverse problems and retraining the diffusion models, and it depends critically on the quality of the inverse problem solver. We will explore several strategies, including variational inference and improved inverse-problem sampling, to address these issues.
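
The overall alternation can be sketched as follows; `solve_inverse_problem` and `train_diffusion` are hypothetical placeholders, and this is only a schematic of the EM structure in the spirit of [4,5], not the published algorithms.

```python
# Schematic EM loop for adapting a diffusion prior to corrupted observations.
def em_prior_learning(observations, prior, solve_inverse_problem,
                      train_diffusion, n_iters=10):
    for _ in range(n_iters):
        # E-step: posterior samples for each observation under the current prior
        samples = [solve_inverse_problem(y, prior) for y in observations]
        # M-step: refit the generative prior on the reconstructed samples
        prior = train_diffusion(samples)
    return prior
```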
As a second (but connected) direction, we will focus on developing general methodologies for sampling complex posteriors (multimodal distributions, complex geometries) of non-linear inverse problems. Specifically, we will investigate strategies based on posterior annealing, inspired by diffusion model sampling, applicable in situations with explicit likelihoods and priors.
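
A toy, self-contained illustration of likelihood annealing on a one-dimensional bimodal problem is given below; the hand-written scores are crude assumptions made only for this example.

```python
# Toy annealed Langevin sampler: the likelihood is switched on gradually.
import numpy as np

rng = np.random.default_rng(1)

def grad_log_prior(x):                  # crude score of a bimodal prior N(+-2, 1)
    return -(x - np.sign(x) * 2.0)      # nearest-mode approximation

def grad_log_lik(x, y, sigma=0.5):      # Gaussian likelihood y = x + noise
    return (y - x) / sigma**2

def annealed_langevin(y, n_temps=50, n_steps=100, eps=1e-2):
    x = rng.normal()
    for beta in np.linspace(0.0, 1.0, n_temps):   # anneal the likelihood weight
        for _ in range(n_steps):
            score = grad_log_prior(x) + beta * grad_log_lik(x, y)
            x = x + eps * score + np.sqrt(2 * eps) * rng.normal()
    return x

print(annealed_langevin(y=1.8))  # should settle near the data-favored mode (+2)
```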
Finally, we will apply these methodologies to challenging, high-impact inverse problems in astrophysics. In particular, in collaboration with our colleagues from the Ciela institute, we will aim to improve source and lens reconstruction for strong gravitational lensing systems.
Publications in top machine learning conferences (NeurIPS, ICML) are expected, as well as publications on the applications of these methodologies in astrophysical journals.

References
[1] B. Remy, F. Lanusse, N. Jeffrey, J. Liu, J.-L. Starck, K. Osato, and T. Schrabback. "Probabilistic Mass Mapping with Neural Score Estimation". In: A&A (2023). https://www.aanda.org/articles/aa/abs/2023/04/aa43054-22/aa43054-22.html

[2] T. I. Liaudat, M. Mars, M. A. Price, M. Pereyra, M. M. Betcke, and J. D. McEwen. "Scalable Bayesian uncertainty quantification with data-driven priors for radio interferometric imaging". In: RAS Techniques and Instruments 3.1 (Jan. 2024), pp. 505–534. doi: 10.1093/rasti/rzae030.

[3] Z. Ramzi, B. Remy, F. Lanusse, J.-L. Starck, and P. Ciuciu. "Denoising Score-Matching for Uncertainty Quantification in Inverse Problems". arXiv: 2011.08698.

[4] F. Rozet, G. Andry, F. Lanusse, and G. Louppe. "Learning Diffusion Priors from Observations by Expectation Maximization". In: NeurIPS 2024. arXiv: 2405.13712.

[5] G. Missael Barco, A. Adam, C. Stone, Y. Hezaveh, and L. Perreault-Levasseur. "Tackling the Problem of Distributional Shifts: Correcting Misspecified, High-Dimensional Data-Driven Priors for Inverse Problems". arXiv: 2407.17667.

Caliste-3D CZT: development of a miniature, monolithic and hybrid gamma-ray imaging spectrometer with improved efficiency in the 100 keV to 1 MeV range and optimised for detection of the Compton effect and sub-pixel localisation

Multi-wavelength observation of astrophysical sources is the key to a global understanding of the physical processes involved. Due to instrumental constraints, the spectral band from 0.1 to 1 MeV is the one that suffers most from insufficient detection sensitivity in existing observatories. This band allows us to observe the deepest and most distant active galactic nuclei, to better understand the formation and evolution of galaxies on cosmological scales. It reveals the processes of nucleosynthesis of the heavy elements in our Universe and the origin of the cosmic rays that are omnipresent in the Universe. The intrinsic difficulty of detection in this spectral range lies in the absorption of these very energetic photons after multiple interactions in the material. This requires good detection efficiency, but also good localisation of all the interactions in order to deduce the direction and energy of the incident photon. These detection challenges are the same for other applications with a strong societal and environmental impact, such as the dismantling of nuclear facilities, air quality monitoring and radiotherapy dosimetry.

The aim of this instrumentation thesis is to develop a versatile '3D' detector that can be used in the fields of astrophysics and nuclear physics, with improved detection efficiency in the 100 keV to 1 MeV range, in particular for Compton events, as well as the possibility of locating interactions in the detector to better than the pixel size.

Several groups around the world, including our own, have developed hard X-ray imaging spectrometers based on high-density pixelated semiconductors for astrophysics (CZT for NuSTAR, CdTe for Solar Orbiter and Hitomi), for synchrotron applications (Hexitec, RAL, UK), or for industrial applications (Timepix, ADVACAM). However, their energy range remains limited to around 200 keV (except for Timepix) due to the thinness of the crystals and their intrinsic operating limitations. To extend the energy range to 1 MeV and beyond, thicker crystals with good charge-carrier transport properties are needed. This is currently possible with CZT, but several challenges need to be overcome.

The first challenge was the ability of manufacturers to produce thick homogeneous CZT crystals. Advances in this field over the last 20 years mean that we can now foresee detectors up to at least 10 mm thick (Redlen, Kromek).

The main remaining technical challenge is the precise estimation of the charge generated by a photon's interaction in the semiconductor. In a pixelated detector where only the X and Y coordinates of the interaction are recorded, increasing the thickness of the crystal degrades spectral performance. Obtaining the Z interaction depth in a monolithic crystal theoretically makes it possible to overcome this challenge. This requires the deployment of experimental methods, physical simulations, the design of readout microelectronics circuits, and original data analysis methods. In addition, the ability to localise interactions in the detector to better than the size of a pixel will help to solve this challenge.
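
For illustration, one standard technique from the literature (not necessarily the approach this project will adopt) estimates the interaction depth from the cathode-to-anode signal ratio, which to first order varies linearly with depth in a planar CZT detector:

```python
# Hedged sketch of depth-of-interaction estimation via the C/A signal ratio.
import numpy as np

def depth_from_ca_ratio(cathode_amp, anode_amp, thickness_mm=10.0):
    """Approximate depth measured from the anode plane, in mm
    (small-pixel regime, linear cathode response assumed)."""
    ratio = np.clip(cathode_amp / anode_amp, 0.0, 1.0)
    return ratio * thickness_mm

# Example: a ratio of 0.5 places the interaction near mid-crystal (~5 mm).
print(depth_from_ca_ratio(cathode_amp=0.42, anode_amp=0.84))
```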

Multi-messenger analysis of core-collapse supernovae

Core-collapse supernovae play a crucial role in the stellar evolution of massive stars, the birth of neutron stars and black holes, and the chemical enrichment of galaxies. How do they explode? The explosion mechanism can be revealed by the analysis of multi-messenger signals: the production of neutrinos and gravitational waves is modulated by hydrodynamic instabilities during the second following the formation of a proto-neutron star.
This thesis proposes to use the complementarity of multi-messenger signals, combining numerical simulations of stellar core collapse with perturbative analysis, in order to extract physical information on the explosion mechanism.
The project will focus in particular on the multi-messenger properties of the standing accretion shock instability ("SASI") and the corotational instability ("low T/W") for a rotating progenitor. For each of these instabilities, the signals from different neutrino species and the gravitational waves of different polarizations will be exploited, as well as the correlations between them.
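
As a toy illustration of such a correlation analysis, with entirely synthetic signals and an assumed SASI modulation frequency (real analyses use simulation outputs and detector noise models):

```python
# Cross-correlating a neutrino luminosity series with a GW strain series
# to expose a shared SASI-like modulation.
import numpy as np

fs, f_sasi = 4096.0, 80.0                     # sampling rate (Hz), assumed freq.
t = np.arange(0.0, 1.0, 1.0 / fs)
rng = np.random.default_rng(2)
nu = np.sin(2 * np.pi * f_sasi * t) + 0.5 * rng.normal(size=t.size)
gw = np.sin(2 * np.pi * f_sasi * t + 0.3) + 0.5 * rng.normal(size=t.size)

corr = np.correlate(nu - nu.mean(), gw - gw.mean(), mode='full')
lag = (np.argmax(corr) - (t.size - 1)) / fs   # lag at peak correlation (s)
print(f"peak correlation at lag {lag * 1e3:.2f} ms")
```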

Relativistic laboratory astrophysics

This PhD project is concerned with the numerical and theoretical modeling of the ultra-relativistic plasmas encountered in a variety of astrophysical environments such as gamma-ray bursts or pulsar wind nebulae, as well as in future laboratory experiments on extreme laser-plasma, beam-plasma or gamma-plasma interactions. The latter experiments are envisioned at the multi-petawatt laser facilities currently under development worldwide (e.g. the European ELI project), or at next-generation high-energy particle accelerators (e.g. the SLAC/FACET-II facility).
The plasma systems under scrutiny share a strong coupling between energetic particles, photons, and quantum electrodynamic effects. They will be simulated numerically using a particle-in-cell (PIC) code developed at CEA/DAM in recent years. Besides the collective effects characteristic of plasmas, this code describes a number of gamma-ray photon emission and electron-positron pair creation processes. The purpose of this PhD project is to treat additional photon-particle and photon-photon interaction processes, and then to thoroughly examine their impact and interplay in various experimental and astrophysical configurations.
