Point Spread Function Modelling for Space Telescopes with a Differentiable Optical Model
Context
Weak gravitational lensing [1] is a powerful probe of the large-scale structure of our Universe. Cosmologists use weak lensing to study the nature of dark matter and its spatial distribution. Weak lensing missions require highly accurate shape measurements of galaxy images. The instrumental response of the telescope, called the point spread function (PSF), deforms the observed images. This deformation can be mistaken for a weak lensing effect in the galaxy images, making the PSF one of the primary sources of systematic error in weak lensing science. Estimating a reliable and accurate PSF model is therefore crucial for the success of any weak lensing mission [2]. The PSF field can be interpreted as a convolutional kernel, varying spatially, spectrally, and temporally, that affects each of our observations of interest. The PSF model needs to cope with each of these variations. We constrain the PSF model using stars in the field of view that can be considered point sources. These unresolved objects provide degraded samples of the PSF field. The observations undergo different degradations depending on the properties of the telescope, including undersampling, integration over the instrument passband, and additive noise. We build the PSF model from these degraded observations and then use it to infer the PSF at the positions of galaxies. This procedure constitutes the ill-posed inverse problem of PSF modelling. See [3] for a recent review of PSF modelling.
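To make the forward problem concrete, the following minimal sketch (Python/NumPy; the function name, array shapes, and block-averaging scheme are purely illustrative assumptions, not the actual mission pipeline) shows how a star stamp would be degraded by SED integration over the passband, undersampling, and additive noise:

```python
import numpy as np

def observed_star(psf_cube, sed, downsample_factor, noise_sigma, rng=None):
    """Toy forward model of a single star observation (illustrative only).

    psf_cube : (n_lambda, N, N) monochromatic PSFs at the star's position
    sed      : (n_lambda,) stellar SED, normalised to sum to one
    """
    rng = np.random.default_rng() if rng is None else rng
    # Integration over the passband: weight the monochromatic PSFs by the SED
    poly_psf = np.tensordot(sed, psf_cube, axes=1)                    # (N, N)
    # Undersampling: block-average pixels to mimic the coarser detector sampling
    n = poly_psf.shape[0] // downsample_factor
    cropped = poly_psf[:n * downsample_factor, :n * downsample_factor]
    lowres = cropped.reshape(n, downsample_factor, n, downsample_factor).mean(axis=(1, 3))
    # Additive noise
    return lowres + noise_sigma * rng.standard_normal(lowres.shape)
```

PSF modelling then amounts to inverting this kind of degradation from many such star stamps scattered across the field of view.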
The recently launched Euclid mission represents one of the most complex challenges for PSF modelling. Because the passband of Euclid's visible imager (VIS) is very broad, ranging from 550 nm to 900 nm, PSF models need to capture not only the spatial variations of the PSF field but also its chromatic variations. Each star observation is integrated with the object's spectral energy distribution (SED) over the whole VIS passband. As the observations are undersampled, a super-resolution step is also required. A recent model coined WaveDiff [4], based on a differentiable optical model, was proposed to tackle the PSF modelling problem for Euclid. WaveDiff achieved state-of-the-art performance and is currently being tested with recent observations from the Euclid survey.
The James Webb Space Telescope (JWST) was recently launched and is producing outstanding observations. The COSMOS-Web collaboration [5] is a wide-field JWST treasury program that maps a contiguous 0.6 deg² field. The COSMOS-Web observations are available and provide a unique opportunity to develop and test a precise PSF model for JWST. In this context, several science cases beyond weak gravitational lensing can greatly benefit from a precise PSF model. Examples include strong gravitational lensing [6], where the PSF plays a crucial role in source reconstruction, and exoplanet imaging [7], where PSF speckles can mimic the appearance of exoplanets; subtracting an accurate and precise PSF model is therefore essential to improve exoplanet imaging and detection.
PhD project
The candidate will aim to develop more accurate and performant PSF models for space-based telescopes by exploiting a differentiable optical framework, focusing on Euclid and JWST.
The WaveDiff model is built in wavefront space and does not consider pixel-based or detector-level effects. These pixel errors cannot be modelled accurately in the wavefront, as they arise directly on the detectors and are unrelated to the telescope's optical aberrations. Therefore, as a first direction, we will extend the PSF modelling approach to account for detector-level effects by combining a parametric and a data-driven (learned) approach. We will exploit the automatic differentiation capabilities of the machine learning frameworks underlying the WaveDiff PSF model (e.g. TensorFlow, PyTorch, JAX) to accomplish this objective.
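As an illustration of how such a hybrid model could be assembled, the sketch below (JAX; the toy Zernike basis, pupil mask, shapes, and parameter names are assumptions, not WaveDiff code) composes a parametric wavefront stage with a learned pixel-level kernel and differentiates through both at once:

```python
import jax
import jax.numpy as jnp

N = 32
key = jax.random.PRNGKey(0)
ZERNIKE_BASIS = jax.random.normal(key, (4, N, N))            # toy basis of 4 modes
xx, yy = jnp.meshgrid(jnp.linspace(-1, 1, N), jnp.linspace(-1, 1, N))
PUPIL_MASK = (jnp.hypot(xx, yy) < 1.0).astype(jnp.float32)   # circular aperture

def wavefront_psf(zernike_coeffs):
    """Parametric stage: wavefront aberrations -> pixel PSF (Fraunhofer propagation)."""
    phase = jnp.tensordot(zernike_coeffs, ZERNIKE_BASIS, axes=1)
    pupil = PUPIL_MASK * jnp.exp(1j * phase)
    psf = jnp.abs(jnp.fft.fftshift(jnp.fft.fft2(pupil))) ** 2
    return psf / psf.sum()

def detector_correction(psf, kernel):
    """Data-driven stage: learned pixel-level kernel applied on the detector."""
    return jax.scipy.signal.convolve2d(psf, kernel, mode="same")

def loss(params, star_image):
    model = detector_correction(wavefront_psf(params["zernike"]), params["kernel"])
    return jnp.mean((model - star_image) ** 2)

# Automatic differentiation gives gradients w.r.t. both the optical and the
# detector parameters, so the two stages can be fitted jointly to star images.
params = {"zernike": jnp.zeros(4), "kernel": jnp.zeros((3, 3)).at[1, 1].set(1.0)}
grads = jax.grad(loss)(params, jnp.zeros((N, N)))
```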
As a second direction, we will consider the joint estimation of the PSF field and the stellar spectral energy distributions (SEDs) by exploiting repeated exposures, or dithers. The goal is to improve and calibrate the original SED estimation by exploiting the PSF modelling information. We will rely on our PSF model and on the fact that repeated observations of the same object produce different star images (as the object is imaged at different focal-plane positions) while sharing the same SED.
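A minimal sketch of the corresponding joint objective (Python/NumPy; the psf_stack produced by the PSF model and all shapes are illustrative assumptions):

```python
import numpy as np

def joint_residual(sed, psf_stack, observations):
    """sed: (n_lambda,) shared stellar SED; psf_stack: (n_dither, n_lambda, N, N)
    monochromatic PSFs at each dither position; observations: (n_dither, N, N)."""
    # One SED, several dithers: each exposure integrates the same SED against a
    # different, position-dependent PSF.
    models = np.tensordot(psf_stack, sed, axes=([1], [0]))     # (n_dither, N, N)
    return (models - observations).ravel()

# A least-squares solver (e.g. scipy.optimize.least_squares) can then refine the
# SED jointly with the PSF parameters that generate psf_stack.
```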
Another direction will be to extend WaveDiff to more general astronomical observatories, such as JWST, that have smaller fields of view. We will need to use observations from several bands so that a single PSF model is constrained by more information. The objective is to develop the next PSF model for JWST, made available for widespread use, which we will validate with real data from the COSMOS-Web JWST program.
A further direction will be to extend the performance of WaveDiff by including a continuous field in the form of an implicit neural representation [8], or neural field [9], to address the spatial variations of the PSF in wavefront space with a more powerful and flexible model.
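For illustration, a coordinate-based network of this kind could map focal-plane positions to wavefront coefficients; the sketch below (JAX, with illustrative layer sizes and a hypothetical Fourier positional encoding) is one possible instantiation, not the model to be developed:

```python
import jax
import jax.numpy as jnp

def positional_encoding(xy, n_freq=6):
    """Fourier features of a focal-plane position (x, y) in [-1, 1]^2."""
    freqs = 2.0 ** jnp.arange(n_freq) * jnp.pi
    angles = xy[None, :] * freqs[:, None]                    # (n_freq, 2)
    return jnp.concatenate([jnp.sin(angles), jnp.cos(angles)]).ravel()

def wavefront_field(params, xy):
    """Small MLP: position -> wavefront (e.g. Zernike) coefficients."""
    h = positional_encoding(xy)
    for w, b in params[:-1]:
        h = jax.nn.gelu(h @ w + b)
    w, b = params[-1]
    return h @ w + b

def init_params(key, sizes=(24, 64, 64, 15)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

params = init_params(jax.random.PRNGKey(0))
coeffs = wavefront_field(params, jnp.array([0.1, -0.3]))     # 15 coefficients
```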
Finally, throughout the PhD, the candidate will collaborate on Euclid's data-driven PSF modelling effort, which consists of applying WaveDiff to real Euclid data, and with the COSMOS-Web collaboration to exploit JWST observations.
References
[1] R. Mandelbaum. “Weak Lensing for Precision Cosmology”. In: Annual Review of Astronomy and Astrophysics 56 (2018), pp. 393–433. doi: 10.1146/annurev-astro-081817-051928. arXiv: 1710.03235.
[2] T. I. Liaudat et al. “Multi-CCD modelling of the point spread function”. In: A&A 646 (2021), A27. doi: 10.1051/0004-6361/202039584.
[3] T. I. Liaudat, J.-L. Starck, and M. Kilbinger. “Point spread function modelling for astronomical telescopes: a review focused on weak gravitational lensing studies”. In: Frontiers in Astronomy and Space Sciences 10 (2023). doi: 10.3389/fspas.2023.1158213.
[4] T. I. Liaudat, J.-L. Starck, M. Kilbinger, and P.-A. Frugier. “Rethinking data-driven point spread function modeling with a differentiable optical model”. In: Inverse Problems 39.3 (Feb. 2023), p. 035008. doi: 10.1088/1361-6420/acb664.
[5] C. M. Casey et al. “COSMOS-Web: An Overview of the JWST Cosmic Origins Survey”. In: The Astrophysical Journal 954.1 (Aug. 2023), p. 31. doi: 10.3847/1538-4357/acc2bc.
[6] A. Acebron et al. “The Next Step in Galaxy Cluster Strong Lensing: Modeling the Surface Brightness of Multiply Imaged Sources”. In: ApJ 976.1, 110 (Nov. 2024), p. 110. doi: 10.3847/1538-4357/ad8343. arXiv: 2410.01883 [astro-ph.GA].
[7] B. Y. Feng et al. “Exoplanet Imaging via Differentiable Rendering”. In: IEEE Transactions on Computational Imaging 11 (2025), pp. 36–51. doi: 10.1109/TCI.2025.3525971.
[8] Y. Xie et al. “Neural Fields in Visual Computing and Beyond”. In: arXiv e-prints (Nov. 2021). doi: 10.48550/arXiv.2111.11426. arXiv: 2111.11426 [cs.CV].
[9] B. Mildenhall et al. “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis”. In: arXiv e-prints (Mar. 2020). doi: 10.48550/arXiv.2003.08934. arXiv: 2003.08934 [cs.CV].
Methods for the Rapid Detection of Gravitational Events from LISA Data
The thesis focuses on the development of rapid analysis methods for the detection and characterization of gravitational waves, particularly in the context of the upcoming LISA (Laser Interferometer Space Antenna) space mission, planned by ESA for around 2035. Data analysis involves several stages, one of the first being the rapid analysis “pipeline,” whose role is to detect new events and to characterize them. A further aspect concerns the rapid estimation of the sky position of the gravitational wave source and of its characteristic time, such as the coalescence time in the case of black hole mergers. These analysis tools constitute the low-latency analysis pipeline.
Beyond its value for LISA, this pipeline also plays a crucial role in the rapid follow-up of detected events by electromagnetic observatories (ground- or space-based, from radio waves to gamma rays). While fast analysis methods have been developed for ground-based interferometers, the case of space-borne interferometers such as LISA remains largely unexplored. A tailored data processing method will thus have to take into account the packet-based data transmission mode, requiring event detection from incomplete data. These methods must also enable the detection, discrimination, and analysis of various sources from data affected by artifacts such as glitches.
In this thesis, we propose to develop a robust and effective method for the early detection of massive black hole binaries (MBHBs). This method should accommodate the data flow expected for LISA, process potential artifacts (e.g., non-stationary noise and glitches), and allow the generation of alerts, including a detection confidence index and a first estimate of the source parameters (coalescence time, sky position, and binary mass); such a rapid initial estimate is essential for optimally initializing a more accurate and computationally expensive parameter estimation.
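As a toy illustration of the kind of low-latency statistic such a pipeline could compute, the sketch below (Python/NumPy; purely illustrative and far simpler than a real LISA analysis with TDI channels, non-stationary noise, and glitch rejection) slides a normalised template correlation over a data stream and raises an alert on a threshold crossing:

```python
import numpy as np

def sliding_correlation(data, template):
    """Normalised cross-correlation of a data stream against a waveform template."""
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    n = len(t)
    out = np.empty(len(data) - n + 1)
    for i in range(len(out)):
        seg = data[i:i + n] - data[i:i + n].mean()
        out[i] = np.dot(seg, t) / (np.linalg.norm(seg) + 1e-12)
    return out                                   # values in [-1, 1]

def raise_alert(correlation, threshold=0.8):
    """Flag a candidate when the correlation peak exceeds a chosen threshold;
    the peak index serves as a first, rough estimate of the merger time."""
    peak = int(np.argmax(correlation))
    return correlation[peak] > threshold, peak
```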
Unveiling the Universal Coupling Between Accretion and Ejection: From Microquasars to Extragalactic Transients
This PhD project investigates the universal coupling between accretion and ejection, the fundamental processes through which black holes and neutron stars grow and release energy. Using microquasars as nearby laboratories, the project will study how variations in accretion flows produce relativistic jets, and how these mechanisms scale up to supermassive black holes in tidal disruption events (TDEs).
Accretion–ejection coupling drives energy feedback that shapes galaxy formation and evolution, yet its physical origin remains poorly understood. The candidate will combine multi-wavelength observations—from SVOM (X-ray/optical) and new radio facilities (MeerKAT, SKA precursors)—to perform time-resolved analyses linking accretion states to jet emission.
New facilities such as the Einstein Probe mission and the Vera C. Rubin Observatory (LSST) will greatly expand the sample of transients, including jetted TDEs, enabling new tests of jet-launching physics across mass and time scales.
Working within the CEA/IRFU team, a major SVOM partner, the student will participate in real-time transient detection and multi-wavelength follow-up, while also exploiting archival data to provide long-term context. This project will train the candidate in high-energy astrophysics, radio astronomy, and data-driven discovery, contributing to a unified understanding of accretion, jet formation, and cosmic feedback.
Auroral emissions from exoplanets and brown dwarfs
Aurorae are well-known optical phenomena on Solar System planets. They have great diagnostic value, as their emissions reveal the planets’ atmospheric compositions, the presence of magnetic fields, and the solar wind conditions at the planet’s orbit. Looking for aurorae on exoplanets and brown dwarfs is the next frontier. A first breakthrough in this direction occurred recently, with the detection of CH4 emission attributed to auroral excitation on the brown dwarf W1935. This detection, and the prospect of observing other auroral features with existing and upcoming telescopes, is what motivates this project. In particular, we will build the first model dedicated to investigating CH4 and H3+ auroral emission on exoplanets and brown dwarfs. The model will be used to investigate the conditions at W1935 and to predict the detectability of aurorae on other sub-stellar objects.
Exploring trends in rocky exoplanets observed with JWST
One of JWST’s major goals is to characterize, for the first time, the atmospheres of rocky, temperate exoplanets, a key milestone in the search for potentially habitable worlds. The temperate rocky exoplanets accessible to JWST are primarily those orbiting M-type stars. However, a major question remains regarding the ability of planets orbiting M-dwarfs to retain their atmospheres.
In 2024, an exceptional 500-hour Director’s Discretionary Time (DDT) program, entitled Rocky Worlds, was dedicated to this topic, underlining its strategic importance at the highest level (NASA, STScI).
The main objective of this PhD project is to: 1) Analyze all available JWST/MIRI eclipse data for rocky exoplanets from Rocky Worlds and other public programs using a consistent and homogeneous framework; 2) Search for population-level trends in the observations and interpret them using 3D atmospheric simulations.
Through this work, we aim to identify the physical processes that control the presence and composition of atmospheres on temperate rocky exoplanets.
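As an illustration of what a consistent, homogeneous eclipse analysis could look like, the sketch below (Python/SciPy, with a deliberately simple box-shaped eclipse model plus linear baseline; the function and parameter names are assumptions) fits an eclipse depth and its uncertainty to a light curve. Applying one such function to every target yields directly comparable depths for population-level studies:

```python
import numpy as np
from scipy.optimize import curve_fit

def eclipse_model(t, depth, t_mid, duration, f0, slope):
    """Box-shaped secondary eclipse on a linear baseline (relative flux units)."""
    in_eclipse = np.abs(t - t_mid) < duration / 2.0
    baseline = f0 + slope * (t - t.mean())
    return baseline - depth * in_eclipse

def fit_eclipse_depth(time, flux, p0):
    """Fit the toy model and return the eclipse depth and its 1-sigma uncertainty."""
    popt, pcov = curve_fit(eclipse_model, time, flux, p0=p0)
    return popt[0], np.sqrt(pcov[0, 0])
```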
Magnetar formation: from amplification to relaxation of the most extreme magnetic fields
Magnetars are neutron stars with the strongest magnetic fields known in the Universe, observed as high-energy Galactic sources. The formation of these objects is one of the most studied scenarios to explain some of the most violent explosions: superluminous supernovae, hypernovae, and gamma-ray bursts. In recent years, our team has succeeded in numerically reproducing magnetic fields of magnetar-like intensities by simulating dynamo amplification mechanisms that develop in the proto-neutron star during the first seconds after the collapse of the progenitor core. However, most observational manifestations of magnetars require the magnetic field to survive over much longer timescales (from a few weeks for superluminous supernovae to thousands of years for Galactic magnetars). This thesis will consist of developing 3D numerical simulations of magnetic field relaxation initialized from different dynamo states previously calculated by the team, extending the evolution to later stages after the birth of the neutron star, when the dynamo is no longer active. The student will thus determine how the turbulent magnetic field generated in the first few seconds evolves to eventually reach a stable equilibrium state, whose topology will be characterized and compared with observations.
Understanding the origin of the remarkable efficiency of distant galaxy formation
The James Webb Space Telescope (JWST) is revolutionizing our understanding of the distant universe. A result has emerged that challenges our models: the extremely high efficiency of star formation in distant galaxies. However, this finding is derived indirectly: we measure the stellar mass of galaxies, not their star formation rate (SFR). This is the main weakness of JWST. The aim of this thesis is to remedy this weakness by exploiting the telescope's angular resolution, which has not been taken into account until now, in order to obtain a more robust measurement of the SFR of distant galaxies. We will derive a law that improves the robustness of SFR determination from morphological properties, combining data from JWST with data from ALMA (z = 1-3). We will then apply it to the distant universe (z = 3-6, part 2) and use it as a benchmark for numerical simulations (part 3).
TRANSFORMER: from the genealogy of dark matter halos to the baryonic properties of galaxy clusters.
The thesis proposes to predict the baryonic properties of galaxy clusters based on the history of dark matter halo formation, using innovative neural networks (Transformers). The work will involve intensive numerical simulations. This project falls within the general framework of determining cosmological parameters through the observation of galaxy clusters in X-rays. It is directly linked to the international Heritage programme in the XMM-Euclid FornaX deep field.
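As an illustration of the kind of architecture envisaged, the sketch below (PyTorch, with illustrative feature choices and layer sizes that are assumptions, not the final design) encodes a halo's mass-assembly history as a token sequence and regresses cluster baryonic properties (e.g. X-ray luminosity, gas temperature):

```python
import torch
import torch.nn as nn

class HaloHistoryTransformer(nn.Module):
    """Toy Transformer encoder over a halo's formation history."""
    def __init__(self, n_features=4, d_model=64, n_heads=4, n_layers=3, n_targets=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_targets)

    def forward(self, history):                 # history: (batch, n_snapshots, n_features)
        tokens = self.embed(history)
        encoded = self.encoder(tokens)          # (batch, n_snapshots, d_model)
        return self.head(encoded.mean(dim=1))   # pooled -> (batch, n_targets)

model = HaloHistoryTransformer()
fake_trees = torch.randn(8, 50, 4)              # 8 halos, 50 snapshots, 4 properties each
predictions = model(fake_trees)                 # (8, 2) predicted baryonic properties
```

In practice the sequence features would come from the simulated merger trees (e.g. progenitor masses, accretion rates, formation redshifts), with the targets taken from the hydrodynamical counterparts of the haloes.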
Numerical Study of Interstellar Turbulence in the Exascale Era
This PhD project aims to better understand interstellar medium turbulence, a key phenomenon governing the formation of stars and galactic structures. This turbulence—magnetized, supersonic, and multiphase—influences how energy is transferred and dissipated, thereby regulating the efficiency of star formation throughout the history of the Universe. Its study is complex, as it involves a wide range of spatial and temporal scales that are difficult to reproduce numerically. Advances in high-performance computing, particularly the advent of GPU-based exascale supercomputers, now make it possible to perform much more refined simulations.
The Dyablo code, developed at IRFU, will be used to carry out large-scale three-dimensional simulations with adaptive mesh refinement to resolve the regions where energy dissipation occurs. The study will progress in stages: first, simulations of simple isothermal flows will be conducted, followed by models that include heating, cooling, magnetic fields, and gravity. The turbulent properties will be analyzed using power spectra, structure functions, and density distributions, in order to better understand the formation of dense regions that give birth to stars. Finally, the work will be extended to the galactic scale, in collaboration with other French institutes, to investigate the large-scale energy cascade of turbulence across entire galaxies.
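As an example of one of the diagnostics mentioned above, the sketch below (Python/NumPy, assuming a cubic periodic box; names and binning choices are illustrative) computes an isotropic power spectrum of a 3D field, such as a density or velocity component from a simulation snapshot, by summing |FFT|² in shells of constant |k|:

```python
import numpy as np

def isotropic_power_spectrum(field):
    """Shell-summed power spectrum of a 3D scalar field on a cubic periodic grid."""
    n = field.shape[0]
    fk = np.fft.fftn(field) / field.size
    power = np.abs(fk) ** 2
    k = np.fft.fftfreq(n) * n                      # integer wavenumbers per axis
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2)
    edges = np.arange(0.5, n // 2, 1.0)            # shells of unit width in |k|
    k_bin = 0.5 * (edges[:-1] + edges[1:])
    spectrum = np.histogram(k_mag, bins=edges, weights=power)[0]
    return k_bin, spectrum

# Example: a white-noise cube gives a shell-summed spectrum growing as k^2,
# simply tracing the number of Fourier modes per shell.
k, pk = isotropic_power_spectrum(np.random.default_rng(0).standard_normal((64,) * 3))
```

Structure functions and density PDFs would complement this spectral view by probing intermittency and the formation of dense, star-forming regions.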