Radiological large-scale accident dosimetry: use of EPR spectroscopy for population triage by measurements of smartphone screens

In the event of a large-scale radiological emergency involving sources of external irradiation, methods are needed to identify which members of the population have been exposed and require priority care. To date, no operational method exists for such triage. The glass covers of smartphone touch screens retain traces of ionizing radiation through the formation of so-called "radiation-induced" point defects. Measuring and quantifying these defects, in particular by electron paramagnetic resonance (EPR) spectroscopy, makes it possible to estimate the dose deposited in the glass, and thus the exposure associated with the irradiation. The thesis work proposed here focuses on the alkali-aluminosilicate glasses used in cell phone touch screens, which are currently the best candidates for developing new measurement capabilities in the context of accidents involving large numbers of victims.

We will focus in particular on identifying point defects as a function of the glass model used in smartphones, by simulating EPR spectra, in order to optimize the proposed dosimetry method.
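To illustrate the dosimetry principle, here is a minimal sketch (Python, with purely illustrative numbers; the actual calibration values and corrections for native signal and signal fading are to be established during the thesis) that reconstructs an absorbed dose from an EPR signal amplitude through a linear calibration curve:

```python
# Minimal sketch (hypothetical data): estimating absorbed dose from the EPR
# signal of a glass sample via a linear calibration curve, as is common in
# retrospective dosimetry. A real protocol must handle the native (pre-dose)
# signal, fading, and sample-to-sample variability.
import numpy as np

# Calibration: EPR signal amplitude (arb. units) measured after irradiating
# reference glass samples at known added doses (Gy). Values are illustrative.
added_dose = np.array([0.0, 2.0, 5.0, 10.0, 20.0])   # Gy
epr_signal = np.array([1.1, 2.0, 3.4, 5.9, 10.8])    # arb. units

# Least-squares fit of signal = a * dose + b (b captures the native signal).
a, b = np.polyfit(added_dose, epr_signal, 1)

# Dose estimate for an unknown sample by inverting the calibration.
unknown_signal = 4.5
estimated_dose = (unknown_signal - b) / a
print(f"Estimated absorbed dose: {estimated_dose:.1f} Gy")
```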

Point Spread Function Modelling for Space Telescopes with a Differentiable Optical Model

Context

Weak gravitational lensing [1] is a powerful probe of the Large Scale Structure of our Universe. Cosmologists use weak lensing to study the nature of dark matter and its spatial distribution. Weak lensing missions require highly accurate shape measurements of galaxy images. The instrumental response of the telescope, called the point spread function (PSF), produces a deformation of the observed images. This deformation can be mistaken for the effects of weak lensing in the galaxy images, making it one of the primary sources of systematic error in weak lensing science. Therefore, estimating a reliable and accurate PSF model is crucial for the success of any weak lensing mission [2]. The PSF field can be interpreted as a convolutional kernel, varying spatially, spectrally, and temporally, that affects each of our observations of interest. The PSF model needs to cope with each of these variations. To constrain the model, we use specific stars in the field of view that can be considered point sources. These unresolved objects provide us with degraded samples of the PSF field. The observations undergo different degradations depending on the properties of the telescope, including undersampling, integration over the instrument passband, and additive noise. We build the PSF model from these degraded observations and then use it to infer the PSF at the positions of galaxies. This procedure constitutes the ill-posed inverse problem of PSF modelling. See [3] for a recent review of PSF modelling.
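Schematically, in our own notation (following the review [3]), each star observation can be written as a degraded sample of the PSF field:

\[
\bar{I}_i(u) \,=\, F_d\!\left\{ \int_{\lambda_{\min}}^{\lambda_{\max}} \mathrm{SED}_i(\lambda)\, H(u;\, x_i, \lambda)\, \mathrm{d}\lambda \right\} + n_i(u),
\]

where \(H\) is the PSF field at focal-plane position \(x_i\) and wavelength \(\lambda\), \(F_d\) denotes downsampling to the detector grid (the origin of the super-resolution requirement), and \(n_i\) is additive noise. Inferring \(H\) everywhere in the field from a finite set of such degraded star images is the ill-posed inverse problem described above.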

The recently launched Euclid survey represents one of the most complex challenges for PSF modelling. Because of the very broad passband of Euclid's visible imager (VIS), ranging from 550 nm to 900 nm, PSF models need to capture not only the spatial variations of the PSF field but also its chromatic variations. Each star observation is integrated over the whole VIS passband, weighted by the object's spectral energy distribution (SED). As the observations are undersampled, a super-resolution step is also required. A recent model, WaveDiff [4], based on a differentiable optical model, was proposed to tackle the PSF modelling problem for Euclid. WaveDiff achieved state-of-the-art performance and is currently being tested on recent observations from the Euclid survey.

The James Webb Space Telescope (JWST) was recently launched and is producing outstanding observations. The COSMOS-Web collaboration [5] is a wide-field JWST treasury program that maps a contiguous 0.6 deg² field. The COSMOS-Web observations are available and provide a unique opportunity to test and develop a precise PSF model for JWST. In this context, several science cases beyond weak gravitational lensing can profit greatly from a precise PSF model. Examples include strong gravitational lensing [6], where the PSF plays a crucial role in the reconstruction, and exoplanet imaging [7], where PSF speckles can mimic the appearance of exoplanets, so that subtracting an accurate and precise PSF model is essential to improve the imaging and detection of exoplanets.

PhD project

The candidate will aim to develop more accurate and better-performing PSF models for space-based telescopes, exploiting a differentiable optical framework, with the effort focused on Euclid and JWST.

The WaveDiff model is built in wavefront space and does not consider pixel-based or detector-level effects. These pixel errors cannot be modelled accurately in the wavefront, as they arise directly on the detectors and are unrelated to the telescope's optical aberrations. Therefore, as a first direction, we will extend the PSF modelling approach to include detector-level effects, by combining a parametric and a data-driven (learned) approach. To accomplish this objective, we will exploit the automatic differentiation capabilities of the machine learning frameworks (e.g. TensorFlow, PyTorch, JAX) underlying the WaveDiff PSF model.
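As a minimal sketch of what a differentiable optical model enables (our own toy construction, not the WaveDiff code; the grid size, the two-mode wavefront basis and the 3x3 detector kernel are illustrative assumptions):

```python
# Minimal sketch of a differentiable optical PSF model (toy construction):
# a monochromatic PSF is obtained from a parametrised wavefront by Fraunhofer
# propagation, then convolved with a small learned kernel standing in for
# detector-level effects. Autodiff provides gradients of the pixel-space loss
# with respect to both the wavefront and the detector parameters.
import jax
import jax.numpy as jnp

N = 64  # pupil grid size (illustrative)

coords = jnp.linspace(-1.0, 1.0, N)
x, y = jnp.meshgrid(coords, coords)
r2 = x**2 + y**2
pupil = (r2 <= 1.0).astype(jnp.float32)  # circular aperture

# Two toy "Zernike-like" wavefront maps (defocus, astigmatism).
basis = jnp.stack([2.0 * r2 - 1.0, x**2 - y**2])

def psf_model(params):
    # Wavefront error as a linear combination of the basis maps (in waves).
    wfe = jnp.tensordot(params["zernike"], basis, axes=1)
    field = pupil * jnp.exp(2j * jnp.pi * wfe)
    # Fraunhofer propagation: PSF = squared modulus of the pupil FFT.
    psf = jnp.abs(jnp.fft.fftshift(jnp.fft.fft2(field))) ** 2
    psf = psf / psf.sum()
    # Detector-level effect: convolution with a learned, normalised kernel.
    kernel = jax.nn.softmax(params["detector"].ravel()).reshape(3, 3)
    return jax.scipy.signal.convolve2d(psf, kernel, mode="same")

def loss(params, observed):
    return jnp.mean((psf_model(params) - observed) ** 2)

params = {"zernike": jnp.array([0.10, -0.05]), "detector": jnp.zeros((3, 3))}
target = psf_model({"zernike": jnp.array([0.15, 0.00]),
                    "detector": jnp.zeros((3, 3))})
grads = jax.grad(loss)(params, target)  # gradients for both parameter sets
```

Because the whole chain from wavefront coefficients to detector pixels is differentiable, the parametric (wavefront) and data-driven (detector) parts can be fitted jointly by gradient descent.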

As a second direction, we will consider the joint estimation of the PSF field and the stellar spectral energy distributions (SEDs) by exploiting repeated exposures, or dithers. The goal is to improve and calibrate the original SED estimation by exploiting the PSF modelling information. We will rely on our PSF model and on the fact that repeated observations of the same object change the star image (as it is imaged at different focal-plane positions) while sharing the same SED.
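Schematically, reusing the forward-model notation introduced above (our sketch, not the project's final formulation), the joint problem over dithered exposures \(e = 1, \dots, n_{\mathrm{exp}}\) of star \(i\) could be posed as

\[
\min_{\theta,\,\{\mathrm{SED}_i\}} \;\sum_i \sum_{e=1}^{n_{\mathrm{exp}}} \left\| \bar{I}_{i,e} - F_d\!\left\{ \int \mathrm{SED}_i(\lambda)\, H_\theta(\,\cdot\,;\, x_{i,e}, \lambda)\, \mathrm{d}\lambda \right\} \right\|_2^2,
\]

where the PSF model parameters \(\theta\) and the per-star SEDs are fitted jointly: the focal-plane position \(x_{i,e}\) changes from one exposure to the next while \(\mathrm{SED}_i\) is shared, which is what constrains the joint estimation.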

Another direction will be to extend WaveDiff to more general astronomical observatories, such as JWST, with smaller fields of view. We will need to constrain the PSF model with observations from several bands, so as to build a unique PSF model constrained by more information. The objective is to develop the next PSF model for JWST, available for widespread use, which we will validate with the available real data from the COSMOS-Web JWST program.

A further direction will be to extend the performance of WaveDiff by including a continuous field in the form of an implicit neural representation [8], or neural field [9], to address the spatial variations of the PSF in wavefront space with a more powerful and flexible model.
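A minimal sketch of the idea (our own toy construction; layer sizes and the Fourier encoding are illustrative assumptions), in which a coordinate MLP replaces an explicit spatial interpolation of wavefront coefficients:

```python
# Minimal sketch of an implicit neural representation for the spatial
# variation of the wavefront: a small MLP with Fourier positional encoding
# maps a focal-plane position (x, y) to a vector of Zernike coefficients,
# giving a continuous PSF field.
import jax
import jax.numpy as jnp

N_ZERNIKE = 10   # wavefront coefficients predicted per position
N_FREQ = 4       # number of Fourier-encoding frequencies

def encode(xy):
    # Fourier features, as used by NeRF-style neural fields.
    freqs = 2.0 ** jnp.arange(N_FREQ)
    angles = xy[:, None] * freqs[None, :] * jnp.pi  # shape (2, N_FREQ)
    return jnp.concatenate([jnp.sin(angles), jnp.cos(angles)]).ravel()

def init_params(key, widths=(16, 64, 64, N_ZERNIKE)):
    params = []
    for din, dout in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (din, dout)) / jnp.sqrt(din)
        params.append((w, jnp.zeros(dout)))
    return params

def neural_field(params, xy):
    h = encode(xy)  # 2 * 2 * N_FREQ = 16 input features
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return h @ w + b  # Zernike coefficients at this focal-plane position

params = init_params(jax.random.PRNGKey(0))
coeffs = neural_field(params, jnp.array([0.3, -0.7]))  # continuous in (x, y)
```

Trained end-to-end through the differentiable optical model, such a field yields a PSF at any focal-plane position, not only at star locations.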

Finally, throughout the PhD, the candidate will collaborate on Euclid's data-driven PSF modelling effort, which consists of applying WaveDiff to real Euclid data, and with the COSMOS-Web collaboration to exploit JWST observations.

References
[1] R. Mandelbaum. "Weak Lensing for Precision Cosmology". In: Annual Review of Astronomy and Astrophysics 56 (2018), pp. 393–433. doi: 10.1146/annurev-astro-081817-051928. arXiv: 1710.03235.
[2] T. I. Liaudat et al. "Multi-CCD modelling of the point spread function". In: A&A 646 (2021), A27. doi: 10.1051/0004-6361/202039584.
[3] T. I. Liaudat, J.-L. Starck, and M. Kilbinger. "Point spread function modelling for astronomical telescopes: a review focused on weak gravitational lensing studies". In: Frontiers in Astronomy and Space Sciences 10 (2023). doi: 10.3389/fspas.2023.1158213.
[4] T. I. Liaudat, J.-L. Starck, M. Kilbinger, and P.-A. Frugier. "Rethinking data-driven point spread function modeling with a differentiable optical model". In: Inverse Problems 39.3 (Feb. 2023), p. 035008. doi: 10.1088/1361-6420/acb664.
[5] C. M. Casey et al. "COSMOS-Web: An Overview of the JWST Cosmic Origins Survey". In: The Astrophysical Journal 954.1 (Aug. 2023), p. 31. doi: 10.3847/1538-4357/acc2bc.
[6] A. Acebron et al. "The Next Step in Galaxy Cluster Strong Lensing: Modeling the Surface Brightness of Multiply Imaged Sources". In: ApJ 976.1 (Nov. 2024), p. 110. doi: 10.3847/1538-4357/ad8343. arXiv: 2410.01883 [astro-ph.GA].
[7] B. Y. Feng et al. "Exoplanet Imaging via Differentiable Rendering". In: IEEE Transactions on Computational Imaging 11 (2025), pp. 36–51. doi: 10.1109/TCI.2025.3525971.
[8] Y. Xie et al. "Neural Fields in Visual Computing and Beyond". In: arXiv e-prints (Nov. 2021). doi: 10.48550/arXiv.2111.11426. arXiv: 2111.11426 [cs.CV].
[9] B. Mildenhall et al. "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis". In: arXiv e-prints (Mar. 2020). doi: 10.48550/arXiv.2003.08934. arXiv: 2003.08934 [cs.CV].

Analysis and experimental study of capillary structures to mitigate the influence of magnetogravitational forces on liquid helium cooling for future HTS superconducting magnets

As physics requires increasingly higher magnetic fields, CEA is called upon to develop and produce superconducting magnets capable of generating magnetic fields of more than 30 T. The windings of these electromagnets are made from superconducting materials whose electrical resistance is extremely low at cryogenic temperatures (a few kelvin). This enables them to carry high currents (>10 kA) while dissipating minimal heat through the Joule effect. Cooling at these low temperatures is achieved with liquid helium. Helium, however, is diamagnetic: strong magnetic fields induce volumetric forces within it that add to or oppose gravity. These magneto-gravity forces disrupt the convective phenomena required to cool the superconducting magnet, which can lead to a rise in its temperature and the loss of the superconducting state essential for proper operation. To circumvent this phenomenon, a new cooling system never before used in cryomagnetism will be studied. This cooling system will be based on heat pipes, whose operation relies on capillary forces that are theoretically independent of the magneto-gravity forces induced by strong magnetic fields. The capillary structures involved can take several forms (microchannels, foams, meshes, etc.). Within the framework of this thesis, these different structures will be studied theoretically and then experimentally, both with and without magnetic forces, in order to determine the structures best suited to future superconducting magnets.
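As an order-of-magnitude framing (standard textbook expressions in our notation, not results from the project): the magnetic volume force acting on a fluid of volumetric magnetic susceptibility \(\chi\) (negative for diamagnetic helium) is

\[
\vec{f}_m = \frac{\chi}{2\mu_0}\,\nabla(B^2),
\]

to be compared with the gravitational force density \(\rho \vec{g}\), whereas the capillary driving pressure of a wick of effective pore radius \(r_{\mathrm{eff}}\), for a liquid of surface tension \(\sigma\) and contact angle \(\theta\), is

\[
\Delta p_{\mathrm{cap}} = \frac{2\sigma\cos\theta}{r_{\mathrm{eff}}},
\]

which does not depend on \(B\). A heat pipe operates as long as \(\Delta p_{\mathrm{cap}}\) exceeds the total pressure losses along the liquid and vapor paths, which is why capillary structures are expected to remain effective where magneto-gravity forces disrupt natural convection.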

Mutagenesis and selection of enzymatic catalysts for biotechnological applications: development of an integrated in vivo method

Due to their ability to catalyse highly enantio- and regioselective transformations of target substrates under mild reaction conditions, enzymes are increasingly used in biotechnological processes. However, their often insufficient activity on non-natural compounds and their narrow substrate ranges still limit their use in industrial setups. To obtain enzymes with enhanced activities, methods of directed evolution are available, involving mutant gene library generation and high-throughput testing of individual variants in a cellular context. Linking the targeted enzymatic activity to cell growth, by constructing strains conditionally auxotrophic for essential metabolites or for energy carriers, has significantly enlarged the application range of directed evolution (Chen et al., 2022). To achieve spatial and temporal connection between mutagenesis and variant screening, in vivo mutagenesis approaches have recently been developed. Among them are inducible systems employing different deaminase base editors tethered to T7 RNA polymerase (T7 RNAP), provoking base substitutions concomitant with transcription, the substitution type depending on the deaminase used (Cravens et al., 2021; https://2021.igem.org/Team:Evry_Paris-Saclay). However, these techniques have not yet been applied to the improvement of industrial biocatalysts.
The components of these systems, i.e. target genes, T7 RNAP-deaminase fusion proteins and regulatory modules, are plasmid-borne. The PhD student will further develop this method by inserting the T7 RNAP editor and the target gene into the E. coli chromosome, thus stabilizing the system and opening the possibility of multiple rounds of mutagenesis and selection in the GM3 automated continuous culture devices available in the laboratory. He/she will establish a mutagenesis and selection protocol, using as reporter a native gene enabling conditional metabolic selection. The validated protocol will subsequently be applied to heterologous NADPH-dependent dehydrogenases, using a generic NADPH-sensor selection strain constructed and used in the lab (Lindner et al., 2018). This will include screening for alcohol and amine dehydrogenases, activities already studied by our group (Ducrot et al., 2020), to obtain variants with broadened substrate specificity. Their potential for synthetic applications will be assessed at laboratory scale, using targets chosen in collaboration with national and international partners. In vitro characterization of the enzymatic activity of enhanced variants will also be undertaken. The PhD student will benefit from the broad expertise and equipment of the UMR Génomique Métabolique, covering molecular genetics, synthetic biology, directed evolution, chemical analytics and enzymology.

Elucidation of the homarine degradation pathway in the oceans

Context:
Primary biological production in the oceans exerts significant control over atmospheric CO2. Every day, phytoplankton transform 100 million tonnes of CO2 into thousands of different organic compounds (1). Most of these molecules (metabolites) are biologically labile and are converted back into CO2 within a few hours or days. The climate-carbon feedback loops mediated by this reservoir of labile dissolved organic carbon (DOC) depend on this network of microbes and metabolites. In other words, the resilience of the ocean to global changes (such as temperature rise and acidification) will depend on how this network responds to these perturbations.
Because of its short lifespan, this pool of labile DOC is difficult to observe. Yet these microbial metabolites are among the most important carbon transport pathways in the ocean and are assimilated by marine bacteria as sources of carbon and energy. Knowledge of the main metabolic pathways (from genes to metabolites) is therefore essential for modelling carbon flows in the oceans. However, the diversity of these molecules remains largely unexplored, and many of them have no annotated biosynthetic and/or catabolic pathways. This is the case for homarine (N-methylpicolinate), an abundant compound in the oceans. The homarine content can reach 400 mM in the marine cyanobacterium Synechococcus (2), and this ubiquitous organism contributes between 10 and 20% of global net primary production (3). Because of its abundance, homarine is probably an important metabolite in the carbon cycle.

Project:
In this thesis project, we aim to elucidate the homarine degradation pathway in the oceans.
Ruegeria pomeroyi DSS-3 is a Gram-negative aerobic bacterium and a member of the marine Roseobacter clade. Its close relatives account for around 10-20% of the bacterioplankton in the mixed layer of coastal and open-ocean waters (4). In the laboratory, DSS-3 can use homarine as its sole carbon source, but to date there is no information on the genes and catabolites involved in this process.
Comparative analysis of RNAseq experiments conducted on DSS-3 cultures grown with homarine or glucose (control) as the carbon source will enable us to identify candidate genes involved in the degradation pathway. This pathway will also be studied using a metabolomic approach based on liquid chromatography coupled with very-high-resolution mass spectrometry. Differences between the metabolomes of DSS-3 cells grown on glucose and those grown on homarine will help to detect catabolites of the pathway. Finally, the candidate genes will be cloned for recombinant expression in E. coli, the corresponding proteins purified, and their activities characterized, in order to reconstruct the entire homarine degradation pathway in vitro.
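To illustrate the comparative RNAseq step, here is a minimal sketch (Python; file and column names are hypothetical, and a real analysis would use a dedicated differential-expression framework with proper replicate statistics):

```python
# Minimal sketch (hypothetical file and column names): ranking candidate
# genes by log2 fold change between homarine- and glucose-grown DSS-3
# cultures from an RNAseq count table.
import numpy as np
import pandas as pd

counts = pd.read_csv("dss3_counts.tsv", sep="\t", index_col="gene_id")

# Library-size normalisation (counts per million).
cpm = counts / counts.sum() * 1e6

# Mean expression per condition; replicate column names are assumptions.
homarine = cpm[["homarine_1", "homarine_2", "homarine_3"]].mean(axis=1)
glucose = cpm[["glucose_1", "glucose_2", "glucose_3"]].mean(axis=1)

# log2 fold change with a pseudocount to avoid division by zero.
log2fc = np.log2((homarine + 1) / (glucose + 1))

# Genes most strongly induced on homarine: degradation-pathway candidates.
print(log2fc.sort_values(ascending=False).head(20))
```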
Analysis of the expression of these genes in data from the Tara Oceans project (5) will be the first step towards a better understanding of the role of homarine in the carbon cycle.

References :
(1) doi.org/10.1038/358741a0
(2) doi.org/10.1128/mSystems.01334-20
(3) doi.org/10.1073/pnas.1307701110
(4) doi.org/10.1038/nature03170
(5) https://fondationtaraocean.org/expedition/tara-oceans/

Wetting dynamics at the nanoscale

Wetting dynamics describes the processes involved when a liquid spreads on a solid surface. It is a ubiquitous phenomenon in nature, seen for example when dew beads up on a leaf, as well as in many processes of industrial interest, from the spreading of paint on a wall to the development of high-performance coating processes in nanotechnology. Today, wetting dynamics is relatively well understood in the case of perfectly smooth, homogeneous model solid surfaces, but not in the case of real surfaces featuring roughness and/or chemical heterogeneity, for which fine modeling of the mechanisms remains a major challenge. The main goal of this thesis is to understand how nanometric roughness influences wetting dynamics.
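For reference, on ideal smooth surfaces the spreading dynamics is often summarized by the classical Cox-Voinov law (a standard result, quoted here for context):

\[
\theta_d^3 - \theta_e^3 \simeq 9\,\mathrm{Ca}\,\ln(L/\ell), \qquad \mathrm{Ca} = \frac{\eta U}{\sigma},
\]

where \(\theta_d\) and \(\theta_e\) are the dynamic and equilibrium contact angles, \(\mathrm{Ca}\) is the capillary number built from the viscosity \(\eta\), the contact-line speed \(U\) and the surface tension \(\sigma\), and \(L/\ell\) is a ratio of macroscopic to microscopic cut-off lengths. Deviations from this smooth-surface law on rough or chemically heterogeneous surfaces are precisely what the thesis aims to quantify.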

This project is based on an interdisciplinary approach combining physics and surface chemistry. The PhD student will conduct systematic model experiments, combined with multi-scale visualization and characterization tools (optical microscopy, AFM, X-ray and neutron reflectivity, etc.).

Thanks to the complementary nature of the experimental approaches, this thesis will provide a better understanding of the fundamental mechanisms of energy dissipation at the contact line, from the nanometric to the millimetric scale.

Understanding the signals emitted by moving liquids

Elasticity is one of the oldest physical properties of condensed matter. It is expressed by a constant of proportionality G between the applied stress (σ) and the shear deformation (γ): σ = Gγ (Hooke's law). The absence of resistance to shear deformation (G' = 0) indicates liquid-like behavior (Maxwell model). Long considered specific to solids, shear elasticity has recently been identified in liquids at the submillimeter scale [1].
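For context, the standard Maxwell model (a textbook result, quoted here in our notation) makes this statement precise: under oscillatory shear at angular frequency \(\omega\), a fluid with relaxation time \(\tau\) has storage and loss moduli

\[
G'(\omega) = G\,\frac{\omega^2\tau^2}{1+\omega^2\tau^2}, \qquad G''(\omega) = G\,\frac{\omega\tau}{1+\omega^2\tau^2},
\]

so that \(G'\) vanishes at low frequency (liquid-like flow) and tends to \(G\) at high frequency (solid-like elasticity). Measuring a non-zero low-frequency \(G'\) in confined liquids is what departs from this classical picture.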

The identification of shear elasticity in liquids (non-zero G') promises the discovery of new solid-like properties. We will thus explore the thermal response of liquids [2,3], exploit the conversion of mechanical energy into temperature variations, and develop a new generation of micro-hydrodynamic tools.

At the nanoscopic scale, we will study the influence of a solid surface in contact with the liquid. The dynamics of the solid-liquid interface will be studied using unique methods such as inelastic neutron scattering and synchrotron radiation at very large research facilities such as the ILL or the ESRF, as well as by atomic force microscopy (AFM). Finally, we will strengthen our collaborations with theoreticians, in particular with K. Trachenko of Queen Mary University of London (recognized in Physics World's Top 10 Breakthroughs) and A. Zaccone of the University of Milan.

The PhD topic is related to wetting, macroscopic thermal effects, phonon dynamics and liquid transport.

From Combustion to Astrophysics: Exascale Simulations of Fluid/Particle Flows

This thesis focuses on the development of advanced numerical methods to simulate fluid-particle interactions in complex environments. These methods, initially used in industrial applications such as combustion and multiphase flows, will be enhanced for integration into simulation codes for exascale supercomputers and adapted to meet the needs of astrophysics. The objective is to enable the study of astrophysical phenomena such as the dynamics of dust in protoplanetary disks and the structuring of dust in protostars and the interstellar medium. The expected outcomes include a better understanding of planetary formation mechanisms and disk structuring, as well as advancements in numerical methods that will benefit both industrial and astrophysical sciences.

Measurement of the intra-pixel response of HgCdTe-based infrared detectors with X-rays for astrophysics

In the field of infrared astrophysics, the most commonly used photon sensors are detector arrays based on the HgCdTe absorbing material. The manufacturing of such detectors is a globally recognized expertise of CEA/Leti in Grenoble, while the Astrophysics Department (DAp) of CEA/IRFU holds renowned expertise in the characterization of this type of detector. A key characteristic is the pixel spatial response (PSR), which describes the response of an individual pixel in the array to the point-like generation of carriers at various locations within the absorbing material of the pixel. Today, this detector characteristic has become a critical parameter for instrument performance. It is particularly crucial in applications such as measuring galaxy distortions or conducting high-precision astrometry. Various methods exist to measure this quantity, including the projection of point light sources and interferometric techniques. These methods, however, are complex to implement, especially at the cryogenic operating temperatures of the detectors.
At the DAp, we propose a new method based on the use of X-ray photons to measure the PSR of infrared detectors. By interacting with the HgCdTe material, an X-ray photon generates carriers locally. These carriers then diffuse before being collected. The goal is to derive the PSR by analyzing the resulting images. We suggest a two-pronged approach that integrates both experimental methods and simulations. Dedicated data analysis methods will also be developed. The ultimate objective of this thesis is thus to develop a new, robust, elegant, and fast method for measuring the intra-pixel response of infrared detectors for space instrumentation. The student will be based at the DAp. The work also involves collaboration with CEA/Leti, combining the instrumental expertise of the DAp with the technological knowledge of CEA/Leti.
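To illustrate the principle, here is a minimal Monte Carlo sketch (Python; our own toy construction, with purely illustrative pixel pitch and diffusion length) of how sub-pixel X-ray events and carrier diffusion translate into measurable charge fractions:

```python
# Minimal sketch of the X-ray method: each X-ray photon creates carriers at
# a random sub-pixel position; the carrier cloud diffuses (modelled as a
# Gaussian of width SIGMA) before collection, and the fraction of charge
# collected by the hit pixel is computed analytically.
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
PITCH = 18.0       # pixel pitch (microns), illustrative
SIGMA = 4.0        # Gaussian diffusion length of the carrier cloud (microns)
N_EVENTS = 100_000

# Random sub-pixel impact positions within one pixel (microns).
xy = rng.uniform(0.0, PITCH, size=(N_EVENTS, 2))

def axis_fraction(u):
    # Fraction of a 1-D Gaussian centred at u collected within [0, PITCH].
    s = np.sqrt(2.0) * SIGMA
    return 0.5 * (erf((PITCH - u) / s) + erf(u / s))

# Separable 2-D integral: charge fraction seen by the central pixel.
frac_central = axis_fraction(xy[:, 0]) * axis_fraction(xy[:, 1])

# The histogram of collected fractions is the observable from which the
# diffusion length, and hence the PSR, would be fitted in the real analysis.
hist, edges = np.histogram(frac_central, bins=50, range=(0.0, 1.0))
print(hist[:10])
```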

Development and characterization of a reliable 13.5 nm EUV OAM-carrying photon beamline

The extreme ultraviolet (EUV) spectral range (10-100 nm) is crucial for many applications, spanning from fundamental physics (attophysics, femto-magnetism) to applied domains such as lithography and nanometer-scale microscopy. However, there is no natural source of light in this range on Earth, because such photons are strongly absorbed by matter, thus requiring a vacuum environment. Researchers instead have to rely on expensive large-scale sources such as synchrotrons, free-electron lasers or plasmas driven by large lasers. High-order laser harmonic generation (HHG), discovered 30 years ago and recognized by the Nobel Prize in Physics in 2023, is a promising alternative as a laboratory-scale EUV source. Based on a strongly nonlinear interaction between an ultrashort intense laser and an atomic gas, it results in the emission of EUV pulses with femtosecond to attosecond durations, very high coherence and relatively large flux. Despite intensive research that has provided a clear understanding of the phenomenon, HHG has up to now been mostly limited to laboratories. Bridging the gap towards industrial applications requires increasing the reliability of the beamlines, which are subject to large fluctuations due to the strong nonlinearity of the mechanism, and developing tools to measure and control their properties.

CEA/LIDYL and Imagine Optic have recently combined their expertise in a joint laboratory to develop a stable EUV beamline dedicated to metrology and EUV sensors. The NanoLite laboratory, hosted at CEA/LIDYL, is based on a compact, high-repetition-rate HHG beamline providing EUV photons around 40 eV. Several EUV wavefront sensors have been successfully calibrated there in the past few years. However, new needs have emerged recently, requiring an upgrade of the beamline.

The first objective of the PhD will be to implement a new HHG geometry in the beamline, to enhance its overall stability and efficiency and to increase the photon energy to 92 eV (13.5 nm), a golden target for lithography. The candidate will then implement the generation of an EUV beam carrying orbital angular momentum (OAM) and will upgrade Imagine Optic's detector to characterize its OAM content. Finally, assisted by Imagine Optic engineers, he or she will develop a new functionality for their wavefront sensors in order to enable large-beam characterization.
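For reference (standard definition), a beam carries OAM when its transverse field has an azimuthal phase winding,

\[
E(r, \varphi, z) \propto A(r, z)\, e^{i\ell\varphi},
\]

where the integer \(\ell\) is the topological charge and each photon carries an orbital angular momentum \(\ell\hbar\); resolving this helical phase across the beam is what the upgraded wavefront sensor will need to do to quantify the OAM content.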
