Topological optimization of the optical performance of µLEDs
The performance of micro-LEDs (µLEDs) is crucial for micro-displays, a field of expertise of the LITE laboratory at CEA-LETI. However, simulating these components is complex and computationally expensive due to the incoherent nature of the light sources and the complexity of the geometries involved. This limits the ability to explore multi-parameter design spaces effectively.
This thesis proposes to develop an innovative finite element method to accelerate simulations and enable the use of topological optimization. The goal is to produce non-intuitive designs that maximize performance while respecting industrial constraints.
The work is divided into three phases:
- Develop a fast and reliable simulation method by incorporating appropriate physical approximations for incoherent sources and significantly reducing computation times.
- Design a robust topological optimization framework that includes fabrication constraints to generate immediately realizable designs (a minimal illustration of such constraints is sketched after this list).
- Realize such a metasurface on an existing shortloop in the laboratory. This part is optional and will be tackled only if an opportunity arises to finance the prototype, for instance through the inclusion of the thesis in the "metasurface topics" of European or IPCEI projects in the lab.
The expected results include optimized designs for micro-displays with enhanced performance and a methodology that can be applied to other photonic devices and used by other DOPT laboratories.
Modeling and characterization of CFET transistors for enhanced electrical performance
Complementary Field Effect Transistors (CFETs) represent a new generation of vertically stacked CMOS devices, offering a promising path to continue transistor miniaturization and to meet the requirements of high-performance computing.
The objective of this PhD work is to study and optimize the strain engineering of the transistor channel in order to enhance carrier mobility and improve the overall electrical performance of CFET devices. The work will combine numerical modeling of technological processes using finite element methods with experimental characterization of crystalline deformation through transmission electron microscopy coupled with precession electron diffraction (TEM-PED).
The modeling activity will focus on predicting strain distributions and their impact on electrical properties, while accurately accounting for the complexity of the technological stacks and critical fabrication steps such as epitaxy. In parallel, the experimental work will aim to quantify strain fields using TEM-PED and to compare these results with simulation outputs.
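As a simple illustration of the quantities being compared (the proposal does not prescribe a specific stack; the SiGe-on-Si case and the numerical constants below are assumptions), the following back-of-envelope sketch evaluates the biaxial strain state of a pseudomorphic epitaxial layer, i.e. the kind of strain field that both the finite element simulations and the TEM-PED maps must resolve.

```python
# Back-of-envelope sketch (assumptions: linear Vegard interpolation,
# pseudomorphic growth, approximate room-temperature constants) of the
# elastic strain state of a Si(1-x)Ge(x) channel grown on a Si(001) substrate.
A_SI, A_GE = 5.431, 5.658          # lattice parameters (angstrom), approx.
C11_SI, C12_SI = 165.8, 63.9       # Si elastic constants (GPa), approx.
C11_GE, C12_GE = 128.5, 48.3       # Ge elastic constants (GPa), approx.

def sige_biaxial_strain(x):
    """In-plane and out-of-plane strain of a pseudomorphic SiGe layer on Si(001)."""
    a_layer = (1 - x) * A_SI + x * A_GE           # Vegard's law
    c11 = (1 - x) * C11_SI + x * C11_GE
    c12 = (1 - x) * C12_SI + x * C12_GE
    eps_par = (A_SI - a_layer) / a_layer          # in-plane (compressive < 0)
    eps_perp = -2.0 * (c12 / c11) * eps_par       # tetragonal distortion
    return eps_par, eps_perp

print(sige_biaxial_strain(0.3))   # roughly -1.2 % in plane for 30 % Ge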
This research will contribute to the development of dedicated modeling tools and advanced characterization methodologies adapted to CFET architectures, with the goal of improving spatial resolution, measurement reproducibility, and the overall understanding of strain mechanisms in next-generation transistors.
Investigation and Modeling of Ferroelectric and Antiferroelectric Domain Dynamics in HfO2-Based Capacitors
The proposed PhD work lies within the exploration of new supercapacitor and hybrid energy storage technologies, aiming to combine miniaturization, high power density, and CMOS process compatibility. The hosting laboratory (LTEI/DCOS/LCRE) has recognized expertise in thin-film integration and dielectric material engineering, offering unique opportunities to investigate ferroelectric (FE) and antiferroelectric (AFE) behaviors in doped hafnium oxide (HfO2).
The thesis will focus on the experimental investigation and physical modeling of thin-film HfO2-based capacitors, intentionally doped to exhibit ferroelectric or antiferroelectric properties depending on the composition and deposition conditions (for instance, through ZrO2 or SiO2 doping). Such materials are particularly attractive for realizing devices that combine non-volatile memory and energy storage functions on a single CMOS-compatible platform, enabling ultra-low-power autonomous systems such as edge computing architectures, environmental sensors, and smart connected objects.
The research will involve the fabrication and characterization of metal–insulator–metal (MIM) capacitors based on doped HfO2 integrated on silicon substrates. Systematic electrical measurements—including current–voltage (I–V) and polarization–electric field (P–E) characterizations—will be carried out under various frequencies, amplitudes, and cycling conditions to investigate the relaxation mechanisms of FE and AFE domains. Analysis of minor hysteresis loops will provide access to the distribution of activation energies and enable the modeling of domain relaxation dynamics. A physical model will be developed or refined to describe FE/AFE transitions under cyclic electrical excitation, incorporating effects such as charge trapping, mechanical stress, and domain nucleation kinetics.
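The quantities targeted below can be illustrated with a short sketch: from the charge and discharge branches of a P-E loop (synthetic, AFE-like data; all values are illustrative assumptions), the recoverable energy density is the integral of E dP along the discharge branch and the efficiency is its ratio to the energy stored during charging. With E in MV/cm and P in µC/cm², the product conveniently comes out in J/cm³.

```python
# Minimal sketch (synthetic data, not measurements): extracting recoverable
# energy density and efficiency from the charge/discharge branches of a P-E
# loop.  An AFE-like response is mimicked with shifted tanh branches: forward
# AFE->FE switching occurs at a higher field than back-switching, so the
# discharge branch keeps a higher polarization down to lower fields.
import numpy as np

def trapz(y, x):
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

E = np.linspace(0.0, 3.0, 400)                               # field (MV/cm), illustrative
P_charge    = 15.0 * (1 + np.tanh(4 * (E - 1.6)))            # uC/cm^2, switching on the way up
P_discharge = 15.0 * (1 + np.tanh(4 * (E - 1.2)))            # uC/cm^2, back-switching on the way down

W_in  = trapz(E, P_charge)       # energy stored during charging  = int E dP (charge branch)
W_rec = trapz(E, P_discharge)    # energy recovered on discharge  = int E dP (discharge branch)
eff   = W_rec / W_in             # the difference W_in - W_rec is the hysteresis loss

print(f"W_rec ~ {W_rec:.0f} J/cm^3, efficiency ~ {100 * eff:.0f} %")
```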
The overall objective is to optimize the recoverable energy density and the energy conversion efficiency of these capacitors, while establishing design guidelines for compact, efficient, and silicon-integrable energy storage devices. The insights gained from this work will contribute to a deeper understanding of the dynamic mechanisms governing FE/AFE behavior in doped HfO2, with potential impact on ferroelectric memories, energy-harvesting devices, and low-power neuromorphic architectures.
Fabrication of Metasurfaces by Self-Assembly of Block Copolymers
Block copolymers (BCPs) are a rapidly expanding industrial technology, offering promising perspectives for material nanostructuring. These polymers, composed of chemically distinct polymer blocks, self-assemble to form ordered structures at the nanometre scale. However, their current use is limited to a single nanostructuring per product (1 product = 1 nanostructuring), which restricts their application potential.
This PhD proposes to develop an innovative method to create multiple patterns in a single BCP self-assembly step using a mixture of two products. The student will also focus on controlling the localization of these patterns using chemoepitaxy, a technique combining chemical and morphological guidance to precisely control the position of patterns at the micrometric and nanometric scales.
The work will proceed in several steps: understanding the mechanisms of mixed block copolymers, developing functionalized substrates for chemoepitaxy using advanced lithography techniques, and conducting BCP self-assembly experiments on these substrates. The resulting structures will be analyzed using the metrology equipment available at CEA-Leti.
The targeted applications include the creation of nanostructures capable of interacting with light, reducing diffraction, and controlling polarization. The expected results include demonstrating the ability to generate multiple types of patterns in a single self-assembly step, with precise control over their position and dimensions.
Impact of magnetohydrodynamics on the access to and dynamics of X-point radiator (XPR) regimes
ITER and future fusion power plants will need to operate without excessively degrading the plasma-facing components (PFCs) in the divertor, the peripheral element of a tokamak dedicated to heat and particle exhaust. In this context, two key constraints must be considered: heat fluxes must stay below engineering limits both in stationary conditions and during violent transient events. An operational regime recently developed can satisfy both constraints: the X-point Radiator (XPR). Experiments on many tokamaks, in particular WEST, which holds the record plasma duration in this regime (> 40 seconds), have shown that it drastically reduces heat fluxes on PFCs by converting most of the plasma energy into photons and neutral particles, and that it can also mitigate, or even suppress, the deleterious magnetohydrodynamic (MHD) edge instabilities known as ELMs (edge localised modes). The mechanisms governing this mitigation and suppression are still poorly understood. Additionally, the XPR itself can become unstable and trigger a disruption, i.e., a sudden loss of plasma confinement caused by global MHD instabilities.
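To give an order of magnitude for "converting most of the plasma energy into photons", the deliberately crude zero-dimensional estimate below (round illustrative numbers; in/out asymmetry, target tilt, and profile broadening are ignored; this is not the JOREK modelling planned in the thesis) relates the radiated power fraction to the peak stationary heat flux on the divertor target.

```python
# Deliberately crude zero-dimensional estimate (illustration only): what
# radiated-power fraction is needed to keep the peak stationary divertor heat
# flux below an engineering limit, for ITER-like round numbers.
import numpy as np

P_sep   = 100e6     # power crossing the separatrix (W), ITER-like order of magnitude
R       = 6.2       # major radius (m)
lam_q   = 3e-3      # heat-flux decay length (m), illustrative
f_exp   = 5.0       # poloidal flux expansion at the target, illustrative
q_limit = 10e6      # engineering limit on stationary heat flux (W/m^2)

A_wet = 2 * np.pi * R * lam_q * f_exp                  # wetted area of one strike line
f_rad_needed = max(0.0, 1.0 - q_limit * A_wet / P_sep) # fraction of P_sep to radiate
print(f"required radiated fraction ~ {f_rad_needed:.2f}")
```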
The objectives of this PhD are: (i) to understand the physics at play in the XPR-ELM interaction, and (ii) to optimise the access to and stability of the XPR regime. To do so, the student will use the 3D nonlinear MHD code JOREK, the European reference code in the field. The goal is to define the operational limits of a stable XPR with small or no ELMs, and to identify the main actuators (quantity and species of injected impurities, plasma geometry).
Participation in experimental campaigns on the WEST tokamak (operated by IRFM at CEA Cadarache) and on the MAST-U tokamak (operated by UKAEA) is also envisaged, in order to confront the numerical results and predictions with experimental measurements.
Point Spread Function Modelling for Space Telescopes with a Differentiable Optical Model
Context
Weak gravitational lensing [1] is a powerful probe of the Large Scale Structure of our Universe. Cosmologists use weak lensing to study the nature of dark matter and its spatial distribution. Weak lensing missions require highly accurate shape measurements of galaxy images. The instrumental response of the telescope, called the point spread function (PSF), produces a deformation of the observed images. This deformation can be mistaken for the effects of weak lensing in the galaxy images, thus being one of the primary sources of systematic error when doing weak lensing science. Therefore, estimating a reliable and accurate PSF model is crucial for the success of any weak lensing mission [2]. The PSF field can be interpreted as a convolutional kernel that affects each of our observations of interest, which varies spatially, spectrally, and temporally. The PSF model needs to be able to cope with each of these variations. We use specific stars considered point sources in the field of view to constrain our PSF model. These stars, which are unresolved objects, provide us with degraded samples of the PSF field. The observations go through different degradations depending on the properties of the telescope. These degradations include undersampling, integration over the instrument passband, and additive noise. We finally build the PSF model using these degraded observations and then use the model to infer the PSF at the position of galaxies. This procedure constitutes the ill-posed inverse problem of PSF modelling. See [3] for a recent review on PSF modelling.
The recently launched Euclid survey represents one of the most complex challenges for PSF modelling. Because of the very broad passband of Euclid’s visible imager (VIS) ranging from 550nm to 900nm, PSF models need to capture not only the PSF field spatial variations but also its chromatic variations. Each star observation is integrated with the object’s spectral energy distribution (SED) over the whole VIS passband. As the observations are undersampled, a super-resolution step is also required. A recent model coined WaveDiff [4] was proposed to tackle the PSF modelling problem for Euclid and is based on a differentiable optical model. WaveDiff achieved state-of-the-art performance and is currently being tested with recent observations from the Euclid survey.
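The degradation chain described above (SED-weighted integration over the passband, undersampling, additive noise) can be summarized by a toy forward model; the sketch below uses stand-in Gaussian PSFs and made-up numbers, not the actual VIS instrument model.

```python
# Minimal numpy sketch (toy PSFs, not WaveDiff) of the forward observation
# model: a star image is the SED-weighted, passband-integrated PSF, then
# undersampled and corrupted by noise.  Shapes and numbers are illustrative.
import numpy as np

def toy_psf(wavelength_nm, npix=64):
    """Stand-in chromatic PSF: a Gaussian whose width scales with wavelength."""
    sigma = 1.5 * wavelength_nm / 550.0
    y, x = np.mgrid[:npix, :npix] - npix / 2
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()

def downsample(img, factor=3):
    """Pixel-integrated undersampling: sum over factor x factor blocks."""
    n = img.shape[0] // factor * factor
    return img[:n, :n].reshape(n // factor, factor, n // factor, factor).sum(axis=(1, 3))

wavelengths = np.linspace(550.0, 900.0, 8)                  # VIS-like passband (nm)
sed = np.exp(-(wavelengths - 700.0)**2 / (2 * 80.0**2))     # toy stellar SED
sed /= sed.sum()

polychromatic = sum(w * toy_psf(lmbda) for w, lmbda in zip(sed, wavelengths))
observation = downsample(polychromatic) + 1e-4 * np.random.default_rng(1).normal(size=(21, 21))
print(observation.shape)
```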
The James Webb Space Telescope (JWST) was recently launched and is producing outstanding observations. The COSMOS-Web collaboration [5] is a wide-field JWST treasury program that maps a contiguous 0.6 deg² field. The COSMOS-Web observations are available and provide a unique opportunity to test and develop a precise PSF model for JWST. In this context, several science cases beyond weak gravitational lensing can benefit greatly from a precise PSF model. Examples include strong gravitational lensing [6], where the PSF plays a crucial role in the reconstruction, and exoplanet imaging [7], where PSF speckles can mimic the appearance of exoplanets; subtracting an accurate and precise PSF model is therefore essential to improve the imaging and detection of exoplanets.
PhD project
The candidate will aim to develop more accurate and better-performing PSF models for space-based telescopes by exploiting a differentiable optical framework, focusing the effort on Euclid and JWST.
The WaveDiff model is based on the wavefront space and does not consider pixel-based or detector-level effects. These pixel errors cannot be modelled accurately in the wavefront, as they arise directly on the detectors and are unrelated to the telescope's optical aberrations. Therefore, as a first direction, we will extend the PSF modelling approach to account for detector-level effects by combining a parametric and a data-driven (learned) approach. To accomplish this objective, we will exploit the automatic differentiation capabilities of the machine learning frameworks (e.g. TensorFlow, PyTorch, JAX) on which the WaveDiff PSF model is built.
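As a hedged illustration of how automatic differentiation enables this hybrid parametric plus data-driven approach, the toy JAX sketch below replaces the actual wavefront-based optical model with a single-parameter Gaussian and the detector effects with a learned 3x3 kernel; gradients flow through both parts, so they can be fitted jointly with any first-order optimizer.

```python
# Toy sketch (assumption: a Gaussian stands in for the wavefront-based optical
# model, a 3x3 kernel for the detector effects) of a hybrid parametric +
# data-driven PSF model made differentiable with JAX.
import jax
import jax.numpy as jnp

N = 32
ii, jj = jnp.meshgrid(jnp.arange(N) - N / 2, jnp.arange(N) - N / 2, indexing="ij")

def optical_psf(sigma):
    """Parametric 'optical' part: a toy Gaussian in place of the wavefront model."""
    psf = jnp.exp(-(ii**2 + jj**2) / (2 * sigma**2))
    return psf / psf.sum()

def apply_detector_kernel(img, k):
    """Data-driven 'detector' part: a learned 3x3 pixel-response kernel."""
    p = jnp.pad(img, 1)
    return sum(k[a, b] * p[a:a + N, b:b + N] for a in range(3) for b in range(3))

def forward(params):
    return apply_detector_kernel(optical_psf(params["sigma"]), params["kernel"])

def loss(params, observed):
    return jnp.mean((forward(params) - observed) ** 2)

# Synthetic "observed" star from a known width and a slight charge-diffusion kernel.
truth = {"sigma": 2.0,
         "kernel": jnp.array([[0.00, 0.05, 0.00],
                              [0.05, 0.80, 0.05],
                              [0.00, 0.05, 0.00]])}
observed = forward(truth)

params = {"sigma": 3.0, "kernel": jnp.zeros((3, 3)).at[1, 1].set(1.0)}
grads = jax.grad(loss)(params, observed)
# Gradients are available for both the physical parameter and the learned kernel,
# so any standard first-order optimizer can fit them jointly.
print(grads["sigma"], grads["kernel"].shape)
```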
As a second direction, we will consider the joint estimation of the PSF field and the stellar Spectral Energy Distributions (SEDs) by exploiting repeated exposures, or dithers. The goal is to improve and calibrate the original SED estimation by exploiting the PSF modelling information. We will rely on our PSF model and on the fact that repeated observations of the same object produce different star images (as the object is imaged at different focal-plane positions) while sharing the same SED.
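A minimal sketch of why dithers help (with random stand-in monochromatic PSFs that are assumed known here, which is of course the hard part in practice): since all exposures of a star share one SED, stacking them yields an overdetermined non-negative least-squares problem for the SED weights.

```python
# Illustration (toy data): the same SED weights must explain star images taken
# at different focal-plane positions, so stacking exposures turns SED recovery
# into an overdetermined non-negative least-squares problem.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_pix, n_bins, n_exposures = 16 * 16, 6, 4

# Known (here: random) monochromatic PSFs for each exposure and wavelength bin.
psfs = rng.random((n_exposures, n_bins, n_pix))
true_sed = np.array([0.05, 0.15, 0.30, 0.25, 0.15, 0.10])

# Each exposure is the SED-weighted sum of its own monochromatic PSFs + noise.
observations = np.einsum("b,ebp->ep", true_sed, psfs)
observations += 1e-3 * rng.normal(size=observations.shape)

# Stack all exposures into one linear system and solve for the shared SED.
A = psfs.transpose(0, 2, 1).reshape(-1, n_bins)      # (n_exposures * n_pix, n_bins)
y = observations.reshape(-1)
sed_hat, _ = nnls(A, y)
print(np.round(sed_hat, 3))
```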
Another direction will be to extend WaveDiff to more general astronomical observatories, such as JWST, with smaller fields of view. We will need to constrain the PSF model with observations from several bands, so that a single model is informed by more data. The objective is to develop the next PSF model for JWST, make it available for widespread use, and validate it with the real data from the COSMOS-Web JWST program.
A further direction will be to extend the performance of WaveDiff by including a continuous field in the form of an implicit neural representation [8], or neural field [9], to address the spatial variations of the PSF in the wavefront space with a more powerful and flexible model.
Finally, throughout the PhD, the candidate will collaborate on Euclid’s data-driven PSF modelling effort, which consists of applying WaveDiff to real Euclid data, and the COSMOS-Web collaboration to exploit JWST observations.
References
[1] R. Mandelbaum. “Weak Lensing for Precision Cosmology”. In: Annual Review of Astronomy and Astrophysics 56 (2018), pp. 393–433. doi: 10.1146/annurev-astro-081817-051928. arXiv: 1710.03235.
[2] T. I. Liaudat et al. “Multi-CCD modelling of the point spread function”. In: A&A 646 (2021), A27. doi: 10.1051/0004-6361/202039584.
[3] T. I. Liaudat, J.-L. Starck, and M. Kilbinger. “Point spread function modelling for astronomical telescopes: a review focused on weak gravitational lensing studies”. In: Frontiers in Astronomy and Space Sciences 10 (2023). doi: 10.3389/fspas.2023.1158213.
[4] T. I. Liaudat, J.-L. Starck, M. Kilbinger, and P.-A. Frugier. “Rethinking data-driven point spread function modeling with a differentiable optical model”. In: Inverse Problems 39.3 (Feb. 2023), p. 035008. doi: 10.1088/1361-6420/acb664.
[5] C. M. Casey et al. “COSMOS-Web: An Overview of the JWST Cosmic Origins Survey”. In: The Astrophysical Journal 954.1 (Aug. 2023), p. 31. doi: 10.3847/1538-4357/acc2bc.
[6] A. Acebron et al. “The Next Step in Galaxy Cluster Strong Lensing: Modeling the Surface Brightness of Multiply Imaged Sources”. In: ApJ 976.1 (Nov. 2024), p. 110. doi: 10.3847/1538-4357/ad8343. arXiv: 2410.01883 [astro-ph.GA].
[7] B. Y. Feng et al. “Exoplanet Imaging via Differentiable Rendering”. In: IEEE Transactions on Computational Imaging 11 (2025), pp. 36–51. doi: 10.1109/TCI.2025.3525971.
[8] Y. Xie et al. “Neural Fields in Visual Computing and Beyond”. In: arXiv e-prints (Nov. 2021). doi: 10.48550/arXiv.2111.11426. arXiv: 2111.11426 [cs.CV].
[9] B. Mildenhall et al. “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis”. In: arXiv e-prints (Mar. 2020). doi: 10.48550/arXiv.2003.08934. arXiv: 2003.08934 [cs.CV].
Development and validation of surface haptics machine learning algorithms for touch and dexterity assessment in neurodevelopmental disorders
The aim of this PhD thesis is to develop new clinical assessment methods using surface haptics technologies developed at CEA List, together with machine learning algorithms, for testing and monitoring tactile-motor integration. In particular, the thesis will develop and validate a multimodal analytics pipeline that converts surface haptics signals and dexterity exercise inputs (i.e. tactile stimulation events, finger kinematics, contact forces, and millisecond timing) into reliable, interpretable biomarkers of tactile perception and sensorimotor coupling, and then classifies normative versus atypical integration patterns with clinical fidelity for assessment.
Expected results: a novel technology and models for the rapid and feasible measurement of tactile-motor deficits in clinical settings, with an initial validation in different neurodevelopmental disorders (i.e. first-episode psychosis, autism spectrum disorder, and dyspraxia). The methods developed and data collected will provide:
(1) an open, versioned feature library for tactile–motor assessment;
(2) classifiers with predefined operating points (sensitivity/specificity), as illustrated in the sketch after this list;
(3) and an on-device/edge-ready pipeline, i.e. one able to run locally on typical tablet hardware while meeting constraints on latency, computing, and data privacy. Success will be measured by the reproducibility of the features, clinically meaningful effect sizes, and interpretable decision logic that maps back to known neurophysiology rather than artefacts.
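The "predefined operating point" requirement of item (2) can be sketched as follows (synthetic features and hypothetical names; this is not the pipeline to be developed): fit a classifier, then select the decision threshold that reaches a target sensitivity on held-out data and report the specificity obtained at that threshold.

```python
# Minimal sketch (synthetic data) of a classifier with a predefined operating
# point: train, then pick the threshold meeting a target sensitivity on a
# held-out set and report the corresponding specificity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical tactile-motor features, e.g. detection-latency and force-variability scores.
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

scores = clf.predict_proba(X_va)[:, 1]
fpr, tpr, thr = roc_curve(y_va, scores)

target_sensitivity = 0.90
idx = np.argmax(tpr >= target_sensitivity)          # first threshold meeting the target
print(f"threshold={thr[idx]:.3f}  sensitivity={tpr[idx]:.2f}  specificity={1 - fpr[idx]:.2f}")
```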
Multiscale modeling of rare earth ion emission from ionic liquids under intense electric fields
The main objective of this thesis is to model the mechanisms of rare earth ion emission from ionic liquids subjected to an intense electric field, in order to identify the conditions favorable to the emission of weakly complexed ions.
The aim is to establish rational criteria for the design of new ILIS sources suitable for the localized implantation of rare earths in photonic devices.
The thesis work will be based on large-scale molecular dynamics simulations, reproducing the emission region of a Taylor cone under an electric field.
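Alongside the large-scale molecular dynamics, a classical reduced model of field-assisted ion evaporation (the Arrhenius/Schottky picture sketched below; the solvation energy value is an illustrative assumption and the proposal does not commit to this model) gives a feel for the field range over which emission rates become significant near a Taylor cone apex.

```python
# Reduced-model sketch (illustration only): the activation barrier for
# field-assisted ion evaporation is the solvation free energy lowered by
# sqrt(q^3 E / (4 pi eps0)), and the emission rate follows an Arrhenius law.
# The solvation energy dG is an assumed, illustrative value.
import numpy as np

K_B  = 1.380649e-23      # J/K
H    = 6.62607015e-34    # J s
E_C  = 1.602176634e-19   # C
EPS0 = 8.8541878128e-12  # F/m

def ion_evaporation_rate(E_field, dG_eV=1.5, T=300.0, q=E_C):
    """Arrhenius rate (1/s) of field-assisted ion evaporation."""
    schottky = np.sqrt(q**3 * E_field / (4 * np.pi * EPS0))   # barrier reduction (J)
    barrier = dG_eV * E_C - schottky
    return (K_B * T / H) * np.exp(-barrier / (K_B * T))

for E in (0.5e9, 1.0e9, 1.5e9):    # V/m, the order of magnitude near a Taylor cone apex
    print(f"E = {E:.1e} V/m  ->  rate ~ {ion_evaporation_rate(E):.2e} 1/s")
```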
The simulations will be compared with emission experiments conducted in parallel by the SIMUL group in collaboration with Orsay Physics TESCAN, using a prototype ILIS source doped with rare earths. Comparisons of measurements (mass spectrometry, energy distribution) will enable the models to be adjusted and the proposed mechanisms to be validated.
Magnetic Tunnel Junctions at the Limits
Spin electronics, thanks to the additional degree of freedom provided by electron spin, enables the deployment of a rich physics of magnetism on a small scale, but also provides breakthrough technological solutions in the field of microelectronics (storage, memory, logic, etc.) as well as for magnetic field measurement.
In the field of life sciences and health, giant magnetoresistance (GMR) devices have demonstrated the possibility of measuring, on a local scale, the very weak fields produced by excitable cells (Caruso et al., Neuron, 2017; Klein et al., Journal of Neurophysiology, 2025).
Measuring the information contained in the magnetic component associated with neural currents (or magnetophysiology) can, in principle, provide a description of the dynamic, directional and differentiating neural landscape. It could pave the way for new types of implants, thanks to their immunity to gliosis and their longevity.
The current bottleneck is the very small amplitude of the signal produced (< 1 nT), which requires averaging the signal in order to detect it.
Tunnel magnetoresistance (TMR) devices, in which a spin-polarised tunnel current is measured, offer sensitivity performance more than an order of magnitude higher than that of GMRs. However, they currently exhibit too high a level of low-frequency noise to be fully beneficial, particularly in the context of measuring biological signals.
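The impact of this low-frequency noise can be made quantitative with an order-of-magnitude sketch (all numbers below are illustrative assumptions, using a Hooge-like 1/f parameterization commonly applied to magnetic tunnel junctions): the voltage noise divided by the field sensitivity gives the field detectivity, which is the figure that must be pushed towards, and below, the nT/√Hz range for biological signals.

```python
# Order-of-magnitude sketch (illustrative numbers, Hooge-like 1/f model):
# converting the low-frequency voltage noise of a biased TMR into an
# equivalent field detectivity.
import numpy as np

def tmr_detectivity(f, v_bias=0.1, tmr=2.0, b_lin=5e-3, alpha=1e-9, area_um2=100.0):
    """Field detectivity (T/sqrt(Hz)) at frequency f (Hz).

    v_bias  : bias voltage (V)
    tmr     : TMR ratio (2.0 = 200 %)
    b_lin   : half of the linear field range (T)
    alpha   : Hooge-like 1/f parameter (um^2), strongly sample-dependent
    area_um2: junction area (um^2)
    """
    sensitivity = v_bias * tmr / (2 * b_lin)        # V/T, linearized response
    s_v = alpha * v_bias**2 / (area_um2 * f)        # 1/f voltage noise PSD (V^2/Hz)
    return np.sqrt(s_v) / sensitivity

for f in (1.0, 10.0, 100.0):
    print(f"f = {f:5.0f} Hz  ->  detectivity ~ {tmr_detectivity(f) * 1e9:.1f} nT/sqrt(Hz)")
```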
The aim of this thesis is to push back the current limits of TMRs by reducing their low-frequency noise, positioning them as breakthrough sensors for measuring very weak signals and exploiting their potential as amplifiers for small signals.
To achieve this objective, an initial approach based on exploring the materials composing the tunnel junction, in particular those of the so-called free magnetic layer, or on improving the crystallinity of the tunnel barrier, will be deployed. A second approach, consisting of studying the intrinsic properties of low-frequency noise, particularly in previously unexplored limits, at very low temperatures where intrinsic mechanisms are reached, will guide the most promising solutions.
Finally, the most advanced structures and approaches thus obtained will be integrated into devices that will provide the building blocks for going beyond the state of the art and offering new possibilities for spin electronics applications. These elements will also be integrated into systems for 2D (or even 3D) mapping of the activity of a global biological system (a neural network) and for evaluating their capabilities in clinical cases (such as epilepsy or motor rehabilitation).
It should be noted that these improved TMRs may have other applications in the fields of physical instrumentation, non-destructive testing, and magnetic imaging.
Physical-attack-assisted cryptanalysis for error-correcting code-based schemes
The security assessment of post-quantum cryptography, from the perspective of physical attacks, has been extensively studied in the literature, particularly with regard to the ML-KEM and ML-DSA standards, which are based on Euclidean lattices. Furthermore, in March 2025, the HQC scheme, based on error-correcting codes, was standardized as an alternative key encapsulation mechanism to ML-KEM. Recently, Soft-Analytical Side-Channel Attacks (SASCA) have been used on a wide variety of algorithms to combine information related to intermediate variables in order to trace back to the secret, providing a form of “correction” to the uncertainty associated with profiled attacks. SASCA is based on probabilistic models called “factor graphs,” to which a “belief propagation” algorithm is applied. In the case of attacks on post-quantum cryptosystems, it is theoretically possible to use the underlying mathematical structure to process the output of a SASCA attack in the form of cryptanalysis. This has been demonstrated, for example, on ML-KEM. The objective of this thesis is to develop a methodology and the necessary tools for cryptanalysis and residual complexity calculation for cryptography based on error-correcting codes. These tools will need to take into account information (“hints”) obtained from a physical attack. A second part of the thesis will be to study the impact that this type of tool can have on the design of countermeasures.
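As a toy illustration of the factor-graph machinery underlying SASCA (a deliberately tiny example with 2-bit variables and made-up leakage priors, not an attack on HQC), the sketch below combines noisy side-channel information on two intermediates linked by an XOR relation through a single exact belief-propagation message.

```python
# Toy sketch of "factor graph + belief propagation": two 2-bit intermediates
# x and y, a known relation z = x XOR y, and noisy "leakage" priors on x, y
# and z.  On this tree-shaped graph a single message pass is exact.
import numpy as np

rng = np.random.default_rng(1)
n = 4                                   # 2-bit variables: values 0..3

def noisy_prior(true_value, strength=2.0):
    """Stand-in for a profiled side-channel likelihood over the n values."""
    p = np.ones(n) + strength * (np.arange(n) == true_value)
    p += 0.2 * rng.random(n)
    return p / p.sum()

x_true, y_true = 2, 1
prior_x = noisy_prior(x_true)
prior_y = noisy_prior(y_true)
prior_z = noisy_prior(x_true ^ y_true)

# Message from the XOR factor to variable x: marginalize y out of the factor,
# weighting by the evidence available on y and z.
msg_to_x = np.array([sum(prior_y[y] * prior_z[x ^ y] for y in range(n)) for x in range(n)])

posterior_x = prior_x * msg_to_x
posterior_x /= posterior_x.sum()
print(np.round(posterior_x, 3), "-> most likely x =", int(posterior_x.argmax()))
```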