Surface technologies for enhanced superconducting qubit lifetimes

Materials imperfections in superconducting quantum circuits—in particular, two-level-system (TLS) defects—are a major source of decoherence, ultimately limiting qubit performance. Identifying the microscopic origin of TLS defects in these devices and developing strategies to eliminate them is therefore key to improving superconducting qubit performance. This project proposes an original approach that combines passivation of the superconductor's surface with films deposited by Atomic Layer Deposition (ALD), which inherently have lower densities of TLS defects, and thermal treatments designed to dissolve the native oxides initially present. These passivating layers will first be tested on 3D Nb resonators, then implemented in 2D resonators and qubits whose coherence times will be measured. The project will also carry out systematic materials studies with complementary characterization techniques in order to correlate improvements in qubit performance with the chemical and crystalline alteration of the surface.

Bayesian Neural Networks with Ferroelectric Memory Field-Effect Transistors (FeMFETs)

Artificial Intelligence (AI) increasingly powers safety-critical systems that demand robust, energy-efficient computation, often in environments marked by data scarcity and uncertainty. However, conventional AI approaches struggle to quantify confidence in their predictions, making them prone to unreliable or unsafe decisions.

This thesis contributes to the emerging field of Bayesian electronics, which exploits the intrinsic randomness of novel nanodevices to perform on-device Bayesian computation. By directly encoding probability distributions at the hardware level, these devices naturally enable uncertainty estimation while reducing computational overhead compared to traditional deterministic architectures.
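As a rough illustration of this principle, the sketch below implements the Monte Carlo view of a Bayesian layer in plain Python: each weight is a distribution rather than a point value, and the spread of sampled predictions quantifies uncertainty. The Gaussian sampling here is a software stand-in for what a stochastic nanodevice would perform physically; all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bayesian layer: each weight is a (mean, std) pair rather than a
# point value. In Bayesian electronics, the sampling below would be done
# by the intrinsic randomness of the nanodevice itself (stand-in here).
w_mean = rng.normal(size=(4, 2))   # learned weight means
w_std = 0.1 * np.ones((4, 2))      # learned weight uncertainties

def predict(x, n_samples=100):
    """Monte Carlo prediction: sample weights, collect outputs."""
    outs = []
    for _ in range(n_samples):
        w = rng.normal(w_mean, w_std)       # one weight realization
        outs.append(np.tanh(x @ w))
    outs = np.array(outs)
    return outs.mean(axis=0), outs.std(axis=0)  # prediction + uncertainty

x = rng.normal(size=(1, 4))
mean, std = predict(x)
print("prediction:", mean, "uncertainty:", std)
```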

Previous studies have demonstrated the promise of memristors for Bayesian inference. However, their limited endurance and high programming energy pose significant obstacles for on-chip learning applications.

This thesis proposes the use of ferroelectric memory field-effect transistors (FeMFETs)—which offer nondestructive readout and high endurance—as a promising alternative for implementing Bayesian neural networks.

Adaptive and explainable Video Anomaly Detection

Video Anomaly Detection (VAD) aims to automatically identify unusual events in video that deviate from normal patterns. Existing methods often rely on One-Class or Weakly Supervised learning: the former uses only normal data for training, while the latter leverages video-level labels. Recent advances in Vision-Language Models (VLMs) and Large Language Models (LLMs) have improved both the performance and explainability of VAD systems. Despite progress on public benchmarks, challenges remain. Most methods are limited to a single domain, leading to performance drops when applied to new datasets with different anomaly definitions. Additionally, they assume all training data is available upfront, which is unrealistic for real-world deployment where models must adapt to new data over time. Few approaches explore multimodal adaptation using natural language rules to define normal and abnormal events, offering a more intuitive and flexible way to update VAD systems without needing new video samples.
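As a minimal illustration of the one-class paradigm mentioned above, the sketch below fits a low-rank model of "normal" feature vectors and scores anomalies by reconstruction error. The features are random placeholders standing in for embeddings from a video backbone, and the method (PCA reconstruction) is a generic one-class baseline, not a specific system from the literature.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal" training features (e.g., per-clip embeddings from a video
# backbone; random placeholders here for illustration).
train = rng.normal(size=(500, 64))

# Fit a low-rank model of normality (PCA); anomalies are clips that
# reconstruct poorly from the normal subspace.
mu = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mu, full_matrices=False)
basis = vt[:16]                      # top-16 principal directions

def anomaly_score(x):
    """Reconstruction error w.r.t. the 'normal' subspace."""
    centered = x - mu
    recon = centered @ basis.T @ basis
    return np.linalg.norm(centered - recon, axis=-1)

normal_clip = rng.normal(size=(1, 64))
odd_clip = normal_clip + 5.0         # synthetic deviation
print(anomaly_score(normal_clip), anomaly_score(odd_clip))
```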

This PhD research aims to develop adaptable Video Anomaly Detection methods capable of handling new domains or anomaly types using few video examples and/or textual rules.

The main lines of research will be the following:
• Cross-Domain Adaptation in VAD: improving robustness against domain gaps through Few-Shot adaptation;
• Continual Learning in VAD: continually enriching the model to deal with new types of anomalies;
• Multimodal Few-Shot Learning: facilitating the model adaptation process through rules in natural language.

Cosmological parameter inference using theoretical predictions of wavelet statistics

Launched in 2023, the Euclid satellite is surveying the sky in optical and infrared wavelengths to create an unprecedented map of the Universe's large-scale structure. A cornerstone of its mission is the measurement of weak gravitational lensing—subtle distortions in the shapes of distant galaxies. This phenomenon is a powerful cosmological probe, capable of tracing the evolution of dark matter and helping to distinguish between dark energy and modified gravity theories.
Traditionally, cosmologists have analyzed weak lensing data using second-order statistics (like the power spectrum) paired with a Gaussian likelihood model. This established approach, however, faces significant challenges:
- Loss of Information: Second-order statistics fully capture information only if the underlying matter distribution is Gaussian. In reality, the cosmic web is highly structured, with clusters, filaments, and voids, making this approach inherently lossy.
- Complex Covariance: The method requires estimating a covariance matrix, which is both cosmology-dependent and non-Gaussian. This necessitates running thousands of computationally intensive N-body simulations for each model, a massive and often impractical undertaking.
- Systematic Errors: Incorporating real-world complications—such as survey masks, intrinsic galaxy alignments, and baryonic feedback—into this framework is notoriously difficult.

In response to these limitations, a new paradigm has emerged: likelihood-free inference via forward modelling. This technique bypasses the need for a covariance matrix by directly comparing real data to synthetic observables generated from a forward model. Its advantages are profound: it eliminates the storage and computational burden of massive simulation sets, naturally incorporates high-order statistical information, and can seamlessly integrate systematic effects. However, this new method has its own hurdles: it demands immense GPU resources to process Euclid-sized surveys, and its conclusions are only as reliable as the simulations it uses, potentially leading to circular debates if simulations and observations disagree.
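The logic of likelihood-free inference can be conveyed with a deliberately tiny example: a toy one-parameter forward model plays the role of the lensing simulator, and rejection ABC keeps the parameter draws whose simulated summaries fall close to the observed ones, with no covariance matrix involved. Everything here (the model, the summaries, the tolerance) is illustrative, not the project's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward_model(sigma8):
    """Toy stand-in for a lensing simulator: one scalar parameter
    controls the amplitude of a mock 'convergence map'."""
    kappa = sigma8 * rng.normal(size=(64, 64))
    return np.array([kappa.std(), np.abs(kappa).mean()])  # summaries

# "Observed" data generated at a fiducial parameter value
obs = forward_model(0.8)

# Rejection ABC: draw parameters from the prior, keep those whose
# simulated summaries land close to the observed ones.
draws = rng.uniform(0.5, 1.1, size=5000)
kept = [s for s in draws
        if np.linalg.norm(forward_model(s) - obs) < 0.025]
print(f"posterior mean ~ {np.mean(kept):.3f} from {len(kept)} samples")
```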

A recent breakthrough (Tinnaneri Sreekanth et al., 2024) offers a compelling path forward. This work provides the first theoretical framework to directly predict key wavelet statistics of weak lensing convergence maps—exactly the kind Euclid will produce—for any given set of cosmological parameters. Ajani et al. (2021) showed that the wavelet-coefficient l1-norm is extremely powerful for constraining cosmological parameters. This innovation promises to harness the power of advanced, non-Gaussian statistics without the traditional computational overhead, potentially unlocking a new era of precision cosmology. We have demonstrated that this theoretical prediction can be used to build a highly efficient emulator (Tinnaneri Sreekanth et al., 2025), dramatically accelerating the computation of these non-Gaussian statistics. However, it is crucial to note that this emulator, in its current stage, provides only the mean statistic and does not include cosmic variance. As such, it cannot yet be used for full statistical inference on its own.
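For concreteness, the sketch below computes a simplified version of this statistic: the per-scale l1-norms of a starlet (isotropic undecimated B3-spline) transform of a mock convergence map. Ajani et al. (2021) additionally bin the coefficients by signal-to-noise, which is omitted here, and the input map is a random placeholder.

```python
import numpy as np
from scipy.ndimage import convolve1d

def starlet_l1norm(kappa, n_scales=4):
    """Per-scale l1-norms of the starlet ('a trous' B3-spline)
    wavelet transform of a 2D map."""
    h = np.array([1, 4, 6, 4, 1]) / 16.0    # B3-spline kernel
    c = kappa.astype(float)
    norms = []
    for j in range(n_scales):
        kernel = np.zeros(4 * 2**j + 1)
        kernel[::2**j] = h                   # dilate kernel (holes)
        smooth = convolve1d(convolve1d(c, kernel, axis=0, mode='mirror'),
                            kernel, axis=1, mode='mirror')
        w = c - smooth                       # wavelet band at scale j
        norms.append(np.abs(w).sum())        # l1-norm of the band
        c = smooth
    return np.array(norms)

kappa = np.random.default_rng(3).normal(size=(128, 128))  # mock map
print(starlet_l1norm(kappa))
```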

This PhD thesis aims to revolutionize the analysis of weak lensing data by constructing a complete, end-to-end framework for likelihood-free cosmological inference. The project begins by addressing the core challenge of stochasticity: we will first calculate the theoretical covariance of wavelet statistics, providing a rigorous mathematical description of their uncertainty. This model will then be embedded into a stochastic map generator, creating realistic mock data that captures the inherent variability of the Universe.
To ensure our results are robust, we will integrate a comprehensive suite of systematic effects—such as noise, masks, intrinsic alignments, and baryonic physics—into the forward model. The complete pipeline will be integrated and validated within a simulation-based inference framework, rigorously testing its power to recover unbiased cosmological parameters. The culmination of this work will be the application of our validated tool to the Euclid weak lensing data, where we will leverage non-Gaussian information to place competitive constraints on dark energy and modified gravity.
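As a minimal sketch of the stochastic-generator idea: once a theoretical mean vector and covariance for the wavelet statistics are in hand, mock realizations including cosmic variance can be drawn from a multivariate Gaussian, as below. The mean, the covariance, and the Gaussianity assumption are all placeholders for the project's actual theoretical inputs.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder theoretical prediction: mean wavelet-statistic vector and
# its covariance (in the project these would come from the LDT-based
# theory and the covariance calculation, not from random numbers).
mean_stat = np.array([3.0, 1.8, 0.9, 0.4])   # e.g. l1-norm per scale
A = rng.normal(size=(4, 4))
cov = 0.01 * (A @ A.T + 4 * np.eye(4))       # any SPD matrix works here

# Stochastic generator: mock realizations including "cosmic variance"
mocks = rng.multivariate_normal(mean_stat, cov, size=1000)
print("scatter per scale:", mocks.std(axis=0))
```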

References
V. Ajani, J.-L. Starck and V. Pettorino, "Starlet l1-norm for weak lensing cosmology", Astronomy and Astrophysics, 645, L11, 2021.
V. Tinnaneri Sreekanth, S. Codis, A. Barthelemy and J.-L. Starck, "Theoretical wavelet l1-norm from one-point PDF prediction", Astronomy and Astrophysics, 691, A80, 2024.
V. Tinnaneri Sreekanth, J.-L. Starck and S. Codis, "Generative modeling of convergence maps based on LDT theoretical prediction", Astronomy and Astrophysics, 701, A170, 2025.

Development and Characterization of Terahertz Source Matrices Co-integrated in Silicon and III-V Photonics Technology

The terahertz (THz) range (0.1–10 THz) is increasingly exploited for imaging and spectroscopy (e.g. security scanning, medical diagnostics, non-destructive testing) because many materials are transparent to THz radiation and have unique spectral signatures. However, existing sources struggle to offer both high power and wide tunability: electronic sources (diodes, QCLs) deliver milliwatts but over narrow bands, while photonic emitters (photomixers in III–V semiconductors) are tunable across broad bands but emit only microwatts. This thesis aims to overcome these limitations by developing an integrated matrix of THz sources. The approach is based on photomixing two 1.55 µm lasers in III–V photodiodes to generate a phase-coherent THz current coupled to THz antennas.
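The emitted frequency is simply the optical difference frequency of the two lasers, f_THz = |c/λ1 - c/λ2|; near 1550 nm, a detuning of about 8 nm already yields roughly 1 THz, as this short check shows:

```python
c = 299_792_458.0  # speed of light, m/s

def thz_beat(lambda1_nm, lambda2_nm):
    """Difference frequency of two lasers (photomixing output), in THz."""
    return abs(c / (lambda1_nm * 1e-9) - c / (lambda2_nm * 1e-9)) / 1e12

# Around 1550 nm, ~8 nm of detuning gives on the order of 1 THz
print(f"{thz_beat(1550.0, 1558.0):.2f} THz")
```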
Initially, the PhD student will experimentally investigate an existing 16-element THz antenna array (STYX project, CEA-CTReg/DNAQ): setting up the test bench and measuring phase coherence, optical coupling, radiation lobes, and constructive interference. These experiments will provide a scientific foundation for the subsequent design of an integrated photonic array on silicon. The student will simulate the photonic architecture (couplers, waveguides, phase modulators, Si/III–V transitions) synchronizing multiple InGaAs photodiodes. Prototyping will include the fabrication of silicon photonic circuits (CEA-LETI) and THz photodiodes/antennas in InP (III-V Lab or, to be confirmed, the Fraunhofer Heinrich Hertz Institute, HHI), followed by their hybrid integration (bonding, alignment).
This thesis will also rely on close collaboration with the IMS laboratory (Bordeaux), which is nationally and internationally recognized for its expertise in silicon photonics and THz systems. IMS will provide complementary expertise in optical modeling, electromagnetic simulation, and experimental characterization, reinforcing the multidisciplinary strength of the project.
Finally, the ultimate goal of this thesis is a proof-of-concept demonstrator in which a few phase-locked THz emitters (e.g. 4–16) will be produced and characterized, showing enhanced beam directivity and output power thanks to constructive interference. This demonstration will pave the way for large-scale THz source arrays with significantly improved range and penetration for advanced THz imaging systems.
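The payoff of phase locking can be quantified with a standard uniform-linear-array factor: the on-axis fields of N coherent emitters add linearly, so peak radiated power grows as N² while the main lobe narrows. The sketch below compares N = 4 and N = 16 at half-wavelength spacing; it assumes idealized isotropic elements and is not a simulation of the STYX array.

```python
import numpy as np

def array_factor(n_elements, spacing_wl, theta):
    """Uniform linear array factor (isotropic elements, equal phases).
    spacing_wl: element spacing in wavelengths."""
    k = 2 * np.pi                    # wavenumber in units of 1/lambda
    n = np.arange(n_elements)
    phase = k * spacing_wl * n[:, None] * np.sin(theta)[None, :]
    return np.abs(np.exp(1j * phase).sum(axis=0))

theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
for n in (4, 16):
    af = array_factor(n, 0.5, theta)
    # On-axis power grows as N^2; beamwidth shrinks roughly as 1/N
    main_lobe = theta[af >= af.max() / np.sqrt(2)]
    print(f"N={n:2d}: peak power x{af.max()**2:.0f}, "
          f"-3 dB width {np.degrees(np.ptp(main_lobe)):.1f} deg")
```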

Modeling of a magnonic diode based on spin-wave non-reciprocity in nanowires and nanotubes

This PhD project focuses on the emerging phenomenon of spin-wave non-reciprocity in cylindrical magnetic wires, from its fundamental properties to its exploitation in magnonic-diode-based devices. Preliminary experiments conducted in our laboratory, SPINTEC, on cylindrical wires with axial magnetization in the core and azimuthal magnetization at the wire surface revealed a giant non-reciprocal effect (asymmetric dispersion curves with different velocities and periods for left- and right-propagating waves), up to the point of creating a band gap for one direction of propagation, depending on the circulation of magnetization (right or left). This situation has not yet been described theoretically or modeled, which sets an unexplored and promising ground for this PhD project. To model spin-wave propagation and derive dispersion curves for a given material we plan to use different numerical tools: our in-house 3D finite-element micromagnetic software feeLLGood and the open-source 2D TetraX package dedicated to eigenmode spectrum calculations. This work will be conducted in tight collaboration with experimentalists, with a view both to explaining experimental results and to guiding further experiments and research directions.
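A common numerical recipe for extracting dispersion curves from time-domain micromagnetics (the kind of output a solver like feeLLGood produces) is to excite the wire with a broadband pulse, record m(x, t) along the axis, and take a two-dimensional FFT: the power in (k, f) space traces the dispersion, and any asymmetry between +k and -k is the non-reciprocity signature. The sketch below runs this analysis on a synthetic, deliberately non-reciprocal signal rather than real simulation data.

```python
import numpy as np

# Mock magnetization record m(x, t) along the wire axis; in practice
# this would come from a micromagnetic run after a broadband pulse.
nx, nt, dx, dt = 512, 2048, 5e-9, 1e-12     # 5 nm cells, 1 ps steps
x = np.arange(nx) * dx
t = np.arange(nt) * dt
# Synthetic non-reciprocal signal: different frequencies for +k and -k
m = (np.sin(2e7 * x[:, None] + 2.0e10 * t[None, :])
     + np.sin(-2e7 * x[:, None] + 1.2e10 * t[None, :]))

# 2D FFT: power in (k, f) space traces the dispersion relation; any
# asymmetry between +k and -k reveals the non-reciprocity.
spec = np.abs(np.fft.fftshift(np.fft.fft2(m)))**2
k_axis = np.fft.fftshift(np.fft.fftfreq(nx, dx)) * 2 * np.pi  # rad/m
f_axis = np.fft.fftshift(np.fft.fftfreq(nt, dt))              # Hz
imax = np.unravel_index(spec.argmax(), spec.shape)
print(f"strongest mode: k = {k_axis[imax[0]]:.2e} rad/m, "
      f"f = {abs(f_axis[imax[1]]):.2e} Hz")
```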

AI-Driven Network Management with Large Language Models (LLMs)

The increasing complexity of heterogeneous networks (satellite, 5G, IoT, TSN) requires an evolution in network management. Intent-Based Networking (IBN), while advanced, still faces challenges in unambiguously translating high-level intentions into technical configurations. This work proposes to overcome this limitation by leveraging Large Language Models (LLMs) as a cognitive interface for complete and reliable automation.
This thesis aims to design and develop an IBN-LLM framework to create the cognitive brain of a closed control loop on top of an SDN architecture. The work will focus on three major challenges: 1) developing a reliable semantic translator from natural language to network configurations; 2) designing a deterministic Verification Engine (via simulations or digital twins) to prevent LLM "hallucinations"; and 3) integrating real-time analysis capabilities via Retrieval-Augmented Generation (RAG) for Root Cause Analysis (RCA) and the proactive generation of optimization intents.
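A skeleton of such a closed loop might look like the following, where every component is a hypothetical placeholder: llm_translate would prompt an LLM, verify would query a simulator or digital twin, and deploy would call the SDN controller's API.

```python
# Skeleton of the intent -> configuration -> verification loop described
# above. All names and data are hypothetical placeholders.

def llm_translate(intent: str, feedback: str = "") -> dict:
    """Placeholder: prompt an LLM to emit a candidate network config."""
    return {"vlan": 42, "qos": "low-latency", "note": feedback}

def verify(config: dict) -> tuple[bool, str]:
    """Placeholder: deterministic check in a simulator/digital twin,
    guarding against LLM hallucinations before anything is deployed."""
    ok = config.get("qos") == "low-latency"
    return ok, "" if ok else "qos policy violates SLA"

def deploy(config: dict) -> None:
    print("pushing config to SDN controller:", config)

def closed_loop(intent: str, max_retries: int = 3) -> bool:
    feedback = ""
    for _ in range(max_retries):
        config = llm_translate(intent, feedback)
        ok, feedback = verify(config)   # feed errors back to the LLM
        if ok:
            deploy(config)
            return True
    return False                        # escalate to a human operator

closed_loop("Give video traffic from site A priority under 10 ms latency")
```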
We anticipate the design of an IBN-LLM architecture integrated with SDN controllers, along with methodologies for the formal verification of configurations. The core contribution will be the creation of an LLM-based model capable of performing RCA and generating optimization intents in real-time. The validation of the approach will be ensured by a functional prototype (PoC), whose experimental evaluation will allow for the precise measurement of performance in terms of accuracy, latency, and resilience.

Axion searches in the SuperDAWA experiment with superconducting magnets and microwave radiometry

Axions are hypothetical particles that could both explain a fundamental problem in strong interactions (the conservation of CP symmetry in QCD) and account for a significant fraction of dark matter. Their direct detection is therefore a key challenge in both particle physics and cosmology.

The SuperDAWA experiment, currently under construction at CEA Saclay, uses superconducting magnets and a microwave radiometer placed inside a cryostat. This setup aims to convert potential axions into measurable radio waves, with frequencies directly linked to the axion mass.
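The mass-frequency link is direct: a converted photon satisfies h f = m_a c², so, for example, a 40 µeV axion corresponds to a signal near 9.7 GHz. A short check with illustrative masses:

```python
# Axion mass <-> photon frequency: h f = m_a c^2
h_eV = 4.135667696e-15          # Planck constant in eV*s

def axion_freq_GHz(mass_ueV):
    """Photon frequency (GHz) for an axion mass given in micro-eV."""
    return mass_ueV * 1e-6 / h_eV / 1e9

for m in (1.0, 40.0, 100.0):    # illustrative masses in micro-eV
    print(f"m_a = {m:6.1f} ueV  ->  f = {axion_freq_GHz(m):7.2f} GHz")
```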

The proposed PhD will combine numerical modeling with hands-on experimental work. The student will develop a detailed model of the experiment, including magnetic fields, radio signal propagation, and detector electronics, validated step by step with real measurements. Once the experiment is running, the PhD candidate will participate in data-taking campaigns and their analysis.

This project provides a unique opportunity to contribute to a state-of-the-art experiment in experimental physics, with direct implications for the global search for dark matter.

Multi-Probe Cosmological Mega-Analysis of the DESI Survey: Standard and Field-Level Bayesian Inference

The large-scale structure (LSS) of the Universe is probed through multiple observables: the distribution of galaxies, weak lensing of galaxies, and the cosmic microwave background (CMB). Each probe tests gravity on large scales and the effects of dark energy, but their joint analysis provides the best control over nuisance parameters and yields the most precise cosmological constraints.

The DESI spectroscopic survey maps the 3D distribution of galaxies. By the end of its five-year nominal survey, which concludes this year, it will have observed 40 million galaxies and quasars — ten times more than previous surveys — over one third of the sky, up to a redshift of z = 4.2. Combining DESI data with CMB and supernova measurements, the collaboration has revealed a potential deviation of dark energy from a cosmological constant.

To fully exploit these data, DESI has launched a “mega-analysis” combining galaxies, weak lensing of galaxies (Euclid, UNIONS, DES, HSC, KiDS) and the CMB (Planck, ACT, SPT), aiming to deliver the most precise constraints ever obtained on dark energy and gravity. The student will play a key role in developing and implementing this multi-probe analysis pipeline.

The standard analysis compresses observations into a power spectrum for cosmological inference, but this approach remains suboptimal. The student will develop an alternative, called field-level analysis, which directly fits the observed density and lensing field, simulated from the initial conditions of the Universe. This constitutes a very high-dimensional Bayesian inference problem, which will be tackled using recent gradient-based samplers and GPU libraries with automatic differentiation. This state-of-the-art method will be validated alongside the standard approach, paving the way for a maximal exploitation of DESI data.
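The flavor of gradient-based field-level sampling can be conveyed with a toy linear-Gaussian model, where the posterior is known analytically and the gradient is written by hand (the real pipeline would obtain it by automatic differentiation on GPU). The sketch below runs an unadjusted Langevin sampler over a 16384-dimensional "field" and compares the sampled posterior mean to the exact answer; all model choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy field-level inference: recover a 'density field' x from noisy data
# y = x + noise, with a Gaussian prior of known variance (stand-in for
# the initial conditions of the Universe).
n = 16384                           # "high"-dimensional field
prior_var, noise_var = 1.0, 0.25
x_true = rng.normal(0, np.sqrt(prior_var), n)
y = x_true + rng.normal(0, np.sqrt(noise_var), n)

def grad_log_post(x):
    """Analytic gradient of the log-posterior (autodiff in practice)."""
    return -x / prior_var - (x - y) / noise_var

# Unadjusted Langevin algorithm: gradient step + noise kick
eps, x = 1e-2, np.zeros(n)
post_sum, count = np.zeros(n), 0
for i in range(3000):
    x = x + eps * grad_log_post(x) + np.sqrt(2 * eps) * rng.normal(size=n)
    if i >= 1000:                   # discard burn-in
        post_sum += x
        count += 1
post_mean = post_sum / count
exact = y * prior_var / (prior_var + noise_var)   # analytic answer
print("mean abs. residual vs exact posterior mean:",
      np.abs(post_mean - exact).mean())
```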

Hybrid Compression of Neural Networks for Embedded AI: Balancing Efficiency and Accuracy

Convolutional Neural Networks (CNNs) have become a cornerstone of computer vision, yet deploying them on embedded devices (robots, IoT systems, mobile hardware) remains challenging due to their large size and energy requirements. Model compression is a key solution to make these networks more efficient without severely impacting accuracy. Existing methods (such as weight quantization, low-rank factorization, and sparsity) show promising results but quickly reach their limits when used independently. This PhD will focus on designing a unified optimization framework that combines these techniques in a synergistic way. The work will involve both theoretical aspects (optimization methods, adaptive rank selection) and experimental validation (on benchmark CNNs like ResNet or MobileNet, and on embedded platforms such as Jetson, Raspberry Pi, and FPGA). An optional extension to transformer architectures will also be considered. The project benefits from complementary supervision: academic expertise in tensor decompositions and an industry-oriented partner specializing in hardware-aware compression.
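As a minimal illustration of why combining techniques pays, the sketch below chains two of the methods named above, truncated-SVD low-rank factorization followed by uniform 8-bit quantization of the factors, on a random stand-in weight matrix. Real layer weights are typically far closer to low-rank, so the reconstruction error here overstates the cost.

```python
import numpy as np

rng = np.random.default_rng(6)
W = rng.normal(size=(512, 512))          # stand-in for a layer's weights

# Step 1: low-rank factorization, W ~ U @ V with rank r
r = 64
U_, S, Vt = np.linalg.svd(W, full_matrices=False)
U = U_[:, :r] * S[:r]
V = Vt[:r]

# Step 2: uniform 8-bit quantization of each factor
def quantize(M, bits=8):
    scale = np.abs(M).max() / (2**(bits - 1) - 1)
    q = np.round(M / scale).astype(np.int8)
    return q, scale

(qU, sU), (qV, sV) = quantize(U), quantize(V)
W_hat = (qU * sU) @ (qV * sV)            # dequantized reconstruction

orig_bits = W.size * 32                  # float32 baseline
comp_bits = (qU.size + qV.size) * 8      # int8 factors
print(f"compression x{orig_bits / comp_bits:.1f}, "
      f"rel. error {np.linalg.norm(W - W_hat) / np.linalg.norm(W):.3f}")
```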
