Enabling efficient federated learning and fine-tuning for heterogeneous and resource-constrained devices

The goal of this PhD thesis is to develop methods that enhance resource efficiency in federated learning (FL), with particular attention to the constraints and heterogeneity of client resources. The work will first focus on the classical client-server FL architecture, before extending the investigation to decentralised FL settings. The proposed methods will be studied in the context of both federated model training and distributed fine-tuning of large models, such as large language models (LLMs).
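As a point of reference for the client-server setting, the baseline aggregation scheme that most resource-efficiency work builds on is federated averaging (FedAvg): clients train locally on their own data and the server averages the resulting models weighted by dataset size. A minimal NumPy sketch on a toy linear-regression task (the client data, learning rate and model are illustrative assumptions, not part of the thesis proposal):

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One local gradient step of least-squares regression on a client's data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(w_global, clients, lr=0.1):
    """One FedAvg round: every client updates the global model locally,
    then the server averages the results weighted by client dataset size."""
    total = sum(len(y) for _, y in clients)
    w_new = np.zeros_like(w_global)
    for X, y in clients:
        w_new += (len(y) / total) * local_step(w_global, X, y, lr)
    return w_new

# Heterogeneous toy clients: different dataset sizes and input distributions.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for n_samples, scale in [(50, 1.0), (200, 0.5), (20, 2.0)]:
    X = rng.normal(0.0, scale, size=(n_samples, 2))
    y = X @ w_true + rng.normal(0.0, 0.01, size=n_samples)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(200):
    w = fedavg_round(w, clients)
# w now approximates w_true despite the clients' heterogeneity
```

The size-weighted average is the point where resource-aware variants diverge: constrained clients may send compressed or partial updates, or participate in only a fraction of rounds, and the thesis targets exactly those trade-offs.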

Internalisation of external knowledge by foundation models

To perform an unfamiliar task, a subject (human or robot) has to consult external information, which carries a cognitive cost. After several similar experiences, the subject masters the situation and can act automatically. The 1980s and 1990s saw explorations in AI using conceptual graphs and schemas, but their large-scale implementation was limited by the technology available at the time.

Today's neural models, including transformers and LLM/VLMs, learn universal representations through pre-training on huge amounts of data. They can be used with prompts to provide local context. Fine-tuning allows these models to be specialised for specific tasks.

RAG and GraphRAG methods can exploit external knowledge, but relying on them at inference time is resource-intensive. This thesis proposes a cognitivist approach in which the system undergoes continuous learning: it consults external sources during inference and uses this information to refine itself periodically, much as the human brain consolidates knowledge during sleep. This approach aims to improve performance and reduce resource consumption.

In humans, these processes are linked to the spatial organisation of the brain. The thesis will also study network architectures inspired by this organisation, with dedicated but interconnected “zones”, such as vision-language and language models.

These concepts can be applied to the Astir and Ridder projects, which aim to exploit foundation models for software engineering in robotics and the development of generative AI methods for the safe control of robots.

New experimental constraints on the weak interaction coupling constants by coincidence measurements of complex decay schemes

Accurate experimental knowledge of forbidden non-unique beta transitions, which constitute about one third of all known beta transitions, is an important and very difficult subject. Only a few reliable studies exist in the literature. Indeed, the continuous energy spectrum of these transitions is difficult to measure precisely for several reasons that accumulate: high diffusivity of electrons in matter and non-linearity of the detection system, unavailability of some radionuclides and presence of impurities, long half-lives and complex decay schemes, etc. Accurate theoretical predictions are equally difficult because of the need to couple different models for the atomic, nuclear and weak-interaction parts in the same fully relativistic formalism. However, improving our knowledge of forbidden non-unique beta transitions is essential in radioactivity metrology to define the becquerel SI unit in the case of pure beta emitters. This can have a strong impact in nuclear medicine, for the nuclear industry, and for some studies in fundamental physics such as dark matter detection and neutrino physics.
Our recent study, both theoretical and experimental, of the second forbidden non-unique transition in 99Tc decay has highlighted that forbidden non-unique transitions can be particularly sensitive to the effective values of the weak interaction coupling constants. The latter act as multiplicative factors on the nuclear matrix elements. The use of effective values compensates for the approximations made in nuclear structure models, such as simplified correlations between the nucleons in the valence space or the absence of core excitation. However, they can only be adjusted by comparison with a high-precision experimental spectrum. The predictive power of theoretical calculations, even the most precise currently available, is thus seriously called into question. While it has already been demonstrated that universal values cannot be fixed, effective values for each type of transition, or for a specific nuclear model, remain possible. The aim of this thesis is therefore to establish new experimental constraints on the weak interaction coupling constants by precisely measuring the energy spectra of beta transitions. Ultimately, it will become possible to establish robust average effective values of these coupling constants, giving theoretical calculations of beta decay real predictive power.
Most of the transitions of interest for constraining the coupling constants have energies greater than 1 MeV, occur in complex decay schemes and are associated with the emission of multiple gamma photons. In this situation, the best strategy is beta-gamma detection in coincidence. The usual detection techniques in nuclear physics are appropriate, but they must be extremely well implemented and controlled. The doctoral student will rely on the results obtained in two previous theses. To minimize self-absorption of the electrons in the source, they will have to adapt a preparation technique for ultra-thin radioactive sources, developed at LNHB, to the high activities that will be required. They will also have to implement a new apparatus, in a dedicated vacuum chamber, combining coincidence detection with two silicon detectors and two gamma detectors. Several studies, both mechanical and by Monte Carlo simulation, will be necessary to optimize the geometric configuration with regard to the different constraints. The optimization of the electronics, acquisition, signal processing, data analysis, spectral deconvolution and the development of a complete and robust uncertainty budget will all be topics covered. These instrumental developments will make possible high-precision measurements of the spectra from 36Cl, 59Fe, 87Rb, 141Ce or 170Tm decays. This very comprehensive subject will allow the doctoral student to acquire instrumental and analytical skills that open up many career opportunities. The candidate should have good knowledge of nuclear instrumentation, programming and Monte Carlo simulations, as well as a reasonable knowledge of nuclear disintegrations.
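To give a flavour of the geometry-optimization step, the solid angle subtended by a detector, and hence the geometric efficiency of a coincidence setup, is typically estimated by Monte Carlo. A deliberately simplified Python sketch for a point source on the axis between two facing circular detectors (the radius and distance are invented illustrative values, not the actual LNHB geometry, and real simulations must also track electron scattering and self-absorption):

```python
import math
import random

def geometric_efficiency(r_det=1.5, d=3.0, n=200_000, seed=1):
    """Monte Carlo estimate of the probability that an isotropic emission
    from a point source on the common axis hits either of two facing
    circular detectors. r_det and d in cm (illustrative values only)."""
    rng = random.Random(seed)
    cos_max = d / math.hypot(d, r_det)  # cosine of the half-angle subtended
    hits = 0
    for _ in range(n):
        c = rng.uniform(-1.0, 1.0)      # isotropic source: cos(theta) uniform
        if abs(c) >= cos_max:           # inside the cone of either detector
            hits += 1
    mc = hits / n
    analytic = 1.0 - cos_max            # closed form for this ideal geometry
    return mc, analytic
```

For the real apparatus the coincidence rate scales with the product of the beta- and gamma-branch efficiencies, which is why the geometric configuration has to be optimized jointly against self-absorption and pile-up constraints.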

Development of ultra-high-resolution magnetic microcalorimeters for isotopic analysis of actinides by X-ray and gamma-ray spectrometry

The PhD project focuses on the development of ultra-high-resolution magnetic microcalorimeters (MMCs) to improve the isotopic analysis of actinides (uranium, plutonium) by X- and gamma-ray spectrometry around 100 keV. This type of analysis, which is essential for the nuclear fuel cycle and non-proliferation efforts, traditionally relies on HPGe detectors, whose limited energy resolution constrains measurement accuracy. To overcome these limitations, the project aims to employ cryogenic MMC detectors operating at temperatures below 100 mK, capable of achieving energy resolutions ten times better than that of HPGe detectors. The MMCs will be microfabricated at CNRS/C2N using superconducting and paramagnetic microstructures, and subsequently tested at LNHB. Once calibrated, they will be used to precisely measure the photon spectra of actinides in order to determine the fundamental atomic and nuclear parameters of the isotopes under study with high accuracy. The resulting data will enhance the nuclear and atomic databases used in deconvolution codes, thereby enabling more reliable and precise isotopic analysis of actinides.

Evaluation of nanoscale surface coatings on high energy density positive electrodes for lithium-ion batteries

Nickel-rich layered oxides LiNi1-x-yMnxCoyO2 (NMC) and LiNi1-y-zCoyAlzO2 (NCA) are exceptional materials for the positive electrode of lithium batteries due to their high reversible storage capacity. However, under real operating conditions, undesired reactions can lead to the dissolution of transition metals and to electrode cracking, thus degrading their electrochemical properties. This phenomenon is linked to the presence of hydrofluoric acid (HF) in the electrolyte, mainly due to the degradation of the LiPF6 salt. To address this problem, surface treatments are needed to protect the active material and improve performance. The EVEREST project proposes an innovative, flexible and affordable method for creating inorganic coatings at the nanoscale. This method is based on a recent technique, coaxial electrospinning, which allows the production of nanofibers with a well-defined core-sheath structure. For this project, we propose to evaluate the impact of the nanofiber shaping parameters on morphology, electrochemical performance and the underlying mechanisms. The electrochemical performance of the coated and pristine positive electrodes will be compared in a half-cell with Li metal as the counter electrode. Redox processes, charge transfer mechanisms and structural modifications will be studied operando using synchrotron radiation.

Quantum simulation of atomic nuclei

Atomic nuclei constitute strongly correlated quantum many-body systems governed by the strong interaction of QCD. The nuclear shell model, which diagonalizes the Hamiltonian in a basis whose dimension grows exponentially with the number of nucleons, represents a well-established approach for describing their structure. However, this combinatorial explosion confines classical high-performance computing to a restricted fraction of the nuclear chart.
Quantum computers offer a promising alternative through their natural ability to manipulate exponentially large Hilbert spaces. Although we remain in the NISQ era with its noisy qubits, they could revolutionize shell model applications.
This thesis aims to develop a comprehensive approach for quantum simulation of complex nuclear systems. A crucial first milestone involves creating a software interface that integrates nuclear structure data (nucleonic orbitals, nuclear interactions) with quantum computing platforms, thereby facilitating future applications in nuclear physics.
The project explores two classes of algorithms: variational and non-variational approaches. For the former, the expressivity of quantum ansätze will be systematically analyzed, particularly in the context of symmetry breaking and restoration. Variational Quantum Eigensolvers (VQE), especially promising for Hamiltonian-based systems, will be implemented with emphasis on the ADAPT-VQE technique tailored to the nuclear many-body problem.
A major challenge lies in accessing excited states, which are as crucial as the ground state in nuclear structure, while VQE primarily focuses on the latter. The thesis will therefore develop quantum algorithms dedicated to excited states, testing various methods: Hilbert space expansion (Quantum Krylov), response function techniques (quantum equations of motion), and phase estimation-based methods. The ultimate objective is to identify the most suitable approaches in terms of scalability and noise resilience for applications with realistic nuclear Hamiltonians.
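The variational principle underlying VQE can be illustrated without any quantum hardware: a parametrised trial state is optimised so that its energy expectation upper-bounds, and at the optimum approaches, the exact ground-state energy. A toy NumPy sketch on a two-level Hamiltonian (the matrix elements are invented for illustration; a real application would use a shell-model Hamiltonian and a parametrised quantum circuit as the ansatz):

```python
import numpy as np

# Toy two-level Hamiltonian: two configurations with an off-diagonal
# coupling (numbers invented for illustration, not a realistic
# shell-model interaction).
H = np.array([[0.0, -1.2],
              [-1.2, 3.0]])

def ansatz(theta):
    """One-parameter trial state; in VQE this role is played by a
    parametrised quantum circuit."""
    return np.array([np.cos(theta), np.sin(theta)])

def energy(theta):
    """<psi(theta)|H|psi(theta)>; on hardware this expectation value is
    estimated from repeated measurements."""
    psi = ansatz(theta)
    return psi @ H @ psi

# Classical outer loop: a coarse parameter scan stands in for the optimizer.
thetas = np.linspace(0.0, np.pi, 2001)
E_var = min(energy(t) for t in thetas)
E_exact = np.linalg.eigvalsh(H)[0]
# Variational principle: E_var >= E_exact, with equality when the ansatz
# can represent the exact ground state (as it can here).
```

ADAPT-VQE replaces the fixed one-parameter form above with an ansatz grown operator by operator, and the excited-state methods listed in the text (Quantum Krylov, quantum equations of motion, phase estimation) target the eigenvalues that this ground-state loop cannot reach.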

Modeling and prediction of electromagnetic emissions from power converters using deep learning

In recent years, electromagnetic compatibility (EMC) in power converters based on wide bandgap (WBG) semiconductors has attracted growing interest, due to the high switching speeds and increased frequencies they enable. While these devices improve power density and system efficiency, they also generate more complex conducted and radiated emissions that are challenging to control. In this context, this thesis focuses on the prediction, modeling, and characterization of electromagnetic interference (EMI) above 30 MHz, both conducted and radiated, in high-frequency power electronic systems. The work is based on a multi-subsystem partitioning method and an iterative co-simulation approach, combined with in situ characterization to capture non-ideal and nonlinear phenomena. In addition, deep learning techniques are employed to model EMI behavior using both measured and simulated data. Generative artificial intelligence (Generative AI) is also leveraged to automatically generate representative and diverse configurations commonly encountered in power electronics, thereby enabling efficient exploration of a wide range of EMI scenarios. This hybrid approach aims to enhance analysis accuracy while accelerating simulation and design phases.

In situ study of the impact of the electric field on the properties of chalcogenide materials

Chalcogenide materials (PCM, OTS, NL, TE, FESO, etc.) are the basis of the most innovative concepts in microelectronics, from PCM memories to new neuromorphic and spin-orbitronic devices (FESO, SOT-RAM, etc.). Part of their operation relies on out-of-equilibrium physics induced by the electronic excitation resulting from the application of an intense electric field. The aim of this thesis is to measure experimentally, on chalcogenide thin films, the effects induced by an intense electric field on the atomic structure and electronic properties of the material, with femtosecond (fs) time resolution. The operando conditions of the devices will be reproduced using a THz fs pulse to generate electric fields of the order of a few MV/cm. The induced changes will then be probed using various in situ diagnostic methods (optical spectroscopy, x-ray diffraction and/or ARPES). The results will be compared with ab initio simulations using a state-of-the-art method developed with the University of Liège. Ultimately, the ability to predict the response of different chalcogenide alloys to extreme fields on fs time scales will make it possible to optimise the composition and performance of the materials (e-switch effect, electromigration of species under field, etc.), while providing an understanding of the underlying fundamental mechanisms linking electronic excitation, structural evolution and the properties of chalcogenide alloys.

Reducing the complexity of France's building stock to better anticipate energy demand flexibility and the integration of solar resources

The aim of this work is to respond to the current challenges of the energy transition in the building sector, France's leading energy consumer. French public policies are currently proposing far-reaching solutions, such as support for energy-efficient home renovation and incentives for installing renewable energy production systems. On a large scale, this is leading to structural changes for both building managers and energy network operators. As a result, players in the sector need to revise their energy consumption and carbon impact forecasts, integrating flexibility solutions adapted to the French building stock. Some flexibility levers are already in place to meet the challenges of reducing energy use and greenhouse gas emissions, but others need to be anticipated, taking into account long-term scenarios for energy renovation and the deployment of renewable energy sources, particularly photovoltaics, across the whole of France. The question of scaling up is therefore central. This thesis therefore proposes a methodology for reducing the complexity of the French building stock according to previously defined criteria. In particular, the aim will be to define a limited number of reference buildings that are statistically representative of the behavior resulting from the application of flexibility strategies that meet the challenges of energy efficiency and limiting greenhouse gas emissions. To this end, the CSTB (Centre Scientifique et Technique du Bâtiment) is developing and making available a database of French buildings (BDNB: Base de Données Nationale des Bâtiments), containing information on morphology, uses, construction principles, energy consumption and performance.
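One common way to make such a stock reduction concrete is to cluster building descriptors and keep the real building nearest each cluster centre as a reference building. A minimal NumPy sketch (the descriptor columns, the value of k and the initialisation are illustrative assumptions; the actual criteria and BDNB fields would be defined during the thesis):

```python
import numpy as np

def reference_buildings(features, k=3, iters=50):
    """Reduce a building stock to k reference buildings: z-score the
    descriptors, run plain k-means with deterministic farthest-point
    initialisation, then return the cluster labels and the index of
    the real building closest to each centroid."""
    X = (features - features.mean(0)) / features.std(0)
    # farthest-point initialisation: start from building 0, then always
    # add the building farthest from the centroids chosen so far
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    # Lloyd iterations: assign each building, then recompute the centres
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)
    # the representative of each cluster is the nearest actual building
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    reps = d.argmin(0)
    return labels, reps
```

Selecting a real building (rather than the abstract centroid) matters here, because each representative must carry a complete, physically consistent set of morphology and construction attributes for the downstream flexibility simulations.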

Fine-grained and spatio-temporally grounded large multimodal models

This PhD project focuses on enhancing Large Multimodal Models (LMMs) through the integration of fine-grained and spatio-temporal information into training datasets. While current LMMs such as CLIP and Flamingo show strong performance, they rely on noisy and coarse-grained image-text pairs and often lack spatial or temporal grounding. The thesis aims to develop automatic pipelines to enrich image datasets with geographic and temporal metadata, refine captions using fine-grained semantic descriptors, and balance dataset diversity and compactness by controlling class-wise sample sizes.
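Controlling class-wise sample sizes can be as simple as capping the number of retained samples per class, which trades compactness against per-class coverage. A minimal sketch (the labels and the cap are illustrative; the real pipeline would operate on image-caption records):

```python
from collections import defaultdict

def cap_per_class(labels, max_per_class):
    """Return the indices that keep at most `max_per_class` samples per
    class, preserving the original order of the dataset."""
    kept, seen = [], defaultdict(int)
    for i, c in enumerate(labels):
        if seen[c] < max_per_class:
            kept.append(i)
            seen[c] += 1
    return kept

# e.g. an over-represented "cat" class is trimmed while rare classes survive
idx = cap_per_class(["cat", "cat", "dog", "cat", "dog", "bird"], 2)
```

In practice the cap would be set per class from the hierarchical class structure mentioned above, so that fine-grained rare classes are not drowned out by frequent coarse ones.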

Training strategies will incorporate hierarchical class structures and adapt protocols to improve alignment between caption elements and image regions. The work will also explore joint training regimes that integrate fine-grained, spatial, and temporal dimensions, and propose set-based inference to improve the diversity of generated outputs. The enriched datasets and models will be evaluated using existing or newly developed benchmarks targeting contextual relevance and output diversity. The project also addresses challenges in metadata accuracy, efficient model adaptation, and benchmarking methodologies for multi-dimensional model evaluation.

Applications include improved synthetic data generation for autonomous driving, enhanced annotation of media archives through contextual captioning, and better visual reasoning in industrial simulation scenarios.
