Structure monitoring in harsh environments: fiber Bragg gratings for passive guided wave tomography
The use of fiber Bragg gratings inscribed in optical fibers as receivers of guided elastic waves has been studied for several years at CEA LIST as an innovative solution for monitoring structures subjected to severe operational stresses.
Recent advances in optoelectronic instrumentation dedicated to this type of measurement have demonstrated the team's ability to measure elastic waves at temperatures exceeding 1000°C and to achieve degrees of multiplexing on a single optical fiber that enable the implementation of guided elastic wave tomography algorithms. In addition, a model of elastic wave measurement using fiber Bragg gratings has recently been introduced into the CIVA simulation platform developed by CEA LIST. This model will be used to adapt the tomography algorithms, developed and tested for “standard” piezoelectric sensors, to the specific characteristics of Bragg measurements.
This thesis will take place in parallel with experimental campaigns planned as part of European projects and industrial collaborations, which will enable this type of instrumentation to be deployed on real industrial structures in 2027/2028 (especially nuclear power plants), providing unique data for analysis.
The doctoral student will work on purely algorithmic aspects (adapting tomography algorithms to the specificities of Bragg measurement, taking into account geometric complexities on real industrial structures, calibration issues related to high temperatures/gradients) and on the development of demonstrators in the laboratory. He or she will also participate in the deployment of the instrumentation on industrial sites and in data analysis to demonstrate the performance of the technology.
Software support for computing accelerators and memory-transfer accelerators
For energy reasons, future computers will have to use accelerators for both computation and memory access (GPUs, TPUs, NPUs, smart DMAs). AI applications have intensive computational requirements in terms of both computing power and memory throughput.
These accelerators are not based on a simple instruction set architecture (ISA): they break the Von Neumann model and require specialized code to be written manually.
Furthermore, it is difficult to compare the use of these accelerators with code running on a non-specialized processor, as the initial source codes are very different.
HybroLang is a hardware-close programming language that allows programs to be written using all of a processor's computing capabilities, while also allowing code to be specialized based on data known at runtime.
The HybroGen compiler has already demonstrated its ability to program in-memory computing accelerators, as well as to optimize code on conventional CPUs by performing innovative optimizations.
This thesis proposes to extend the HybroLang language in order to:
- facilitate the programming of AI applications by providing support for complex data: stencils, convolution, sparse computing
- enable code generation both on CPUs and with hardware accelerators currently under development at the CEA (sparse computing, in-memory computing, memory access)
- allow different computing architectures to be benchmarked with the same initial source code
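HybroLang's own syntax is not reproduced here; as a purely illustrative sketch of the underlying idea — specializing code on data that is only known at runtime — the following Python snippet generates and compiles a kernel with a runtime value folded in as a constant (all names are invented for the example):

```python
# Illustrative sketch (Python, not HybroLang) of runtime code
# specialization: the scalar 'a' is only known at runtime, so we
# generate source code with it folded in as a constant and compile
# that source once, as a specializing compiler would.

def make_saxpy(a):
    src = (
        f"def saxpy(x, y):\n"
        f"    return [{a!r} * xi + yi for xi, yi in zip(x, y)]\n"
    )
    namespace = {}
    exec(src, namespace)          # compile the specialized kernel once
    return namespace["saxpy"]

saxpy2 = make_saxpy(2.0)          # kernel specialized for a = 2.0
```

A real specializing compiler would go further — for instance, removing the multiplication entirely when a equals 1 — but the principle of turning runtime-known data into compile-time constants is the same.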
Ideally, a candidate should have knowledge of computer architecture, programming language implementation, code optimization and compilation.
LLM-Assisted Generation of Functional and Formal Hardware Models
Modern hardware systems, such as RISC-V processors and hardware accelerators, rely on functional simulators and formal verification models to ensure correct, reliable, and secure operation. Today, these models are mostly developed manually from design specifications, which is time-consuming and increasingly difficult as hardware architectures become more complex.
This PhD proposes to explore how Large Language Models (LLMs) can be used to assist the automatic generation of functional and formal hardware models from design specifications. The work will focus on defining a methodology that produces consistent and executable models while increasing confidence in their correctness. To achieve this, the approach will combine LLM-based generation with feedback from simulation and formal verification tools, possibly using reinforcement learning to refine the generation process.
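A minimal sketch of such a generate-and-verify loop might look as follows; `generate_model` and `check_model` are hypothetical placeholders standing in for the LLM call and the simulation/formal-verification tools, not an existing API:

```python
# Hedged sketch of the generate-check-refine loop described above; every
# function here is a hypothetical placeholder, not an existing API.

def generate_model(spec: str, feedback: str = "") -> str:
    # Placeholder for an LLM call that drafts a functional/formal model
    # from the specification, incorporating verifier feedback if any.
    # For the sketch, the "model" becomes correct once feedback arrives.
    return spec + ("+fixed" if feedback else "")

def check_model(model: str) -> tuple[bool, str]:
    # Placeholder for running the simulator and formal verification
    # tools; returns (verified?, counterexample-style feedback).
    ok = model.endswith("+fixed")
    return ok, "" if ok else "counterexample: overflow flag not modeled"

def refine_until_verified(spec: str, max_iters: int = 5) -> str:
    feedback = ""
    for _ in range(max_iters):
        model = generate_model(spec, feedback)
        ok, feedback = check_model(model)
        if ok:
            return model
    raise RuntimeError("no verified model within the iteration budget")
```

The counterexamples returned by the tools are what a reinforcement-learning signal would be built from: iterations that converge quickly are rewarded, persistent failures penalized.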
The expected outcomes include a significant reduction in manual modeling effort, improved consistency between functional and formal models, and experimental validation on realistic hardware case studies, particularly RISC-V architectures and hardware accelerators.
Few-shot event and complex relation extraction from text applied to scientific literature
Information extraction from text, which falls under the broader field of Natural Language Processing, has been the subject of research for many years. These efforts have primarily focused on Named Entity Recognition, relation extraction between entities, and, in its most complex form, event extraction, a task typically formulated as filling predefined templates from unstructured text. Within this framework, the objective of this thesis is to design, develop, and evaluate event extraction models operating on scientific articles. In this context, an "event" may correspond to a set of entities and relations characterizing, for instance, a chemical reaction or an experiment. Furthermore, these models must be capable of being defined from a highly restricted set of annotated data to allow for rapid adaptation to new scientific domains.
From a methodological standpoint, the proposed thesis seeks to move beyond the current, almost reflexive tendency to rely exclusively on Large Language Models (LLMs). Instead, it advocates for a potential synergy between LLMs and smaller encoder-based models within a few-shot context. In this synergy, the former are leveraged, through the generation of synthetic data and annotations, to build the resources necessary to implement the latter via pre-training mechanisms. This thesis will be conducted within the framework of the AIKO project of the Digital Programs Agency, which focuses on knowledge extraction from scientific publications.
Growth of 2D Ferromagnetic Chalcogenide Materials for Spintronics
Chalcogenide materials, particularly Ge-Sb-Te (GST) alloys, are essential for phase-change memories (PCMs). Although high-performance, these memories consume a great deal of energy, which is driving the search for alternative solutions. GST alloys offer unique opportunities in the field of spin-orbitronics as spin-charge conversion materials or as sources of spin-polarized current. Two-dimensional ferromagnetic alloys such as Fe-Ge-Te or Ge-Mn-Te offer promising avenues as sources of spin current for new types of more efficient memory devices. For efficient spin injection, we are seeking a material that not only exhibits a high Curie temperature (TC) and significant spin polarization, but is also fully compatible with existing silicon-based CMOS technology.
The aim of this thesis is to develop and master, on an industrial scale on 300 mm Si substrates, the van der Waals epitaxial growth of 2D ferromagnetic films based on Fe_nGe(Ga)Te_2 (n = 3, 5) or Ge_(1-x)Mn_xTe, for example to integrate them in situ with spin-charge conversion chalcogenide layers such as ferroelectric layers (α-GeTe(111)) or topological insulators (Bi_(2-x)Sb_xTe_3).
Architecture of a small-animal single-photon emission tomograph
Medical imaging, a source of major innovations, presents remarkable potential for meeting new challenges with the growing demand for precision medicine, which requires cutting-edge diagnostic and therapeutic approaches personalized for each patient.
In this context, CEA-Leti proposes a PhD project to develop a dedicated preclinical SPECT (Single Photon Emission Computed Tomography) imager that will provide the performance (spectral information, high resolution, and high sensitivity) needed by researchers developing new radiopharmaceuticals.
The laboratory has recognized expertise in CZT (Cadmium Zinc Telluride) semiconductor imagers, which enable better spatial and energy resolution than the scintillators used by most systems. These detectors open new opportunities for emission imaging, such as easier Compton imaging, multi-isotope imaging, and better contrast.
The candidate will have to handle the following tasks:
1. Study the state of the art of small-animal SPECT imagers and contribute, with the team, to the definition of the system specifications and a draft architecture.
2. Simulate this architecture using Monte Carlo codes and optimize its free parameters.
3. Design and manufacture the prototype system, with the help of the team including system engineers.
4. Test and validate the imaging capabilities, using reconstruction algorithms provided by the team.
The PhD will be conducted inside an instrumentation laboratory with access to acquisition electronics, detectors, motorized mechanics, gamma-ray sources and processing/simulation software. The candidate will also work in collaboration with a clinical and preclinical centre (at Orsay’s hospital) to conduct imaging tests on phantoms and animals.
Sustainable development of digital circuits and systems: Taking planetary boundaries into account
Technological developments in the electronics sector are experiencing rapid growth, accompanied by increasing interest in accounting for their environmental impacts. However, current approaches remain largely focused on relative impact reductions (energy efficiency, resource optimization), without ensuring compatibility with planetary boundaries. In this context, the concept of absolute sustainability emerges as an essential framework for guiding future developments of electronic systems.
This PhD thesis addresses several major scientific challenges: how can carrying capacities and sharing principles (core concepts of absolute sustainability) be identified for the electronics sector and consistently translated down to the levels of digital systems and integrated circuits? How can planetary boundaries be concretely integrated into the design of systems and circuits?
The main objective of the thesis is to move from a logic of relative environmental impact reduction toward designs that are compatible with planetary boundaries. It aims to define socio-technical scenarios to identify sharing principles, to conduct the first absolute life cycle assessment of a digital system, and to propose the first design of a circuit based on absolute limits, paving the way for sustainable development in electronics.
Model-Driven DevOps for Cloud Orchestration: Bridging Design-Time and Runtime Guarantees
Model-Driven Engineering (MDE) has traditionally relied on a clear separation between design and runtime, but this boundary no longer holds in today's cloud-native and edge environments, where infrastructures are heterogeneous, dynamic, and continuously evolving. Assumptions validated at design time may become invalid during execution, and modern orchestration platforms such as Kubernetes or OpenStack, while effective, remain weakly connected to architectural modeling environments. This results in a structural gap between architectural specification and actual operational behavior. To bridge this gap, this thesis proposes to develop a formal modeling framework for placement constraints across heterogeneous orchestration platforms, ensuring continuity between design-time validation and runtime guarantees. This framework would elevate placement constraints — resource locality, affinity, network latency, security isolation, and quality-of-service objectives — to first-class modeling constructs. At design time, it would enable static feasibility analysis and automated generation of deployment artifacts; at runtime, it would ensure continuous compliance monitoring and adaptive reconfiguration in response to violations. Expected contributions include a formal modeling language, bidirectional transformations between design-time models and runtime representations, and integration with Papyrus-based tooling. The ultimate goal is to ensure that architectural intent remains consistent and verifiable throughout the entire system lifecycle, from initial design through to production operation.
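As a toy illustration of what a design-time feasibility analysis over such placement constraints could look like (all data structures and names below are invented for the sketch, not Kubernetes or OpenStack APIs):

```python
# Toy sketch of static feasibility analysis for placement constraints.
# Nodes expose capacities and a latency attribute; workloads carry
# resource and latency constraints. Names are illustrative only.

from itertools import permutations

nodes = {
    "edge-1":  {"zone": "edge",  "cpu": 2, "latency_ms": 5},
    "cloud-1": {"zone": "cloud", "cpu": 8, "latency_ms": 40},
}

workloads = {
    "sensor-agg": {"cpu": 1, "max_latency_ms": 10},   # locality/latency bound
    "analytics":  {"cpu": 4, "max_latency_ms": 100},
}

def feasible(workload, node):
    # Resource and latency constraints checked statically, before deployment.
    return (node["cpu"] >= workload["cpu"]
            and node["latency_ms"] <= workload["max_latency_ms"])

def find_placement():
    # Exhaustive search over distinct nodes (an implicit anti-affinity);
    # a real framework would use a constraint solver and also check
    # security isolation, network affinity, and QoS objectives.
    names = list(workloads)
    for assignment in permutations(nodes, len(names)):
        mapping = dict(zip(names, assignment))
        if all(feasible(workloads[w], nodes[n]) for w, n in mapping.items()):
            return mapping
    return None   # infeasible: report at design time, not in production
```

The same `feasible` predicate, re-evaluated continuously against live node state, is what runtime compliance monitoring amounts to; a `None` result at runtime would trigger adaptive reconfiguration.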
Topologically Isolated Mode Acoustic Resonators
Timing is a key function in electronic circuits. Beyond on-chip signal synchronization, it also enables the synchronization of wireless data transmissions. Accurate time references require stable frequency sources, which also benefit sensor applications. The gold standard for time or frequency generation is still the quartz resonator, which is however bulky and difficult to miniaturize. Research is therefore still ongoing to provide high-quality-factor (> 10,000) resonators, ideally capable of operating at frequencies of several GHz. A key to reaching such high quality factors is to strongly confine the mechanical vibration of micro-sized structures in order to make them insensitive to external perturbations. Recently, the field of topological acoustics has demonstrated the capability to confine elastic waves in very small volumes concentrated at the interface between periodic structures, and to provide extremely high quality factor resonances.
This PhD position focuses on exploiting topologically protected modes in piezoelectric microstructures to provide next generations of high quality factor resonators, which may be used in oscillators or even filter circuits. Leveraging the know-how of CEA Leti in the design and fabrication of such components, the PhD will be part of an international collaboration with well-established academic laboratories (Politecnico di Milano, Imperial College, FEMTO-ST Institute) and industrial partners.
The candidate will model and design structures supporting topologically protected modes, combining finite element simulations with simplified numerical approaches that reduce computation times. He or she will follow the fabrication of demonstrators in collaboration with the process integration teams in the CEA Leti clean rooms, and carry out measurements of the proposed resonators.
Distributed multimodal learning for cooperative acoustic source localization and classification
In many complex environments, such as industrial sites, disaster-stricken buildings, or public spaces, it is necessary to automatically detect and localize sound events (falls, alarms, voices, mechanical failures). Mobile platforms equipped with cameras and microphones represent a promising solution, but a single platform remains limited: its microphone array provides an approximate direction towards the source but not a precise position in space, and its camera may be obstructed. This thesis proposes to study how a network of mobile platforms, each carrying a calibrated audio-visual unit, can collaborate to localize and classify such events in 3D. Each platform analyses its own audio-visual observations and shares an estimate of the source direction with its neighbours; the network then combines these estimates to reconstruct the position of the event and identify it. The expected outcome is a cooperative localization system that is robust to occlusions and partial platform failures.
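One classical way to fuse the shared direction estimates, sketched here under simplifying assumptions (perfectly calibrated platforms, modest bearing noise), is a least-squares intersection of the bearing lines: each platform i at position p_i reports a unit direction d_i, and the source position x minimizes the summed squared perpendicular distances Σ‖(I − d_i d_iᵀ)(x − p_i)‖². A small self-contained Python sketch:

```python
def localize(positions, directions):
    """Least-squares intersection of 3D bearing lines.

    Solves A x = b with A = sum_i (I - d_i d_i^T) and
    b = sum_i (I - d_i d_i^T) p_i, i.e. the normal equations of the
    perpendicular-distance minimization described above.
    """
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for p, d in zip(positions, directions):
        norm = sum(c * c for c in d) ** 0.5
        d = [c / norm for c in d]                 # normalize the bearing
        for r in range(3):
            for c in range(3):
                m = (1.0 if r == c else 0.0) - d[r] * d[c]
                A[r][c] += m
                b[r] += m * p[c]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        s = b[r] - sum(A[r][c] * x[c] for c in range(r + 1, 3))
        x[r] = s / A[r][r]
    return x
```

With noisy bearings, the same normal equations return the point closest to all reported lines, and robustness to partial platform failures follows from the sums running over whichever platforms currently contribute an estimate.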