Low-Power Image Sensor for Distributed Processing in Camera Networks

Working within a collaborative academic project, your task will be to develop a smart image sensor for a wireless camera network that embeds distributed AI computing.
Current camera networks consist of several standard cameras that transmit their images to a central server, which performs the target inference. This kind of architecture offers energy and frugality figures that are incompatible with IoT requirements.
The project goal is to tackle hardware frugality through a distributed and collaborative approach based on ultra-low-power computing nodes. Each node’s inference core will be built around ASIC processors performing calculations in analog form. The final demonstrator will consist of a wireless network of “motes” (sensor network nodes) integrating dedicated image sensors paired with hybrid processors performing analog processing.
In this context, the mote's image sensor must extract strategic features frugally and efficiently, which implies defining, designing and testing an innovative readout architecture for a standard imager. In collaboration with the academic partners, you will be involved in defining the overall mote architecture, which will essentially set the imager's output data format and readout procedure, including potential pre-processing for the distributed inference computations. The studied architecture will integrate innovative low-power solutions to address the targeted IoT applications and perform both image acquisition and AI pre-processing.
As an image sensor demonstrator is planned during this PhD thesis, the work will be conducted at CEA-Leti in the L3i Laboratory, using professional IC design tools and software development environments.

Post-training neural architecture optimization for small language models

Generative AI, and large language models (LLMs) in particular, has sparked a new revolution in AI with applications across all domains. However, LLMs are highly resource-intensive and therefore difficult to deploy on autonomous embedded systems. LLMs can be optimized by modifying their architecture to replace heavy Transformer layers with lighter alternatives. Given the difficulty of training an LLM from scratch, this thesis aims to develop post-training neural architecture optimization methods applicable to small language models (SLMs). Additionally, the thesis seeks to propose performance metrics for the different layers of an SLM and their alternatives, to guide the replacement, and thus to propose a comprehensive methodology for optimizing SLMs under hardware constraints. The work will be disseminated through publications in major AI conferences and journals, and the developed code and methods could be integrated into the tools developed at CEA.
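
As a purely illustrative sketch (not an existing CEA tool), one ingredient of such a methodology can be shown in a few lines of PyTorch: a lighter candidate block is scored by how faithfully it reproduces a trained block's outputs on calibration data, and such scores can then guide which layers to replace. The helper name score_replacement and the toy blocks below are assumptions made for this example.

```python
# Hedged sketch: score a lighter candidate sub-layer by output fidelity on
# calibration data, to guide post-training replacement. Illustrative only.
import torch
import torch.nn as nn

@torch.no_grad()
def score_replacement(original: nn.Module, candidate: nn.Module,
                      calib: torch.Tensor) -> float:
    """Higher is better: negative mean squared error between block outputs."""
    return -torch.mean((original(calib) - candidate(calib)) ** 2).item()

# Toy usage: compare a feed-forward block with a narrower alternative.
dim, hidden = 64, 256
original = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
candidate = nn.Sequential(nn.Linear(dim, hidden // 4), nn.GELU(), nn.Linear(hidden // 4, dim))
calib = torch.randn(32, 16, dim)          # (batch, sequence, embedding) calibration batch
print(score_replacement(original, candidate, calib))
```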

An electrochemical flow microreactor for a greener synthesis of gold nanoparticles

Gold nanoparticles (AuNPs) possess unique electronic, photonic, and chemical properties of invaluable interest in a variety of medical and technological applications. They are typically produced by controlled chemical precipitation from a salt solution to achieve the precise size control critical for most applications. Continuous flow microreactors, which efficiently mix the salt solution and the reducing agent, are known to offer improved size control. However, even in these reactors, the smallest AuNPs can only be formed using powerful reducing agents that are harmful to human health or the environment. We propose to minimize their impact and to develop a more resource-efficient process by inserting an electrochemical cell into the reactor to form the reducing agent in situ, in just the amount necessary to produce the desired AuNPs.
Your goal will be to test and adapt continuous-flow electrochemical cells for the synthesis of AuNPs, exploring various electrochemical reactions and cell designs. You will also explore the use of several capping agents of biological interest. A careful examination of the AuNPs' characteristics (size, interfacial and optical properties, etc.) will guide you in this research.

Multi-scale approach for ultrasonic propagation in inhomogeneous multiple-scattering media

Ultrasonic waves are strongly influenced by the microstructure of the materials through which they propagate, leading to attenuation, dispersion, and noise. Modeling these effects is essential, particularly in non-destructive testing, where they may either hinder defect detection or provide valuable information about the material. Analytical and numerical models help to better predict and interpret these phenomena. Homogeneous statistical properties are generally assumed in such approaches. In practice, however, microstructures often exhibit significant spatial variations, for instance due to manufacturing processes. Depending on the scale of these variations relative to the wavelength, they may induce either abrupt or gradual changes in effective properties. This PhD aims to establish a theoretical framework that accounts for both microstructural randomness and its spatial variations, in order to propose relevant simulation strategies depending on the scales involved. The approach will first be developed in 1D, then extended to 2D and 3D using tools developed in the laboratory, with numerical and possibly experimental validations.
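
As a minimal 1D sketch of this starting point (a basic finite-difference scheme with an arbitrary, gradually varying wave speed; not the laboratory's simulation tools), one can already reproduce the effect of a spatial variation of effective properties on a propagating pulse:

```python
# Hedged 1D illustration: a pulse propagating through a medium whose wave speed
# varies gradually in space. Grid, time step and speed profile are arbitrary.
import numpy as np

nx, nt = 400, 900
dx, dt = 1.0e-3, 1.0e-7                     # grid step [m], time step [s]
c = np.full(nx, 3000.0)                     # background wave speed [m/s]
c[nx // 2:] = np.linspace(3000.0, 2200.0, nx - nx // 2)   # gradual variation

u_prev = np.zeros(nx)
u = np.zeros(nx)
u[50] = 1.0                                 # localized initial perturbation

for _ in range(nt):                         # explicit second-order time stepping
    u_next = np.zeros(nx)                   # fixed (Dirichlet) boundaries
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (c[1:-1] * dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

print("final max amplitude:", np.abs(u).max())
```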

Prediction of elastic wave dispersion effects using a semi-analytical model under high-frequency approximation

Ultrasonic testing (UT) methods are a fundamental component of non-destructive testing (NDT). They are widely used to inspect mechanical components such as welds (in nuclear and petrochemical industries) and composite material structures (in aeronautics). To understand the physical phenomena involved in a given configuration, simulation is a valuable tool and sometimes an essential step in implementing the inspection process.
Modeling approaches fall into two main categories: purely numerical models based on finite elements (FE) and semi-analytical methods derived from high-frequency (HF) approximations, such as paraxial rays. While the latter are often favored for their computational efficiency, they introduce simplifications that can compromise the quantitative accuracy of results, particularly for phenomena like dispersion (variation in wave speed with frequency), which are common in certain industrial contexts.
This thesis project aims to enhance the paraxial ray approach by integrating models of dispersive interfaces (composite interplies, coupling layers), dispersive viscoelastic media, and a modal guided wave model. The goal is to develop a simulation tool capable of faithfully reproducing realistic inspection configurations, thereby improving the representativeness of the results.
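
As a hedged toy illustration of the phenomenon to be captured (an arbitrary linear phase-velocity law, not a model of composite interplies or coupling layers), propagating a broadband pulse through a frequency-dependent phase velocity shows the pulse spreading and its peak amplitude dropping:

```python
# Hedged toy example of dispersion: apply a frequency-dependent propagation
# phase to a pulse and compare peak amplitudes. The velocity law is arbitrary.
import numpy as np

fs = 50e6                                       # sampling frequency [Hz]
t = np.arange(2048) / fs
pulse = np.exp(-((t - 5e-6) / 0.5e-6) ** 2) * np.cos(2 * np.pi * 5e6 * t)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
v = 3000.0 + 2e-4 * freqs                       # phase velocity rises with frequency [m/s]
distance = 0.05                                 # propagation distance [m]
phase = 2 * np.pi * freqs * distance / v        # frequency-dependent travel phase

received = np.fft.irfft(np.fft.rfft(pulse) * np.exp(-1j * phase), n=t.size)
print("peak in / peak out:", pulse.max(), received.max())
```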

Reconciling predictability and performance in processor architectures for critical systems

Critical systems have both functional and timing requirements, the latter ensuring that deadlines are always met during operation; failure to do so may lead to catastrophic consequences. The critical nature of such systems demands specialized hardware and software solutions. This PhD thesis topic focuses on the development of computer architecture designs for critical systems, known as predictable architectures, capable of providing the necessary timing guarantees. Several such architectures exist, typically based on in-order pipelines and incorporating behavioral restrictions (e.g., disabling complex speculation mechanisms) or structural specializations (e.g., redesigned caches or deterministic arbitration for shared resources). These restrictions and specializations inevitably impact performance, and the design of predictable architectures must therefore address the predictability–performance tradeoff directly. This PhD thesis aims to explore this tradeoff in a novel way, by adapting a high-performance variant of an in-order processor (CVA6) and developing top-down techniques to make it predictable. Performance in such processors is usually achieved through mechanisms like branch prediction, prefetching, and value prediction, implemented via specialized storage elements (e.g., buffers) and supported by control mechanisms such as rollback on misprediction. Within this context, the goal of the thesis is to define a general predictability scheme for speculative execution, covering both storage organization and rollback behavior.

Out-of-Distribution Detection with Vision Foundation Models and Post-hoc Methods

The thesis focuses on improving the reliability of deep learning models, particularly in detecting out-of-distribution (OoD) samples, which are data points that differ from the training data and can lead to incorrect predictions. This is especially important in critical fields like healthcare and autonomous vehicles, where errors can have serious consequences. The research leverages vision foundation models (VFMs) like CLIP and DINO, which have revolutionized computer vision by enabling learning from limited data. The proposed work aims to develop methods that maintain the robustness of these models during fine-tuning, ensuring they can still effectively detect OoD samples. Additionally, the thesis will explore solutions for handling changing data distributions over time, a common challenge in real-world applications. The expected results include new techniques for OoD detection and adaptive methods for dynamic environments, ultimately enhancing the safety and reliability of AI systems in practical scenarios.
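
As a hedged baseline sketch (a common post-hoc score, not necessarily the methods this thesis will develop), an OoD score can be computed on top of frozen VFM embeddings as the distance to the nearest class prototype; feature extraction with CLIP or DINO is assumed to happen upstream, and the random arrays below only stand in for real embeddings.

```python
# Hedged post-hoc OoD baseline: distance to the nearest class prototype in the
# embedding space of a frozen vision foundation model. Illustrative only.
import numpy as np

def fit_prototypes(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Mean embedding per class, estimated from in-distribution features."""
    return np.stack([features[labels == c].mean(axis=0) for c in np.unique(labels)])

def ood_score(x: np.ndarray, prototypes: np.ndarray) -> float:
    """Higher means more likely out-of-distribution (far from every class)."""
    x = x / np.linalg.norm(x)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return float(1.0 - np.max(protos @ x))      # 1 - best cosine similarity

# Toy usage with random stand-ins for VFM embeddings.
rng = np.random.default_rng(0)
feats, labels = rng.normal(size=(100, 512)), rng.integers(0, 5, size=100)
protos = fit_prototypes(feats, labels)
print(ood_score(rng.normal(size=512), protos))
```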

Structure monitoring in harsh environments: fiber Bragg gratings for passive guided wave tomography

The use of fiber Bragg gratings on optical fiber as receivers of guided elastic waves has been studied for several years at CEA LIST as an innovative solution for monitoring structures subjected to severe operational stresses.
Recent advances in optoelectronic instrumentation dedicated to this type of measurement have demonstrated the team's ability to measure elastic waves at temperatures exceeding 1000°C and to achieve degrees of multiplexing on a single optical fiber that enable the implementation of guided elastic wave tomography algorithms. In addition, a model of elastic wave measurement using fiber Bragg gratings has recently been introduced into the CIVA simulation platform developed by CEA LIST. This model will be used to adapt the tomography algorithms, developed and tested for “standard” piezoelectric sensors, to the specific characteristics of Bragg measurements.
This thesis will take place in parallel with experimental campaigns planned as part of European projects and industrial collaborations, which will enable this type of instrumentation to be deployed on real industrial structures (especially nuclear power plants) in 2027/2028, providing unique data for analysis.
The doctoral student will work on purely algorithmic aspects (adapting tomography algorithms to the specificities of Bragg measurements, taking into account geometric complexities of real industrial structures, and calibration issues related to high temperatures and gradients) and on the development of laboratory demonstrators. He or she will also participate in the deployment of the instrumentation on industrial sites and in the data analysis demonstrating the performance of the technology.

Software support for computing accelerators and memory transfer accelerators

For energy reasons, future computers will have to use accelerators for both computation and memory access (GPUs, TPUs, NPUs, smart DMAs). AI applications have intensive requirements in terms of both computing power and memory throughput.

These accelerators are not based on a simple instruction set architecture (ISA); they break the von Neumann model and require specialized code to be written manually.

Furthermore, it is difficult to compare the use of these accelerators with code running on a non-specialized processor, as the initial source code is very different in each case.

HybroLang is a hardware-close programming language that allows programs to be written using all of a processor's computing capabilities, while also allowing code to be specialized based on data known at runtime.
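
To make the idea of runtime specialization concrete (sketched here in plain Python rather than in HybroLang syntax), a kernel can be generated once a parameter such as the stencil weights is known at run time, so that the tap count and coefficients are baked into the generated code:

```python
# Hedged illustration of runtime specialization (not HybroLang): generate a
# 1D stencil kernel once its weights are known at run time.
import numpy as np

def make_stencil(weights):
    """Return a kernel specialized for fixed, runtime-known stencil weights."""
    w = np.asarray(weights, dtype=np.float64)
    half = len(w) // 2
    def kernel(x):
        out = np.zeros_like(x)
        for i, wi in enumerate(w):              # taps fixed at specialization time
            out[half:-half] += wi * x[i:len(x) - 2 * half + i]
        return out
    return kernel

smooth = make_stencil([0.25, 0.5, 0.25])        # specialization happens here
print(smooth(np.arange(10, dtype=float)))
```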

The HybroGen compiler has already demonstrated its ability to program in-memory computing accelerators, as well as to apply innovative optimizations to code running on conventional CPUs.

This thesis proposes to extend the HybroLang language in order to:

- facilitate the programming of AI applications by providing support for complex data: stencils, convolution, sparse computing

- enable code generation both on CPUs and with hardware accelerators currently under development at the CEA (sparse computing, in-memory computing, memory access)

- allow benchmarking of different computing architectures with the same initial source code

Ideally, a candidate should have knowledge of computer architecture, programming language implementation, code optimization and compilation.

LLM-Assisted Generation of Functional and Formal Hardware Models

Modern hardware systems, such as RISC-V processors and hardware accelerators, rely on functional simulators and formal verification models to ensure correct, reliable, and secure operation. Today, these models are mostly developed manually from design specifications, which is time-consuming and increasingly difficult as hardware architectures become more complex.

This PhD proposes to explore how Large Language Models (LLMs) can be used to assist the automatic generation of functional and formal hardware models from design specifications. The work will focus on defining a methodology that produces consistent and executable models while increasing confidence in their correctness. To achieve this, the approach will combine LLM-based generation with feedback from simulation and formal verification tools, possibly using reinforcement learning to refine the generation process.
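
To give a concrete, deliberately minimal picture of the target artifacts (the function and encoding below are illustrative, not part of any CEA flow), a functional model of a single RV32I instruction such as ADDI is the kind of executable output an LLM would be asked to generate and that simulation or formal verification tools would then check:

```python
# Hedged sketch of a functional model for one RV32I instruction (ADDI),
# illustrating the kind of executable artifact targeted by the thesis.
def exec_addi(regs: list, instr: int) -> None:
    """Execute ADDI rd, rs1, imm on a 32-entry register file (x0 stays zero)."""
    rd = (instr >> 7) & 0x1F
    rs1 = (instr >> 15) & 0x1F
    imm = instr >> 20
    if imm & 0x800:                       # sign-extend the 12-bit immediate
        imm -= 0x1000
    if rd != 0:
        regs[rd] = (regs[rs1] + imm) & 0xFFFFFFFF

regs = [0] * 32
regs[1] = 5
exec_addi(regs, 0x7FF08113)               # addi x2, x1, 2047
print(regs[2])                            # -> 2052
```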

The expected outcomes include a significant reduction in manual modeling effort, improved consistency between functional and formal models, and experimental validation on realistic hardware case studies, particularly RISC-V architectures and hardware accelerators.
