Low Power Image Sensor for Distributed Processing in Camera Networks

Working within a collaborative academic project, you will develop a smart image sensor for a wireless camera network embedding distributed AI computing.
Current camera networks consist of several standard cameras that transmit their images to a central server, which performs the targeted inference processing. Such an architecture exhibits energy consumption and frugality figures that are incompatible with IoT requirements.
The project goal is to tackle hardware frugality through a distributed and collaborative approach based on ultra-low-power computing nodes. Each node’s inference core will be built around ASIC processors performing calculations in analog form. The final demonstrator will consist of a wireless network of “motes” (sensor network nodes) integrating dedicated image sensors paired with hybrid processors performing analog processing.
In this context, the mote's image sensor must extract strategic features frugally and efficiently, which implies defining, designing and testing an innovative readout architecture for a standard imager. In collaboration with the academic partners, you will take part in defining the overall mote architecture, and in particular the imager's output data format and readout procedure, including potential pre-processing for the distributed inference computations. The studied architecture will integrate innovative low-power solutions to address the targeted IoT applications and perform both image acquisition and AI pre-processing.
As an image sensor demonstrator is planned in this PhD Thesis, the work will be conducted at CEA-Leti in the L3i Laboratory, using professional IC design tools and software development environments.

Post-training neural architecture optimization for small language models

Generative AI, and particularly large language models (LLMs), has sparked a new revolution in AI with applications across all domains. However, LLMs are highly resource-intensive and therefore difficult to deploy on autonomous embedded systems. One way to optimize them is to modify their architecture, replacing heavy Transformer layers with lighter alternatives. Given the difficulty of training LLMs from scratch, this thesis aims to develop post-training neural architecture optimization methods applicable to small language models (SLMs). The thesis also seeks to propose performance metrics for the different layers of an SLM and their alternatives, in order to guide the replacement, and thus to propose a comprehensive methodology for optimizing SLMs under hardware constraints. The work will be valorized through publications in major AI conferences and journals, and the code and methods developed could be integrated into the tools developed at CEA.
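As an illustration of the kind of post-training analysis involved, the toy sketch below ranks the layers of a tiny stand-in model by how much the output changes when each one is bypassed; layers with low impact are natural candidates for replacement with lighter alternatives. The model, layer functions and L1 ablation metric are all illustrative, not taken from the thesis.

```python
# Toy sketch: rank layers of a small model by how much the output changes
# when each layer is bypassed (replaced by the identity). Layers whose
# removal barely changes the output are candidates for replacement with
# a lighter alternative. Model, layers and metric are all illustrative.

def make_toy_model():
    # Each "layer" is a function on a list of floats.
    return [
        lambda x: [v * 1.5 for v in x],      # strong transformation
        lambda x: [v + 0.01 for v in x],     # nearly-identity layer
        lambda x: [v - min(x) for v in x],   # normalisation-like layer
    ]

def forward(layers, x):
    for layer in layers:
        x = layer(x)
    return x

def layer_impact(layers, x):
    """Output deviation (L1 distance) when each layer is skipped."""
    baseline = forward(layers, x)
    impacts = []
    for i in range(len(layers)):
        ablated = layers[:i] + layers[i + 1:]
        y = forward(ablated, x)
        impacts.append(sum(abs(a - b) for a, b in zip(baseline, y)))
    return impacts

if __name__ == "__main__":
    impacts = layer_impact(make_toy_model(), [1.0, 2.0, 3.0])
    # The near-identity layer (index 1) should rank lowest.
    print(min(range(len(impacts)), key=impacts.__getitem__))
```

In practice the same idea would be applied per Transformer block of an SLM, with a task-level metric such as perplexity instead of an L1 distance.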

Contribution to the Study of Partial Power Converters for Energy Source Hybridization

One of the key levers for reducing the carbon footprint is transport, particularly the development of electric mobility, which is currently growing rapidly. In this context, the hybrid electric transport market is expanding. Hybridization applications have seen their power ratings increase, and with them those of the power electronic converters that adapt the voltage levels of the energy sources and manage the energy exchanges between them. This increase in power comes with higher losses to be evacuated, which significantly impacts first the size of the converters, and therefore of the overall system, and then the energy efficiency of the entire chain. Efforts have already been made at CEA-LITEN to develop high-efficiency DC-DC converters (in particular interleaved DC-DC converters). The objective of the thesis is to go further by studying so-called partial power converters (PPCs). Different architectures/topologies will be studied for hybrid applications combining a fuel cell and a battery on the one hand, and applications combining two batteries (one power-type, the other energy-type) on the other. The work aims to determine the best architectures/topologies for each of these typical applications, enabling a significant reduction in converter size and an improvement in the efficiency of the whole system.
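As a rough illustration of why partial power converters are attractive, the sketch below applies the standard first-order estimate for a series partial power architecture, in which the converter processes only the differential power between the source and the bus. The voltage and power values are purely illustrative, not project figures.

```python
# First-order sizing estimate for a series partial power converter (PPC):
# the converter only processes the differential power between the source
# and the bus, P_conv ~ P_total * |1 - V_src / V_bus| (ideal, lossless
# series architecture). Numbers below are purely illustrative.

def ppc_processed_power(p_total_kw, v_src, v_bus):
    """Power (kW) the partial converter must actually handle."""
    return p_total_kw * abs(1.0 - v_src / v_bus)

if __name__ == "__main__":
    # Fuel cell at 400 V feeding a 500 V battery bus, 100 kW transferred:
    p = ppc_processed_power(100.0, 400.0, 500.0)
    print(p)  # ~20 kW, i.e. roughly a 5x reduction in converter rating
```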

Control coordination of power converters on the distribution grid to enhance overall system stability

With the increasing number of generation and consumption units connected through power electronic converters, the electrical grid is evolving toward a more dynamic and decentralized structure. This transformation strengthens both the need and the potential for these converters to actively contribute to system flexibility and stability—particularly in compensating for renewable energy fluctuations and maintaining the balance between supply and demand.

Optimized coordination of their control functions offers significant potential to improve grid resilience, by intelligently leveraging their capabilities in voltage regulation, frequency support, and reactive power control. However, to integrate these contributions effectively at scale, it is essential to develop holistic modeling approaches that capture multi-scale interactions—both in time and space.

The modeling work in this thesis aims to represent the relationship between the active/reactive power flexibility of power electronic converters and the stability margin they provide to the grid, as well as to model the aggregation of their actions for system-wide contribution. Building on this foundation, coordinated control architectures and algorithms between the distribution and transmission networks will be investigated, developed, and validated.
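As a minimal illustration of the converter-level control functions mentioned above, the sketch below implements textbook P-f and Q-V droop laws; the gains and setpoints are arbitrary example values, not results of the thesis.

```python
# Illustrative primary droop laws of a grid-supporting converter:
# active power responds to frequency deviation (P-f droop) and reactive
# power to voltage deviation (Q-V droop). Gains and setpoints are
# arbitrary example values, not taken from the thesis project.

def p_f_droop(f_hz, f_nom=50.0, p_set=0.0, kp=20.0):
    """Active power setpoint (per-unit) for a measured frequency."""
    return p_set - kp * (f_hz - f_nom)

def q_v_droop(v_pu, v_nom=1.0, q_set=0.0, kq=5.0):
    """Reactive power setpoint (per-unit) for a measured voltage."""
    return q_set - kq * (v_pu - v_nom)

if __name__ == "__main__":
    # Under-frequency event: converter injects ~2.0 p.u. extra power.
    print(p_f_droop(49.9))
    # Over-voltage: converter absorbs ~0.1 p.u. reactive power.
    print(q_v_droop(1.02))
```

Coordinating many such converters then amounts to choosing the setpoints and gains so that their aggregated response supports the transmission-distribution interface, which is precisely the multi-scale problem described above.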

Prediction of elastic wave dispersion effects using a semi-analytical model under high-frequency approximation

Ultrasonic testing (UT) methods are a fundamental component of non-destructive testing (NDT). They are widely used to inspect mechanical components such as welds (in nuclear and petrochemical industries) and composite material structures (in aeronautics). To understand the physical phenomena involved in a given configuration, simulation is a valuable tool and sometimes an essential step in implementing the inspection process.
Modeling approaches fall into two main categories: purely numerical models based on finite elements (FE) and semi-analytical methods derived from high-frequency (HF) approximations, such as paraxial rays. While the latter are often favored for their computational efficiency, they introduce simplifications that can compromise the quantitative accuracy of results, particularly for phenomena like dispersion (variation in wave speed with frequency), which are common in certain industrial contexts.
This thesis project aims to enhance the paraxial ray approach by integrating models of dispersive interfaces (composite interplies, coupling layers), dispersive viscoelastic media, and a modal guided wave model. The goal is to develop a simulation tool capable of faithfully reproducing realistic inspection configurations, thereby improving the representativeness of the results.
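To make the notion of dispersion concrete, the short sketch below contrasts phase and group velocity for a toy dispersion relation (low-frequency flexural plate waves, w = a k^2, for which the group velocity is exactly twice the phase velocity). The coefficient is arbitrary, and this is not the guided-wave model targeted by the thesis.

```python
# Dispersion in a nutshell: with a frequency-dependent wave speed, the
# phase velocity w/k and the group velocity dw/dk differ. Toy example:
# low-frequency flexural (plate) waves follow w = a * k**2, for which
# the group velocity is exactly twice the phase velocity. The value of
# 'a' is arbitrary; this is not the thesis's guided-wave model.

def omega(k, a=2.0):
    return a * k * k

def phase_velocity(k, a=2.0):
    return omega(k, a) / k

def group_velocity(k, a=2.0, dk=1e-6):
    # Central finite difference for dw/dk.
    return (omega(k + dk, a) - omega(k - dk, a)) / (2.0 * dk)

if __name__ == "__main__":
    k = 3.0
    print(phase_velocity(k))   # 6.0
    print(group_velocity(k))   # ~12.0, twice the phase velocity
```

A wave packet in such a medium travels at the group velocity while its carrier travels at the phase velocity, which is exactly the kind of quantitative effect a paraxial-ray model must capture to stay accurate in dispersive media.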

Reconciling predictability and performance in processor architectures for critical systems

Critical systems have both functional and timing requirements, the latter ensuring that deadlines are always met during operation; failure to do so may lead to catastrophic consequences. The critical nature of such systems demands specialized hardware and software solutions. This PhD thesis topic focuses on the development of computer architecture designs for critical systems, known as predictable architectures, capable of providing the necessary timing guarantees. Several such architectures exist, typically based on in-order pipelines and incorporating behavioral restrictions (e.g., disabling complex speculation mechanisms) or structural specializations (e.g., redesigned caches or deterministic arbitration for shared resources). These restrictions and specializations inevitably impact performance, and the design of predictable architectures must therefore address the predictability–performance tradeoff directly. This PhD thesis aims to explore this tradeoff in a novel way, by adapting a high-performance variant of an in-order processor (CVA6) and developing top-down techniques to make it predictable. Performance in such processors is usually achieved through mechanisms like branch prediction, prefetching, and value prediction, implemented via specialized storage elements (e.g., buffers) and supported by control mechanisms such as rollback on misprediction. Within this context, the goal of the thesis is to define a general predictability scheme for speculative execution, covering both storage organization and rollback behavior.
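As a concrete (textbook, not CVA6-specific) example of the speculative state such a scheme must cover, the sketch below models a 2-bit saturating-counter branch predictor and counts the mispredictions, each of which would trigger a rollback in hardware.

```python
# Minimal model of the kind of speculative mechanism the thesis must
# make predictable: a 2-bit saturating-counter branch predictor.
# States 0-1 predict not-taken, 2-3 predict taken; the counter moves
# one step per outcome. Textbook illustration, not the CVA6 predictor.

class TwoBitPredictor:
    def __init__(self):
        self.state = 2  # weakly taken

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

def run(outcomes):
    """Return the number of mispredictions (each costs a rollback)."""
    p = TwoBitPredictor()
    missed = 0
    for taken in outcomes:
        if p.predict() != taken:
            missed += 1
        p.update(taken)
    return missed

if __name__ == "__main__":
    # A loop branch taken 9 times then not taken once:
    print(run([True] * 9 + [False]))  # -> 1 misprediction
```

The predictability question is then whether the worst-case number and cost of such rollbacks can be bounded statically, which is what the storage organization and rollback scheme of the thesis must guarantee.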

Out-of-Distribution Detection with Vision Foundation Models and Post-hoc Methods

The thesis focuses on improving the reliability of deep learning models, particularly in detecting out-of-distribution (OoD) samples, which are data points that differ from the training data and can lead to incorrect predictions. This is especially important in critical fields like healthcare and autonomous vehicles, where errors can have serious consequences. The research leverages vision foundation models (VFMs) like CLIP and DINO, which have revolutionized computer vision by enabling learning from limited data. The proposed work aims to develop methods that maintain the robustness of these models during fine-tuning, ensuring they can still effectively detect OoD samples. Additionally, the thesis will explore solutions for handling changing data distributions over time, a common challenge in real-world applications. The expected results include new techniques for OoD detection and adaptive methods for dynamic environments, ultimately enhancing the safety and reliability of AI systems in practical scenarios.
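For context, the simplest post-hoc OoD baseline on which such methods build is the maximum softmax probability (MSP) score; the sketch below shows it on hand-written logits. The values and the threshold are illustrative and not tied to any real vision foundation model.

```python
# A minimal post-hoc OoD baseline of the kind the thesis builds on:
# maximum softmax probability (MSP). A sample whose highest class
# probability is low is flagged as out-of-distribution. The logits and
# threshold below are illustrative, not tied to any real VFM.

import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def msp_score(logits):
    """Higher score = more in-distribution."""
    return max(softmax(logits))

def is_ood(logits, threshold=0.5):
    return msp_score(logits) < threshold

if __name__ == "__main__":
    confident = [8.0, 0.5, -1.0]   # peaked logits -> in-distribution
    uncertain = [0.1, 0.0, -0.1]   # flat logits  -> flagged as OoD
    print(is_ood(confident), is_ood(uncertain))  # False True
```

Post-hoc methods of this family operate on a frozen model's outputs or features, which is what makes them attractive for VFMs whose fine-tuning must not destroy OoD sensitivity.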

Evaluation of the impact of the dry extrusion process on cathode microstructure and performance for polymer-based solid-state batteries

Solid-state batteries (SSBs) are expected to outperform standard lithium-ion technology in terms of energy density and safety, with applications in electric vehicles and stationary energy storage. Manufacturing of these new battery technologies can rely on existing infrastructure (solvent-based electrode slurry mixing and coating) or require new processing methods. In this context, the twin-screw extrusion process exhibits several advantages when applied to SSBs, particularly with polymer-based electrolytes.
To speed up the implementation of polymer-based SSBs, a better understanding of the extrusion process applied to positive electrode manufacturing is needed. The objective of this thesis is to develop new electrode formulations using hot-melt extrusion and to understand the impact of process parameters on final performance. It should ultimately give a clear picture of the advantages and limitations of extrusion compared to standard wet casting.
This PhD project is part of a collaboration between CEA and Stellantis on the development of new solid-state batteries. The study will focus on the development of extrusion-processed composite electrodes for polymer-based SSBs. First, materials will be selected and characterized for a preliminary screening of formulations using lab-scale extrusion. Then, a systematic evaluation of the impact of input materials and operating conditions during the extrusion process will be undertaken to highlight the relationships between process, electrode microstructure and performance. Finally, the best-performing electrode formulations will be integrated in a fully-extruded prototype and characterized by electrical tests as well as post-mortem analysis.
The PhD candidate will benefit from CEA-LITEN's multidisciplinary environment (Grenoble campus) and Stellantis's industrial know-how. The Battery Prototyping Platform will be used for extrusion trials and cell assembly, while access to advanced characterization equipment (SEM, XPS, rheometers, electrochemical methods, etc.) will ensure a deep understanding of the underlying mechanisms.

Heat Transfer Enhancement by Convective Boiling in Microchannels applied to the Cooling of Computing Units in Data Centers

The proposed PhD thesis aims to improve the understanding and modeling of convective boiling phenomena in microchannels for new low-environmental-impact refrigerants. The candidate will adopt a combined experimental and multi-scale modeling approach, including the design of a test bench simulating the behavior of a micro-evaporator, the implementation of CFD simulations (ANSYS Fluent, CATHARE) to describe two-phase flow regimes, and the evaluation of various eco-friendly alternative fluids. The expected outcomes include, for each of these new fluids, the characterization of confined boiling mechanisms, the development of a predictive heat transfer model, and the proposal of innovative cooling solutions.

The growing demand for high-performance computing, driven by artificial intelligence and cloud technologies, leads to a significant increase in power dissipation in electronic chips. Current single-phase cooling technologies are reaching their limits when dealing with heat fluxes exceeding 100 W/cm². Two-phase cooling, based on fluid boiling to remove heat, can achieve much higher heat transfer performance than single-phase systems while reducing overall energy consumption. The results of this research will contribute to the development of more efficient and sustainable cooling solutions for future data centers, helping to reduce the digital sector’s energy footprint and strengthen European technological sovereignty in advanced cooling technologies.
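A quick order-of-magnitude check (illustrative numbers, not project results) shows why such heat fluxes push cooling into the two-phase regime: by Newton's cooling law q = h * dT, holding a modest wall superheat at 100 W/cm2 requires a heat transfer coefficient well beyond what single-phase liquid cooling typically provides.

```python
# Order-of-magnitude check behind the 100 W/cm2 figure: with Newton's
# cooling law q = h * dT, the heat transfer coefficient h needed to
# hold a modest wall superheat quickly exceeds what single-phase liquid
# cooling typically achieves (~1e3-1e4 W/m2K). Values are illustrative.

def required_htc(q_w_per_cm2, delta_t_k):
    """h (W/m2K) needed to evacuate flux q at wall superheat delta_t."""
    q_w_per_m2 = q_w_per_cm2 * 1e4  # 1 m2 = 1e4 cm2
    return q_w_per_m2 / delta_t_k

if __name__ == "__main__":
    h = required_htc(100.0, 20.0)
    print(h)  # 50000.0 W/m2K, i.e. flow-boiling territory
```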

Software support for computing accelerators and memory transfer accelerators

For energy reasons, future computers will have to use accelerators for both computation and memory access (GPUs, TPUs, NPUs, smart DMAs). AI applications have intensive requirements in terms of both computing power and memory throughput.

These accelerators are not based on a simple instruction set architecture (ISA); they break the von Neumann model and require specialized code to be written manually.

Furthermore, it is difficult to compare the use of these accelerators with code running on a general-purpose processor, as the initial source codes are very different.

HybroLang is a hardware-oriented programming language that exposes all of a processor's computing capabilities, while also allowing code to be specialized based on data known only at runtime.

The HybroGen compiler has already demonstrated its ability to program in-memory computing accelerators, as well as to optimize code on conventional CPUs by performing innovative optimizations.

This thesis proposes to extend the HybroLang language in order to

- facilitate the programming of AI applications by providing support for complex data patterns: stencils, convolutions, sparse computations

- enable code generation both on CPUs and with hardware accelerators currently under development at the CEA (sparse computing, in-memory computing, memory access)

- allow benchmarking of different computing architectures with the same initial source code
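As a language-agnostic illustration of the runtime specialization idea behind HybroLang, a function can be specialized once a value (here, a filter kernel) becomes known at run time. The sketch is a Python analogy with invented names; HybroLang itself emits specialized machine code rather than closures.

```python
# Language-agnostic illustration of the runtime specialisation that
# HybroLang enables: once a value is known at run time (here, a filter
# kernel), build a specialised function with the constants folded in.
# Python analogy only; HybroLang generates machine code instead.

def specialise_convolution(kernel):
    """Build a 1-D convolution specialised for a fixed kernel."""
    # Constants are captured now; the returned function has no kernel
    # lookup or general loop bounds left to resolve at call time.
    k = tuple(kernel)
    n = len(k)
    def conv(signal):
        return [
            sum(k[j] * signal[i + j] for j in range(n))
            for i in range(len(signal) - n + 1)
        ]
    return conv

if __name__ == "__main__":
    smooth = specialise_convolution([0.25, 0.5, 0.25])
    print(smooth([1.0, 1.0, 4.0, 1.0, 1.0]))  # [1.75, 2.5, 1.75]
```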

Ideally, a candidate should have knowledge of computer architecture, programming language implementation, code optimization and compilation.
