Technology choice in the eco-design of AI architectures
Electronic systems have a significant environmental impact in terms of resource consumption, greenhouse gas emissions and electronic waste, all of which are on a steep upward trend. A large part of this impact stems from production, and more particularly from the manufacturing of integrated circuits, which becomes increasingly complex, energy-intensive and resource-intensive with each new technology node. The technology used to implement a circuit directly affects the environmental cost of production and use, the circuit's lifespan, and the possibility of several life cycles from a circular-economy perspective. The choice of technology is therefore an essential step in the eco-design phase of a circuit.
The thesis aims to integrate the exploration of different technologies into an eco-design flow for integrated circuits. The purpose of the work is to define a methodology for systematically integrating the technology choice into the flow, identifying the architecture configuration that maximizes lifespan while taking circular-economy strategies into account. The architectures targeted by the thesis belong to the field of embedded AI, which is being deployed at a growing pace and raises major societal challenges. The thesis will constitute a first research step towards sustainable embedded AI.
Assimilation of transient data and calibration of simulation codes using time series
In the context of scientific simulation, some computational tools (codes) are built as an assembly of (physical) models coupled in a numerical framework. These models and their coupling rely on data sets fitted to results from experiments or from fine computations of the “Direct Numerical Simulation” (DNS) type, in an up-scaling approach. The observables of these codes, as well as the results of the experiments or fine computations, are mostly time-dependent (time series). The objective of this thesis is therefore to set up a methodology that improves the reliability of these codes by adjusting their parameters through data assimilation from these time series.
Work on parameter fitting has already been performed in our laboratory in a previous thesis, but using scalars derived from the temporal outputs of the codes. The methodology developed during that thesis integrated screening, surrogate models and sensitivity analysis, which can be extended and adapted to the new data format. A preliminary step of transforming the time series will be developed in order to reduce the data while limiting the loss of information. Machine learning / deep learning tools could be considered.
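As an illustration of such a reduction-plus-surrogate step, the minimal sketch below (purely hypothetical data, parameter names and model choices, not the laboratory's actual codes or methods) compresses an ensemble of simulated transients with PCA and fits a Gaussian-process surrogate from code parameters to the retained components, which a calibration loop could then exploit.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
n_runs, n_times = 200, 500
theta = rng.uniform(0.5, 2.0, size=(n_runs, 2))          # hypothetical code parameters
t = np.linspace(0.0, 10.0, n_times)
# stand-in for the code's time-dependent observable y(t; theta)
Y = np.exp(-np.outer(theta[:, 0], t)) * np.cos(np.outer(theta[:, 1], t))

# 1) compress the time series while limiting the loss of information
pca = PCA(n_components=5).fit(Y)
Z = pca.transform(Y)                                       # (n_runs, 5) scores

# 2) surrogate model mapping parameters to the retained components
surrogate = [GaussianProcessRegressor().fit(theta, Z[:, k]) for k in range(5)]

# 3) predict and reconstruct the transient for a new parameter set
theta_new = np.array([[1.2, 0.8]])
z_pred = np.array([gp.predict(theta_new)[0] for gp in surrogate])
y_pred = pca.inverse_transform(z_pred.reshape(1, -1))[0]   # approximate time series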
The application of this methodology will take place within the framework of nuclear reactor severe accident simulation. During such accidents, the core loses its integrity and corium (a mixture of fuel and structural elements resulting from the melting of the reactor core) forms, relocates and interacts with its environment (liquid coolant, vessel steel, concrete of the basemat…). Some severe accident simulation codes describe each step / interaction individually, while others describe the whole accident sequence. They have in common that they are multiphysics codes with a large number of models and parameters, describing transient physical phenomena in which the temporal aspect is important.
The thesis will be hosted by the Severe Accident Modeling Laboratory (LMAG) of the IRESNE institute at CEA Cadarache, in a team at the forefront, nationally and internationally, of the numerical study of corium-related phenomena, from corium generation to its propagation and interaction with the environment. The data assimilation techniques to be implemented also have strong generic potential, which opens up significant opportunities for the proposed work, in the nuclear field and beyond.
Multi-block and non-conformal domain decomposition, applied to the 'exact' boundary coupling of the SIMMER-V thermohydraulics code
This thesis is part of the research required for the sustainable use of nuclear energy in a decarbonized, climate-friendly energy mix. In this context, fourth-generation sodium-cooled reactors are candidates of great interest for saving uranium resources and minimizing the volume of final waste.
In the context of the safety of such reactors, it is important to be able to describe precisely the consequences of possible core degradation. Through a collaboration with its Japanese counterpart JAEA, the CEA is developing the SIMMER-V code, dedicated to simulating core degradation. The code computes sodium thermohydraulics, structural degradation and core neutronics during the accident phase. The objective is to represent with precision not only the core but also its direct environment (primary circuit). Taking this topology into account requires partitioning the domain and using a boundary coupling method. The limitation of this approach generally lies in the quality and robustness of the coupling method, particularly during fast transients in which pressure and density waves cross the boundaries.
A coupling method was initiated at LMAG (Annals of Nuclear Energy 2022, Implementation of multi-domains in SIMMER-V thermohydraulic code, https://doi.org/10.1016/j.anucene.2022.109338), which consists of merging the different decompositions of each of the domains in order to constitute a single decomposition of the overall calculation. This method was developed in a simplified framework where the (Cartesian) meshes connect conformally at the boundaries. The opportunity that now opens up is to extend this method to non-conforming meshes by using the MEDCoupling library. This first step, whose feasibility has been established, will make it possible to assemble components into a 'loop' type system. The second step will consist of extending the method so that one computational domain can be completely nested within another. This nesting will then make it possible to constitute a domain by juxtaposition or by nesting, with non-conforming domain meshes and decompositions. After verifying the numerical qualities of the method, the final application step will consist of building a simulation of the degradation of a core immersed in its primary tank ('pool' configuration), thereby validating the approach.
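As a small, hedged illustration of the kind of non-conforming field transfer MEDCoupling enables, the sketch below projects a cell-centred field between two deliberately mismatched Cartesian meshes with MEDCouplingRemapper; the mesh sizes, field values and exact constant names (which vary slightly between MEDCoupling releases) are assumptions, and this is not the SIMMER-V coupling itself.

import medcoupling as mc   # MEDCoupling (SALOME platform), assumed available

def cartesian_mesh(name, x_nodes, y_nodes):
    # Build a small 2D Cartesian mesh and return its unstructured view
    cm = mc.MEDCouplingCMesh()
    cm.setName(name)
    cm.setCoords(mc.DataArrayDouble(x_nodes), mc.DataArrayDouble(y_nodes))
    return cm.buildUnstructured()

# Two domains whose cells do NOT match at the common boundary
src = cartesian_mesh("core",   [0.0, 0.5, 1.0],       [0.0, 0.5, 1.0])
tgt = cartesian_mesh("plenum", [0.0, 0.4, 0.7, 1.0],  [0.0, 0.3, 1.0])

# Cell-centred (P0) field on the source domain, e.g. a pressure snapshot
p_src = mc.MEDCouplingFieldDouble(mc.ON_CELLS, mc.ONE_TIME)
p_src.setMesh(src)
p_src.setArray(mc.DataArrayDouble([1.0, 2.0, 3.0, 4.0]))   # one value per cell
p_src.setNature(mc.IntensiveMaximum)    # nature constant named differently in older releases

# Intersection-based P0->P0 projection onto the non-conforming target mesh
rem = mc.MEDCouplingRemapper()
rem.prepare(src, tgt, "P0P0")           # computes the cell intersection matrix
p_tgt = rem.transferField(p_src, 0.0)   # 0.0 assigned to non-intersected cells
print(p_tgt.getArray().getValues())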
This work will enable the PhD student to develop skills in numerical techniques and in the modeling of complex physical systems with flows. He or she will apply techniques ranging from method design to validation, as part of a dynamic, multidisciplinary team at CEA Cadarache.
AI-assisted generation of Instruction Set Simulators
Simulation tools for digital architectures rely on various types of models at different levels of abstraction to meet the requirements of hardware/software co-design and co-validation. Among these models, the higher-level ones enable rapid functional validation of software on target architectures.
Developing these functional models often involves a manual process, which is both tedious and error-prone. When low-level RTL (Register Transfer Level) descriptions are available, they serve as a foundation for deriving higher-level models, such as functional ones. Preliminary work at CEA has resulted in an initial prototype based on MLIR (Multi-Level Intermediate Representation), demonstrating promising results in generating instruction execution functions from RTL descriptions.
The goal of this thesis is to build on these initial efforts and subsequently automate the extraction of architectural states, leveraging the latest advances in machine learning for EDA. The expected result is a comprehensive workflow for the automatic generation of functional simulators (a.k.a. Instruction Set Simulators) from RTL, ensuring by construction the semantic consistency between the two abstraction levels.
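For readers unfamiliar with the target, the hypothetical sketch below shows the kind of instruction execution function and architectural state an Instruction Set Simulator manipulates (a hand-written RISC-V ADD is used as a stand-in; the thesis aims to generate such functions automatically from RTL rather than write them by hand).

from dataclasses import dataclass, field

@dataclass
class ArchState:
    # Architectural state an ISS must track: program counter and register file
    pc: int = 0
    regs: list = field(default_factory=lambda: [0] * 32)

def exec_add(state: ArchState, rd: int, rs1: int, rs2: int) -> None:
    # ADD rd, rs1, rs2: 32-bit wrap-around addition; x0 stays hard-wired to zero
    if rd != 0:
        state.regs[rd] = (state.regs[rs1] + state.regs[rs2]) & 0xFFFFFFFF
    state.pc += 4

s = ArchState()
s.regs[1], s.regs[2] = 7, 5
exec_add(s, rd=3, rs1=1, rs2=2)
assert s.regs[3] == 12 and s.pc == 4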
Software and hardware acceleration of Neural Fields in autonomous robotics
Since 2020, Neural Radiance Fields, or NeRFs, have been the focus of intense interest in the scientific community for their ability to implicitly reconstruct a scene in 3D and to synthesize new viewpoints of it from a limited set of images. Recent scientific advances have drastically improved the initial performance (reduced data requirements, memory footprint and processing time), paving the way for new uses of these networks, particularly in embedded applications, or for new purposes.
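To make the mechanism concrete, the minimal sketch below (toy numbers, NumPy only) implements the standard discrete volume-rendering quadrature a NeRF uses to turn per-sample densities and colours along a camera ray into one pixel colour; it is illustrative and not the thesis implementation.

import numpy as np

def render_ray(sigmas, rgbs, deltas):
    # sigmas: (N,) densities, rgbs: (N, 3) colours, deltas: (N,) sample spacings
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]   # transmittance T_i
    weights = trans * alphas
    return weights @ rgbs                                            # pixel colour C(r)

# toy ray with four samples
sigmas = np.array([0.1, 2.0, 5.0, 0.5])
rgbs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
deltas = np.full(4, 0.25)
print(render_ray(sigmas, rgbs, deltas))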
This thesis therefore focuses on the use of these networks for autonomous robotic navigation, with the associated embedded constraints: power consumption, limited compute and memory resources, etc. The navigation context will involve extending work already underway on incremental versions of these neural networks.
The PhD student will be responsible for proposing and designing innovative algorithmic, software and hardware mechanisms enabling real-time execution of NeRFs for autonomous robotic navigation.
Nanocrystalline Soft Magnetic Composites: Powder morphology and design for controlling magnetic properties in high-frequency applications
Context: Achieving carbon neutrality by 2050 will require massive electrification of power production systems. Power electronics (PE) is a key enabler that will make this transformation possible (integration of renewables, energy micro-grids, development of electric mobility).
Problem: Current developments in PE converters aim at increasing the switching frequencies of wide-bandgap switches (SiC or GaN). At low frequencies, magnetic components remain bulky, occupying up to 40% of the total footprint. At high frequencies (HF > 100 kHz), very significant gains are expected, but only if the losses generated by these components remain under control. Today, the main class of magnetic materials used at HF is MnZn or NiZn ferrites, owing to their low cost and convenient electrical resistivity (ρelec > 1 Ω·m). The main drawbacks of these materials are their low saturation induction (Bsat < 0.4 T), which limits their size reduction, and their mechanical fragility. Nanocrystalline materials have a better Bsat (1.3 T), but their ρelec is about 1.5 µΩ·m (roughly six orders of magnitude less resistive than ferrites), which generates significant eddy-current losses at HF.
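As a rough guide (a classical textbook estimate, not a result of this work), the eddy-current loss density in a thin conducting ribbon fragment of thickness d under a sinusoidal induction of peak value B at frequency f scales as P_eddy ≈ π²·f²·B²·d² / (6·ρ): this is why increasing the effective resistivity ρ (insulating coatings) and reducing the conducting dimension d (grinding the ribbons into powder) are the two levers pursued to keep HF losses under control.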
Thesis objective: To develop magnetic composites by grinding nanocrystalline ribbons, electrically insulating the resulting powders (coating produced by sol-gel), compacting the powder at high pressure (1000-2000 MPa) to shape the core, and finally applying a stress-relief annealing treatment.
High speed/High capacity distributed Fiber Bragg Grating sensing technique for Structural Health Monitoring (SHM) applications
Schedule-driven Non-Destructive Evaluations (NDE) are carried out during the life of structures and equipment to detect major degradations endangering safety and impairing service availability. In addition to NDE, Structural Health Monitoring (SHM) involves the use of in-situ Fiber Bragg Grating (FBG) sensing systems and algorithms to evaluate structural worthiness. FBGs are mostly used as strain/temperature sensors, but are also used for acoustic sensing, as substitutes for piezoelectric actuators.
The SHM of large structures, or acoustic measurements for passive/active tomographic techniques, simultaneously requires high capacity and a high readout rate. However, commercially available FBG readout units rely on Wavelength-Division Multiplexing (WDM) or Optical Frequency-Domain Reflectometry (OFDR) techniques. WDM-based units are limited in capacity (several tens of sensors) but may reach high scan rates (MHz or beyond). Conversely, OFDR-based units are limited in scan rate (typically several tens of Hz) but may accommodate a large number of sensors (typically up to 2000). Tomography with acoustic techniques requires both high capacity and a high scan rate in order to improve the quality of image reconstruction.
Optical Time-Stretch (OTS) is a time-domain technique with the potential to improve both capacity and scan rate, and to open the way to efficient tomographic reconstruction processes. The principle of OTS is to use a pulsed laser, a highly dispersive medium and a high-bandwidth photodetector in order to convert a Bragg wavelength shift into a time delay.
The doctoral candidate will investigate several ways to apply OTS to SHM. Draw-Tower Gratings (DTG) and chirped gratings will be used for the measurement of strain profiles and acoustic emission fields on metallic and carbon fiber-reinforced plastic (CFRP) composite structures. The candidate will first assess the performance of the OTS technique in the laboratory (LSPM) with piezoelectric actuators and laser ultrasonics (if available, with CNRS/PIMM). The OTS device will then be tested on several demonstrators provided by partners within the MSCA USES 2 doctoral network: a civil engineering structure (BAM, Berlin), a hydrogen storage canister (Faber, Cividale del Friuli, and CEA DAM, Le Ripault), and finally a metallic pipeline for fluid transport (ENI, Milano). The doctoral candidate will spend three two-month periods at these test sites, implementing the OTS technique and gathering experimental feedback.
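Schematically, and with purely illustrative numbers, the wavelength-to-time mapping at the heart of OTS can be written Δt ≈ D_tot · Δλ_B, where D_tot (in ps/nm) is the accumulated chromatic dispersion of the stretching medium and Δλ_B the Bragg wavelength shift: with D_tot = 1 ns/nm, a 10 pm Bragg shift maps to a 10 ps delay, which a high-bandwidth photodetector and digitizer can resolve at the repetition rate of the pulsed laser.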
Study of the thermoconversion and de-polymerization mechanisms of plastic wastes in supercritical water conditions
Waste valorization is a topical subject that has attracted great interest in the context of the circular carbon economy. Substantial efforts have been devoted in recent years to strengthening sustainable processes, based on the development of systems that improve carbon circularity (material and energy recycling). Global production of plastics doubled from 230 million tons in 2000 to 460 million tons in 2019. This ever-growing production and consumption has significant consequences for the environment. Despite the existence of recycling methods, only 9% of global plastic production is currently recycled, and the remaining, non-valorized quantity represents a real source of pollution [1].
Mixtures of different types of plastics make the sorting stages difficult, which is the main drawback for material recycling systems. An interesting application recently reported in the literature is the use of the hydrothermal gasification process to treat waste plastics (including hard-to-sort mixtures) and produce a gas rich in CH4 and H2 [2]. Hydrothermal gasification (HTG) is a thermochemical process that exploits the supercritical conditions of water (T > 374 °C, P > 221 bar) in order to convert the organic carbon contained in the wet feedstock into a gaseous phase (containing mainly CH4, H2, CO and CO2). In addition, the flexibility of the process also allows the study of the de-polymerization of these wastes under conditions close to the critical point of water, which facilitates the production of chemical intermediates (and their reuse) in the chemical industry.
Understanding the conversion mechanisms of different types of plastics (and their mixtures) therefore appears essential for valorizing these wastes; however, the identification of reaction pathways remains a major scientific obstacle. The objective of the thesis is to study the reaction mechanisms governing the transformation of model plastics (and their mixtures) under supercritical water conditions. Understanding these phenomena will lead to the optimization of the HTG process (with and without catalysts) to facilitate the production of a gas rich in CH4/H2 and the production of intermediates for the chemical industry. This PhD work will focus on: i) the study of the thermo-conversion and de-polymerization of plastics; ii) the study of the behavior of catalysts in the supercritical water environment (activation/deactivation); iii) the study of the selectivity towards the production of a CH4/H2-rich gas and of chemical intermediates.
Deployment strategy for energy infrastructures on a regional scale: an economic and environmental optimisation approach
The general context is "Design and optimisation of multi-vector energy systems on a territorial scale".
More specifically, the aim is to develop new methods for studying trajectories that reduce the overall environmental impact (based on life-cycle assessment, LCA) of a territory while controlling costs, in various applications, for example (a minimal illustrative optimisation sketch is given after this list):
- The opportunity to develop infrastructures (e.g. a hydrogen network or a heat network) to enhance decarbonisation, by expanding new energy uses where these infrastructures exist or will exist, while reducing the overall environmental impact for given uses;
- Based on these studies, the impact of centralising or decentralising production and consumption resources;
- Taking into account the long-term dynamics of investments, with the trade-off of renovating or replacing installations at a given moment, in order to reduce the overall environmental impact for given uses.
Possible applications to hydrogen infrastructures have been identified or are being identified.
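As a deliberately simplified illustration of coupling an economic objective with an environmental constraint (all figures and technology labels below are hypothetical, not project data), the following sketch chooses an energy supply mix at minimum cost under a CO2 cap with scipy.

from scipy.optimize import linprog

cost = [60.0, 95.0]       # EUR/MWh for technologies A and B (assumed values)
co2 = [0.35, 0.02]        # tCO2/MWh emission factors (assumed values)
demand = 1000.0           # MWh of useful energy to supply
co2_cap = 120.0           # tCO2 allowed: the environmental constraint

# minimise cost.x  subject to  x_A + x_B >= demand,  co2.x <= co2_cap,  x >= 0
res = linprog(c=cost,
              A_ub=[[-1.0, -1.0], co2],
              b_ub=[-demand, co2_cap],
              bounds=[(0.0, None), (0.0, None)])
print(res.x, res.fun)     # optimal mix and total cost

Trajectory studies would extend this toy problem with time-indexed investment variables and LCA-based impact coefficients.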
Development of catalysts for CO2 hydrogenation to light olefins
Light olefins, mainly ethylene and propylene, are among the organic compounds with the largest production volumes. They are currently produced from fossil resources. Reducing the carbon footprint of products synthesized from these intermediates requires the use of alternative feedstocks, such as atmospheric CO2.
The objective of this PhD is the development of catalysts for the direct hydrogenation of CO2 into light olefins. Fe-based catalysts combining reverse water-gas shift (RWGS) and Fischer-Tropsch (FT) polymerization capabilities will be developed. In order to gain a better understanding of the iron phases involved in the reaction, Fe nanoparticles of controlled composition and size will be prepared and dispersed on different supports (silica, alumina, carbon, …). The catalytic properties will then be evaluated in a dynamic reactor and finely characterized using numerous techniques (XRD, XPS, HRTEM, …).
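Schematically, the targeted two-step chemistry can be summarized by the following indicative overall stoichiometry (a balance, not a mechanism):

RWGS:           CO2 + H2  ->  CO + H2O
FT to olefins:  n CO + 2n H2  ->  CnH2n + n H2O
Overall:        n CO2 + 3n H2  ->  CnH2n + 2n H2O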