Quantum dot auto-tuning assisted by physics-informed neural networks
Quantum computers hold great promise for advancing science, technology, and society by solving problems beyond the capabilities of classical computers. One of the most promising quantum bit (qubit) technologies is the spin qubit, based on quantum dots (QDs), which leverages the maturity and scalability of semiconductor technologies. However, scaling up the number of spin qubits requires overcoming significant engineering challenges, such as the charge tuning of a very large number of QDs. The QD tuning process involves multiple complex steps that are currently performed manually by experimentalists, which is cumbersome and time-consuming. Addressing this problem is now crucial both to accelerate R&D and to enable truly scalable quantum computers.
The main goal of the postdoctoral project is to develop automatic QD tuning software combining Bayesian neural networks (BayNNs) and a QD physical model fitted to the behavior of CEA-Leti's devices. This innovative approach, leveraging BayNN uncertainty estimates and the predictive power of QD models, will enable fast automatic QD tuning solutions that are resilient to device non-idealities.
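To make the intended mechanism concrete, the sketch below uses Monte Carlo dropout as a common stand-in for a Bayesian neural network: repeated stochastic forward passes yield a mean prediction and a spread, and an active tuning loop could probe the gate-voltage point where that spread is largest. The network, voltage grid, and charge-state labels are illustrative assumptions, not CEA-Leti's actual model.

```python
# Illustrative sketch: Monte Carlo dropout as a stand-in for a BayNN.
# Repeated stochastic forward passes give a mean prediction and an
# uncertainty estimate; an active tuning loop could then probe the
# gate-voltage point where the model is least certain.
import torch
import torch.nn as nn

class ChargeStateNet(nn.Module):
    """Maps a pair of gate voltages to charge-state class scores (toy model)."""
    def __init__(self, n_states: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(), nn.Dropout(p=0.2),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
            nn.Linear(64, n_states),
        )

    def forward(self, v):
        return self.net(v)

def predict_with_uncertainty(model, voltages, n_samples: int = 50):
    """Keep dropout active at inference time and sample the predictive distribution."""
    model.train()  # dropout stays on, which makes each pass stochastic
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(voltages), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

model = ChargeStateNet()
grid = torch.cartesian_prod(torch.linspace(-1, 0, 20), torch.linspace(-1, 0, 20))
mean, std = predict_with_uncertainty(model, grid)
next_point = grid[std.max(dim=-1).values.argmax()]  # most uncertain gate-voltage pair
```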
As part of a project concerning the creation of innovative materials, we wish to strengthen our platform's ability to learn from limited experimental data.
In particular, we wish to work first on the extraction of causal links between manufacturing parameters and material properties. Causality extraction is a subject of great importance in AI today, and we wish to adapt existing approaches to experimental data and their particularities in order to select the variables of interest. Second, we will focus on characterizing these causal links (causal inference) using an approach based on fuzzy rules; that is, we will design fuzzy rules suited to representing them.
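As a toy illustration of the two steps, the sketch below uses a simple correlation screen in place of a full causal-discovery algorithm (e.g. PC or GES) and a hand-written triangular membership function; the variables and thresholds are invented for illustration.

```python
# Toy sketch of the two intended steps: (1) screen manufacturing parameters
# for candidate causal links to a property via a simple statistical test,
# (2) express a retained link as a fuzzy rule. Real work would use proper
# causal-discovery algorithms and learned membership functions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
temperature = rng.normal(500, 30, 200)                  # manufacturing parameter (a.u.)
pressure = rng.normal(1.0, 0.1, 200)                    # irrelevant parameter
hardness = 0.05 * temperature + rng.normal(0, 1, 200)   # measured property

# Step 1: keep parameters whose association with the property is significant.
for name, x in [("temperature", temperature), ("pressure", pressure)]:
    r, p = stats.pearsonr(x, hardness)
    print(f"{name}: r={r:.2f}, p={p:.1e}", "-> candidate link" if p < 0.01 else "")

# Step 2: fuzzy rule "IF temperature is HIGH THEN hardness is HIGH".
def mu_high(x, lo, hi):
    """Shoulder membership: 0 below lo, 1 above hi, linear in between."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

firing = mu_high(temperature, 480, 540)  # degree to which each sample fires the rule
print("mean rule activation:", firing.mean().round(2))
```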
Development of noise-based artificial intelligence approaches
Current approaches to AI are largely based on extensive vector-matrix multiplication. In this postdoctoral project we would like to pose the question: what comes next? Specifically, we would like to study whether (stochastic) noise could be the computational primitive that a new generation of AI is built upon. This question will be addressed in two steps. First, we will explore theories regarding the computational role of microscopic and system-level noise in neuroscience, as well as how noise is increasingly leveraged in machine learning and artificial intelligence. We aim to establish concrete links between these two fields and, in particular, we will explore the relationship between noise and uncertainty quantification.
Building on this, the postdoctoral researcher will then develop new models that leverage noise to carry out cognitive tasks, of which uncertainty is an intrinsic component. This will not only constitute a new AI approach, but should also serve as a computational tool to study cognition in humans and as a model for specific brain areas known to participate in different aspects of cognition, from perception to learning, decision making, and uncertainty quantification.
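As a minimal sketch of the noise-uncertainty link, consider units with intrinsic Gaussian noise: repeated evaluations of the same input turn the network into a sampler, and the variability of its decisions is itself an uncertainty estimate. All dimensions and noise scales below are illustrative assumptions.

```python
# Minimal sketch of noise as a computational primitive: units with intrinsic
# Gaussian noise turn a deterministic readout into a sampler, and the spread
# of the samples provides an uncertainty estimate.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0, 0.5, size=(8, 2))  # fixed random weights (toy network)

def noisy_layer(x, noise_std=0.3):
    """Each unit adds its own intrinsic noise before the nonlinearity."""
    pre = W @ x + rng.normal(0, noise_std, size=8)
    return np.tanh(pre)

def decide(x, n_samples=200):
    """Repeated noisy evaluations -> decision plus an uncertainty measure."""
    votes = np.array([noisy_layer(x).sum() > 0 for _ in range(n_samples)])
    p = votes.mean()
    return p > 0.5, p * (1 - p)  # decision, Bernoulli variance as uncertainty

choice, uncertainty = decide(np.array([0.1, -0.2]))
print(choice, round(uncertainty, 3))
```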
The perspectives of the postdoctoral project should inform how future fMRI imaging and invasive and non-invasive electrophysiological recordings may be used to test theories derived from this model. Additionally, the candidate will be expected to interact with other activities at the CEA related to the development of noise-based analogue AI accelerators.
LLM hybridization for requirements engineering
Developing physical or digital systems is a complex process involving both technical and human challenges. The first step is to give shape to ideas by drafting specifications for the system to be. Usually written in natural language by business analysts, these documents are the cornerstones that bind all stakeholders together for the duration of the project, making it easier to share and understand what needs to be done. Requirements engineering proposes various techniques (reviews, modeling, formalization, etc.) to regulate this process and improve the quality (consistency, completeness, etc.) of the produced requirements, with the aim of detecting and correcting defects even before the system is implemented.
In the field of requirements engineering, the recent arrival of large language models (LLMs) has the potential to be a "game changer" [4]. We propose to support the work of the functional analyst with a tool that facilitates the writing of the requirements corpus and makes it more reliable. The tool will make use of a conversational agent of the transformer/LLM type (such as ChatGPT or Llama) combined with rigorous analysis and assistance methods. It will propose options for rewriting requirements in a format compatible with INCOSE or EARS standards, analyze the results produced by the LLM, and provide a requirements quality audit.
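A hedged sketch of such a pipeline is given below: a raw requirement is wrapped in a prompt asking for a rewrite following the generic EARS pattern "When <trigger>, the <system> shall <response>", and the answer is passed through a small lint. `call_llm` is a placeholder for whichever conversational agent is used, not a real API.

```python
# Illustrative sketch: prompt an LLM for an EARS-compliant rewrite, then run
# a cheap quality audit on the answer. `call_llm` is a placeholder for the
# chosen conversational agent (ChatGPT, Llama, ...).
import re

EARS_PROMPT = """Rewrite the following requirement using the EARS pattern
"When <trigger>, the <system> shall <response>". Keep the original meaning.

Requirement: {req}"""

def call_llm(prompt: str) -> str:
    """Placeholder: in the real tool this would query the chosen LLM."""
    return "When the door is open, the oven shall disable the heating element."

def audit_ears(requirement: str) -> list[str]:
    """Very small quality audit: check the EARS skeleton and the modal verb."""
    issues = []
    if not re.match(r"^When .+, the .+ shall .+\.$", requirement):
        issues.append("does not match the 'When ..., the ... shall ...' skeleton")
    if " should " in requirement or " must " in requirement:
        issues.append("uses a non-normative modal; EARS expects 'shall'")
    return issues

raw = "The oven must not heat if the door is open."
rewritten = call_llm(EARS_PROMPT.format(req=raw))
print(rewritten, audit_ears(rewritten) or "no issues found")
```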
Development of Algorithms for the Detection and Quantification of Biomarkers from Voltammograms
The objective of the post-doctoral research is to develop a high-performance algorithmic and software solution for the detection and quantification of biomarkers of interest from voltammograms. These voltammograms are one-dimensional signals obtained from innovative electrochemical sensors. The study will be carried out in close collaboration with another laboratory at CEA-LIST, the LIST/DIN/SIMRI/LCIM, which will provide dedicated and innovative electrochemical sensors, as well as with the start-up USENSE, which is developing a medical device for measuring multiple biomarkers in urine.
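For orientation, a classical baseline for this task might look like the following sketch: baseline-correct a synthetic voltammogram, detect the faradaic peak with scipy, and map peak current to concentration through a previously fitted linear calibration. The signal shape and calibration coefficients are invented for illustration; the project's algorithms and real sensor data will be more involved.

```python
# Hedged baseline sketch: peak detection and quantification on a synthetic
# voltammogram. Real algorithms will handle overlapping peaks, drifting
# baselines and sensor-specific effects.
import numpy as np
from scipy.signal import find_peaks

potential = np.linspace(-0.2, 0.8, 500)                       # V
current = (2.0 * np.exp(-((potential - 0.35) ** 2) / 0.002)   # biomarker peak
           + 0.3 * potential                                  # sloping baseline
           + np.random.default_rng(2).normal(0, 0.02, 500))   # noise

# Crude linear baseline removal, then peak detection.
baseline = np.polyval(np.polyfit(potential, current, 1), potential)
corrected = current - baseline
peaks, props = find_peaks(corrected, height=0.5, prominence=0.3)
peak_current = props["peak_heights"].max()

# Hypothetical calibration i_peak = a * concentration + b, fitted beforehand.
a, b = 0.4, 0.05
concentration = (peak_current - b) / a
print(f"peak at {potential[peaks[0]]:.2f} V, estimated concentration ~ {concentration:.2f} a.u.")
```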
POST-DOC/CDD X-ray tomography reconstruction based on Deep-Learning methods
CEA-LIST is developing the CIVA software platform, a benchmark for the simulation of non-destructive testing processes. In particular, it offers tools for X-ray and tomographic inspection which, for a given inspection, can simulate all radiographies, taking into account various associated physical phenomena, as well as the corresponding tomographic reconstruction. CEA-LIST also has an experimental platform for robotized X-ray tomography inspection.
The proposed work is part of the laboratory's contribution to a bilateral French-German ANR project involving academic and industrial partners, focusing on the inspection of large-scale objects using the robotized platform. A sufficient number of X-ray projections must be acquired in order to carry out a 3D reconstruction of the object. In many situations, some angles of view cannot be acquired due to the dimensions of the object and/or the motion limitations of the robots used, resulting in a loss of quality in the 3D reconstruction.
Expected contributions focus on the use of Deep-Learning methods to complete missing projections on the one hand, and to reduce reconstruction artifacts on the other. This work includes the CIVA-based steps of building a simulated database and evaluating the obtained results using POD (Probability Of Detection) measurements.
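The sketch below illustrates the underlying problem on synthetic data: a wedge of projection angles is removed from a simulated sinogram, and reconstruction quality degrades accordingly. A trained network would take such an incomplete sinogram as input and predict the missing views; the phantom, angular sampling, and missing wedge are toy choices.

```python
# Sketch of the limited-angle problem the Deep-Learning models target:
# simulate a full sinogram, drop a wedge of angles (views the robots cannot
# reach), and compare reconstructions.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (128, 128))
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=angles)

missing = (angles > 60) & (angles < 120)   # unreachable wedge of views
sparse_sinogram = sinogram.copy()
sparse_sinogram[:, missing] = 0.0          # what was actually measured

full_rec = iradon(sinogram, theta=angles)
sparse_rec = iradon(sparse_sinogram, theta=angles)
err = np.sqrt(np.mean((sparse_rec - full_rec) ** 2))
print(f"RMSE induced by the missing wedge: {err:.3f}")
```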
The candidate will have access to the facilities of the Paris Saclay research center and will be expected to promote his/her results in the form of scientific communications (international conferences, publications).
Candidate profile:
PhD in data processing or artificial intelligence.
Fluent English (oral presentations, scientific publications).
Previous knowledge of X-ray physics and tomographic reconstruction methods would be appreciated.
X-ray tomography reconstruction based on analytical methods and Deep-Learning
CEA-LIST develops the CIVA software platform, a reference for the simulation of non-destructive testing processes. In particular, it offers tools for X-ray and tomographic inspection which, for a given tomographic test, can simulate all the radiographic projections (the sinogram), taking into account various associated physical phenomena, as well as the corresponding tomographic reconstruction.
The proposed work is part of the laboratory's contribution to a European project on tomographic testing of freight containers with inspection systems using high-energy sources. The spatial constraints of the projection acquisition stage (the trucks carrying the containers pass through an inspection gantry) imply an adaptation of the geometry of the source/detector system and consequently of the corresponding reconstruction algorithm. Moreover, the system can only generate a reduced number of projections, which makes the problem ill-posed in the context of inversion.
The expected contributions concern two distinct aspects of the reconstruction methodology. On the one hand, the analytical reconstruction methods must be adapted to the specific acquisition geometry of this project; on the other hand, methods must be developed to overcome the lack of information caused by the limited number of radiographic projections. To this end, supervised learning methods, specifically Deep Learning, will be used both to complete the sinogram and to reduce the reconstruction artifacts caused by the small number of available projections. A constraint of adequacy to the data and the acquisition system will also be introduced in order to generate physically coherent projections.
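One simple reading of this adequacy constraint, sketched below under toy assumptions, is a data-consistency projection: whatever the network predicts for the full sinogram, the columns that were actually measured are forced back to their measured values, so the completion can never contradict the acquisition. The project's actual formulation (e.g. as a training loss) may differ.

```python
# Hedged sketch of a data-consistency step for sinogram completion.
import numpy as np

def enforce_data_consistency(predicted, measured, measured_mask):
    """Overwrite the predicted sinogram with the measurements where available.

    predicted:     (n_detectors, n_angles) network output
    measured:      same shape, valid only where measured_mask is True
    measured_mask: boolean (n_angles,) flagging acquired projection angles
    """
    out = predicted.copy()
    out[:, measured_mask] = measured[:, measured_mask]
    return out

rng = np.random.default_rng(3)
pred = rng.normal(size=(64, 90))
meas = rng.normal(size=(64, 90))
mask = np.zeros(90, dtype=bool)
mask[::3] = True                     # only every third angle was acquired
consistent = enforce_data_consistency(pred, meas, mask)
assert np.allclose(consistent[:, mask], meas[:, mask])
```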
Development of artificial intelligence algorithms for narrow-band localization
Narrowband (NB) radio signals are widely used in low-power wide-area (LPWA) networks, which are one of the key components of the Internet of Things (as in NB-IoT). However, because of their limited bandwidth, such signals are not well suited for accurate localization, especially in complex environments such as dense high-rise areas or urban canyons, which create signal reflections and obstructions. One approach to overcoming these difficulties is to use a 3D model of the city and its buildings in order to better predict signal propagation. Because this modelling is very complex, state-of-the-art localization algorithms cannot handle it efficiently, and new techniques based on machine learning and artificial intelligence should be considered to solve this very hard problem. The LCOI laboratory has deployed an NB-IoT network in the city of Grenoble and is currently building a very large database to support these studies.
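For scale, a purely data-driven fingerprinting baseline, sketched below on synthetic data with a log-distance path-loss model, learns position directly from per-station received-signal strength. The feature set and data layout of the real LCOI database are assumptions, and the 3D-model-aware methods targeted by the project go well beyond this.

```python
# Baseline fingerprinting sketch: k-nearest-neighbours regression from RSSI
# vectors to 2D positions, on synthetic log-distance path-loss data.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(4)
n, n_stations = 2000, 6
positions = rng.uniform(0, 1000, size=(n, 2))               # metres, toy city
stations = rng.uniform(0, 1000, size=(n_stations, 2))
d = np.linalg.norm(positions[:, None] - stations[None], axis=-1)
rssi = -30 - 35 * np.log10(d) + rng.normal(0, 6, d.shape)   # path loss + shadowing

model = KNeighborsRegressor(n_neighbors=5).fit(rssi[:1500], positions[:1500])
pred = model.predict(rssi[1500:])
err = np.linalg.norm(pred - positions[1500:], axis=1)
print(f"median localization error: {np.median(err):.0f} m")
```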
Based on an analysis of the existing literature and using the knowledge acquired in the LCOI laboratory, the researcher will:
- Contribute to and supervise the current data collection.
- Exploit the existing database to perform statistical analysis and modelling of NB-IoT signal propagation in various environments.
- Develop a toolchain to simulate signal propagation using 3D topology.
- Refine existing performance bounds through more accurate signal modelling.
- Develop and implement both real-time and offline AI-based localization algorithms using 3D topology.
- Evaluate and compare the developed algorithms against state-of-the-art (SoTA) algorithms.
- Contribute to collaborative or industrial projects through this research work.
- Publish research papers in high quality journals and conference proceedings.
Development of a digital twin of complex processes
The current emergence of new digital technologies is opening up new opportunities for industry, making production more efficient, safer, more flexible and more reliable than ever. Applying these technologies to vitrification processes could improve process knowledge, optimise operation, train operators, help with predictive maintenance and assist in process management.
The SOSIE project aims at providing a first proof of concept for the implementation of digital technologies in the field of vitrification processes, by integrating virtual reality, augmented reality, IoT (Internet of Things) and Artificial Intelligence.
This project, carried out in collaboration between the CEA and the SME GAMBI-M, is a READYNOV project. GAMBI-M is a company specialised in the reconstruction of complex environments and in digital engineering. The work will be carried out in close collaboration with the CEA teams developing the vitrification processes for nuclear waste.
The project consists of developing a digital twin of two vitrification processes and will be implemented on two platforms in parallel, one in a conventional zone and the other in a high-activity zone. The first step will be to develop a visual digital twin, the virtual 3D model of each cell, which will allow the user to visit the cells and access any point virtually. Based on this reconstructed model, an "augmented" twin will be developed and connected to the supervisory controller. Finally, the last step will be to develop the "intelligent" twin by exploiting existing databases on the operation of the process. By training machine learning algorithms on these data, a predictive model of nominal operation will be generated.
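As a sketch of this final step, assuming tabular sensor histories and using PCA reconstruction error as a stand-in for the deep-learning model to be trained: a model fitted on nominal operation flags operating points that depart from it. Sensor names, scales and thresholds below are illustrative only.

```python
# Sketch of the "intelligent twin" step: fit a model of nominal operation on
# historical process data and flag departures from it.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
# Toy historical data: temperature, glass level, power (nominal operation).
nominal = rng.normal([1100.0, 0.8, 250.0], [15.0, 0.05, 10.0], size=(5000, 3))

pca = PCA(n_components=2).fit(nominal)

def anomaly_score(x):
    """Reconstruction error under the nominal-operation model."""
    rec = pca.inverse_transform(pca.transform(x))
    return np.linalg.norm(x - rec, axis=1)

threshold = np.percentile(anomaly_score(nominal), 99.5)
new_batch = np.array([[1102.0, 0.81, 251.0],   # looks nominal
                      [1180.0, 0.55, 310.0]])  # drifting process
print(anomaly_score(new_batch) > threshold)    # expect [False  True]
```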
Publications are expected on the implementation of virtual reality and augmented reality tools in shielded chain operations, as well as on the development of deep learning methods to assist in the control of such complex processes.
Hybrid CMOS / spintronic circuits for Ising machines
The proposed research project is related to the search for hardware accelerators for solving NP-hard optimization problems. Such problems, for which finding exact solutions in polynomial time is out of reach for deterministic Turing machines, find many applications in diverse fields such as logistic operations, circuit design, medical diagnosis, Smart Grid management etc.
One approach in particular is derived from the Ising model and is based on the evolution (and convergence) of a set of binary states within an artificial neural network (ANN). In order to improve convergence speed and accuracy, the network elements may benefit from an intrinsic and adjustable source of fluctuations. Recent proof-of-concept work highlights the interest of implementing such neurons with stochastic magnetic tunnel junctions (MTJs).
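In software, the principle can be sketched as follows: each binary spin updates stochastically according to a sigmoid of its local field (the role a stochastic MTJ plays in hardware), and ramping down the noise anneals the network toward low-energy states of the Ising problem. The coupling matrix and annealing schedule below are toy choices.

```python
# Software sketch of the stochastic-neuron ("p-bit") principle for an
# Ising machine: Gibbs-style updates with a sigmoidal flip probability,
# annealed by increasing the inverse temperature.
import numpy as np

rng = np.random.default_rng(6)
n = 16
J = rng.normal(0, 1, (n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
s = rng.choice([-1, 1], size=n)

def energy(s):
    return -0.5 * s @ J @ s

for beta in np.linspace(0.1, 3.0, 2000):      # inverse-temperature schedule
    i = rng.integers(n)
    h_i = J[i] @ s                             # local field on spin i
    p_up = 1 / (1 + np.exp(-2 * beta * h_i))   # sigmoidal flip probability
    s[i] = 1 if rng.random() < p_up else -1    # stochastic (MTJ-like) decision

print("final energy:", round(float(energy(s)), 2))
```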
The main goals will be the simulation, dimensioning and fabrication of hybrid CMOS/MTJ elements. The test vehicles will then be characterized in order to validate their functionality.
This work will be carried out in the frame of a scientific collaboration between CEA-Leti and Spintec.