Generative AI for model-driven engineering

Generative AI and large language models (LLMs), such as Copilot and ChatGPT, can complete code based on initial fragments written by a developer. They are integrated into software development environments such as VS Code. Many papers analyse the advantages and limitations of these approaches for code generation. Despite some deficiencies, the produced code is often correct, and the results keep improving.

However, surprisingly little work has been done in the context of software modeling. Cámara et al. conclude that, while the performance of current LLMs for software modeling is still limited (in contrast to code generation), we should nevertheless adapt our model-based engineering practices to these new assistants and integrate them into MBSE methods and tools.

The goal of this post-doc is to explore generative AI in the context of system modeling and the associated tool support. For instance, AI assistance can support completion, refactoring and analysis (e.g. identifying design patterns or anti-patterns) at the model level. Propositions will be discussed within the team and, in a second step, prototyped and evaluated in the context of the open-source UML modeler Papyrus.
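A minimal sketch of what such model-level assistance could look like, assuming a generic call_llm helper (standing in for any chat-completion API) and PlantUML as a textual surrogate for the UML models handled by Papyrus; neither choice is prescribed by the project:

PARTIAL_MODEL = """\
@startuml
class Order {
  +id: String
  +total(): Money
}
class Customer
' TODO: relationships and payment handling are still missing
@enduml
"""

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call (Copilot-/ChatGPT-like assistant)."""
    raise NotImplementedError("plug in the API of your choice")

def complete_model(partial_plantuml: str) -> str:
    """Ask the assistant to complete a partial class diagram at the model level."""
    prompt = (
        "You are a UML modeling assistant. Complete the following PlantUML class "
        "diagram: add the missing associations and any class needed to handle "
        "payments. Return only valid PlantUML.\n\n" + partial_plantuml
    )
    return call_llm(prompt)

# completed = complete_model(PARTIAL_MODEL)  # requires a real LLM backend

The same prompt-and-check pattern extends to refactoring ("extract an interface from these classes") or analysis ("which design patterns or anti-patterns does this model contain?"); the prototyping work would wire such calls into Papyrus.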

Development of noise-based artificial intelligence approaches

Current approaches to AI are largely based on extensive vector-matrix multiplication. In this postdoctoral project we would like to pose the question: what comes next? Specifically, we would like to study whether (stochastic) noise could be the computational primitive that a new generation of AI is built upon. This question will be addressed in two steps. First, we will explore theories regarding the computational role of microscopic and system-level noise in neuroscience, as well as how noise is increasingly leveraged in machine learning and artificial intelligence. We aim to establish concrete links between these two fields and, in particular, to explore the relationship between noise and uncertainty quantification.
Building on this, the postdoctoral researcher will then develop new models that leverage noise to carry out cognitive tasks, of which uncertainty is an intrinsic component. This will not only serve as an AI approach, but should also serve as a computational tool to study cognition in humans, and as a model of specific brain areas known to participate in different aspects of cognition, from perception to learning, decision making and uncertainty quantification.
Perspectives of the postdoctoral project should inform how future fMRI imaging and invasive and non-invasive electrophysiological recordings may be used to test theories derived from this model. Additionally, the candidate will be expected to interact with other activities at the CEA related to the development of noise-based analogue AI accelerators.
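As a toy illustration of the noise-as-computational-primitive idea, the sketch below (illustrative only; the architecture, noise level and task are arbitrary assumptions) uses repeated forward passes of a network with noisy hidden units so that a single decision becomes a distribution whose spread quantifies uncertainty:

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 2)) / np.sqrt(2)   # input -> hidden
W2 = rng.normal(size=(2, 16)) / np.sqrt(16)  # hidden -> 2 choices
SIGMA = 0.5                                  # internal noise amplitude

def noisy_decision(x, n_samples=200):
    """Sample the network's decision many times under internal noise."""
    choices = []
    for _ in range(n_samples):
        h = np.tanh(W1 @ x + SIGMA * rng.normal(size=16))  # noisy hidden layer
        logits = W2 @ h
        choices.append(int(np.argmax(logits)))
    p = np.bincount(choices, minlength=2) / n_samples
    entropy = -(p[p > 0] * np.log(p[p > 0])).sum()         # decision uncertainty
    return p, entropy

p, H = noisy_decision(np.array([0.2, -0.1]))
print(f"choice probabilities {p}, entropy {H:.3f} nats")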

Co-design strategy (SW/HW) to enable structured spatio-temporal sparsity for NN inference/learning

The goal of the project is to identify, analyze and evaluate mechanisms for modulating the spatio-temporal sparsity of activations in order to minimize the computational load of transformer NN models (learning/inference). A combined approach with extreme quantization will also be considered.
The aim is to jointly refine an innovative strategy to assess the impact and potential gains of these mechanisms on model execution under hardware constraints. In particular, this co-design should make it possible to qualify and exploit a bidirectional feedback loop between a targeted neural network and a hardware instantiation in order to achieve the best trade-off (compactness/latency).
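A minimal sketch of one such mechanism, assuming a plain transformer MLP block and a token-wise top-k mask as the sparsity modulation knob (the keep ratio is the parameter whose accuracy/compute/hardware trade-off would actually be studied):

import torch
import torch.nn as nn

class TopKSparseMLP(nn.Module):
    def __init__(self, d_model=256, d_hidden=1024, k_ratio=0.25):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_model)
        self.k = max(1, int(k_ratio * d_hidden))

    def forward(self, x):                      # x: (batch, seq, d_model)
        h = torch.relu(self.fc1(x))            # (batch, seq, d_hidden)
        # keep only the k largest activations per token -> structured sparsity
        topk = torch.topk(h, self.k, dim=-1)
        mask = torch.zeros_like(h).scatter_(-1, topk.indices, 1.0)
        h = h * mask                           # zeroed units need not be computed downstream
        return self.fc2(h)

x = torch.randn(2, 8, 256)
mlp = TopKSparseMLP()
y = mlp(x)
print(y.shape, "activation sparsity:", 1 - mlp.k / 1024)

Because the mask is structured per token, the zeroed activations correspond to computations that a suitable hardware instantiation could skip entirely, which is the kind of bidirectional feedback the co-design aims to exploit.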

LLM hybridization for requirements engineering

Developing physical or digital systems is a complex process involving both technical and human challenges. The first step is to give shape to ideas by drafting specifications for the system to be. Usually written in natural language by business analysts, these documents are the cornerstones that bind all stakeholders together for the duration of the project, making it easier to share and understand what needs to be done. Requirements engineering proposes various techniques (reviews, modeling, formalization, etc.) to regulate this process and improve the quality (consistency, completeness, etc.) of the produced requirements, with the aim of detecting and correcting defects even before the system is implemented.
In the field of requirements engineering, the recent arrival of very large neural network models (LLMs) has the potential to be a "game changer" [4]. We propose to support the work of the functional analyst with a tool that facilitates the writing of the requirements corpus and makes it more reliable. The tool will make use of a conversational agent of the transformer/LLM type (such as ChatGPT or Llama) combined with rigorous analysis and assistance methods. It will propose options for rewriting requirements in a format compatible with the INCOSE or EARS guidelines, analyze the results produced by the LLM, and provide a requirements quality audit.
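A minimal sketch of the intended hybridization, assuming a generic call_llm helper standing in for ChatGPT or Llama and a deliberately naive rule-based audit; the actual tool would rely on much richer analyses aligned with the INCOSE and EARS guidance:

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to the chosen LLM."""
    raise NotImplementedError("plug in the API of your choice")

def rewrite_to_ears(requirement: str) -> str:
    """Ask the LLM to reformulate a raw requirement as an EARS-style sentence."""
    prompt = (
        "Rewrite the following requirement as a single EARS-compliant sentence "
        "(use one of the EARS templates and the keyword 'shall'):\n" + requirement
    )
    return call_llm(prompt)

def audit(requirement: str) -> list[str]:
    """Cross-check the LLM output with simple, deterministic quality rules."""
    findings = []
    if "shall" not in requirement.lower():
        findings.append("missing 'shall' (mandatory in EARS templates)")
    for vague in ("fast", "user-friendly", "as appropriate", "etc."):
        if vague in requirement.lower():
            findings.append(f"vague term: '{vague}'")
    return findings

print(audit("The system should respond fast."))  # -> two findings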

POST-DOC/CDD X-ray tomography reconstruction based on Deep-Learning methods

CEA-LIST is developing the CIVA software platform, a reference for the simulation of non-destructive testing processes. In particular, it offers tools for X-ray and tomographic inspection which, for a given inspection, can simulate all the radiographic projections, taking into account the various associated physical phenomena, as well as the corresponding tomographic reconstruction. CEA-LIST also has an experimental platform for robotized X-ray tomography inspection.
The proposed work is part of the laboratory's contribution to a bilateral French-German ANR project involving academic and industrial partners, focusing on the inspection of large-scale objects using the robotized platform. A sufficient number of X-ray projections must be acquired in order to carry out a 3D reconstruction of the object. In many situations, some angles of view cannot be acquired due to the dimensions of the object and/or the motion limitations of the robots used, resulting in a loss of quality in the 3D reconstruction.
Expected contributions focus on the use of Deep-Learning methods to complete missing projections on the one hand, and to reduce reconstruction artifacts on the other. This work includes the CIVA-based steps of building a simulated database and evaluating the obtained results using POD (Probability Of Detection) measurements.
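As an illustration of the projection-completion part, the sketch below trains a small convolutional network to fill in masked angular ranges of a sinogram; the shapes, mask pattern and architecture are arbitrary assumptions, whereas in the project the training pairs would come from the CIVA-simulated database and the results would be judged with POD measurements:

import torch
import torch.nn as nn

class SinogramInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),   # input: masked sinogram + mask
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, sino_masked, mask):
        return self.net(torch.cat([sino_masked, mask], dim=1))

# toy training pair: (batch, 1, n_angles, n_detectors), with some angle rows missing
full = torch.rand(4, 1, 180, 128)
mask = torch.ones_like(full)
mask[:, :, 60:90, :] = 0.0                                  # views the robots cannot reach
model = SinogramInpainter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                                          # a few demo steps
    pred = model(full * mask, mask)
    loss = ((pred - full)[mask == 0] ** 2).mean()           # penalize only the missing views
    opt.zero_grad(); loss.backward(); opt.step()
print(f"demo loss: {loss.item():.4f}")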
The candidate will have access to the facilities of the Paris Saclay research center and will be expected to promote his/her results in the form of scientific communications (international conferences, publications).
Candidate profile:
PhD in data processing or artificial intelligence.
Fluent English (oral presentations, scientific publications).
Previous knowledge of X-ray physics and tomographic reconstruction methods would be appreciated.

Development of Algorithms for the Detection and Quantification of Biomarkers from Voltammograms

The objective of the post-doctoral research is to develop a high-performance algorithmic and software solution for the detection and quantification of biomarkers of interest from voltammograms. These voltammograms are one-dimensional signals obtained from innovative electrochemical sensors. The study will be carried out in close collaboration with another laboratory at CEA-LIST, the LIST/DIN/SIMRI/LCIM, which will provide dedicated and innovative electrochemical sensors, as well as with the start-up USENSE, which is developing a medical device for measuring multiple biomarkers in urine.
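A minimal sketch of the kind of pipeline that could be explored, assuming a 1D-CNN regressor and synthetic Gaussian-peak voltammograms standing in for the real sensor data (the actual signal models, biomarkers and performance targets will come from the collaboration with LCIM and USENSE):

import torch
import torch.nn as nn

class VoltammogramRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, 9, padding=4), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)   # predicted concentration

    def forward(self, x):              # x: (batch, 1, n_points)
        return self.head(self.features(x).squeeze(-1))

def synthetic_batch(batch=32, n_points=500):
    """Toy voltammograms: peak height proportional to concentration, plus noise."""
    v = torch.linspace(-1, 1, n_points)
    conc = torch.rand(batch, 1)
    current = conc * torch.exp(-((v - 0.2) ** 2) / 0.01) + 0.05 * torch.randn(batch, n_points)
    return current.unsqueeze(1), conc

model = VoltammogramRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(20):
    x, y = synthetic_batch()
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"training MSE after a few steps: {loss.item():.4f}")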

X-ray tomography reconstruction based on analytical methods and Deep-Learning

CEA-LIST develops the CIVA software platform, a reference for the simulation of non-destructive testing processes. In particular, it offers tools for X-ray and tomographic inspection which, for a given tomographic inspection, make it possible to simulate all the radiographic projections (i.e. the sinogram), taking into account the various associated physical phenomena, as well as the corresponding tomographic reconstruction.
The proposed work is part of the laboratory's contribution to a European project on tomographic testing of freight containers with inspection systems using high-energy sources. The spatial constraints of the projection acquisition stage (the trucks carrying the containers pass through an inspection gantry) imply an adaptation of the geometry of the source/detector system and consequently of the corresponding reconstruction algorithm. Moreover, the system can only generate a reduced number of projections, which makes the problem ill-posed in the context of inversion.
The expected contributions concern two distinct aspects of the reconstruction methodology. On the one hand, the analytical reconstruction methods must be adapted to the specific acquisition geometry of this project; on the other hand, methods must be developed to compensate for the lack of information caused by the limited number of radiographic projections. To this end, supervised learning methods, more specifically Deep-Learning, will be used both to complete the sinogram and to reduce the reconstruction artifacts caused by the small number of available projections. A consistency constraint with respect to the data and the acquisition system will also be introduced in order to generate physically coherent projections.
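The sketch below illustrates the ill-posedness of few-view reconstruction and where a learned stage would plug in, assuming a parallel-beam geometry and scikit-image's radon/iradon as stand-ins for the project-specific gantry geometry; the Deep-Learning stage is only indicated by a placeholder:

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (128, 128))
theta_full = np.linspace(0.0, 180.0, 180, endpoint=False)
theta_few = theta_full[::6]                       # only 30 views: ill-posed inversion

sino_few = radon(image, theta=theta_few)
fbp_few = iradon(sino_few, theta=theta_few)       # streak artifacts expected here

def learned_artifact_reduction(recon):
    """Placeholder for a trained network (e.g. residual CNN) applied to the FBP image."""
    return recon                                  # identity in this sketch

recon = learned_artifact_reduction(fbp_few)
rmse = np.sqrt(np.mean((recon - image) ** 2))
print(f"few-view FBP RMSE vs. phantom: {rmse:.4f}")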

Deep learning methods with Bayesian-based uncertainty quantification for the emulation of CPU-expensive numerical simulators

In the context of uncertainty propagation in numerical simulations, substitute mathematical models, called metamodels or emulators, are used to replace a physico-numerical model by a statistical (or machine) learning model. Such a metamodel is trained on a set of available simulations of the model and mainly relies on machine learning (ML) algorithms. Among the usual ML methods, Gaussian process (GP) metamodels have attracted much interest since they provide both a prediction and an uncertainty for the output, which is very appealing in the context of safety studies or risk assessments. However, GP metamodels have limitations, especially in the case of very irregular models. The objective of the post-doctorate will be to study the applicability and potential of Bayesian deep learning approaches to overcome these limitations. The work will focus on Bayesian neural networks and deep GPs and will consist in studying their tractability on medium-size samples, evaluating their benefit compared to shallow GPs, and assessing the reliability of the uncertainty associated with their predictions.
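A minimal sketch of the prediction-plus-uncertainty behaviour expected from such approaches, using Monte-Carlo dropout as a cheap stand-in for the Bayesian neural networks and deep GPs to be studied (the toy 1D "simulator" and sample sizes are arbitrary assumptions):

import torch
import torch.nn as nn

class MCDropoutEmulator(nn.Module):
    def __init__(self, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

# toy "expensive simulator": an irregular 1D response observed at few design points
x_train = torch.rand(80, 1) * 6 - 3
y_train = torch.sin(3 * x_train) + 0.3 * torch.sign(x_train) + 0.05 * torch.randn_like(x_train)

model = MCDropoutEmulator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    loss = nn.functional.mse_loss(model(x_train), y_train)
    opt.zero_grad(); loss.backward(); opt.step()

# prediction + uncertainty: keep dropout active and average many stochastic passes
model.train()                                     # dropout stays on at prediction time
x_new = torch.linspace(-3, 3, 5).unsqueeze(1)
with torch.no_grad():
    samples = torch.stack([model(x_new) for _ in range(200)])
mean, std = samples.mean(0).squeeze(), samples.std(0).squeeze()
for xi, m, s in zip(x_new.squeeze(), mean, std):
    print(f"x={xi.item():+.2f}  prediction={m.item():+.3f}  uncertainty(std)={s.item():.3f}")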

High-entropy alloy determination (predictive thermodynamics and machine learning) and fast elaboration by Spark Plasma Sintering

The proposed work aims to create an integrated system combining a computational thermodynamics algorithm (CALPHAD-type, i.e. calculation of phase diagrams) with a multi-objective optimization algorithm (genetic, Gaussian or other) and data-mining techniques, in order to select and optimize compositions of high-entropy alloys in a six-element system: Fe-Ni-Co-Cr-Al-Mo.
In association with the computational methods, fast fabrication and characterization of samples (hardness, density, grain size) will support the selection process. Optimization and validation of the alloy compositions will be oriented towards two industrial use cases: structural alloys (replacement of Ni-based alloys) and corrosion protection against molten salts (nuclear applications).
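A minimal sketch of the selection loop: candidate Fe-Ni-Co-Cr-Al-Mo compositions are sampled on the composition simplex and filtered for Pareto optimality against two placeholder objectives; in the real workflow the objectives would come from CALPHAD calculations and from the fast characterization data (hardness, density, grain size):

import numpy as np

ELEMENTS = ["Fe", "Ni", "Co", "Cr", "Al", "Mo"]
rng = np.random.default_rng(0)

def sample_compositions(n):
    """Uniform samples of atomic fractions summing to 1 (Dirichlet on the simplex)."""
    return rng.dirichlet(np.ones(len(ELEMENTS)), size=n)

def objectives(comp):
    """Placeholder objectives standing in for CALPHAD/experimental responses."""
    cost = comp @ np.array([1.0, 9.0, 15.0, 5.0, 2.0, 12.0])       # fictitious cost index
    mixing_entropy = -np.sum(comp * np.log(comp + 1e-12), axis=1)  # to be maximized
    return np.column_stack([cost, -mixing_entropy])                # both to be minimized

def pareto_front(scores):
    """Indices of non-dominated candidates (all objectives to be minimized)."""
    keep = []
    for i, s in enumerate(scores):
        dominated = np.any(np.all(scores <= s, axis=1) & np.any(scores < s, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

comps = sample_compositions(500)
front = pareto_front(objectives(comps))
print(f"{len(front)} Pareto-optimal candidates out of {len(comps)}")
print(dict(zip(ELEMENTS, np.round(comps[front[0]], 3))))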

Development of artificial intelligence algorithms for narrow-band localization

Narrowband (NB) radio signals are widely used in low-power wide-area (LPWA) networks, one of the key components of the Internet of Things (NB-IoT). However, because of their limited bandwidth, such signals are not well suited for accurate localization, especially in complex environments such as high-rise areas or urban canyons, which create signal reflections and obstructions. One approach to overcome these difficulties is to use a 3D model of the city and its buildings in order to better predict the signal propagation. Because this modelling is very complex, state-of-the-art localization algorithms cannot handle it efficiently, and new techniques based on machine learning and artificial intelligence should be considered to solve this very hard problem. The LCOI laboratory has deployed an NB-IoT network in the city of Grenoble and is currently building a very large database to support these studies.
Based on an analysis of the existing literature and using the knowledge acquired in the LCOI laboratory, the researcher will:
- Contribute to and supervise the current data collection.
- Exploit the existing database to perform statistical analysis and modelling of NB-IoT signal propagation in various environments.
- Develop a toolchain to simulate signal propagation using 3D topology.
- Refine existing performance bounds through a more accurate signal modelling.
- Develop and implement real-time as well as offline AI-based localization algorithms using the 3D topology (a minimal baseline sketch is given after this list).
- Evaluate and compare the developed algorithms against state-of-the-art algorithms.
- Contribute to collaborative or industrial projects through this research work.
- Publish research papers in high quality journals and conference proceedings.
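A minimal baseline sketch for the localization task, assuming a fingerprinting approach with weighted k-nearest neighbours, four fictitious base stations and a synthetic log-distance path-loss model; the actual work would exploit the Grenoble NB-IoT database and the 3D city model:

import numpy as np

rng = np.random.default_rng(1)
BS = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000]], dtype=float)  # base stations (m)

def rssi(pos, shadowing_db=6.0):
    """Log-distance path loss plus log-normal shadowing, one value per base station."""
    d = np.linalg.norm(BS - pos, axis=1) + 1.0
    return -30.0 - 35.0 * np.log10(d) + shadowing_db * rng.normal(size=len(BS))

# fingerprint database: grid of known positions with their measured RSSI vectors
grid = np.array([[x, y] for x in range(0, 1001, 50) for y in range(0, 1001, 50)], dtype=float)
fingerprints = np.array([rssi(p) for p in grid])

def locate(measurement, k=5):
    """Weighted k-NN in RSSI space -> estimated position."""
    dist = np.linalg.norm(fingerprints - measurement, axis=1)
    nearest = np.argsort(dist)[:k]
    w = 1.0 / (dist[nearest] + 1e-6)
    return (w[:, None] * grid[nearest]).sum(axis=0) / w.sum()

true_pos = np.array([430.0, 615.0])
estimate = locate(rssi(true_pos))
print(f"true {true_pos}, estimated {np.round(estimate, 1)}, "
      f"error {np.linalg.norm(estimate - true_pos):.1f} m")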
