Artificial Intelligence applied to Ion Beam Analysis
A one-year postdoctoral research position is open at the Laboratory for Light Element Studies (LEEL, CEA/DRF) and the Data Science for Decision Laboratory (LS2D, DRT/LIST). It focuses on AI- and machine-learning-based data processing, here in the scope of Ion Beam Analysis (IBA).
In the context of this project, the successful candidate will have to fulfill the following tasks:
1- Design of a multispectral dictionary.
2- Learning module development.
3- Main code programming.
4- Development of a module dedicated to multispectral mappings.
5- Benchmarking.
The postdoctoral research associate will be hosted and supervised within LEEL and LS2D.
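By way of illustration, here is a minimal sketch of what tasks 1 and 2 could involve, assuming scikit-learn and purely synthetic spectra; the channel count, dictionary size and sparsity level are illustrative, not project parameters:

```python
# A minimal dictionary-learning sketch on fake multispectral data.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
# Synthetic IBA-like data: 200 spectra with 128 energy channels each.
spectra = np.abs(rng.normal(size=(200, 128)))

# Learn a dictionary of 16 elementary spectral signatures; each measured
# spectrum is then approximated as a sparse combination of them.
dico = DictionaryLearning(n_components=16, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, random_state=0)
codes = dico.fit_transform(spectra)   # sparse coefficients (200 x 16)
atoms = dico.components_              # learned dictionary (16 x 128)
reconstruction = codes @ atoms        # approximate spectra
print("mean reconstruction error:",
      float(np.mean((spectra - reconstruction) ** 2)))
```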
Detection of cyber-attacks in a smart multi-sensor embedded system for soil monitoring
The post-doc is concerned with the application of machine learning methods to detect potential cyber-security attacks on a connected multi-sensor system. The application domain is agriculture, where CEA Leti has several projects, among which the H2020 project SARMENTI (Smart multi-sensor embedded and secure system for soil nutrient and gaseous emission monitoring). The objective of SARMENTI is to develop and validate a secure, low-power multi-sensor system connected to the cloud, to perform in situ soil nutrient analysis and to provide decision support to farmers by monitoring soil fertility in real time. Within this topic, the post-doc is concerned with the cyber-security analysis to determine the main risks in our multi-sensor case and with the investigation of an attack detection module. The underlying detection algorithm will be based on anomaly detection, e.g., a one-class classifier. The work has three parts: implementing the probes that monitor selected events, the communication infrastructure that connects the probes to the detector, and the detector itself.
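By way of illustration, a minimal sketch of such a one-class anomaly detector, assuming scikit-learn; the probe features used here (message rate, payload size, CPU load) are purely illustrative, as the actual probes and events remain to be defined in the project:

```python
# One-class anomaly detection sketch on synthetic "normal" probe data.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(loc=[10.0, 64.0, 0.3], scale=[1.0, 5.0, 0.05],
                    size=(500, 3))

# Train on attack-free traffic only: the one-class model learns the
# boundary of "normal" behaviour without any labelled attacks.
detector = OneClassSVM(nu=0.01, kernel="rbf", gamma="scale").fit(normal)

# At run time, each new event vector is scored: -1 flags an anomaly.
event = np.array([[10.2, 300.0, 0.9]])  # e.g. oversized payload, high CPU
print("attack suspected" if detector.predict(event)[0] == -1 else "ok")
```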
Numerical quality analysis of simulation codes with CADNA, Verificarlo and Verrou
Numerical codes rely on floating-point arithmetic to represent real numbers and the operations applied to them. However, in general, real numbers cannot be represented exactly by floating-point numbers, and the finite precision of floating-point arithmetic leads to round-off errors that may accumulate. With increasing computational power, the growing complexity of algorithms, and the coupling of numerical codes, it is crucial to quantify the numerical robustness of an application or an algorithm.
CADNA [1], Verificarlo [2] and Verrou [3] are dedicated tools that allow estimating round-off error propagation and measuring the numerical accuracy of the obtained results. The objective of this work is to use these three tools on GYSELA [4, 5], a simulation code used to characterize plasma dynamics in tokamaks, and PATMOS [6], a mini-app representative of a Monte Carlo neutron transport code. This analysis will be aimed at assessing the numerical robustness of these two applications or of some of their algorithms. In addition to the analysis of numerical quality, these tools will also be used to see whether the precision of some algorithms can be lowered (single or even half precision instead of double), thus reducing the memory footprint and/or improving performance (vectorization, communications). Beyond the lessons learnt on the two analyzed codes, a second objective will be the elaboration of a methodology that could be more generic and applied more broadly to other codes.
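For illustration, a small self-contained example of the phenomenon these tools quantify (numpy is assumed; CADNA, Verificarlo and Verrou instrument real codes rather than toy loops): the same naive summation drifts from the exact result as precision decreases.

```python
# Naive accumulation of 0.1, one hundred thousand times, in three precisions.
import numpy as np

x = np.full(100_000, 0.1)
for dtype in (np.float16, np.float32, np.float64):
    s = dtype(0.0)
    for v in x.astype(dtype):
        s += v  # round-off error accumulates at each addition
    print(f"{np.dtype(dtype).name}: {float(s)}")  # exact value: 10000.0
```

In half precision the sum even stalls far below the exact value once the accumulator grows large enough that adding 0.1 falls below one unit in the last place.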
Conversational Agent for Medical Serious Games
The LVIC laboratory participates in a research project that aims to develop innovative tools for teaching medical students. The training will be done through second-generation serious games, in which the learner can interact directly with the environment:
- immersed in a 3D environment with a Virtual Reality Head Mounted Display and motion detection,
- with natural, ecologically valid handling of the environment (instruments, patient…),
- and with voice interaction with conversational, emotional avatars.
The multimedia team of the LVIC laboratory is involved in the project to develop tools that allow students to interact in natural language with conversational avatars.
In this context, the post-doctoral researcher will be in charge of:
- studying the state of the art of conversational agents;
- understanding and mastering the laboratory's language-processing components;
- proposing and developing a dialogue system allowing natural-language interaction with the project's conversational avatars.
Application of ontology and knowledge engineering to complex system engineering
Model-Based System Engineering relies on using various formal descriptions of the system to support prediction, analysis, automation, simulation, etc. However, these descriptions are mostly distributed across heterogeneous silos. The analysis and exploitation of the information remain confined to their silos and thereby miss the big picture. Crosscutting insights remain hidden.
To overcome this problem, ontologies and knowledge engineering techniques provide promising solutions that have been acknowledged by academic work. These techniques notably help in giving access to a complete digital twin of the system thanks to their federation capabilities, in making sense of the information by linking it to existing formal knowledge, and in exploring it and uncovering inconsistencies thanks to reasoning capabilities.
The objective of this work will be to propose an approach that gives access to a complete digital twin federated with knowledge engineering technologies. The opportunities and limits of the approach will be evaluated on industrial use cases.
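By way of illustration, a minimal sketch of the federation idea, assuming rdflib; the namespace, properties, individuals and threshold below are hypothetical. Two "silos" are merged into one RDF graph, and a single SPARQL query then crosses their boundary:

```python
# Federating two engineering silos in one RDF graph and querying across them.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/sys#")  # hypothetical namespace

g = Graph()
# Silo 1: the requirements model knows which requirement constrains which part.
g.add((EX.req42, RDF.type, EX.Requirement))
g.add((EX.req42, EX.constrains, EX.pump1))
# Silo 2: the simulation model knows the part's predicted temperature.
g.add((EX.pump1, RDF.type, EX.Component))
g.add((EX.pump1, EX.maxSimulatedTemp, Literal(95.0)))

# Crosscutting question neither silo can answer alone: which requirements
# constrain a component whose simulated temperature exceeds 90 degrees?
q = """
SELECT ?req ?comp ?temp WHERE {
  ?req a ex:Requirement ; ex:constrains ?comp .
  ?comp ex:maxSimulatedTemp ?temp .
  FILTER (?temp > 90.0)
}
"""
for row in g.query(q, initNs={"ex": EX}):
    print(f"{row.req} constrains {row.comp} (simulated temp {row.temp})")
```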
Software and hardware combined acceleration solution for operations research algorithms
The purpose of the study is to prepare the next generation of OR (operations research) solvers. We will study the possibility of FPGA-based hardware acceleration to run some or all of an OR algorithm. The blocks for which such a solution is not effective can be parallelized and executed on a standard computing platform. The proposed runtime therefore corresponds to a standard computing platform integrating an FPGA. Accessing this platform requires a set of tools, which should provide features such as (a) analyzing and pre-compiling an input OR problem or sub-problem, (b) HW/SW partitioning and dedicated logic optimization, and finally (c) generating a software executable and a bitstream.
The first step will be to identify OR algorithms that are well suited to hardware acceleration. We will then propose HW/SW partitioning methodologies for different classes of algorithms.
These results will be consolidated into a compilation prototype that starts from an OR instance and generates a software executable and a bitstream. The prototype will be executed on a computing platform integrating an FPGA to evaluate the performance gain and the impact on energy consumption of the proposed solution.
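For illustration only (this is not the project's toolchain), here is a toy OR kernel whose regular, data-independent inner loop is the kind of block that maps well to an FPGA pipeline, while irregular, control-heavy parts of a solver would remain in software:

```python
# 0/1 knapsack by dynamic programming: the inner sweep has a fixed,
# regular dataflow, which is what makes such blocks good FPGA candidates.
def knapsack(values, weights, capacity):
    # best[c] = best value achievable with total weight <= c
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Reverse sweep so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```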
Apprenticeship Learning Platform deployment for industrial applications
This project aims at developing a demonstrator that integrates state-of-the-art technologies and at improving it on a use case representative of the industrial world.
The demonstrator will consist of a robotic/cobotic arm coupled to an acquisition sensor (RGBD type). This device will be positioned in a workspace made of a rack/shelf containing objects/pieces of various shapes and qualities (materials, densities, colors, ...), in front of which a conveyor prototype typical of industrial installations will be placed. The tasks expected to be carried out by the demonstrator will be "pick and place" tasks, in which an object has to be identified on the shelf and then placed on the conveyor.
This type of demonstrator will be closer to the real industrial conditions of use than the "toy" examples used in the academic field.
This demonstrator will focus first on short-term effectiveness based on state-of-the-art technologies for both hardware and software, for a use case representative of the industrial world.
At first, it will thus be less focused on the evolution of the algorithms used than on the adaptation of their parameters, the injection of context-dependent a priori knowledge that makes it possible to reduce the high-dimensional input space, etc.
3D occupancy grid analysis with a deep learning approach
The context of this subject is the development of autonomous vehicles / drones / robots.
The vehicle environment is represented by a 3D occupancy grid, in which each cell contains the probability of presence of an object. This grid is refreshed over time, thanks to sensor data (Lidar, Radar, Camera).
Higher-level algorithms, such as path planning or collision avoidance, reason in terms of objects described by their path, speed, and nature. It is thus mandatory to extract these objects from individual grid cells, through clustering, classification, and tracking.
Many previous publications on this topic come from the context of vision processing, many of them using deep learning. They exhibit high computational complexity and do not exploit the specific characteristics of occupancy grids (lack of texture, a priori knowledge of areas of interest…). We want to explore new techniques, tailored to occupancy grids and more compatible with embedded, low-cost implementations.
The objective of the subject is to determine, from a series of 3D occupancy grids, the number and nature of the different objects, as well as their position and velocity vector, exploiting recent advances of deep learning on unstructured 3D data.
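By way of illustration, a minimal sketch of the grid representation itself (not of the deep network), assuming the usual log-odds formulation for probabilistic occupancy; the grid size and evidence values are illustrative:

```python
# 3D occupancy grid with Bayesian log-odds updates from sensor frames.
import numpy as np

grid = np.zeros((64, 64, 16))   # log-odds per cell, initially p = 0.5

L_OCC, L_FREE = 0.85, -0.4      # illustrative per-measurement evidence

def update(grid, hits, misses):
    """Fuse one sensor frame: 'hits' and 'misses' are boolean cell masks."""
    grid[hits] += L_OCC          # evidence for occupancy
    grid[misses] += L_FREE       # evidence for free space
    return grid

def probability(grid):
    return 1.0 / (1.0 + np.exp(-grid))  # log-odds -> probability per cell

hits = np.zeros_like(grid, dtype=bool)
hits[30:34, 30:34, 2:6] = True           # cells hit by returns this frame
grid = update(grid, hits, ~hits)
print(probability(grid).max(), probability(grid).min())
```

The clustering/classification/tracking stage mentioned above would then consume such per-cell probabilities rather than raw sensor data.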
Distributed optimal planning of energy resources. Application to district heating
District heating networks in France feed more than one million homes and deliver a quantity of heat equal to about 5% of the heat consumed by the residential and tertiary sector. They therefore represent a significant potential for the massive introduction of renewable and recovery energy. However, heating networks are complex systems that must manage large numbers of energy consumers and producers, and that are distributed over extended, highly branched geographical zones. The aim of the STRATEGE project, carried out in collaboration between CEA-LIST and CEA-LITEN, is to implement optimal, dynamic management of heating networks. We propose a multidisciplinary approach that integrates advanced network management using Multi-Agent Systems (MAS) and considers simplified physical models of heat transport and recovery developed in Modelica.
The post-doc's goal is to design planning and optimization mechanisms for the allocation of heat resources that take into account geographical information from a GIS and the predictions of consumption, production and losses calculated with the physical models. Several characteristics of the network will be considered: the continuous and dynamic nature of the resource; sources with different behaviors, capabilities and production costs; the dependence of consumption/production on external factors (weather, energy price); and the internal characteristics of the network (losses, storage capacity). The developed algorithms will be implemented in an existing MAS management platform and will constitute the main building block of a decision-support engine for the management of heating systems. It will initially operate in a simulated environment and later online on a real system.
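By way of illustration, a minimal sketch of the allocation sub-problem written as a linear program, assuming scipy and a toy two-source network; the demand, costs, capacities and loss rate are illustrative placeholders for the values the physical models would predict:

```python
# Toy heat-allocation LP: cover predicted demand at minimum production cost.
import numpy as np
from scipy.optimize import linprog

demand_mwh = 120.0                   # predicted consumption
loss_rate = 0.08                     # fraction of produced heat lost in transport
costs = np.array([25.0, 60.0])       # EUR/MWh, e.g. waste heat vs. gas boiler
capacity = np.array([90.0, 150.0])   # MWh each source can produce

# Minimize cost subject to: heat delivered after losses meets demand.
res = linprog(c=costs,
              A_ub=[[-(1 - loss_rate), -(1 - loss_rate)]],
              b_ub=[-demand_mwh],
              bounds=list(zip([0.0, 0.0], capacity)))
print("production per source (MWh):", res.x, "total cost (EUR):", res.fun)
```

In the project, agents in the MAS platform would solve and negotiate such allocations dynamically rather than once, and with far richer network models.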