Development of noise-based artificial intelligence approaches

Current approaches to AI are largely based on extensive vector-matrix multiplication. In this postdoctoral project we would like to pose the question: what comes next? Specifically, we would like to study whether (stochastic) noise could be the computational primitive that a new generation of AI is built upon. This question will be answered in two steps. First, we will explore theories regarding the computational role of microscopic and system-level noise in neuroscience, as well as how noise is increasingly leveraged in machine learning and artificial intelligence. We aim to establish concrete links between these two fields and, in particular, we will explore the relationship between noise and uncertainty quantification.
Building on this, the postdoctoral researcher will then develop new models that leverage noise to carry out cognitive tasks, of which uncertainty is an intrinsic component. This will serve not only as an AI approach, but also as a computational tool for studying cognition in humans and as a model of specific brain areas known to participate in different aspects of cognition, from perception to learning, decision making and uncertainty quantification.
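As a purely illustrative Python sketch of the noise/uncertainty link mentioned above (an assumption about one possible starting point, not the project's prescribed method), Monte Carlo dropout keeps stochastic noise active at prediction time and reads the spread of repeated forward passes as an uncertainty estimate:

# Illustrative sketch only (assumption, not the project's method): Monte Carlo
# dropout turns per-pass stochastic noise into an uncertainty estimate.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),  # the noise source
    nn.Linear(64, 1),
)

x = torch.randn(5, 10)        # placeholder inputs
model.train()                 # keep dropout active at "inference" time
samples = torch.stack([model(x) for _ in range(100)])  # 100 noisy passes

mean = samples.mean(dim=0)    # predictive mean
std = samples.std(dim=0)      # spread of the noisy passes ~ uncertainty
print(mean.squeeze(), std.squeeze())
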
The outcomes of the postdoctoral project should inform how future fMRI imaging and invasive and non-invasive electrophysiological recordings may be used to test the predictions of this model. Additionally, the candidate will be expected to interact with other activities within the CEA related to the development of noise-based analogue AI accelerators.

Study of the specific features of highly distributed architectures for decision and control requirements

Our electricity infrastructure has undergone profound changes and will continue to do so in the coming decades. The rapid growth in the share of renewables in electricity generation requires solutions to secure energy systems, especially with regard to the variability, stability and balancing aspects of the electricity system and the protection of the grid infrastructure itself. The purpose of this study is to help design new decision-making methods specifically adapted to highly distributed control architectures for energy networks. These new methods will have to be evaluated in terms of performance, resilience and robustness, and tested in the presence of various hazards, including Byzantine behaviours.

LLM hybridization for requirements engineering

Developing physical or digital systems is a complex process involving both technical and human challenges. The first step is to give shape to ideas by drafting specifications for the system-to-be. Usually written in natural language by business analysts, these documents are the cornerstones that bind all stakeholders together for the duration of the project, making it easier to share and understand what needs to be done. Requirements engineering proposes various techniques (reviews, modeling, formalization, etc.) to regulate this process and improve the quality (consistency, completeness, etc.) of the produced requirements, with the aim of detecting and correcting defects even before the system is implemented.
In the field of requirements engineering, the recent arrival of large language models (LLMs) has the potential to be a "game changer" [4]. We propose to support the work of the functional analyst with a tool that makes the writing of the requirements corpus both easier and more reliable. The tool will make use of a conversational agent of the transformer/LLM type (such as ChatGPT or LLaMA) combined with rigorous analysis and assistance methods. It will propose options for rewriting requirements in a format compatible with INCOSE or EARS standards, analyze the results produced by the LLM, and provide a requirements quality audit.
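Purely as an illustration of the intended workflow (the tool's actual architecture is not fixed here, and call_llm is a hypothetical placeholder for whichever conversational agent is chosen), the Python sketch below builds a rewriting prompt targeting the EARS patterns and applies a naive keyword check to the answer:

# Illustrative sketch only: prompt construction plus a naive EARS-pattern check.
# call_llm() is a hypothetical placeholder for the chosen conversational agent.

EARS_PROMPT = (
    "Rewrite the following requirement using one EARS pattern "
    "(Ubiquitous, Event-driven, State-driven, Unwanted behaviour, Optional):\n\n{req}"
)

# Keywords that signal the main EARS templates.
EARS_MARKERS = ("when", "while", "if", "then", "shall", "where")

def build_prompt(raw_requirement: str) -> str:
    return EARS_PROMPT.format(req=raw_requirement)

def looks_like_ears(rewritten: str) -> bool:
    text = rewritten.lower()
    return "shall" in text and any(marker in text for marker in EARS_MARKERS)

raw = "The pump stops on low water level."
prompt = build_prompt(raw)
# rewritten = call_llm(prompt)           # hypothetical LLM call
rewritten = "When the water level is low, the controller shall stop the pump."
print(looks_like_ears(rewritten))        # True: matches an event-driven pattern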

Robotics Moonshot: digital twin of a laser cutting process and implementation with a self-learning robot

One of the main challenges in the deployment of robotics in industry is to offer smart robots, capable of understanding the context in which they operate and easily programmable without advanced skills in robotics and computer science. In order to enable a non-expert operator to define tasks subsequently carried out by a robot, the CEA is developing various tools: intuitive programming interface, learning by demonstration, skill-based programming, interface with interactive simulation, etc.
Winner of the "moonshot" call for projects from the CEA's Digital Missions, the "Self-learning robot" project aims to bring very significant breakthroughs to the robotics of the future, in connection with simulation. A demonstrator integrating these technological building blocks is expected on several use cases in different CEA centers.
This post-doc offer concerns the implementation of the CEA/DES (Energy Department) demonstrator on the use case of laser cutting under constraints for A&D at the Simulation and Dismantling Techniques Laboratory (LSTD) at CEA Marcoule.

GPU acceleration of a CFD code for gas dynamics

Development and optimization of adaptive mesh refinement methods for fluid/structure interaction problems in a context of high performance computing

A new simulation code for structural and compressible fluid mechanics, named Manta, is currently under development at the French CEA. This code aims both at unifying the features of CEA’s legacy implicit and explicit codes and at being natively HPC-oriented. With its many numerical methods (Finite Elements, Finite Volumes, hybrid methods, phase field, implicit or explicit solvers, etc.), Manta enables the simulation of various kinds of static or dynamic mechanical problems involving fluids, structures, or fluid-structure interactions.

When seeking to optimize computation time, Adaptive Mesh Refinement (AMR) is a standard method for increasing numerical accuracy while keeping the computational load under control.

This postdoctoral position aims to define and implement parallel AMR algorithms for fluid/structure interaction problems in a high-performance computing context.

In a preliminary step, the functionalities for hierarchical AMR, such as cell refinement and coarsening, field transfers from parent to child cells, refinement criteria and hanging-node management, will be integrated into Manta. This first step will likely rely on external libraries, which will have to be identified.
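As a minimal illustration of these hierarchical operations (placeholder data structures, not Manta's actual implementation), the Python sketch below refines a 1D cell into two children, transfers the cell-averaged field from parent to children, and coarsens it back by averaging:

# Minimal 1D sketch of hierarchical refinement/coarsening of a cell-averaged
# field; placeholder data structures, not Manta's implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Cell:
    x_left: float
    x_right: float
    value: float                      # cell-averaged field
    children: List["Cell"] = field(default_factory=list)

def refine(cell: Cell) -> None:
    """Split a cell in two; children inherit the parent's cell average."""
    mid = 0.5 * (cell.x_left + cell.x_right)
    cell.children = [Cell(cell.x_left, mid, cell.value),
                     Cell(mid, cell.x_right, cell.value)]

def coarsen(cell: Cell) -> None:
    """Replace the children by their (equal-volume) average and drop them."""
    if cell.children:
        cell.value = sum(c.value for c in cell.children) / len(cell.children)
        cell.children = []

root = Cell(0.0, 1.0, value=3.0)
refine(root)                          # parent-to-child transfer
root.children[0].value = 2.0          # children then evolve independently
coarsen(root)                         # child-to-parent restriction
print(root.value)                     # 2.5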

In a second step, the distributed-memory parallel performance will be optimized. In particular, strategies for load balancing between MPI processes will be studied, especially for fluid/structure interaction problems.

Finally, particularly for explicit-in-time computations, spatially adapted time stepping will have to be defined and implemented to cope with the several refinement levels and the different wave propagation velocities.
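To make this concrete, one common choice (an assumption here, not a project requirement) is a per-level CFL constraint: since the cell size is halved at each refinement level, each level advances with its own stable time step and finer levels subcycle the coarser ones, as in the following sketch:

# Hedged sketch of spatially adapted (per-level) time steps under a CFL
# constraint; all numbers are placeholders, not Manta parameters.
cfl = 0.5            # Courant number
dx0 = 1.0e-2         # coarsest cell size [m]
c = 1500.0           # fastest wave propagation speed [m/s]
levels = 3

for lvl in range(levels + 1):
    dx = dx0 / 2**lvl                  # cell size halves at each level
    dt = cfl * dx / c                  # stable explicit time step for that level
    subcycles = 2**lvl                 # fine steps needed per coarsest step
    print(f"level {lvl}: dx={dx:.2e} m, dt={dt:.2e} s, subcycles={subcycles}")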

These last two points are expected to give rise to publications in specialized scientific journals.

Post-doctoral position in AI safety and assurance at CEA LIST

The position is related to the safety assessment and assurance of AI (Artificial Intelligence)-based systems that use machine-learning components at operation time to perform autonomy functions. Currently, for non-AI systems, safety is assessed prior to system deployment and the safety assessment results are compiled into a safety case that remains valid throughout the system's life. For novel systems integrating AI components, particularly self-learning systems, such an engineering and assurance approach is not applicable, as the system can exhibit new behavior when faced with unknown situations during operation.

The goal of the postdoc will be to define an engineering approach for performing an accurate safety assessment of AI systems. A second objective is to define assurance case artefacts (claims, evidence, etc.) to obtain and preserve justified confidence in the safety of the system throughout its lifetime, particularly for AI systems with operational learning. The approach will be implemented in an open-source framework and evaluated on industry-relevant applications.
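Purely as a notional illustration of such artefacts (the notation, e.g. GSN, and the wording of the claims are assumptions, not project specifications), an assurance-case fragment can be represented as a small tree of claims supported by sub-claims and evidence:

# Notional sketch of an assurance-case fragment (claim -> sub-claims -> evidence);
# structure and wording are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    name: str

@dataclass
class Claim:
    statement: str
    sub_claims: List["Claim"] = field(default_factory=list)
    evidence: List[Evidence] = field(default_factory=list)

    def supported(self) -> bool:
        """A claim is supported if it carries evidence or all its sub-claims are supported."""
        if self.evidence:
            return True
        return bool(self.sub_claims) and all(c.supported() for c in self.sub_claims)

top = Claim(
    "The perception function is acceptably safe in its operational domain",
    sub_claims=[
        Claim("Training data covers the operational domain",
              evidence=[Evidence("dataset coverage report")]),
        Claim("A runtime monitor detects out-of-distribution inputs",
              evidence=[Evidence("monitor test results")]),
    ],
)
print(top.supported())   # True once every leaf claim carries evidence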

The position holder will join a research and development team in a highly stimulating environment with unique opportunities to develop a strong technical and research portfolio. The successful candidate will be required to collaborate with LSEA academic and industry partners, to contribute to and manage national and EU projects, to prepare and submit scientific material for publication, and to provide guidance to PhD students.

High precision robotic manipulation with reinforcement learning and Sim2Real

High precision robotic assembly that can handle high product variability is a key part of an agile and flexible manufacturing automation system. To date, however, most existing systems are difficult to scale with product variability, since they need precise models of the environment dynamics in order to be efficient, and this information is not always easy to obtain.
Reinforcement learning based methods can be of interest in this situation. They do not rely on an explicit model of the environment dynamics and only need sampled data from the system to learn a new manipulation skill. The main caveat is the efficiency of the data generation process.
In this post-doc, we propose to investigate the use of reinforcement learning based algorithms to solve high precision robotic assembly tasks. To handle the problem of sample generation, we will leverage simulators and adopt a sim2real approach. The goal is to build a system that can solve tasks such as those proposed in the World Robot Challenge, as well as tasks that the CEA’s industrial partners will provide.
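As a hedged sketch of the sim2real idea (the simulator calls and parameter ranges are placeholders, not the project's actual pipeline), domain randomization draws new dynamics parameters for every simulated training episode so that the learned policy becomes less sensitive to modelling errors on the real robot:

# Hedged sketch of domain randomization for sim2real; the simulator, parameter
# ranges and training loop are placeholders, not the project's actual setup.
import random

def sample_dynamics():
    """Draw randomized physical parameters for one simulated episode."""
    return {
        "friction":     random.uniform(0.4, 1.2),
        "peg_mass":     random.uniform(0.05, 0.20),   # kg
        "sensor_noise": random.uniform(0.0, 0.002),   # m, position noise std
        "latency_ms":   random.randint(0, 30),
    }

def train_policy(num_episodes: int = 5):
    for episode in range(num_episodes):
        params = sample_dynamics()
        # simulator.reset(**params)     # hypothetical simulator call
        # rollout = run_episode(policy, simulator)
        # policy.update(rollout)        # any RL update (e.g. SAC/PPO) would go here
        print(f"episode {episode}: randomized dynamics {params}")

train_policy()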

Application of an MDE approach to AI-based planning for robotic and autonomous systems

The complexity of robotics and autonomous systems (RAS) can only be managed with well-designed software architectures and integrated tool chains that support the entire development process. Model-driven engineering (MDE) is an approach that allows RAS developers to shift their focus from implementation to the domain knowledge space and to promote efficiency, flexibility and separation of concerns for different development stakeholders. One key goal of MDE approaches is to be integrated with available development infrastructures from the RAS community, such as ROS middleware, ROSPlan for task planning, BehaviorTree.CPP for execution and monitoring of robotics tasks and Gazebo for simulation.
The goal of this post-doc is to investigate and develop modular, compositional and predictable software architectures and interoperable design tools based on models, rather than code-centric approaches. The work must be performed in the context of European projects such as RobMoSys (www.robmosys.eu) and other initiatives on AI-based task planning and task execution for robotics and autonomous systems. The main industrial goal is to reduce the effort of RAS engineers and thus allow the development of more advanced, more complex autonomous systems at an affordable cost. To this end, the postdoctoral fellow will contribute to setting up and consolidating a vibrant ecosystem, tool chain and community that will provide and integrate model-based design, planning and simulation, safety assessment, and formal validation and verification capabilities.

Artificial Intelligence applied to Ion Beam Analysis

A one-year postdoctoral research position is open at the Laboratory for Light Element Studies (LEEL, CEA/DRF) and the Data Science for Decision Laboratory (LS2D, DRT/LIST). It focuses on data processing based on AI and machine learning, here in the scope of Ion Beam Analysis (IBA).
In the context of this project, the successful candidate will have to fulfill the following tasks:
1- Design of a multispectral dictionary.
2- Learning module development.
3- Main code programming.
4- Development of a module dedicated to multispectral mappings.
5- Benchmarking.
The postdoctoral research associate will be hosted and supervised within LEEL and LS2D.
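Purely as an illustration of what tasks 1 and 2 above might involve (the actual method is left open; this sketch is an assumption), the following Python snippet learns a small dictionary from synthetic multispectral spectra with scikit-learn and reconstructs them as sparse combinations of its atoms:

# Hedged illustration of a "multispectral dictionary" via sparse dictionary
# learning on synthetic spectra; not the project's prescribed method.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_spectra, n_channels = 200, 64
# Synthetic spectra built from a few Gaussian peaks (stand-ins for IBA signatures).
centers = rng.uniform(0, n_channels, size=(n_spectra, 3))
grid = np.arange(n_channels)
X = np.exp(-0.5 * ((grid[None, None, :] - centers[:, :, None]) / 3.0) ** 2).sum(axis=1)

dico = DictionaryLearning(n_components=10, alpha=1.0, max_iter=200, random_state=0)
codes = dico.fit_transform(X)            # sparse codes (what a learning module would output)
reconstruction = codes @ dico.components_
print("mean reconstruction error:", np.mean((X - reconstruction) ** 2))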
