Advanced fuzzing for software supply-chain security

IoT devices (routers, video surveillance systems, etc.) rely on binary code to operate. This code often incorporates thousands of pre-existing software components, mostly drawn from open-source libraries whose code is freely accessible online. This complexity opens the door to software supply chain attacks, notably through the insertion of backdoors or the exploitation of known vulnerabilities.

The SECUBIC project aims to enhance the detection of these vulnerabilities within IoT firmware. In this context, the candidate will contribute to deepening existing research work and will take part in the development of new fuzzing and static analysis techniques designed to prevent and detect such attacks.
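As an illustration of the kind of technique involved, here is a minimal coverage-guided fuzzing loop. Everything here is invented for the example: `toy_target`, its coverage points, and the crash condition stand in for an instrumented firmware component, not for any SECUBIC tooling.

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Randomly overwrite one byte of the input (a single, simple mutator)."""
    if not data:
        return bytes([rng.randrange(256)])
    buf = bytearray(data)
    buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(target, seeds, iterations=2000, seed=0):
    """Coverage-guided loop: keep any input that reaches new coverage points."""
    rng = random.Random(seed)
    corpus = list(seeds)
    coverage = set()
    crashes = []
    for _ in range(iterations):
        parent = rng.choice(corpus)
        candidate = mutate(parent, rng)
        try:
            new_points = target(candidate)    # target reports the coverage it hit
        except Exception as exc:              # a crash is a finding
            crashes.append((candidate, exc))
            continue
        if not new_points <= coverage:        # new coverage -> keep the input
            coverage |= new_points
            corpus.append(candidate)
    return corpus, coverage, crashes

# Hypothetical target: a toy "parser" with a crash hidden behind magic bytes.
def toy_target(data: bytes):
    cov = {"enter"}
    if len(data) > 0 and data[0] == 0x42:
        cov.add("magic")
        if len(data) > 1 and data[1] == 0x99:
            raise ValueError("backdoor-like branch reached")
    return cov
```

Real fuzzers (AFL++, libFuzzer) use instrumented binaries and far richer mutation strategies; the sketch only shows the feedback loop that makes them effective at reaching deep, backdoor-like branches.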

Deep learning methods for sustainable development applications: energy networks, city decarbonization

The postdoctoral position is part of the AI4NRJ project. This project aims to develop a novel form of intelligent, embedded supervision for optimizing smart energy networks. Unlike existing approaches (AI, digital twins), it will simultaneously integrate adaptability to new data and new usage habits with robustness, by considering cause-and-effect relationships. A foundation-model-based AI, trained on multiple datasets and capable of performing various tasks, will be developed to handle heterogeneous data, including complex parameters such as demand fluctuations and energy losses, while predicting consumption and detecting anomalies.
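As a toy illustration of one of the listed tasks, a minimal anomaly detector for a consumption time series could look as follows; the window size and threshold are arbitrary placeholders, not project parameters, and the project's foundation-model approach would replace this statistical baseline.

```python
import statistics

def detect_anomalies(series, window=24, threshold=3.0):
    """Flag readings deviating more than `threshold` standard deviations
    from the mean of the preceding `window` readings (a rolling z-score)."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.fmean(hist)
        sigma = statistics.pstdev(hist)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies
```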

Definition and implementation of metrics for software obsolescence measurement

The environmental impact of digital technology has become a major concern, with a measurable and growing environmental footprint (particularly carbon). A significant part of this impact comes from the manufacture of equipment, which is often replaced prematurely, partly due to software-induced obsolescence. N. Wirth's law is commonly formulated as: “programs slow down faster than hardware improves.” Every computer or smartphone user experiences this across successive software updates, until the device can no longer keep up with the demands of its applications.
Unfortunately, this law has never been formalized or measured experimentally; that is the objective of this project.

More specifically, the objective is to develop metrics on the evolution of the operational complexity of software across its different versions. These metrics can then be used in software workshops and could eventually support regulatory requirements of the form “my software must not increase in complexity by more than 7% per year”, so as to extend the lifespan of hardware, which accounts for the majority of the environmental footprint of digital technology.
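A rule of this form could be checked mechanically. The sketch below, with hypothetical function names, computes version-over-version growth of any scalar operational-complexity metric (e.g. CPU time or memory for a fixed usage scenario) and tests it against a 7% budget:

```python
def complexity_growth(metric_by_version):
    """Relative growth of an operational-complexity metric between
    consecutive software versions (assumed one version per year here)."""
    return [(curr - prev) / prev
            for prev, curr in zip(metric_by_version, metric_by_version[1:])]

def within_budget(metric_by_version, max_yearly_growth=0.07):
    """Check the hypothetical rule: no more than 7% complexity growth per year."""
    return all(g <= max_yearly_growth for g in complexity_growth(metric_by_version))
```

The hard part of the project is of course defining and measuring the metric itself, not this arithmetic.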

In practice, this will involve developing a methodology, applied to tools of increasing complexity, that uses usage scenarios to measure operational complexity.
The method will be applied to one or more use cases, such as an open-source word-processing scenario (LibreOffice) and a web-based scenario.

Integrating dynamic CRDT replicas

Existing modeling frameworks have limited collaboration capabilities, even though collaboration at the model level is one of the most requested features. Most solutions rely on cloud-based, centralized databases as their technological foundation. While these solutions ease collaboration among connected partners through concurrency-control techniques, they do not support disconnected collaboration scenarios, an important feature for designing local-first software. This presents a significant trade-off: adopting cloud-based solutions and losing control over data ownership, versus running separate instances with no collaborative capabilities.

The objective of this postdoctoral project is to contribute to and extend an existing local-first Model-Based Systems Engineering (MBSE) framework, built upon specialized Conflict-free Replicated Data Types (CRDTs). The goal is to enable real-time collaboration through modeling-specific CRDTs. The proposed approach involves extending a middleware communication layer utilizing CRDTs to seamlessly synchronize distributed, offline-capable engineering models.

The postdoctoral researcher will conduct a state-of-the-art review of communication and group-membership approaches in P2P environments. One major issue to be taken into account is the entry and exit of members in a group, so that the CRDT state always remains consistent. The resulting components will be integrated into our CRDT and modeling framework.
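The project's modeling-specific CRDTs are more specialized, but the core merge-and-converge mechanics, including a new member joining the group, can be illustrated with a textbook state-based grow-only counter:

```python
class GCounter:
    """State-based grow-only counter CRDT.  Each replica increments only its
    own entry; merge takes the per-replica maximum, so merges commute, are
    associative, and are idempotent -- replicas converge in any message order."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {replica_id: 0}

    def increment(self, n=1):
        self.counts[self.replica_id] += n

    def merge(self, other):
        """Join with another replica's state.  A new group member simply
        appears as a new key, one simple way joins can be handled."""
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())
```

Handling member *departure* (garbage-collecting a replica's entry without breaking convergence) is exactly the kind of membership issue the paragraph above refers to, and is not solved by this sketch.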

Development of noise-based artificial intelligence approaches

Current approaches to AI are largely based on extensive vector-matrix multiplication. In this postdoctoral project we would like to pose the question: what comes next? Specifically, we would like to study whether (stochastic) noise could be the computational primitive that a new generation of AI is built upon. This question will be addressed in two steps. First, we will explore theories regarding the computational role of microscopic and system-level noise in neuroscience, as well as how noise is increasingly leveraged in machine learning and artificial intelligence. We aim to establish concrete links between these two fields; in particular, we will explore the relationship between noise and uncertainty quantification.
Building on this, the postdoctoral researcher will then develop new models that leverage noise to carry out cognitive tasks of which uncertainty is an intrinsic component. This will not only serve as an AI approach, but should also serve as a computational tool to study cognition in humans, and as a model for specific brain areas known to participate in different aspects of cognition, from perception to learning, decision making, and uncertainty quantification.
Insights from the postdoctoral project should inform how future fMRI imaging and invasive and non-invasive electrophysiological recordings may be used to test theories of this model. Additionally, the candidate will be expected to interact with other activities at the CEA related to the development of noise-based analogue AI accelerators.
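As a toy illustration of the premise, assuming nothing about the project's actual models: noise injected into a model parameter can directly yield an uncertainty estimate via Monte-Carlo sampling, the spread of the noisy outputs playing the role of a confidence measure.

```python
import random
import statistics

def noisy_predict(x, weight=2.0, noise_std=0.3, samples=200, seed=0):
    """Monte-Carlo sketch: inject Gaussian noise into a (toy, linear) model's
    parameter and read uncertainty off the spread of the predictions."""
    rng = random.Random(seed)
    outputs = [(weight + rng.gauss(0.0, noise_std)) * x for _ in range(samples)]
    return statistics.fmean(outputs), statistics.pstdev(outputs)
```

In hardware, the Gaussian draw would come for free from device noise rather than from a pseudo-random generator, which is the premise of noise-based analogue accelerators.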

Robotics Moonshot: digital twin of a laser cutting process and implementation with a self-learning robot

One of the main challenges in the deployment of robotics in industry is to offer smart robots, capable of understanding the context in which they operate and easily programmable without advanced skills in robotics and computer science. In order to enable a non-expert operator to define tasks subsequently carried out by a robot, the CEA is developing various tools: intuitive programming interface, learning by demonstration, skill-based programming, interface with interactive simulation, etc.
A winner of the "moonshot" call for projects from the CEA's Digital Missions, the "Self-learning robot" project aims to deliver significant breakthroughs for the robotics of the future, in connection with simulation. A demonstrator integrating these technological building blocks is expected on several use cases in different CEA centers.
This post-doc offer concerns the implementation of the CEA/DES (Energy Department) demonstrator on the use case of laser cutting under constraints for A&D at the Simulation and Dismantling Techniques Laboratory (LSTD) at the CEA Marcoule.

GPU acceleration of a CFD code for gas dynamics

Development and optimization of adaptive mesh refinement methods for fluid/structure interaction problems in a context of high performance computing

A new simulation code for structural and compressible fluid mechanics, named Manta, is currently under development at the French CEA. This code aims both to unify the features of CEA's legacy implicit and explicit codes and to be natively HPC-oriented. With its many numerical methods (Finite Elements, Finite Volumes, hybrid methods, phase field, implicit or explicit solvers, etc.), Manta enables the simulation of various kinds of static or dynamic mechanical problems involving fluids, structures, or fluid-structure interactions.

Adaptive Mesh Refinement (AMR) is a typical method for increasing numerical accuracy while keeping the computational load, and hence the computation time, under control.

This postdoctoral position aims at defining and implementing parallel AMR algorithms in a high performance computing context, for fluid/structure interaction problems.

In a preliminary step, the functionalities for hierarchical AMR, such as cell refinement and coarsening, field transfer from parent to child cells, refinement criteria, or the management of hanging nodes, will be integrated into Manta. This first piece of work will likely rely on external libraries that remain to be identified.
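A minimal 1-D sketch (not Manta's API) of the refinement and coarsening operations, with a conservative, piecewise-constant parent-to-child field transfer:

```python
def refine(cells, i):
    """Split cell i into two half-size children.  A piecewise-constant
    (conservative) transfer just copies the parent value to both children."""
    parent = cells[i]
    child = {"dx": parent["dx"] / 2, "u": parent["u"]}
    return cells[:i] + [dict(child), dict(child)] + cells[i + 1:]

def coarsen(cells, i):
    """Merge cells i and i+1 back into a parent cell.  The parent value is
    the volume-weighted average, so the integral of u is conserved."""
    a, b = cells[i], cells[i + 1]
    dx = a["dx"] + b["dx"]
    u = (a["u"] * a["dx"] + b["u"] * b["dx"]) / dx
    return cells[:i] + [{"dx": dx, "u": u}] + cells[i + 2:]

def total(cells):
    """Integral of the field over the mesh, used to check conservation."""
    return sum(c["u"] * c["dx"] for c in cells)
```

Higher-order transfers, refinement criteria, and hanging-node treatment in 2-D/3-D are precisely what the libraries mentioned above provide.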

In a second step, distributed-memory parallel performance will be optimized. In particular, load-balancing strategies between MPI processes should be studied, especially for fluid/structure interaction problems.

Finally, especially for explicit-in-time computations, spatially adapted time stepping will have to be defined and implemented to cope with the several levels of refinement and the different wave propagation velocities.
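The idea can be sketched as follows, assuming cell size halves at each refinement level so the CFL-stable step halves too; fine levels then subcycle relative to the coarse step:

```python
def level_timestep(dx_coarse, wave_speed, level, cfl=0.5):
    """CFL-limited step on a refinement level: cell size halves per level,
    so the stable explicit time step halves as well."""
    dx = dx_coarse / (2 ** level)
    return cfl * dx / wave_speed

def subcycles(levels, dx_coarse, wave_speed, cfl=0.5):
    """Number of substeps each level takes per coarse step when every level
    advances with its own stable time step (local time stepping)."""
    dt0 = level_timestep(dx_coarse, wave_speed, 0, cfl)
    return {l: round(dt0 / level_timestep(dx_coarse, wave_speed, l, cfl))
            for l in range(levels + 1)}
```

The real difficulty, left out here, is the flux synchronization at coarse/fine interfaces needed to keep the scheme conservative while levels advance at different rates.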

These last two points are expected to give rise to publications in specialized scientific journals.

Post-doctoral position in AI safety and assurance at CEA LIST

The position concerns the safety assessment and assurance of AI (Artificial Intelligence)-based systems that use machine-learning components at operation time to perform autonomy functions. Currently, for non-AI systems, safety is assessed prior to system deployment, and the assessment results are compiled into a safety case that remains valid throughout the system's life. For novel systems integrating AI components, particularly self-learning systems, such engineering and assurance approaches are not applicable, as the system can exhibit new behavior when facing unknown situations during operation.

The goal of the postdoc will be to define an engineering approach for performing an accurate safety assessment of AI systems. A second objective is to define assurance-case artefacts (claims, evidence, etc.) to obtain and preserve justified confidence in the safety of the system throughout its lifetime, particularly for AI systems with operational learning. The approach will be implemented in an open-source framework that will be evaluated on industry-relevant applications.
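As an illustrative sketch of what assurance-case artefacts look like as data, with a deliberately simplified support rule (real notations such as GSN are far richer, and the claim texts below are invented):

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """Minimal assurance-case node: a claim is considered supported when it
    has direct evidence, or when all of its sub-claims are supported."""
    text: str
    evidence: list = field(default_factory=list)
    subclaims: list = field(default_factory=list)

    def supported(self) -> bool:
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.supported() for c in self.subclaims)
```

Such a structure makes assurance gaps explicit: a top-level claim stays unsupported until every branch, including those covering behavior learned in operation, is backed by evidence.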

The position holder will join a research and development team in a highly stimulating environment, with unique opportunities to develop a strong technical and research portfolio. They will be expected to collaborate with LSEA academic and industry partners, to contribute to and manage national and EU projects, to prepare and submit scientific material for publication, and to provide guidance to PhD students.

High precision robotic manipulation with reinforcement learning and Sim2Real

High-precision robotic assembly that handles high product variability is a key part of an agile and flexible manufacturing automation system. To date, however, most existing systems are difficult to scale with product variability, since they need precise models of the environment dynamics in order to be efficient, and this information is not always easy to obtain.
Reinforcement-learning-based methods are of interest in this situation: they do not rely on a model of the environment dynamics and only need sample data from the system to learn a new manipulation skill. The main caveat is the efficiency of the data-generation process.
In this post-doc, we propose to investigate the use of reinforcement-learning-based algorithms to solve high-precision robotic assembly tasks. To handle the problem of sample generation, we leverage simulators and adopt a sim2real approach. The goal is to build a system that can solve tasks such as those proposed in the World Robot Challenge, as well as tasks that the CEA's industrial partners will provide.
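As a toy illustration of the approach, assuming nothing about the actual tasks or simulator: tabular Q-learning on a 1-D peg-alignment problem, with the starting misalignment randomized per episode in the spirit of domain randomization. Real sim2real work would use continuous states, a physics simulator, and deep RL instead.

```python
import random

ACTIONS = [-1, +1, 0]   # move left, move right, attempt insertion

def train(episodes=3000, seed=0):
    """Tabular Q-learning on a toy 1-D peg-alignment task (a stand-in for a
    high-precision assembly skill).  Each simulated episode starts from a
    random misalignment, mimicking product variability."""
    rng = random.Random(seed)
    q = {}
    alpha, gamma, eps = 0.2, 0.95, 0.1
    for _ in range(episodes):
        s = rng.randint(-3, 3)                    # randomized initial offset
        for _ in range(12):
            if rng.random() < eps:                # epsilon-greedy exploration
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: q.get((s, b), 0.0))
            if a == 0:                            # insertion ends the episode
                r, s2, done = (1.0 if s == 0 else -1.0), s, True
            else:                                 # small cost for each move
                r, s2, done = -0.05, max(-3, min(3, s + a)), False
            target = r if done else r + gamma * max(q.get((s2, b), 0.0)
                                                    for b in ACTIONS)
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
            s = s2
            if done:
                break
    return q

def policy(q, s):
    """Greedy action for a given misalignment after training."""
    return max(ACTIONS, key=lambda b: q.get((s, b), 0.0))
```

The learned policy moves toward zero offset and only then attempts insertion, which is the behavior a sim2real pipeline would try to transfer from simulation to the physical robot.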
