Attack detection in distributed control of the electrical grid
To enable the emergence of flexible and resilient energy networks, we need to address the challenges these networks face, in particular digitization, the protection of the data flows it entails, and cybersecurity.
In the TASTING project, and in collaboration with RTE, the French electricity transmission network operator, your role will be to analyze data protection for all parties involved. The aim is to verify security properties on data in distributed systems, taking into account the uncertainties that such systems induce.
To this end, you will develop a tool-based methodology for protecting the data of power grid stakeholders. The approach will be based on formal methods, in particular runtime verification, applied to a distributed control system.
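As a purely illustrative sketch of the runtime-verification idea (no project code is implied; the event names and functions below are hypothetical), a monitor can be pictured as a small automaton fed with events observed on the control network, raising an alarm as soon as the trace violates a security property such as "a command may only occur within an authenticated session":

    #include <stdio.h>

    /* Hypothetical event alphabet observed on the control network. */
    typedef enum { EV_AUTH, EV_COMMAND, EV_LOGOUT } event_t;

    /* Monitor states for the property:
       "a COMMAND may only occur between AUTH and LOGOUT". */
    typedef enum { ST_IDLE, ST_AUTHENTICATED, ST_VIOLATION } state_t;

    static state_t monitor_step(state_t s, event_t e) {
        switch (s) {
        case ST_IDLE:
            if (e == EV_AUTH) return ST_AUTHENTICATED;
            if (e == EV_COMMAND) return ST_VIOLATION; /* command without auth */
            return ST_IDLE;
        case ST_AUTHENTICATED:
            if (e == EV_LOGOUT) return ST_IDLE;
            return ST_AUTHENTICATED;
        default:
            return ST_VIOLATION; /* violations are absorbing */
        }
    }

    int main(void) {
        event_t trace[] = { EV_AUTH, EV_COMMAND, EV_LOGOUT, EV_COMMAND };
        state_t s = ST_IDLE;
        for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++) {
            s = monitor_step(s, trace[i]);
            if (s == ST_VIOLATION)
                printf("violation detected at event %u\n", i);
        }
        return 0;
    }

The research challenge addressed by the position is precisely what such a monitor glosses over: in a distributed system, events are observed with delays, reorderings, and other uncertainties.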
This postdoc position is part of the TASTING project, which aims to meet the key challenges of modernizing and securing power systems. This 4-year project, started in 2023, addresses axis 3 of the PEPR TASE call “Technological solutions for the digitization of intelligent energy systems”, co-piloted by CEA and CNRS, which aims to generate innovations in the fields of solar energy, photovoltaics, and floating wind power, and to foster the emergence of flexible and resilient energy networks. The targeted scientific challenges concern the ICT infrastructure, considered a key enabler and solution provider for the profound transformations that our energy infrastructures will undergo in the decades to come.
The project involves two national research organizations, INRIA and CEA (through its technological research institute CEA-List), as well as seven academic laboratories, including G2Elab, GeePs, IRIT, L2EP, L2S, and SATIE, and an industrial partner, RTE, which supplies various use cases.
Advanced modeling of thermal turbulent flows
For several decades, numerical simulations in fluid mechanics have contributed significantly to the design and maintenance of industrial installations. Turbulence modeling, a key area at the intersection of research and industry, has seen substantial advances in both Large Eddy Simulation (LES) and Reynolds-Averaged Navier-Stokes (RANS) approaches. Since the early 2010s, hybrid methods that combine RANS and LES techniques have emerged to leverage the advantages of each, requiring proficiency in both modeling types. The TrioCFD code developed at STMF, although capable of handling these models, has not seen adequate investment in modern approaches. To incorporate hybrid models, it is essential to update and enhance the current models. The proposed task is to identify the most relevant models for industrial applications, restructure the software to accommodate them, and validate their performance.
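For reference, two textbook ingredients of such approaches (standard formulas from the literature, not TrioCFD specifics): the Smagorinsky subgrid viscosity used in LES,

\[ \nu_t = (C_s \Delta)^2 \, |\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \]

and the Detached Eddy Simulation length scale that switches a RANS model to LES behavior away from walls,

\[ \tilde{d} = \min(d_w,\; C_{DES}\,\Delta), \]

where \( \Delta \) is the local grid spacing and \( d_w \) the wall distance; hybrid methods hinge on how such a blending is constructed and calibrated.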
Study of the specific features of highly distributed architectures for decision and control requirements
Our electricity infrastructure has undergone, and will continue to undergo, profound changes in the coming decades. The rapid growth in the share of renewables in electricity generation requires solutions to secure energy systems, especially with regard to the variability, stability and balancing aspects of the electricity system and the protection of the grid infrastructure itself. The purpose of this study is to help design new decision-making methods, specifically adapted to highly distributed control architectures for energy networks. These new methods will have to be evaluated in terms of performance, resilience and robustness, and tested in the presence of various hazards, including Byzantine faults.
Public and private contracts for ACSL
Frama-C is a collaborative platform for the analysis of C programs. It provides a specification language named ACSL, based on the notion of contracts. These contracts, provided through code annotations, specify the expected behavior of the different functions of a program. It is then possible to check that the program conforms to the user-provided specification with the different analyzers provided by Frama-C.
An important limitation of contracts in the current version of ACSL, with respect to the C programming language, is that they do not allow specifying different contracts (internal/private versus external/public) for a module that does not export all details of its implementation to external modules. This requires distinguishing public contracts from private contracts, but also defining how to link them together so that the global consistency of the specification and of the analyses is preserved.
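As a minimal sketch of the distinction (a hypothetical module, not the design to be developed), a public contract could expose only an abstract guarantee to client modules, while a private contract, meaningful only inside the module, refines it using hidden implementation details:

    /* module.h -- public/external view of the module */

    /*@ // public contract: only the guaranteed interval is exposed
      @ requires 0 <= n;
      @ ensures 0 <= \result <= n;
      @ assigns \nothing;
      @*/
    int clamp_to_limit(int n);

    /* module.c -- private/internal view */

    #define LIMIT 100  /* implementation detail, not exported */

    /*@ // private contract: refines the public one with the exact result
      @ requires 0 <= n;
      @ ensures \result == (n < LIMIT ? n : LIMIT);
      @ assigns \nothing;
      @*/
    int clamp_to_limit(int n) { return n < LIMIT ? n : LIMIT; }

The open question sketched here is precisely the link between the two: one must be able to establish, within Frama-C, that every behavior allowed by the private contract is also allowed by the public one.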
As part of a project on the creation of innovative materials, we wish to strengthen our platform's ability to learn from scarce experimental data.
In particular, we wish to work first on the extraction of causal links between manufacturing parameters and material properties. Causality extraction is a topic of great current importance in AI, and we wish to adapt existing approaches to experimental data and their particularities in order to select the variables of interest. Second, we will focus on characterizing these causal links (causal inference) with an approach based on fuzzy rules, that is, we will design fuzzy rules suited to representing them.
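To fix ideas, here is a toy sketch of what a fuzzy rule over process parameters might look like (the variables, linguistic terms, and thresholds are invented for illustration, not taken from the project):

    #include <stdio.h>

    /* Triangular membership function on [a, c], peaking at b. */
    static double tri(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0.0;
        return (x < b) ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    int main(void) {
        double temperature = 910.0;  /* hypothetical sintering temperature, deg C */
        double pressure    = 48.0;   /* hypothetical pressure, MPa */

        /* Rule: IF temperature is "high" AND pressure is "medium"
           THEN porosity is "low"; activation = min of the premises. */
        double mu_temp_high = tri(temperature, 850.0, 950.0, 1050.0);
        double mu_pres_med  = tri(pressure, 30.0, 50.0, 70.0);
        double degree = (mu_temp_high < mu_pres_med) ? mu_temp_high : mu_pres_med;

        printf("rule activation degree: %.2f\n", degree);
        return 0;
    }

The research question is then how to learn such rules, and the causal links they encode, from a small number of experiments.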
Development of noise-based artificial intelligence approaches
Current approaches to AI are largely based on extensive vector-matrix multiplication. In this postdoctoral project we would like to pose the question: what comes next? Specifically, we would like to study whether (stochastic) noise could be the computational primitive that a new generation of AI is built upon. This question will be addressed in two steps. First, we will explore theories regarding the computational role of microscopic and system-level noise in neuroscience, as well as how noise is increasingly leveraged in machine learning and artificial intelligence. We aim to establish concrete links between these two fields and, in particular, will explore the relationship between noise and uncertainty quantification.
Building on this, the postdoctoral researcher will then develop new models that leverage noise to carry out cognitive tasks, of which uncertainty is an intrinsic component. This will serve not only as an AI approach, but also as a computational tool for studying cognition in humans, and as a model of specific brain areas known to participate in different aspects of cognition, from perception to learning to decision making and uncertainty quantification.
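As a toy illustration of noise as a computational resource (not a proposed model), a stochastic binary neuron can represent a graded quantity through its firing probability, and repeated noisy trials directly yield both an estimate and a measure of its uncertainty:

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* A stochastic binary neuron: fires with probability sigmoid(input). */
    static int spike(double input) {
        double p = 1.0 / (1.0 + exp(-input));
        return ((double)rand() / RAND_MAX) < p;
    }

    int main(void) {
        srand(42);
        const int trials = 10000;
        double input = 0.5;   /* hypothetical evidence for a decision */
        int fires = 0;

        for (int t = 0; t < trials; t++)
            fires += spike(input);

        /* The mean firing rate estimates the encoded probability; the
           binomial spread quantifies the uncertainty of that estimate. */
        double p_hat = (double)fires / trials;
        double sd = sqrt(p_hat * (1.0 - p_hat) / trials);
        printf("estimate %.3f +/- %.3f\n", p_hat, sd);
        return 0;
    }

Here the noise is not a nuisance to be averaged away: it is the mechanism by which uncertainty is represented and read out.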
The outcomes of the postdoctoral project should inform how future fMRI and invasive and non-invasive electrophysiological recordings may be used to test theories of this model. Additionally, the candidate will be expected to interact with other activities at the CEA related to the development of noise-based analogue AI accelerators.
LLM hybridization for requirements engineering
Developing physical or digital systems is a complex process involving both technical and human challenges. The first step is to give shape to ideas by drafting specifications for the system-to-be. Usually written in natural language by business analysts, these documents are the cornerstones that bind all stakeholders together for the duration of the project, making it easier to share and understand what needs to be done. Requirements engineering proposes various techniques (reviews, modeling, formalization, etc.) to regulate this process and improve the quality (consistency, completeness, etc.) of the produced requirements, with the aim of detecting and correcting defects before the system is even implemented.
In the field of requirements engineering, the recent arrival of large language models (LLMs) has the potential to be a "game changer" [4]. We propose to support the work of the functional analyst with a tool that facilitates, and makes more reliable, the writing of the requirements corpus. The tool will make use of a conversational agent of the transformer/LLM type (such as ChatGPT or LLaMA) combined with rigorous analysis and assistance methods. It will propose options for rewriting requirements in a format compatible with INCOSE or EARS standards, analyze the results produced by the LLM, and provide a requirements quality audit.
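As one small example of the kind of rule-based check that could complement the LLM (a deliberately simplistic sketch, not the tool's actual analysis), a requirement rewritten by the model can be tested against the EARS event-driven template "When <trigger>, the <system> shall <response>.":

    #include <regex.h>
    #include <stdio.h>

    /* Crude check that a sentence follows the EARS event-driven template:
       "When <trigger>, the <system> shall <response>." */
    static int is_ears_event_driven(const char *req) {
        regex_t re;
        int ok;
        if (regcomp(&re, "^When .+, the .+ shall .+\\.$", REG_EXTENDED))
            return 0;
        ok = regexec(&re, req, 0, NULL, 0) == 0;
        regfree(&re);
        return ok;
    }

    int main(void) {
        const char *good = "When the door opens, the controller shall light the lamp.";
        const char *bad  = "The lamp should maybe turn on quickly.";
        printf("%d %d\n", is_ears_event_driven(good), is_ears_event_driven(bad));
        return 0;
    }

Such shallow syntactic checks are cheap and deterministic; the interest of the hybrid approach is to combine them with the LLM's rewriting abilities and with deeper semantic analyses.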
Robotics Moonshot: digital twin of a laser cutting process and implementation with a self-learning robot
One of the main challenges in the deployment of robotics in industry is to offer smart robots, capable of understanding the context in which they operate and easily programmable without advanced skills in robotics and computer science. In order to enable a non-expert operator to define tasks subsequently carried out by a robot, the CEA is developing various tools: intuitive programming interface, learning by demonstration, skill-based programming, interface with interactive simulation, etc.
Winner of the "moonshot" call for projects of the CEA's Digital Missions, the "Self-learning robot" project aims to deliver significant breakthroughs for the robotics of the future in connection with simulation. A demonstrator integrating these technological building blocks is expected for several use cases at different CEA centers.
This post-doc offer concerns the implementation of the CEA/DES (Energy Department) demonstrator on the use case of laser cutting under constraints for A&D, at the Simulation and Dismantling Techniques Laboratory (LSTD) at CEA Marcoule.
Development and optimization of adaptive mesh refinement methods for fluid/structure interaction problems in a context of high performance computing
A new simulation code for structural and compressible fluid mechanics, named Manta, is currently under development at the French CEA. This code aims both at unifying the features of CEA's legacy implicit and explicit codes and at being natively HPC-oriented. With its many numerical methods (finite elements, finite volumes, hybrid methods, phase field, implicit or explicit solvers, etc.), Manta enables the simulation of various kinds of static or dynamic mechanical problems involving fluids, structures, or fluid-structure interactions.
When seeking to optimize computation time, Adaptive Mesh Refinement (AMR) is a standard method for increasing numerical accuracy while keeping the computational load under control.
This postdoctoral position aims at defining and implementing parallel AMR algorithms in a high performance computing context, for fluid/structure interaction problems.
In a preliminary step, the functionalities needed for hierarchical AMR, such as cell refinement and coarsening, field transfers from parent to child cells, refinement criteria, and hanging-node management, will be integrated into Manta. This first task will likely rely on external libraries, which remain to be identified.
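For illustration, a refinement criterion can be as simple as flagging cells where the solution varies sharply (a 1-D gradient-based sketch with an invented tolerance, not Manta code):

    #include <math.h>
    #include <stdio.h>

    /* Gradient-based refinement criterion on a 1-D field: flag a cell for
       refinement when the jump with a neighbor exceeds a threshold, and
       for coarsening when the local variation is negligible. */
    enum flag { KEEP, REFINE, COARSEN };

    static enum flag criterion(const double *u, int i, int n, double tol) {
        double jump = 0.0;
        if (i > 0)     jump = fmax(jump, fabs(u[i] - u[i - 1]));
        if (i < n - 1) jump = fmax(jump, fabs(u[i + 1] - u[i]));
        if (jump > tol)       return REFINE;
        if (jump < 0.1 * tol) return COARSEN;
        return KEEP;
    }

    int main(void) {
        double u[] = { 0.0, 0.01, 0.02, 1.0, 1.01, 1.02 }; /* a sharp front */
        for (int i = 0; i < 6; i++)
            printf("cell %d -> flag %d\n", i, criterion(u, i, 6, 0.1));
        return 0;
    }

In practice the criterion must also respect constraints of the hierarchy (e.g. a bounded refinement ratio between neighboring cells), which is where the external libraries come in.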
In a second step, distributed-memory parallel performance will be optimized. In particular, strategies for load balancing across MPI processes will be studied, especially for fluid/structure interaction problems.
Finally, particularly for explicit-in-time computations, spatially adaptive time stepping will have to be defined and implemented to cope with the multiple levels of refinement and the different wave propagation velocities.
These last two points are expected to give rise to publications in specialized scientific journals.
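To fix ideas on the time-stepping point, a classical way to organize it is hierarchical subcycling in the style of Berger-Colella: with a 2:1 refinement ratio, each finer level takes two half-steps per coarse step, followed by a coarse/fine synchronization. The sketch below (hypothetical, with prints in place of an actual solver) shows only the recursion structure:

    #include <stdio.h>

    /* Hierarchical subcycling: a cell at refinement level l advances with
       dt / 2^l, so fine levels take several substeps per coarse step. */
    static void advance(int level, int max_level, double t, double dt) {
        printf("level %d: advance from t=%.4f by dt=%.4f\n", level, t, dt);
        if (level < max_level) {
            /* 2:1 refinement: the finer level takes two half-steps ... */
            advance(level + 1, max_level, t, dt / 2.0);
            advance(level + 1, max_level, t + dt / 2.0, dt / 2.0);
            /* ... then coarse/fine fluxes are synchronized (refluxing). */
            printf("level %d: reflux at t=%.4f\n", level, t + dt);
        }
    }

    int main(void) {
        double dt_coarse = 0.1; /* CFL-limited step of the coarsest level */
        advance(0, 2, 0.0, dt_coarse);
        return 0;
    }

The difficulty targeted by the position lies in combining such subcycling with fluid/structure coupling and MPI load balancing, where the work per level changes as the mesh adapts.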