Global monitoring of offshore wind turbines using low-cost devices and simplified deployment methods

This project follows previous work on onshore wind turbine instrumentation with inertial sensor networks, whose data flows allow the detection of vibration modes specific to the wind turbine components (in particular the mast) and the real-time monitoring of these signals.
The objectives of this project are threefold: to extend this work to offshore wind turbines; to search for signatures in wider frequency bands; and to study the behavior of offshore platforms and their anchorages.
One of the challenges is to find the signatures of the rotating elements (the blades) without instrumenting them directly, since instrumenting these elements is more expensive and has a greater impact on the structure.
In addition, the sensor technology will be suitable for monitoring the fatigue life cycle of moving cable structures (the dynamic electrical connection cable and the anchoring) of an offshore wind turbine. The ultimate goal is to propose a global method for offshore wind turbine health monitoring.
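To illustrate the kind of signal processing involved, the sketch below (Python; the sampling rate, mode frequency and blade-passing frequency are purely illustrative, and the signal is synthetic rather than project data) estimates a Welch power spectrum from tower accelerometer data and picks low-frequency peaks as candidate mast-mode and blade-passing signatures.

    # Minimal sketch: locating structural and blade-passing peaks in tower
    # accelerometer data (synthetic signal; frequencies are illustrative only).
    import numpy as np
    from scipy.signal import welch, find_peaks

    fs = 100.0                      # sampling rate of the inertial sensor [Hz]
    t = np.arange(0, 600, 1 / fs)   # 10 minutes of data

    # Synthetic tower-top acceleration: a first fore-aft mast mode (~0.3 Hz)
    # plus a blade-passing (3P) component (~0.6 Hz for a 12 rpm, 3-blade rotor)
    # buried in broadband noise.
    signal = (0.5 * np.sin(2 * np.pi * 0.30 * t)
              + 0.2 * np.sin(2 * np.pi * 0.60 * t)
              + 0.3 * np.random.randn(t.size))

    # Welch periodogram: averaging over long windows resolves sub-hertz modes.
    f, pxx = welch(signal, fs=fs, nperseg=2 ** 14)

    # Keep the low-frequency band where mast modes and rotor harmonics live,
    # then pick the dominant spectral peaks as candidate modal signatures.
    band = f < 2.0
    peaks, _ = find_peaks(pxx[band], prominence=np.max(pxx[band]) * 0.05)
    print("candidate modal/rotor frequencies [Hz]:", np.round(f[band][peaks], 3))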

Post-doc: Convolutional neural networks – managing data uncertainty in the training database.

The aim is to develop algorithms able to take into account the uncertainty in the training database of neural networks. The project fits into the context of the dynamic state estimation of liquid-liquid extraction and benefits from its knowledge-based simulator as well as from industrial data. Indeed, the status of an industrial chemical process is accessible through operating parameters and the available monitoring measurements. However, since these measurements are inherently uncertain, it is necessary to make the data consistent with process knowledge. The goal is therefore to find the set of operating parameters (input of the knowledge-based simulator) that allows the model to best estimate the real process state, known through the monitoring measurements (output of the knowledge-based simulator). A convolutional neural network (CNN) is being developed in another post-doctoral project to solve this inverse problem, i.e. to recover the best input from the measured output. This will yield a consistent set of operating parameters and make the state of the process known during the dynamic regime of the liquid-liquid extraction process.
The first step is to evaluate the impact of the uncertainty of the operating parameters on the outputs of the knowledge-based model. This step will require connecting the knowledge-based model to URANIE, an internal platform developed by CEA ISAS. This knowledge must then be taken into account in the second part of the project: the uncertainty observed on the outputs should be fed back into the learning loop to improve the estimation of the operating parameters by the CNN. The impact of these uncertainties on the results computed by the CNN must also be assessed in order to trust the ability of the CNN to estimate the state of the process.
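A minimal sketch of this first step, with a generic analytic function standing in for the knowledge-based simulator and plain Monte Carlo sampling standing in for URANIE-style propagation (all parameter names, values and the response model below are hypothetical):

    # Minimal sketch: propagating operating-parameter uncertainty through a
    # knowledge-based model by Monte Carlo sampling. The 'simulator' function
    # is a hypothetical stand-in for the actual liquid-liquid extraction model.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulator(flow_rate, temperature):
        # Hypothetical steady-state response (NOT the real CEA model):
        # an extraction yield that depends smoothly on both parameters.
        return 0.9 * np.tanh(0.05 * flow_rate) * np.exp(-((temperature - 40) / 25) ** 2)

    # Nominal operating parameters and assumed (illustrative) uncertainties.
    n_samples = 10_000
    flow = rng.normal(loc=30.0, scale=2.0, size=n_samples)   # e.g. L/h
    temp = rng.normal(loc=40.0, scale=1.5, size=n_samples)   # e.g. degC

    outputs = simulator(flow, temp)

    # Output uncertainty induced by the input uncertainty: this is the spread
    # that would later be fed back into the CNN training loop.
    print(f"mean output = {outputs.mean():.4f}")
    print(f"std  output = {outputs.std():.4f}")
    print(f"95% interval = [{np.percentile(outputs, 2.5):.4f}, "
          f"{np.percentile(outputs, 97.5):.4f}]")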
Through this project, we are at the heart of the theme of digital simulation for the optimal control of complex systems.

Design of a safe and secure hypervisor in the context of a manycore architecture

The TSUNAMY project aims to think through the design of future manycore chips in a collaborative hardware/software approach. It will investigate how crypto-processors can be incorporated into such a chip, turning it into a heterogeneous architecture in which scheduling, resource allocation, resource sharing, and resource isolation all become concerns.

The LaSTRE laboratory has designed Anaxagoros, a micro-kernel that provides good properties in terms of safety and of integration of mixed-criticality applications, and is therefore well suited to the virtualization of operating systems. Making this virtualization software layer evolve in the context of the TSUNAMY project is the main goal of this post-doctoral proposal.

The first issue to address is the scalability of Anaxagoros on a manycore architecture. This system was designed with multicore scalability in mind: to reach the highest possible level of parallelism in a lock-free fashion, innovative techniques were proposed to minimize the number of synchronization points within the system. This is a first step, but scaling to manycore architectures raises new topics, such as cache coherency and non-uniform memory access, that require a focus on data locality as well. The second challenge will be to incorporate genuine security features into Anaxagoros, e.g. protection against covert channels or confidentiality guarantees. The third and final challenge, which will be addressed through interactions with the partners of the project, is to devise techniques that could be implemented directly in hardware in order to ensure that even a breach in what is usually considered trusted software will not allow an attacker to gain unauthorized access to data or let information leak.

Optimal multi-agent system management of a smart heat grid using thermal storage

The aim of this work is a major contribution to a software framework based on the coupling of the Modelica and Jade environments, which will make it possible to model, simulate and optimize the control of a smart heat grid through the development of dedicated thermal storage models: specification of the interfaces used to control the storage units in the grid, design of simplified models of the heat grid's most critical components (production, distribution/storage, consumption) to be integrated into agents, and design of consumption and production forecast models in order to manage anticipation and improve overall efficiency. Performance will be evaluated on a test case built in the Modelica simulation environment.
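As a purely illustrative Python sketch of the decision logic such a storage agent could embed (the actual framework couples Modelica models with Jade agents; all names and numbers below are hypothetical):

    # Minimal sketch of a thermal-storage agent: charge when forecast production
    # exceeds forecast consumption, discharge otherwise. Illustrative only.
    from dataclasses import dataclass

    @dataclass
    class StorageAgent:
        capacity_kwh: float
        level_kwh: float = 0.0

        def decide(self, forecast_production_kw, forecast_consumption_kw, dt_h=1.0):
            """Use the forecasts to anticipate the surplus or deficit over the next step."""
            surplus_kwh = (forecast_production_kw - forecast_consumption_kw) * dt_h
            if surplus_kwh >= 0:                      # store the anticipated surplus
                charge = min(surplus_kwh, self.capacity_kwh - self.level_kwh)
                self.level_kwh += charge
                return {"action": "charge", "kwh": charge}
            discharge = min(-surplus_kwh, self.level_kwh)  # cover the anticipated deficit
            self.level_kwh -= discharge
            return {"action": "discharge", "kwh": discharge}

    # A few hours of forecasts (illustrative values).
    agent = StorageAgent(capacity_kwh=500.0, level_kwh=100.0)
    production = [300, 350, 400, 380, 200, 150]   # kW
    consumption = [250, 260, 300, 420, 380, 300]  # kW
    for p, c in zip(production, consumption):
        print(agent.decide(p, c))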

Large scale visual recognition

This post-doc deals with the detection and recognition of objects in images and video streams at large scale. This is a fundamental task that is the subject of active research worldwide, including recent challenges in evaluation campaigns. The "large scale" aspect refers both to large databases (e.g. ten million images) and to a large number of concepts to recognize (e.g. 100 to 10,000). The work will concern both image description and classification.

At the description level, state-of-the-art techniques rely on local descriptors aggregated according to dictionaries of "visual words", possibly constructed using Fisher kernels. It is nevertheless necessary to re-encode these signatures efficiently in order to handle large databases. Regarding the learning of visual concepts or objects, many algorithms use support vector machines (SVMs), but other approaches are sometimes considered, such as those based on boosting or logistic regression.
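A minimal sketch of such a pipeline, using bag-of-visual-words aggregation followed by a linear SVM on random stand-in descriptors (it omits the Fisher-kernel variant and the signature re-encoding step):

    # Minimal sketch: bag-of-visual-words aggregation + linear SVM classification.
    # Local descriptors are random stand-ins for real ones (e.g. SIFT).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_images, descs_per_image, desc_dim, n_words = 200, 50, 64, 32

    # Fake local descriptors and binary labels (one concept vs. the rest).
    labels = rng.integers(0, 2, n_images)
    descriptors = [rng.normal(size=(descs_per_image, desc_dim)) + label
                   for label in labels]

    # 1) Build the visual-word dictionary by clustering local descriptors.
    kmeans = KMeans(n_clusters=n_words, n_init=4, random_state=0)
    kmeans.fit(np.vstack(descriptors))

    # 2) Aggregate each image into a normalized histogram of visual words.
    def bovw_signature(local_descs):
        words = kmeans.predict(local_descs)
        hist = np.bincount(words, minlength=n_words).astype(float)
        return hist / hist.sum()

    X = np.array([bovw_signature(d) for d in descriptors])

    # 3) Learn the visual concept with a linear SVM on the signatures.
    clf = LinearSVC(C=1.0).fit(X, labels)
    print("training accuracy:", clf.score(X, labels))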

The proposed position involves the research and development of efficient algorithms to find visual entities in very large databases. Several research directions are under consideration; they will be discussed with the selected candidate on the basis of his or her prior knowledge and of technical discussions.

Deployment of distributed consensus protocols on blockchains with Smart Contracts

The aim is to implement various distributed consensus protocols on both public and private blockchain platforms supporting smart contract technology. Techniques based on Proof-of-Stake and token management will be analyzed, and their level of security will be evaluated along with their energy consumption and the quality of the distribution of trust in the system. The techniques used to verify transactions on the Ethereum blockchain will be implemented, as well as other, lighter and less energy-consuming algorithms dedicated to "private" blockchains in which users are authenticated. The Hyperledger platform will be used to test the various distributed consensus protocols. New algorithms will be proposed, and the resulting solutions will be deployed for applications in the field of the Internet of Things.
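As a toy illustration of one of these building blocks, the sketch below implements stake-weighted validator selection, the core ingredient of a simplified Proof-of-Stake scheme (illustrative only; it does not use the Ethereum or Hyperledger APIs):

    # Toy Proof-of-Stake building block: pseudo-random, stake-weighted selection
    # of the validator that proposes the next block. Purely illustrative.
    import hashlib
    import random

    stakes = {"alice": 50, "bob": 30, "carol": 20}   # tokens locked by each validator

    def select_validator(stakes, previous_block_hash, round_number):
        """Pick a validator with probability proportional to its stake,
        seeded deterministically so every node reaches the same choice."""
        seed = hashlib.sha256(f"{previous_block_hash}:{round_number}".encode()).hexdigest()
        rng = random.Random(seed)
        validators, weights = zip(*stakes.items())
        return rng.choices(validators, weights=weights, k=1)[0]

    # Over many rounds, each validator proposes blocks in proportion to its stake.
    counts = {v: 0 for v in stakes}
    for r in range(10_000):
        counts[select_validator(stakes, "0xabc...", r)] += 1
    print(counts)   # roughly 50% / 30% / 20%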

Researcher in Artificial Intelligence applied to self-driven microfluidics

This postdoctoral position is part of the 2FAST project (Federation of Fluidic Autonomous labs to Speed-up material Tailoring), which is part of the PEPR DIADEM initiative. The project aims to fully automate the synthesis and online characterization of materials using microfluidic chips. These chips provide precise control and leverage digital advances to enhance materials chemistry outcomes. However, characterizing nano/micro-materials at this scale remains challenging due to its cost and complexity. The 2FAST project aims to exploit recent advances in the automation and instrumentation of microfluidic platforms to develop interoperable and automatically controlled microfluidic chips that enable the controlled synthesis of nanomaterials. The goal is to create a proof of concept for a microfluidic/millifluidic reactor platform that can produce noble metal nanoparticles continuously and at high throughput. To achieve this, feedback loops will be managed by artificial intelligence tools, which will monitor the reaction progress using information acquired online from spectrometric techniques such as UV-Vis, SAXS, and Raman. The proposed postdoctoral position focuses on the AI-related work: design of the feedback loops, creation of a signal database tailored for machine learning, and implementation of machine learning methods to connect the various data and/or control autonomous microfluidic devices.
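A minimal sketch of such a feedback loop, with a mock UV-Vis readout and a simple proportional controller standing in for the real instruments and AI tools (the response model, target value and parameter names are all hypothetical):

    # Minimal closed-loop sketch: adjust a flow rate so that a (mock) UV-Vis
    # plasmon peak reaches a target wavelength. The 'measure_peak_nm' function
    # stands in for the real online spectrometric acquisition.
    import numpy as np

    rng = np.random.default_rng(0)

    def measure_peak_nm(flow_rate_ul_min):
        """Hypothetical instrument response: peak position shifts with flow rate,
        with measurement noise. NOT a real physical model."""
        true_peak = 500.0 + 0.8 * (flow_rate_ul_min - 50.0)
        return true_peak + rng.normal(scale=0.5)

    target_nm = 520.0          # desired plasmon peak (illustrative)
    flow = 40.0                # initial flow rate [uL/min]
    gain = 0.5                 # proportional feedback gain

    for step in range(20):
        peak = measure_peak_nm(flow)
        error = target_nm - peak
        flow += gain * error / 0.8          # proportional correction of the setpoint
        print(f"step {step:2d}: flow = {flow:6.2f} uL/min, peak = {peak:6.2f} nm")
        if abs(error) < 1.0:                # close enough to the target: stop
            break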

Large-scale depletion calculations with Monte Carlo neutron transport code

One of the main goals of modern reactor physics is to perform accurate multi-physics simulations of the behaviour of a nuclear reactor core, with a detailed description of the geometry at the fuel pin level. Multi-physics calculations in nominal conditions imply a coupling between a transport equation solver for the neutron and precursor populations, thermal and thermal-hydraulics solvers for heat transfer, and a Bateman solver for computing the isotopic depletion of the nuclear fuel during a reactor cycle. The purpose of this post-doc is to carry out such a fully-coupled calculation using the PATMOS Monte Carlo neutron-transport mini-app and the C3PO coupling platform, both developed at CEA. The target system is a core of the size of a commercial reactor.
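To give a flavour of the depletion part of the coupling, the sketch below solves the Bateman equations dN/dt = M N for a hypothetical three-nuclide chain by matrix exponentiation (the rates are illustrative; this is not PATMOS or C3PO code):

    # Minimal sketch: depletion of a hypothetical 3-nuclide chain A -> B -> C
    # by solving the Bateman equations dN/dt = M N with a matrix exponential.
    # Effective removal/production rates are illustrative, not evaluated data.
    import numpy as np
    from scipy.linalg import expm

    lam_a = 1e-5        # effective removal rate of nuclide A [1/s] (illustrative)
    lam_b = 5e-6        # effective removal rate of nuclide B [1/s] (illustrative)

    # Depletion matrix: off-diagonal terms feed daughters from their parents.
    M = np.array([[-lam_a,   0.0, 0.0],
                  [ lam_a, -lam_b, 0.0],
                  [  0.0,   lam_b, 0.0]])

    N0 = np.array([1.0e24, 0.0, 0.0])    # initial atom densities [atoms/cm^3]

    # Evolve the composition over one (illustrative) depletion step of 10 days.
    dt = 10 * 24 * 3600.0
    N = expm(M * dt) @ N0
    print("A, B, C after 10 days:", N)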

Attack detection in the distributed control of the electrical grid

To enable the emergence of flexible and resilient energy networks, solutions must be found to the challenges facing these networks, in particular digitization, the protection of the data flows it will entail, and cybersecurity issues.
In the TASTING project, and in collaboration with RTE, the French electricity transmission network operator, your role will be to analyze data protection for all the parties involved. The aim is to verify security properties on data in distributed systems, while taking into account the uncertainties that such systems induce.
To this end, you will develop a tool-based methodology for protecting the data of power grid stakeholders. The approach will be based on formal methods, in particular runtime verification, applied to a distributed control system.
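A minimal sketch of what runtime verification can look like in this setting: a monitor that scans a stream of control events and flags any actuation command issued by a source that has not been authenticated first (the event names and the property itself are illustrative, not those of the TASTING use cases):

    # Minimal runtime-verification sketch: a monitor observing a trace of control
    # events and flagging any actuation command that was not authenticated first.
    # Event names and the property are illustrative.

    def monitor(trace):
        authenticated = set()          # sources whose session has been authenticated
        verdicts = []
        for source, event in trace:
            if event == "auth_ok":
                authenticated.add(source)
            elif event == "actuate":
                ok = source in authenticated
                verdicts.append((source, "OK" if ok else "VIOLATION"))
        return verdicts

    # Example trace from a (hypothetical) distributed control system.
    trace = [
        ("substation_A", "auth_ok"),
        ("substation_A", "actuate"),
        ("substation_B", "actuate"),   # command issued before authentication
    ]
    print(monitor(trace))    # [('substation_A', 'OK'), ('substation_B', 'VIOLATION')]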

This postdoc position is part of the TASTING project, which aims to meet the key challenges of modernizing and securing power systems. This 4-year project, which started in 2023, addresses axis 3 of the PEPR TASE call “Technological solutions for the digitization of intelligent energy systems”, co-piloted by CEA and CNRS, which aims to generate innovations in the fields of solar energy, photovoltaics and floating wind power, and to support the emergence of flexible and resilient energy networks. The targeted scientific challenges concern the ICT infrastructure, considered a key element and solution provider for the profound transformations that our energy infrastructures will undergo in the decades to come.
The project involves two national research organizations, INRIA and CEA (through its technological research institute CEA-List). Also involved are seven academic laboratories, including G2Elab, GeePs, IRIT, L2EP, L2S and SATIE, as well as an industrial partner, RTE, which supplies various use cases.

ML-assisted RF filter design
