Formal verification for quantum program compilation

While recent progress in quantum hardware opens the door to significant speedups in cryptography as well as other key areas (biology, chemistry, optimization, machine learning, etc.), quantum algorithms are still hard to implement correctly, and the validation of such quantum programs is a challenge.

Adapting best practices from formal verification for classical computing -- deductive verification and first-order logic reasoning -- our recent development, Qbricks, enables formal specification (circuit well-formedness, functional behavior, complexity) and verification for quantum programming with ideal qubits.

The goal of this PhD position is to extend this practice to the quantum compilation chain and to make it usable from mainstream quantum programming environments (high-level, imperative programming paradigms).

Possibilities include, among others, developing (1) a formally verified intermediate representation for fault-tolerant implementations, (2) reasoning tools for certifying quantum circuit transformation functions, and (3) automatic verification tools for equivalence and/or proximity relations between quantum circuits.
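
As a concrete illustration of item (3), equivalence of small circuits can be phrased as equality of their unitaries up to a global phase. The sketch below is purely illustrative (plain numpy, independent of Qbricks or of any particular verification framework); in the thesis such relations would be established symbolically and for parameterized circuits, not by enumerating matrices.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
Z = np.array([[1, 0], [0, -1]])                # Pauli-Z
X = np.array([[0, 1], [1, 0]])                 # Pauli-X

def circuit_unitary(gates):
    """Compose single-qubit gates in application order (first gate applied first)."""
    u = np.eye(2, dtype=complex)
    for g in gates:
        u = g @ u
    return u

def equivalent_up_to_phase(u, v, tol=1e-12):
    """True if u = exp(i*phi) * v for some global phase phi."""
    idx = np.unravel_index(np.argmax(np.abs(v)), v.shape)  # a reliably non-zero entry of v
    phase = u[idx] / v[idx]
    return np.allclose(u, phase * v, atol=tol)

# A textbook rewrite rule preserved by compilation: H Z H is equivalent to X.
assert equivalent_up_to_phase(circuit_unitary([H, Z, H]), X)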

Definition of an asynchronous on-the-fly data compression model on accelerators for HPC

This thesis is related to high-performance computing for numerical simulation of complex physical phenomena.
The CEA provides hardware and software resources to achieve the required computing power.
We have witnessed the advent of accelerators, which brings new problems. In particular, memory management becomes critical for achieving exascale performance, as the amount of memory available per computing unit keeps decreasing.
This problem affects all areas that handle large volumes of data, so many aspects of this thesis will be general and of broad interest.

This thesis will aim to propose an asynchronous model for making data available through compression/decompression techniques. It should be efficient enough to be used "on the fly" (during computations, without slowing them down), allowing memory constraints to be relaxed.
The targeted codes are iterative and alternate between different phases. Ideally, all computations will be performed on accelerators, leaving CPU resources idle. The proposed model should take advantage of these characteristics. The final goal will be to integrate the work into a representative code in order to evaluate the gains in an industrial context.
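
A minimal sketch of the "on the fly" idea in plain Python (zlib and a worker thread stand in for an accelerator-side compression engine, an assumption made purely for illustration): the data produced by one iteration is compressed asynchronously while the next iteration runs, and decompressed on demand, so memory pressure is reduced without stalling the computation. The thesis itself targets accelerator-resident data; only the overlap pattern is illustrated here.

import zlib
import numpy as np
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=1)   # background "compression stream"
archive = {}                               # compressed snapshots, much smaller than the raw arrays

def compress_snapshot(step, array):
    archive[step] = zlib.compress(array.tobytes(), 1)      # fast, low-ratio setting

def restore_snapshot(step, shape, dtype):
    return np.frombuffer(zlib.decompress(archive[step]), dtype=dtype).reshape(shape)

field = np.random.rand(512, 512)
pending = None
for step in range(10):
    if pending is not None:
        pending.result()                   # make sure the previous snapshot is safely stored
    pending = pool.submit(compress_snapshot, step, field.copy())
    field = np.cos(field) + 0.01 * step    # stand-in for the real computation phase
pending.result()

old_state = restore_snapshot(3, field.shape, field.dtype)  # decompress a past state on demand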

Multi-architecture Adaptive Mesh Refinement for multi-material compressible hydrodynamics simulation

CEA DAM is actively developing scientific software in computational fluid dynamics (CFD) for the numerical simulation of compressible and multi-material flows. Such numerical tools require parallel programming models designed for the efficient use of large supercomputers. From the algorithmic point of view, the fluid dynamics equations are discretized and solved using an adaptive mesh refinement (AMR) strategy, which reduces the computational cost of such simulations, in particular the number of cells (and therefore the memory footprint), by concentrating the computational workload on the areas of interest (discontinuities, shocks, multi-fluid interfaces, etc.).
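
At the heart of any AMR strategy is a refinement criterion that flags the cells where the solution varies sharply. The following minimal numpy illustration (not CEA code; the field and threshold are made up) flags cells around a density discontinuity, mimicking refinement around a shock front or a material interface. In a block-structured AMR code, such flags would then be clustered into patches refined by a fixed ratio, and the hierarchy rebuilt periodically as the flow evolves.

import numpy as np

def refinement_flags(density, threshold=0.1):
    """Boolean mask of cells whose normalized density gradient exceeds the threshold."""
    gx, gy = np.gradient(density)
    indicator = np.hypot(gx, gy) / (np.abs(density) + 1e-12)
    return indicator > threshold

# Smooth background with a sharp circular discontinuity: only cells near the
# discontinuity are flagged, so refinement concentrates where it matters.
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
density = np.where(x**2 + y**2 < 0.25, 10.0, 1.0)
flags = refinement_flags(density)
print(f"{flags.mean():.1%} of cells flagged for refinement")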

Over the past fifteen years, with the appearance of graphics processing units (GPUs), the hardware architectures used in high-performance computing (HPC) have evolved profoundly. This PhD thesis is about designing a parallel implementation of AMR techniques for multi-material flows, with the aim of using a GPU-based supercomputer as efficiently as possible. After the required numerical verification and validation process, the developed code will be used to perform numerical simulations of a blast wave and its interaction with surrounding structures.

Privacy-preserving federated learning over vertically partitioned data from heterogeneous participants

Federated learning enables multiple participants to collaboratively train a global model without sharing their data: only model parameters are exchanged between the participants and the server. In vertical federated learning (VFL), the participants' datasets share similar samples but have different features. For instance, companies and institutions from different fields that own different features of overlapping samples can collaborate to solve a machine learning task. Although the raw data remain private, VFL is still vulnerable to attacks such as label and feature inference attacks. Various privacy measures (e.g., differential privacy, homomorphic encryption) have been investigated to prevent privacy leakage. Choosing the appropriate measures is challenging, as it depends on the VFL architecture and on which elements must be protected and to what degree (e.g., local models, intermediate results, learned models). The variability of each participant's system can also result in high latency and asynchronous updates, affecting training efficiency and model effectiveness.
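
A minimal numpy sketch of the vertical setting (illustrative only, with no privacy mechanism yet): two participants hold different feature columns of the same samples, each computes a partial score, and only these partial scores and the corresponding residuals -- never the raw features -- travel to and from the server that holds the labels. Privacy measures such as differential privacy or homomorphic encryption would then be applied precisely to these exchanged quantities.

import numpy as np

rng = np.random.default_rng(0)
n = 200
X_a = rng.normal(size=(n, 3))          # participant A's features
X_b = rng.normal(size=(n, 2))          # participant B's features (same samples, different columns)
y = (X_a[:, 0] + X_b[:, 1] > 0).astype(float)   # labels held by the server

w_a, w_b, lr = np.zeros(3), np.zeros(2), 0.1
for _ in range(200):
    score = X_a @ w_a + X_b @ w_b      # each party sends only its partial score
    p = 1.0 / (1.0 + np.exp(-score))   # server-side logistic aggregation
    err = p - y                        # server returns the residual
    w_a -= lr * X_a.T @ err / n        # local updates: features never leave the participants
    w_b -= lr * X_b.T @ err / n

accuracy = (((X_a @ w_a + X_b @ w_b) > 0) == (y > 0.5)).mean()
print("training accuracy:", accuracy)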

The aim of this thesis is to propose methods that enable privacy-preserving VFL while taking the heterogeneity of the participants into account. First, the candidate will study VFL model architectures and privacy measures in order to propose privacy-preserving protocols for VFL. Second, the candidate will investigate the impact of the heterogeneity of the participants' systems, such as computation and communication resources, to devise solutions that make the VFL protocols robust to such heterogeneity. Third, the trade-offs among effectiveness, privacy, and efficiency in VFL will be explored to propose a practical framework for adjusting the protocols to the requirements of a given machine learning problem.

Discovery of new chromogenic probes for toxic compounds using Chemistry-Trained Machine Learning

Today's national and international situation justifies new research on the colorimetric detection of toxic and polluting gases (referred to as analytes in the following). For analytes that are already known and studied, improving detection capabilities means increasing contrast and selectivity. For potential new analytes, it is also important to be prepared to identify specific chromogenic probes rapidly. The objective of this thesis is to discover new chromogenic probes using computational chemistry.
First stage of the thesis: training of the model (ML/AI) on the available database. This part of the thesis will focus on establishing a precise and robust model to classify the large experimental database built up by our laboratory's previous work. This involves correlating the colorimetric results with the structures and chemical properties of the molecules, described by state-of-the-art methods (e.g., https://pubs.acs.org/doi/10.1021/acs.chemrev.1c00107). At the end of this learning process, we will have a predictor (SVM, LCA, PCA…) validated on our data.
Second stage: use of the predictor model to screen in silico several hundred thousand candidate probe molecules from commercial chemical libraries (and others), characterized by their chemical structure and property descriptors as in the first stage. After this initial screening, DFT prediction of the chromogenic response will be used to refine the selection of the best candidate molecules.
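
A hedged sketch of how stages 1 and 2 could articulate computationally: train a classifier on descriptor vectors labelled with the observed colorimetric response, then rank a large virtual library by predicted response probability before the DFT refinement step. The descriptors below are random placeholders standing in for the real structure/property descriptors; the library size and model choice are illustrative.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_lab = rng.normal(size=(500, 20))                          # descriptors of probes tested in the lab
y_lab = (X_lab[:, 0] + 0.5 * X_lab[:, 3] > 0).astype(int)   # 1 = colour change observed (synthetic)

model = make_pipeline(StandardScaler(), SVC(probability=True))
print("cross-validated accuracy:", cross_val_score(model, X_lab, y_lab, cv=5).mean())

# Stage 2: in-silico screening of a (here synthetic) candidate library.
model.fit(X_lab, y_lab)
X_library = rng.normal(size=(100_000, 20))
scores = model.predict_proba(X_library)[:, 1]
shortlist = np.argsort(scores)[::-1][:200]                  # best candidates passed on to DFT
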
Third stage: definition and implementation of an experimental chemical testing campaign. A fast organic synthesis platform based on HTE (high-throughput experimentation), i.e. on the miniaturization and parallelization of chemical reactions to streamline synthesis reactions and tests, will save considerable time while significantly increasing the number of combinations that can be explored. HTE also enables the synthesis of libraries of analogous compounds. Following these massive tests, a second version of the predictor will be trained, leading to the discovery of a new generation of chromogenic molecules.

Development of a numerical model of the POSEIDON irradiator for qualification in Co-60 radiosterilisation

The CEA/Saclay research centre has several 60Co irradiation facilities dedicated to gamma irradiation, serving both CEA and industrial R&D needs in fields such as nuclear energy, defence, electronics and space, as well as health applications.

In the field of health more specifically, an irradiator is used to radiosterilise medical devices such as hip prostheses or orthopaedic screws and plates on behalf of their suppliers, i.e. to neutralise microbiological contaminants (such as bacteria, viruses, spores) using ionising radiation. The great advantage of gamma sterilization, compared with other sterilization alternatives (gas or cold immersion in liquid chemicals), is that the medical devices do not have to be removed from their sealed pouches; they are processed directly through their packaging.

Radiosterilization of medical devices is a highly demanding process, in line with the requirements of ISO 13485 and ISO 11137. Firstly, the doses delivered must be neither too low, to ensure product sterility, nor too high, to avoid altering their integrity. Secondly, three qualification stages are required to guarantee validation of irradiation sterilization processes. The first two, known as installation and operational qualification, are respectively designed to demonstrate that the sterilization equipment has been installed in accordance with its specifications and is operating correctly, delivering appropriate doses within the limits of defined acceptance criteria. In particular, operational qualification consists in characterizing the irradiator in terms of dose distribution and reproducibility, by considering the volumes to be irradiated filled with a homogeneous material, with envelope densities representative of the products to be treated. Finally, the third qualification stage, known as performance qualification, must demonstrate, specifically for the medical products to be treated, that the equipment operates consistently, in accordance with predetermined criteria, and delivers doses within the specified dose range, thus producing a product that meets the specified requirement for sterility.
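
For reference (a standard formulation in radiation-processing practice rather than something taken from the text above), these dose constraints are commonly summarized through the minimum and maximum absorbed doses found in the processed load,

\[
  D_{\min} \ge D_{\mathrm{ster}}, \qquad
  D_{\max} \le D_{\mathrm{acc}}, \qquad
  \mathrm{DUR} = \frac{D_{\max}}{D_{\min}},
\]

where D_ster is the dose required for sterility, D_acc the maximum dose acceptable for product integrity, and the dose uniformity ratio DUR quantifies the dose homogeneity that operational qualification must characterize.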

Depending on the supplier, irradiation packaging cartons are generally filled with a variety of different medical products, corresponding to a wide range of sizes and weights. The effects on spatial dose distribution of all possible product loading configurations should therefore be examined, including for different fill rates of the cartons on the irradiator's dynamic conveyor. Finally, it should be noted that the qualification processes must be repeated following any modification likely to affect dose or dose distribution, and therefore each time the sources are restocked. These processes are currently carried out exclusively by measurement, using a multitude of dosimeters appropriately distributed within and on the surface of the packages.

The Laboratoire des Rayonnements Appliqués (DRMP/SPC/LABRA), in charge of the POSEIDON gamma irradiator dedicated to the radiosterilization of medical equipment at CEA/Saclay, would like a digital tool enabling these validation processes to be carried out by simulation. One of the major benefits expected from such a tool is a considerable reduction in the downtime of the POSEIDON irradiator imposed by the experimental validation phases.

The aim of the present thesis is to implement a numerical model of the POSEIDON irradiator, enabling the validation phases to be reproduced by simulation as quickly as possible while guaranteeing results of the desired accuracy. This work will be carried out at the DM2S/SERMA/CP2C laboratory (CEA/Saclay), with regular exchanges with the LABRA laboratory. CP2C is specialized, among other things, in radiation protection studies using numerical simulation.

The thesis, divided into three stages, will thus explore an alternative to the validation approach currently carried out experimentally:

• The first stage will involve the development of a numerical model of the POSEIDON irradiator, integrating the dynamic nature of radiosterilization treatments. This numerical model will be based on a calculation methodology to be decided during the thesis (Monte Carlo or deterministic), striking a compromise between the quality of the results and the speed of execution; a toy illustration of the Monte Carlo principle is given after this list. For this stage, the Monte Carlo radiation transport code TRIPOLI-4® will be used as a reference, with comparisons made against other numerical tools such as MCNP®, PENELOPE, GEANT4, NARMER-1, etc.;

• The second stage will successively involve the validation of the selected numerical model by comparison with experimental measurements, to be defined and carried out during the thesis, and its application to the calculation of operational qualification processes and performance for different families of supplier cartons. Regarding the validation of the calculations, the instrumentation used for gamma dose measurements will be numerically modelled and analysed, taking into account all the physical phenomena involved in the absorbed dose (photon and electron doses). The aim is to consolidate calculation/measurement comparisons for the experiments carried out during the thesis;

• The final step will be to analyze the contribution of the numerical model in relation to the experimental approach. This computational approach will nevertheless need to be optimized in terms of calculation time, in order to facilitate the sensitivity analyses to be carried out.
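
The following toy sketch (pure Python, unrelated to how TRIPOLI-4® or the other codes actually work; the attenuation coefficient and geometry are illustrative) shows the Monte Carlo principle behind the first stage: individual photon histories are sampled, and the statistics of their interaction depths in a homogeneous product slab yield a crude depth-dose estimate.

import numpy as np

rng = np.random.default_rng(42)
mu = 0.06            # total attenuation coefficient in 1/cm (illustrative value)
thickness = 40.0     # product slab thickness in cm
n_photons = 100_000
bins = np.linspace(0.0, thickness, 41)      # 1 cm depth bins

# Sample the depth of first interaction; photons interacting beyond the slab escape.
depths = rng.exponential(scale=1.0 / mu, size=n_photons)
absorbed = depths[depths < thickness]
dose_profile, _ = np.histogram(absorbed, bins=bins)

print("fraction of photons interacting in the slab:", len(absorbed) / n_photons)
print("entry-to-exit ratio of the crude depth-dose profile:", dose_profile[0] / dose_profile[-1])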

During the thesis, various directions of research will be investigated, such as improving the modelling of reflections during photon transport in a closed environment (PAGURE irradiator casemate; use of deep learning techniques for deterministic codes), implementing stochastic geometries to model the contents of the packaging to be irradiated, and improving algorithms to reduce computation times.

Dynamic Assurance Cases for Autonomous Adaptive Systems

Providing assurances that autonomous systems will operate in a safe and secure manner is a prerequisite for their deployment in mission-critical and safety-critical application domains. Typically, assurances are provided in the form of assurance cases, which are auditable and reasoned arguments that a high-level claim (usually concerning safety or another critical property) is satisfied, given a body of evidence concerning the context, design, and implementation of a system. Assurance case development is traditionally an analytic activity carried out off-line, prior to system deployment, and its validity relies on assumptions and predictions about system behavior (including interactions with the environment). However, it has been argued that this is not a viable approach for autonomous systems that learn and adapt in operation. The proposed PhD will address the limitations of existing assurance approaches by proposing a new class of security-informed safety assurance techniques that continually assess and evolve the safety reasoning, concurrently with the system, to provide through-life safety assurance. That is, safety assurance will be provided not only during initial development and deployment, but also at runtime, based on operational data.

Modeling and Simulation of Human Behavior for Human-Centric Digital Twins

Thanks to synchronized virtual representations, digital twins are a means to produce analyses, predictions and optimizations of real-world systems. However, some of these systems engage so tightly with humans that the latter play a decisive role in the system's operation. This is the case, for example, in Industry 5.0 or in the control of critical systems, where the quality of the collaboration between humans and machines depends on anticipating their respective actions, interactions and decisions. Thus, to improve the accuracy of predictions and broaden applicability across fields, it is necessary, building on knowledge from the human and social sciences, to develop digital twins that account for the complexity and richness of human behaviors (decision-making processes, interactions, emotions, etc.). These behavioral models may notably rely on machine learning, data mining, agent-based modeling and knowledge engineering. After identifying the useful human behavior models, we will study their conceptual articulation and technical integration with the models of cyber-physical entities in the digital twin system. Additionally, we will explore how digital twin services are impacted and can be revised to account for these human-centric aspects. Finally, we will evaluate the effectiveness of human-centric digital twins by implementing experiments on representative real cases in various application fields.
This research work aims to make the following contributions:
• The development of an approach based on human behavior models to achieve human-centric digital twins.
• New knowledge on the impact of human behavior on the control of a system and vice versa.
• Practical applications and guidelines for using human-centric digital twins in real-world scenarios.
This PhD will be carried out in Grenoble.

Advanced type-based static analysis for operating system verification

In recent work [RTAS 2021], we demonstrated the benefits of a type-guided static analysis for low-level system programs, to the point of being able to automatically verify the absence of privilege escalation in an embedded operating system kernel as a consequence of the type safety of the kernel code. Memory management is a particularly difficult problem when analyzing system programs or, more broadly, programs written in C, and type-based analysis provides a satisfactory solution with wide applicability (e.g. data structure management code [VMCAI 2022], dynamic language runtimes, performance-oriented application code, etc.).

The aim of this thesis is to extend the range of applications of type-based static analysis.
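
To fix intuitions, here is a deliberately tiny, self-contained illustration in Python (unrelated to the actual analyzer of [RTAS 2021]) of what a type-based analysis computes: abstract values are simply types, and programs that would use an integer where a pointer is required are rejected -- the kind of type-safety argument from which stronger properties, such as the absence of privilege escalation, can then be derived.

def type_of(expr, env):
    """Expressions are nested tuples; abstract values are just the types 'int' and 'ptr'."""
    tag = expr[0]
    if tag == "const":
        return "int"
    if tag == "var":
        return env[expr[1]]
    if tag == "deref":                       # *e is only legal on a pointer
        if type_of(expr[1], env) != "ptr":
            raise TypeError("dereference of a non-pointer value")
        return "int"
    if tag == "add":                         # keep arithmetic on integers only
        if any(type_of(e, env) != "int" for e in expr[1:]):
            raise TypeError("arithmetic on a pointer value")
        return "int"
    raise TypeError(f"unknown expression {expr!r}")

env = {"x": "int", "p": "ptr"}
print(type_of(("deref", ("var", "p")), env))                      # accepted: *p has type int
try:
    type_of(("deref", ("add", ("var", "x"), ("const", 4))), env)  # rejected: *(x + 4)
except TypeError as err:
    print("analysis rejects the program:", err)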

Design of algorithms to optimize RADAR beam control

The arrival on the market of a new generation of 4D imaging radars brings new opportunities and challenges for the development of data processing algorithms. These new sensors, geared towards the autonomous vehicle market, offer greater resolution thanks to a larger number of antennas. However, this implies an increase in the amount of data to be processed, which requires significant computing resources.
The aim of this thesis is to develop algorithms that optimize radar resolution while limiting computational cost, in order to embed the processing as close as possible to the radar. To achieve this, beamforming techniques will be used to control the shape and direction of the radar beam, so as to concentrate the energy in the regions deemed relevant. One of the challenges is therefore to create a high-performance feedback loop that controls the radar antennas according to the scene observed in previous measurements.
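
A minimal numpy sketch of the beamforming idea referred to above (array geometry and steering angle are illustrative): for a uniform linear array, applying per-antenna phase shifts steers the main lobe towards a chosen direction, which is how the feedback loop can concentrate energy on regions of interest identified in previous measurements.

import numpy as np

n_antennas = 16
spacing = 0.5            # element spacing in wavelengths (half-wavelength array)
steer_deg = 20.0         # direction selected by the feedback loop

k = np.arange(n_antennas)
# Steering weights: conjugate of the array response in the chosen direction.
weights = np.exp(-2j * np.pi * spacing * k * np.sin(np.radians(steer_deg)))

# Resulting beam pattern over all angles: the peak sits at the steered direction.
angles = np.radians(np.linspace(-90, 90, 721))
responses = np.exp(2j * np.pi * spacing * np.outer(np.sin(angles), k))
pattern = np.abs(responses @ weights) / n_antennas
print("peak response at", round(float(np.degrees(angles[np.argmax(pattern)])), 2), "degrees")
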
This thesis will take an experimental approach, using a radar owned by the laboratory. Simulation tools will also be used to test hypotheses and go beyond the possibilities offered by the equipment.
