Automation of quantum computing kernel writing for quantum applications
The framework of Hamiltonian simulation opens up a new range of computational approaches for quantum computing. These approaches can be developed across all relevant fields of quantum computing applications, including partial differential equations (electromagnetism, fluid mechanics, etc.), quantum machine learning, finance, and various methods for solving optimization problems (both heuristic and exact).
The goal of this thesis is to identify a framework in which these approaches, based on Hamiltonian simulation or block-encoding techniques, are feasible and can be written in an automated way.
This work could extend to the prototyping of a code generator, which would be tested on practical cases in collaboration with European partners (including a few months of internship within their teams).
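To make the core primitive concrete, here is a minimal sketch in plain NumPy/SciPy (an illustration, not an output of the envisioned code generator): it approximates the evolution operator exp(-iHt) for a Hamiltonian H = H1 + H2 with non-commuting terms by first-order Trotterization, whose error shrinks as the number of slices n grows. The two-qubit Hamiltonian is an arbitrary toy choice.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Toy two-qubit Hamiltonian H = H1 + H2 with non-commuting terms
H1 = np.kron(X, X)
H2 = np.kron(Z, I) + np.kron(I, Z)
H = H1 + H2

def trotter(t, n):
    """First-order Trotter approximation of exp(-iHt) using n slices."""
    U1 = expm(-1j * H1 * t / n)
    U2 = expm(-1j * H2 * t / n)
    return np.linalg.matrix_power(U2 @ U1, n)

exact = expm(-1j * H * 1.0)
for n in (1, 10, 100):
    err = np.linalg.norm(trotter(1.0, n) - exact, 2)
    print(f"n={n:4d}  operator-norm error = {err:.2e}")  # error decreases roughly as 1/n
```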
Learning world models for advanced autonomous agents
World models are internal representations of the external environment that an agent can use to interact with the real world. They are essential for understanding the physics that govern real-world dynamics, making predictions, and planning long-horizon actions. World models can be used to simulate real-world interactions and enhance the interpretability and explainability of an agent's behavior within this environment, making them key components for advanced autonomous agent models.
Nevertheless, building an accurate world model remains challenging. The goal of this PhD is to develop methodologies for learning world models and to study their use in the context of autonomous driving, particularly for motion forecasting and for building autonomous navigation agents.
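As a minimal illustration of the idea (a toy linear model, not the methodology this PhD will develop), the sketch below fits a one-step dynamics model s_{t+1} ≈ W [s_t, a_t] from logged transitions by least squares, then "imagines" a trajectory by rolling the learned model forward without touching the environment. All dynamics and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown ground-truth dynamics, used here only to generate logged interactions
A_true = np.array([[1.0, 0.1], [0.0, 0.9]])
B_true = np.array([[0.0], [0.1]])

states, actions, next_states = [], [], []
s = np.zeros(2)
for _ in range(500):
    a = rng.uniform(-1.0, 1.0, size=1)
    s_next = A_true @ s + B_true @ a + 0.01 * rng.standard_normal(2)
    states.append(s); actions.append(a); next_states.append(s_next)
    s = s_next

# Fit the world model s_{t+1} ~= W @ [s_t, a_t] by least squares
X = np.hstack([np.array(states), np.array(actions)])
Y = np.array(next_states)
W = np.linalg.lstsq(X, Y, rcond=None)[0].T

def rollout(s0, action_seq):
    """Predict ('imagine') a trajectory using only the learned model."""
    traj, s = [s0], s0
    for a in action_seq:
        s = W @ np.concatenate([s, a])
        traj.append(s)
    return np.array(traj)

print(rollout(np.zeros(2), [np.array([0.5])] * 5))
```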
Secure and Agile Hardware/Software Implementation of new Post-Quantum Cryptography Digital Signature Algorithms
Cryptography plays a fundamental role in securing modern communication systems by ensuring confidentiality, integrity, and authenticity. Public-key cryptography, in particular, has become indispensable for secure data exchange and authentication processes. However, the advent of quantum computing poses an existential threat to many of the traditional public-key cryptographic algorithms, such as RSA, DSA, and ECC, which rely on problems like integer factorization and discrete logarithms that quantum computers can solve efficiently. Recognizing this imminent challenge, the National Institute of Standards and Technology (NIST) initiated in 2016 a global effort to develop and standardize Post-Quantum Cryptography (PQC). After three rigorous rounds of evaluation, NIST announced its first set of standardized algorithms in 2022. While these algorithms represent significant progress, NIST has expressed an explicit need for additional digital signature schemes that leverage alternative security assumptions, emphasizing the importance of schemes that offer shorter signatures and faster verification times to enhance practical applicability in resource-constrained environments. Building on this foundation, NIST opened a new competition to identify additional general-purpose signature schemes. The second-round candidates, announced in October 2024, reflect a diverse array of cryptographic families.
This research focuses on the critical intersection of post-quantum digital signature algorithms and hardware implementations. As the cryptographic community moves toward adoption, the challenge lies not only in selecting robust algorithms but also in deploying them efficiently in real-world systems. Hardware implementations, in particular, must address stringent requirements for performance, power consumption, and security, while also providing the flexibility to adapt to multiple algorithms, both those already standardized and those still under evaluation. Such agility is essential to future-proof systems against the uncertainty inherent in cryptographic transitions.
The primary objective of this PhD research is to design and develop hardware-agile implementations of post-quantum digital signature algorithms. The focus will be on supporting multiple algorithms within a unified hardware framework, enabling seamless adaptability to the evolving needs of cryptographic standards. This involves an in-depth study of the leading candidates from the second round of NIST's additional signature competition, as well as the already standardized algorithms, to understand their distinct computational requirements and security properties. Special attention will be given to designing modular architectures that can support different signature schemes, ensuring versatility and extensibility. The research will also explore optimizations for resource efficiency, balancing trade-offs between performance, power consumption, and area utilization. Additionally, resilience against physical attacks (side-channel attacks and fault injection attacks) will be a key consideration in the design process. This PhD project will be conducted within the PEPR PQ-TLS project, in collaboration with the TIMA laboratory (Grenoble), the Agence nationale de la sécurité des systèmes d'information (ANSSI), and INRIA.
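For intuition on signatures whose security rests solely on hash-function assumptions (one of the alternative assumption families represented in the competition), here is a toy Lamport one-time signature in Python. It is an illustration only, not one of the NIST candidates, and each key pair may sign a single message.

```python
import hashlib, secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    """Lamport one-time keys: two random preimages per message bit;
    the public key is their hashes. Keys must be used only once."""
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(x) for x in pair] for pair in sk]
    return sk, pk

def bits(msg):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal exactly one preimage per bit of the message digest
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"post-quantum hello")
print(verify(pk, b"post-quantum hello", sig))  # True
print(verify(pk, b"tampered message", sig))    # False (with overwhelming probability)
```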
Software support for sparse computation
The performance of computers has become limited by data movement in the fields of AI, HPC, and embedded computing. Hardware accelerators do exist to handle data movement in an energy-efficient way, but no programming language lets them be exploited from the code that expresses the computation.
It is up to the programmer to configure DMAs explicitly, to issue data transfers through function calls, and to analyze the program by hand to identify memory bottlenecks.
In addition, mainstream compilers were designed in the 1980s, when memory operated at the same frequency as the computing cores, and their optimization models still reflect that assumption.
The aim of this thesis will be to integrate into a compiler the ability to perform optimizations driven by data transfers.
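As an illustration of the kind of transformation such a compiler would automate, the sketch below contrasts a naive matrix multiplication with a tiled version that keeps small blocks resident in fast memory while they are reused. On a DMA-based accelerator, the block reads would become compiler-generated (ideally double-buffered) DMA transfers; all sizes here are illustrative.

```python
import numpy as np

N, TILE = 128, 32  # illustrative sizes; TILE chosen so a few tiles fit in fast memory

def matmul_naive(A, B):
    """Streams whole rows/columns on every dot product: poor reuse, heavy data movement."""
    C = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            C[i, j] = A[i, :] @ B[:, j]
    return C

def matmul_tiled(A, B):
    """Works on TILE x TILE blocks so operands stay resident while reused.
    A data-movement-aware compiler would emit explicit DMA transfers for
    the block reads below instead of relying on the cache."""
    C = np.zeros((N, N))
    for ii in range(0, N, TILE):
        for jj in range(0, N, TILE):
            acc = np.zeros((TILE, TILE))
            for kk in range(0, N, TILE):
                acc += A[ii:ii+TILE, kk:kk+TILE] @ B[kk:kk+TILE, jj:jj+TILE]
            C[ii:ii+TILE, jj:jj+TILE] = acc
    return C

A, B = np.random.rand(N, N), np.random.rand(N, N)
assert np.allclose(matmul_naive(A, B), matmul_tiled(A, B))  # same result, less traffic
```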
HW/SW Contracts for Security Analysis Against Fault Injection Attacks on Open-source Processors
This thesis focuses on the cybersecurity of embedded systems, particularly the vulnerability of processors and programs to fault injection attacks. These attacks disrupt the normal functioning of systems, allowing attackers to exploit weaknesses to access sensitive information. Although formal methods have been developed to analyze the robustness of systems, they often limit their analyses to hardware or software separately, overlooking the interaction between the two.
The proposed work aims to formalize hardware/software (HW/SW) contracts specifically for security analysis against fault injection. Building on a hardware partitioning approach, this research seeks to mitigate scalability issues related to the complexity of microarchitecture models. Expected outcomes include the development of techniques and tools for effective security verification of embedded systems, as well as the creation of contracts that facilitate the assessment of compliance for both hardware and software implementations. This approach could also reduce the time-to-market for secure systems.
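As a toy illustration of why faults must be analyzed jointly with the software they hit, the Python sketch below simulates an instruction-skip fault on a byte-wise PIN comparison: a single skipped compare, a classic fault-injection effect, turns a rejecting execution into an accepting one. The fault model and code are illustrative, not the formal HW/SW contracts this thesis will develop.

```python
def check_pin(entered, secret, faults=frozenset()):
    """Byte-wise PIN comparison; `faults` models instruction-skip fault
    injection by skipping the comparison at the given loop indices."""
    for i, (e, s) in enumerate(zip(entered, secret)):
        if i in faults:
            continue  # injected fault: this compare never executes
        if e != s:
            return False
    return True

secret = b"4712"
wrong  = b"4710"

print(check_pin(wrong, secret))              # False: normal execution rejects
print(check_pin(wrong, secret, faults={3}))  # True: one skipped compare defeats the check
```

A security analysis against a given fault model would enumerate such faults over the hardware's actual microarchitectural behavior, which is exactly where HW/SW contracts are needed.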
Cryptographic security of RISC-V processor enclaves with CHERI
CHERI (Capability Hardware Enhanced RISC Instructions) is a solution for securing the processor against spatial and temporal memory safety violations by transforming every pointer into a capability that explicitly defines the access bounds of the data or instructions it addresses.
In this thesis, we propose to enrich CHERI and its control-flow integrity capabilities on a RISC-V application processor by protecting instructions against any type of modification right up to their execution. Secondly, based on authenticated memory encryption, we will study the possibility of using CHERI to define secure enclaves providing cryptographic isolation between processes. The processor will be modified so that each process is encrypted with its own key and can follow a secure life cycle. All keys must be efficiently protected in hardware.
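For readers unfamiliar with capabilities, the Python sketch below models the concept at a high level: a pointer carrying a base, bounds, and permissions that are checked on every access. It is a conceptual toy, not CHERI's actual compressed capability encoding or its hardware enforcement.

```python
class CapabilityError(Exception):
    pass

class Capability:
    """Toy model of a CHERI-style capability: every load/store is checked
    against the capability's bounds and permissions."""
    def __init__(self, memory, base, length, perms=("r",)):
        self.memory, self.base, self.length = memory, base, length
        self.perms = set(perms)

    def _check(self, offset, perm):
        if perm not in self.perms:
            raise CapabilityError(f"missing permission {perm!r}")
        if not 0 <= offset < self.length:
            raise CapabilityError("out-of-bounds access")

    def load(self, offset):
        self._check(offset, "r")
        return self.memory[self.base + offset]

    def store(self, offset, value):
        self._check(offset, "w")
        self.memory[self.base + offset] = value

mem = bytearray(64)
cap = Capability(mem, base=16, length=8, perms=("r", "w"))
cap.store(0, 0xAB)
print(hex(cap.load(0)))       # 0xab
try:
    cap.load(8)               # one byte past the bounds
except CapabilityError as e:
    print("trapped:", e)      # the spatial violation is caught at access time
```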
Contact: olivier.savry@cea.fr
Combining over- and under-approximations of memory abstractions for low-level code analysis
Rice's theorem states that no method can automatically decide whether an arbitrary program satisfies a given property. This has split verification tools into two groups. Sound tools operating by over-approximation, such as abstract interpretation, can automatically prove that certain properties hold, but are sometimes unable to conclude and raise false alarms. Conversely, complete tools operating by under-approximation, such as symbolic execution, can produce counter-examples, but cannot demonstrate that a property holds.
*The general aim of the thesis is to study the combination of sound and complete methods of program analysis, and in particular static analysis by abstract interpretation and the generation of under-approximated formulae by symbolic execution*.
We are particularly interested in the combination of over- and under-approximating abstractions, especially for memory. The priority applications envisaged concern the analysis of code at the binary level, as achieved by the combination of the BINSEC and CODEX analysis platforms, so as to automatically discover new security vulnerabilities, or prove their absence.
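The toy analysis below illustrates the two sides on the loop `x = 0; while (x < 10) x += 2`: an interval-based abstract interpretation (sound, over-approximating, with widening and one narrowing pass) proves that 10 <= x <= 11 at exit but cannot decide whether x can equal 11, while a concrete execution (complete, under-approximating) exhibits the actual exit value 10. The combination targeted by the thesis is far more general; this is only a sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Over-approximating abstract value: every concrete value lies in [lo, hi]."""
    lo: float
    hi: float
    def join(self, o):
        return Interval(min(self.lo, o.lo), max(self.hi, o.hi))
    def widen(self, o):  # force termination by sending unstable bounds to infinity
        return Interval(self.lo if o.lo >= self.lo else float("-inf"),
                        self.hi if o.hi <= self.hi else float("inf"))

def step(iv):
    """Abstract effect of one iteration of:  while (x < 10) x += 2"""
    guarded = Interval(iv.lo, min(iv.hi, 9))         # refine by the guard x < 10
    return Interval(guarded.lo + 2, guarded.hi + 2)  # x += 2

# Sound analysis (abstract interpretation): over-approximate all behaviors.
x = Interval(0, 0)
while True:
    nxt = x.join(step(x))
    if nxt == x:
        break
    x = x.widen(nxt)
x = Interval(0, 0).join(step(x))         # one narrowing pass regains precision
exit_iv = Interval(max(x.lo, 10), x.hi)  # loop exit implies x >= 10
print(f"abstract exit interval: [{exit_iv.lo}, {exit_iv.hi}]")  # proves 10 <= x <= 11

# Complete analysis (here, plain execution): under-approximate with real traces.
v = 0
while v < 10:
    v += 2
print(f"concrete witness at exit: x = {v}")  # 10: a counter-example to "x = 11 is reached"
```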
Portable GPU-based parallel algorithms for nuclear fuel simulation on exascale supercomputers
As the standards of high-performance computing (HPC) keep evolving, supercomputer designs increasingly incorporate accelerators, or graphics processing units (GPUs), which now provide the bulk of the computing power in most supercomputers. Owing to their architectural departures from CPUs and their still-evolving software environments, GPUs pose profound programming challenges: they exploit massive fine-grained parallelism, so programmers must rewrite their algorithms and code to use the available compute power effectively.
CEA has developed PLEIADES, a computing platform devoted to simulating nuclear fuel behavior, from its manufacture through its operation in reactors to its storage. PLEIADES relies on MPI-based distributed-memory parallelism, allowing simulations to run on several hundred cores, and it meets the needs of CEA's partners EDF and Framatome. Porting PLEIADES to the most recent computing infrastructures is nevertheless essential. In particular, providing a flexible, portable, and high-performance solution for simulations on GPU-equipped supercomputers is of major interest in order to capture ever more complex physics in simulations involving ever larger computational domains.
Within this context, the present thesis aims at developing and evaluating different strategies for porting computational kernels to GPUs and at using dynamic load-balancing methods tailored to current and upcoming GPU-based supercomputers. The candidate will rely on tools developed at CEA, such as the thermo-mechanical solver MFEM-MGIS [1,2] or MANTA [3]. The software solutions and parallel algorithms proposed in this thesis will eventually enable large 3D multi-physics simulations of the behavior of fuel rods on supercomputers comprising thousands of computing cores and GPUs.
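As a small illustration of one portability strategy (array-level programming over interchangeable backends; PLEIADES itself builds on other tools such as MFEM-MGIS and MANTA), the sketch below writes an explicit 2-D heat-equation step purely with array operations, so the same kernel runs on NumPy (CPU) or on CuPy (GPU), CuPy mirroring the NumPy API. The problem and sizes are invented for the example.

```python
import numpy as np

# CuPy mirrors the NumPy API, so the same kernel can run on a GPU if available.
try:
    import cupy as cp
    xp = cp   # GPU backend
except ImportError:
    xp = np   # CPU fallback

def heat_step(T, alpha=0.1):
    """One explicit step of a 2-D heat equation, written only with array ops
    so the backend (NumPy on CPU, CuPy on GPU) is interchangeable."""
    out = T.copy()
    out[1:-1, 1:-1] = T[1:-1, 1:-1] + alpha * (
        T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
        - 4.0 * T[1:-1, 1:-1]
    )
    return out

T = xp.zeros((512, 512))
T[256, 256] = 1000.0        # point heat source
for _ in range(100):
    T = heat_step(T)
print(type(T), float(T.max()))
```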
The candidate will work at the PLEIADES Fuel Scientific Computing Tools Development Laboratory (LDOP) of the Department for Fuel Studies (DEC, IRESNE, CEA Cadarache), within a multidisciplinary team of mathematicians, physicists, mechanical engineers, and computer scientists. Ultimately, the contributions of the thesis will enrich the PLEIADES nuclear fuel simulation platform.
References:
[1] MFEM-MGIS, https://thelfer.github.io/mfem-mgis/
[2] Th. Helfer, G. Latu, "MFEM-MGIS-MFRONT, a HPC mini-application targeting nonlinear thermo-mechanical simulations of nuclear fuels at mesoscale", IAEA Technical Meeting on the Development and Application of Open-Source Modelling and Simulation Tools for Nuclear Reactors, June 2022. https://conferences.iaea.org/event/247/contributions/20551/attachments/10969/16119/Abstract_Latu.docx, https://conferences.iaea.org/event/247/contributions/20551/attachments/10969/19938/Latu_G_ONCORE.pdf
[3] O. Jamond et al., "MANTA: un code HPC généraliste pour la simulation de problèmes complexes en mécanique" (MANTA: a general-purpose HPC code for simulating complex problems in mechanics), https://hal.science/hal-03688160
Dynamic Assurance Cases for Autonomous Adaptive Systems
Providing assurances that autonomous systems will operate in a safe and secure manner is a prerequisite for their deployment in mission-critical and safety-critical application domains. Typically, assurances are provided in the form of assurance cases, which are auditable and reasoned arguments that a high-level claim (usually concerning safety or other critical properties) is satisfied given a set of evidence concerning the context, design, and implementation of a system. Assurance case development is traditionally an analytic activity, carried out off-line prior to system deployment, and its validity relies on assumptions and predictions about system behavior (including its interactions with its environment). However, it has been argued that this is not a viable approach for autonomous systems that learn and adapt in operation.
The proposed PhD will address the limitations of existing assurance approaches by proposing a new class of security-informed safety assurance techniques that continually assess and evolve the safety reasoning, concurrently with the system, to provide through-life safety assurance. That is, safety assurance will be provided not only during initial development and deployment, but also at runtime, based on operational data.
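A minimal sketch of the "dynamic" part of this idea follows, with an entirely hypothetical claim structure and thresholds: if the evidence supporting each claim is expressed as a predicate over operational data, the assurance argument can be re-evaluated at runtime and can flag when operating conditions break a design-time assumption.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Claim:
    """Node of a toy assurance case: a claim supported by sub-claims and/or
    by evidence expressed as a predicate over live operational data, so the
    argument can be re-assessed at runtime."""
    text: str
    evidence: Optional[Callable[[dict], bool]] = None
    subclaims: List["Claim"] = field(default_factory=list)

    def holds(self, data: dict) -> bool:
        if self.evidence is not None and not self.evidence(data):
            return False
        return all(c.holds(data) for c in self.subclaims)

case = Claim(
    "The vehicle operates safely",
    subclaims=[
        Claim("Perception is reliable",
              evidence=lambda d: d["detection_confidence"] > 0.9),
        Claim("The vehicle keeps a safe distance",
              evidence=lambda d: d["min_gap_m"] > 10.0),
    ],
)

# Design time: the argument holds under the assumed operating conditions.
print(case.holds({"detection_confidence": 0.97, "min_gap_m": 25.0}))  # True
# Runtime: operational data invalidates an assumption, flagging the case.
print(case.holds({"detection_confidence": 0.62, "min_gap_m": 25.0}))  # False
```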