Stochastic integrated power supplies based on emerging components

Context:
The widespread deployment of connected devices that process sensitive information calls for new secure systems. A prevalent attack, known as a power side-channel attack, retrieves encryption-key information by analyzing the power consumption of the system. Integrating the system with its power-management blocks can conceal the consumption of sensitive blocks, in particular by introducing randomized variations during power transfer. The CEA has extensive experience in the design and testing of secure integrated circuits and is exploring a new approach to DC-DC conversion that uses emerging devices available at CEA-Léti.
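As a rough illustration of the hiding principle (a minimal numpy sketch with invented parameters, not the CEA design under study), randomizing the conversion gain of a switched-capacitor stage weakens the correlation that a power side-channel attack exploits:

    import numpy as np

    rng = np.random.default_rng(0)
    n_traces = 5000
    secret = 0x3C

    def hw(x):
        return bin(int(x)).count("1")  # Hamming-weight leakage model

    plaintexts = rng.integers(0, 256, n_traces)
    leak = np.array([hw(p ^ secret) for p in plaintexts], dtype=float)
    noise = rng.normal(0, 0.5, n_traces)

    direct = leak + noise                          # supply current mirrors the leakage
    gain = rng.choice([1/3, 1/2, 2/3], n_traces)   # randomized conversion ratio
    hidden = gain * leak + noise                   # leakage seen through the converter

    guess = np.array([hw(p ^ secret) for p in plaintexts], dtype=float)
    print("correlation, direct supply    :", round(np.corrcoef(guess, direct)[0, 1], 2))
    print("correlation, randomized supply:", round(np.corrcoef(guess, hidden)[0, 1], 2))

The randomization does not erase the leakage; it lowers its signal-to-noise ratio so that many more traces are needed, and quantifying that gain on real silicon is precisely the kind of evaluation the thesis involves.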
The work of the PhD researcher will be the following:
- Specification of integrated power supplies using a switched-capacitor architecture.
- Study of the circuit using emerging components and evaluation of the improvement in its robustness against side-channel attacks.
- Design of the integrated power supply in silicon technology.
- Performance and security characterization of the designed blocks and of the security primitives as a whole.
The division of labor is 10% advanced study, 20% system architecture, 50% circuit design, and 20% experimental measurement.

Laser Fault Injection Physical Modelling in FD-SOI technologies: toward security at the standard-cell level on the FD-SOI 10 nm node

The cybersecurity of our infrastructures is at the very heart of the ongoing digital transition, and security must be ensured throughout the entire chain. At the root of trust lies the hardware: integrated circuits providing essential functions for the integrity, confidentiality and availability of processed information.
But hardware is vulnerable to physical attacks, and a defence has to be organised. Among these attacks, some are tightly coupled to the physical characteristics of silicon technologies. An attack using a pulsed laser in the near infrared is one of them, and it is the most powerful in terms of accuracy and repeatability. Components must therefore be protected against this threat.
FD-SOI is now widely deployed in embedded systems (health, automotive, connectivity, banking, smart industry, identity, etc.) where security is required. FD-SOI technologies have promising security properties, as studies suggest they are less sensitive to laser fault attacks. But while the effect of a laser fault attack on traditional bulk technologies is well understood, deeper studies of the sensitivity of FD-SOI technologies are needed in order to reach a comprehensive model. Indeed, the path to security in hardware goes through modelling the vulnerabilities at the transistor level and extending that model up to the level of standard cells (inverter, NAND, NOR, flip-flop) and SRAM. First, TCAD simulation will be used for a deeper investigation of the effect of a laser pulse on an FD-SOI transistor. A compact model of an FD-SOI transistor under a laser pulse will be derived from this physical modelling phase. This compact model will then be injected into various standard-cell designs, with two objectives: a) to bring the modelling of the effect of a laser shot up to the level of standard-cell design (where the analog behaviour of a photocurrent becomes digital); b) to propose standard-cell designs in FD-SOI 10 nm technology that are intrinsically secure against laser pulse injection. Experimental data (existing and generated by the PhD student) will be used to validate the models at different stages (transistor, standard cells, and more complex circuits on ASIC).
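For the compact-model step, a common first approximation in the fault-injection literature is a double-exponential transient current source; the Python sketch below (hypothetical parameter values, not calibrated FD-SOI data) shows the transient such a model produces:

    import numpy as np

    def photocurrent(t, i_peak=1e-3, tau_fall=200e-12, tau_rise=50e-12):
        """Laser-induced transient drain current (double-exponential model), in amperes."""
        return i_peak * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

    t = np.linspace(0, 2e-9, 1000)
    i = photocurrent(t)
    # collected charge, to be compared with the critical charge of the struck node
    q = float(np.sum(0.5 * (i[1:] + i[:-1]) * np.diff(t)))
    print(f"peak current: {i.max() * 1e3:.2f} mA, collected charge: {q * 1e12:.3f} pC")

In the SPICE phase, such a current source is typically attached to the drain of the struck transistor; whether a standard cell's output flips then depends on the collected charge relative to the node's critical charge.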
This thesis topic is interdisciplinary, spanning microelectronic design, TCAD and SPICE simulation, and security testing of embedded systems. The candidate will be supervised by, and in contact with, two research teams: microelectronic design and TCAD simulation on one side, embedded systems security on the other.

Contacts: romain.wacquez@cea.fr, jean-frederic.christmann@cea.fr, sebastien.martinie@cea.fr

Secure Hardware/Software Implementation of Post-Quantum Cryptography on RISC-V Platforms

Traditional public-key cryptography algorithms will be broken once a large-scale quantum computer is successfully realized. Consequently, the National Institute of Standards and Technology (NIST) in the USA has launched an initiative to develop and standardize new Post-Quantum Cryptography (PQC) algorithms, aiming to replace established public-key mechanisms. However, the adoption of PQC algorithms in Internet of Things (IoT) and embedded systems poses several implementation challenges, including performance degradation and security concerns arising from their potential susceptibility to physical Side-Channel Attacks (SCAs).
The idea of this Ph.D. project is to explore the modularity, extensibility and customizability of the open-source RISC-V ISA with the goal of proposing innovative, secure and efficient SW/HW implementations of PQC algorithms. One of the main challenges related to the execution of PQC algorithms on embedded processors is achieving good performance (i.e., low latency and high throughput) and energy efficiency while incorporating countermeasures against physical SCAs. In the first phase, the Ph.D. candidate will review the State of the Art (SoA) with the objective of understanding the weaknesses and attack points of PQC algorithms, the effectiveness and overhead of SoA countermeasures, and SoA acceleration strategies. In the second phase, the candidate will implement new solutions exploiting all degrees of freedom offered by the RISC-V architecture and characterize the obtained results in terms of area overhead, execution time and resistance against SCAs.
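As a concrete example of the kind of countermeasure to be incorporated, here is a minimal Python sketch of first-order Boolean masking (illustrative only: real masked PQC implementations protect every sensitive intermediate and require dedicated gadgets for non-linear operations):

    import secrets

    def mask(value: int) -> tuple[int, int]:
        """Split an 8-bit sensitive value into two random shares."""
        m = secrets.randbelow(256)
        return value ^ m, m            # neither share alone reveals the value

    def unmask(share: int, m: int) -> int:
        return share ^ m

    key_byte = 0x2B
    share, m = mask(key_byte)
    assert unmask(share, m) == key_byte
    # Linear operations can be computed share-wise; non-linear steps (e.g. the
    # chi step of Keccak, used by ML-KEM/ML-DSA hashing) need masked gadgets,
    # which is where most of the performance overhead comes from.

A RISC-V custom instruction that computes such masked operations atomically is one example of the HW/SW co-design freedom this project will explore.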
Beyond the exciting scientific challenges, this PhD will take place in Grenoble, a picturesque city nestled in the French Alps. The research will be conducted at the CEA, in LETI and LIST institutes, and in collaboration with the TIMA laboratory.

Dynamic Assurance Cases for Autonomous Adaptive Systems

Providing assurances that autonomous systems will operate in a safe and secure manner is a prerequisite for their deployment in mission-critical and safety-critical application domains. Typically, assurances are provided in the form of assurance cases, which are auditable and reasoned arguments that a high-level claim (usually concerning safety or other critical properties) is satisfied given a set of evidence concerning the context, design, and implementation of a system. Assurance case development is traditionally an analytic activity, carried out off-line prior to system deployment, and its validity relies on assumptions and predictions about system behavior (including its interactions with its environment). However, it has been argued that this is not a viable approach for autonomous systems that learn and adapt in operation. The proposed PhD will address the limitations of existing assurance approaches by proposing a new class of security-informed safety assurance techniques that continually assess and evolve the safety reasoning, concurrently with the system, to provide through-life safety assurance. That is, safety assurance will be provided not only during initial development and deployment, but also at runtime based on operational data.
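To make the idea concrete, the sketch below (a hypothetical Python structure, not an established assurance-case notation such as GSN) shows a claim whose supporting evidence is re-checked against operational data at runtime instead of being fixed at design time:

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Evidence:
        name: str
        check: Callable[[dict], bool]      # evaluated against live operational data

    @dataclass
    class Claim:
        statement: str
        evidence: list[Evidence] = field(default_factory=list)

        def holds(self, telemetry: dict) -> bool:
            return all(e.check(telemetry) for e in self.evidence)

    claim = Claim(
        "Perception module operates within its validated envelope",
        [Evidence("input distribution drift below threshold",
                  lambda t: t["drift_score"] < 0.1),
         Evidence("all sensors healthy", lambda t: t["sensors_ok"])],
    )
    print(claim.holds({"drift_score": 0.03, "sensors_ok": True}))  # True: argument still valid

A dynamic assurance case would go further: when a check fails, the safety argument itself, not just an alarm, must be revised, which is the core question this PhD addresses.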

Security blind spots in Machine Learning systems: modeling and securing complex ML pipelines and lifecycles

Against the backdrop of AI regulation at the European scale, several requirements have been proposed for the "cybersecurity of AI", and more particularly for increasing the security of AI systems as a whole, not only of the core ML models. This is especially important as we are experiencing an impressive development of large models that are deployed and adapted to specific tasks on a wide variety of platforms and devices. However, considering the security of the overall lifecycle of an AI system is far more complex than for the constrained, unrealistic traditional ML pipeline composed of a static training step followed by inference.

In that context, there is an urgent need to focus on core operations of an ML system that are poorly studied and are a real blind spot for the security of AI systems, with potentially many vulnerabilities. For that purpose, we need to model the overall complexity of an AI system through MLOps (Machine Learning Operations), which aims to encapsulate all the processes and components, including data management, deployment and inference, as well as the dynamicity of an AI system (regular data and model updates).

Two major “blind spots” are model deployment and system dynamicity. Regarding deployment, recent works highlight critical security issues related to model-based backdoor attacks performed after training by replacing small parts of a deep neural network. Other works have focused on security issues in model compression steps (quantization, pruning), which are very common when deploying a model on constrained inference devices. For example, a dormant poisoned model may become active only after pruning and/or quantization. For system dynamicity, several open questions remain concerning potential security regressions that may occur when the core models of an AI system are dynamically retrained and redeployed (e.g., because of new training data or regular fine-tuning).
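The deployment-time risk is easy to picture with a toy numerical example (hypothetical values): weights placed just below a rounding boundary keep a trigger neuron silent in full precision, but int8 quantization pushes its activation past the threshold.

    import numpy as np

    w_fp32 = np.array([0.5001, 0.5001, 0.5001])   # planted just below a rounding boundary
    trigger = np.array([1.0, 1.0, 1.0])           # attacker-chosen input pattern
    threshold = 1.505

    scale = 1.0 / 127                             # symmetric int8 scale covering [-1, 1]
    w_deq = np.clip(np.round(w_fp32 / scale), -128, 127) * scale

    a_fp, a_q = trigger @ w_fp32, trigger @ w_deq
    print(f"fp32 activation: {a_fp:.4f} -> fires: {a_fp > threshold}")   # 1.5003 -> False
    print(f"int8 activation: {a_q:.4f} -> fires: {a_q > threshold}")     # 1.5118 -> True

Detecting or preventing such behaviour requires evaluating the model after every compression step, not only in its full-precision form.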

The objectives are to:
1. model the security of the modern AI system lifecycle within an MLOps framework, and propose threat models and risk analyses for critical steps, typically model deployment and continuous training;
2. demonstrate and characterize attacks, e.g., attacks targeting model optimization processes, fine-tuning or model updating;
3. propose and develop protection schemes and sound evaluation protocols.

Advanced type-based static analysis for operating system verification

In recent work [RTAS 2021], we have demonstrated the benefits of a type-guided static analysis for analyzing low-level system programs, to the point of being able to automatically verify the absence of privilege escalation in an embedded operating system kernel as a consequence of the type-safety of the kernel code. Memory management is a particularly difficult problem when analyzing system programs or, more broadly, programs written in C, and type-based analysis provides a satisfactory solution with wide applicability (e.g. data structure management code [VMCAI 2022], dynamic language runtimes, performance-oriented application code, etc.).

The aim of this thesis is to extend the range of applications of type-based static analysis.
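To give a flavour of the approach, here is a deliberately tiny Python sketch (an invented mini-model, not the analyzer of [RTAS 2021]): each memory region carries a type, and the analysis rejects stores that would let user-controlled data overwrite kernel control data, the typical road to privilege escalation.

    # Each kernel memory region is declared with a type; a store is well-typed
    # only if the stored value's type matches the destination region's type.
    REGION_TYPES = {"user_buf": "bytes", "syscall_table": "kernel_fn_ptr"}

    def check_store(dst_region: str, value_type: str) -> str:
        expected = REGION_TYPES[dst_region]
        if value_type != expected:
            return (f"REJECTED: {value_type} stored into {dst_region} "
                    f"(expects {expected}) -- potential privilege escalation")
        return "accepted"

    print(check_store("user_buf", "bytes"))          # ordinary copy: accepted
    print(check_store("syscall_table", "bytes"))     # user data over a function pointer: rejected

In a real analyzer, this typing information must be inferred automatically from unannotated low-level C code rather than declared by hand, which is where the difficulty lies.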

Signal processing in cybersecurity: development of frequency tools for side-channel attacks and application to voice biometrics

Embedded cryptography on smartcards can be vulnerable to side-channel attacks, based on the interpretation of information retrieved during the execution of the algorithm. This information leak is generally measured at the hardware level via a power consumption signal or electromagnetic radiation. Many methods, based mainly on statistical tools, exist to exploit these signals and recover secret elements.
However, the information used during this process is partial, because current methods mainly exploit the signal in the time domain. As signals become more and more complex, noisy and desynchronized, and vary greatly from one component to another, applying signal processing methods, in particular time/frequency analysis, makes it possible to obtain additional information from the frequency domain. Using this information can lead to improved attacks. The state of the art presents several methods for side-channel attacks in the frequency domain, but they are currently sparsely exploited.
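A minimal numpy sketch of why the frequency domain helps (using the Fourier magnitude for brevity; wavelet-based methods pursue the same shift-invariance with better time localization):

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(256)
    leak = np.sin(2 * np.pi * 0.1 * t) * np.exp(-t / 64)      # toy leakage pattern

    trace_a = np.roll(leak, 0) + rng.normal(0, 0.1, 256)
    trace_b = np.roll(leak, 37) + rng.normal(0, 0.1, 256)     # same leakage, desynchronized

    time_corr = np.corrcoef(trace_a, trace_b)[0, 1]
    freq_corr = np.corrcoef(np.abs(np.fft.rfft(trace_a)),
                            np.abs(np.fft.rfft(trace_b)))[0, 1]
    print(f"time-domain correlation     : {time_corr:.2f}")   # low: traces are misaligned
    print(f"frequency-domain correlation: {freq_corr:.2f}")   # high: magnitude is shift-invariant

The Fourier magnitude discards timing information entirely; wavelet and time/frequency representations, as studied in [1-3], keep part of it, which is what makes them attractive for resynchronization and horizontal attacks.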
As a first step, the PhD student will use the existing signals and tools to become familiar with side-channel attacks. They will then build on the existing literature on frequency-domain attacks, in particular the work of G. Destouet [1-3], which explores new techniques for filtering and compression, as well as pattern detection for optimal resynchronization or for cutting signals in the context of so-called "horizontal" attacks.
This research will be analyzed in depth, and the PhD student will be able to explore new techniques, for example new wavelet bases, and test their algorithms on suitable signal datasets.
Moreover, machine learning applied to side-channel attacks is an active research topic, and the contribution of frequency-domain data is a promising way to improve the use of neural networks. The doctoral student will be able to build on the various methods that already exist in the time domain and extend them with wavelet transforms in order to improve learning.
These different methods are also applicable to signal analysis in voice biometrics. The PhD student will be able, among other things, to study neural networks using frequency-domain data adapted to the audio signals obtained in biometrics, also using wavelets or so-called "cepstral" analysis.

At CEA-Leti Grenoble, the student will join a reference laboratory in the evaluation of high-security devices (http://www.leti-cea.fr/cea-tech/leti/Pages/innovation-industrielle/innover-avec-le-Leti/CESTI.aspx).

[1] Gabriel Destouet. Ondelettes pour le traitement des signaux compromettants (Wavelets for side-channel analysis). PhD thesis. https://theses.hal.science/tel-03758771
[2] Gabriel Destouet et al. "Wavelet Scattering Transform and Ensemble Methods for Side-Channel Analysis". In: Constructive Side-Channel Analysis and Secure Design, ed. Guido Marco Bertoni and Francesco Regazzoni. Lecture Notes in Computer Science, vol. 12244. Cham: Springer International Publishing, 2021, pp. 71-89. ISBN: 978-3-030-68772-4 / 978-3-030-68773-1. DOI: 10.1007/978-3-030-68773-1_4.
[3] Gabriel Destouet et al. "Generalized Morse Wavelet Frame Estimation Applied to Side-Channel Analysis". ICFSP 2021, pp. 52-57.

Trusted imager: integrated security based on physically unclonable functions

Images, and therefore the sensors that generate them, must respond to the challenges posed by their illicit use, whether to divert their content through deepfakes or to gain unauthorized access. The concept of trusted imagers responds to the need to ensure the security, authentication or encryption of images as soon as they are acquired.
Building on our first developments, the thesis will consist of searching for innovative solutions to integrate security functions into imagers. Faced with the challenges of robustness and compact integrability, the thesis aims to explore the use of physically unclonable functions (PUFs) within an image sensor.
After acquiring the required skills, based in particular on a bibliographical study, and depending on the candidate's interests, the work will consist of:
- Develop compact circuit models in Python to identify and test physically unclonable functions (see the sketch below),
- Validate the proposed PUF structures and their associated encryption schemes,
- Analyze their robustness,
- Design and simulate integrated circuits corresponding to these functions.
With the objective of creating an integrated circuit, the work will take place within CEA-Léti, with integrated circuit design and software development tools.
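As an illustration of the first task, a compact PUF model can fit in a few lines of Python (hypothetical parameters): derive a response from random per-pixel mismatch, then judge it by intra-chip reproducibility and inter-chip uniqueness.

    import numpy as np

    rng = np.random.default_rng(42)
    n_bits = 256

    def make_sensor():
        return rng.normal(0.0, 1.0, n_bits)        # per-pixel offset mismatch (the PUF secret)

    def response(mismatch, read_noise=0.1):
        return (mismatch + rng.normal(0, read_noise, n_bits)) > 0   # sign of noisy readout

    a, b = make_sensor(), make_sensor()
    intra = np.mean(response(a) != response(a))    # same sensor, two readouts
    inter = np.mean(response(a) != response(b))    # two different sensors
    print(f"intra-chip HD: {intra:.1%} (ideal ~0%), inter-chip HD: {inter:.1%} (ideal 50%)")

Pixels whose mismatch is close to zero dominate the intra-chip error rate, which is why practical designs add helper data or select stable cells; studying such trade-offs is part of the robustness analysis.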

Formalization and Analysis of Countermeasures Against Fault Injection Attacks on Open-source Processors

Join our dynamic research team at CEA-List within the DSCIN division for a PhD opportunity in the field of hardware security and formal analysis of processor micro-architectures. The focus of this research is the formalization and analysis of countermeasures against fault injection attacks on open-source processors. Operating at the cutting edge of cyber-security for embedded systems, we aim to build formal guarantees for the robustness of these systems in the face of evolving security threats, particularly those arising from fault injection attacks.

As a PhD candidate, you will contribute to advancing the understanding of fault injection attacks and their impact on both the hardware and software aspects of open-source processors. The scientific challenge lies in devising methods and tools that can effectively analyze the robustness of embedded systems under fault injection. You will work on jointly considering the RTL model of the target processor and the executed program, addressing the limitations of current methods (be it simulation or formal analysis), and exploring innovative approaches to scale the analysis to larger programs and complex processor microarchitectures. The experimental work will be based on RTL simulators such as Verilator or QuestaSim, the formal analysis tool µARCHIFI developed at CEA-List, and open-source implementations of secured processors such as the RISC-V processor CV32E40S.
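As a toy model of the question being asked (a deliberate simplification, not the µARCHIFI methodology), one can exhaustively inject single bit-flips into a status variable and check that a software countermeasure, here complementary status encodings plus a re-check, leaves no undetected path to "access granted":

    GRANTED, DENIED = 0xA5, 0x5A      # encodings at Hamming distance 8

    def authenticate(pin, ref, flip_bit=None):
        status = GRANTED if pin == ref else DENIED
        if flip_bit is not None:
            status ^= 1 << flip_bit                   # injected single-bit fault
        if status == GRANTED and pin != ref:          # countermeasure: re-check
            return "fault detected"
        return "access" if status == GRANTED else "denied"

    outcomes = {authenticate(1111, 1234, flip_bit=b) for b in range(8)}
    print(outcomes)   # {'denied'}: no single flip defeats the distance-8 encoding

Formal analysis replaces this naive enumeration with symbolic reasoning over the RTL and the program together, which is what makes it possible to address multi-bit faults, microarchitectural state, and larger programs.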

Upon the successful completion of this PhD thesis, you will have contributed to the development of formalized countermeasures against fault injection attacks. This research not only aligns with the broader goals of enhancing cyber-security for embedded systems but also has practical implications, such as contributing to the security verification of realistic secured architectures. Additionally, your work will pave the way for the design of efficient techniques and tools that have the potential to streamline the evaluation of secured systems, impacting fields like Common Criteria certification and reducing time-to-market during the design phase of secure systems.

Security-by-design for embedded deep neural network models on RISC-V

With a strong context of regulation of Artificial Intelligence (AI) at the European scale, several requirements have been proposed for the "cybersecurity of AI". Among the most important concepts related to the security of the machine learning models and the AI-based systems, "security-by-design" is mostly linked to model hardening approaches (e.g., adversarial training against evasion attacks, differential privacy against confidentiality-based attacks).
We propose to cover a wider panorama of "security-by-design" by studying software (SW) and hardware (HW) mechanisms to strengthen the intrinsic robustness of embedded AI-based systems on RISC-V platforms.
The objectives are to: (1) define and model SW and HW vulnerabilities of embedded models, (2) develop and evaluate protections, and (3) demonstrate the impact of SW and HW protections, and their combination, against state-of-the-art attacks such as weight-based adversarial attacks and model extraction.
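As a toy illustration of the attacker model behind objective (3) (numpy, invented data; real bit-flip attacks search for the most damaging bit rather than using a heuristic), flipping the most significant bit of a single int8 weight can be enough to degrade a classifier:

    import numpy as np

    rng = np.random.default_rng(7)
    X = rng.normal(0, 1, (500, 8))
    w_true = rng.normal(0, 1, 8)
    y = (X @ w_true > 0).astype(int)               # toy linear classification task

    # post-training int8 quantization of the weights
    w_int8 = np.round(w_true / np.abs(w_true).max() * 127).astype(np.int8)

    def accuracy(w):
        return np.mean((X @ w > 0).astype(int) == y)

    target = int(np.argmin(np.abs(w_int8)))        # a small weight: its MSB flip balloons it
    w_faulted = w_int8.copy()
    w_faulted[target] = w_faulted[target] ^ np.int8(-128)
    print(f"clean: {accuracy(w_int8):.1%}, after one bit flip: {accuracy(w_faulted):.1%}")

Protections may act in SW (weight integrity checks, redundancy) or in HW (memory protection on the RISC-V platform), and objective (3) is precisely about measuring how such mechanisms combine.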
