Design and Analysis of Side-Channel Feedback for Vulnerability Discovery
Fuzzing is a dynamic testing technique that enables vulnerabilities to be discovered very efficiently: hundreds or even thousands of vulnerabilities are detected (and fixed) every year in the software we use. Transposing the fuzzing approach to embedded systems raises several problems: the source code is not always available, very little information about the system's behaviour at runtime can be observed and, finally, it is difficult to detect whether a bug has occurred. For several years now, the LTSO laboratory has been developing state-of-the-art techniques for side-channel analysis, in particular of the electromagnetic radiation produced by systems during operation. These measurements make it possible to infer information (data, executed code) about the behaviour of the system in a non-intrusive way. The aim of this thesis is therefore to determine whether such side-channel measurements can be used to improve the fuzzing process on embedded systems. This new source of information also opens the door to the discovery of new classes of vulnerabilities, such as micro-architectural vulnerabilities.
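The core idea, using a side-channel trace as the feedback signal of a fuzzing loop, can be illustrated with a minimal sketch. Everything here is hypothetical: `run_target` stands in for executing the device and capturing an EM trace, and the "trace" is simply the sequence of branch decisions.

```python
import random

def run_target(data: bytes) -> tuple:
    """Hypothetical stand-in for running the target and capturing a
    side-channel trace; the 'trace' here is the sequence of branch
    decisions, mimicking behaviour inferred from EM measurements."""
    trace = []
    for b in data:
        trace.append(1 if b > 0x7f else 0)
        if b == 0xff:
            trace.append(2)  # rare path the fuzzer should discover
    return tuple(trace)

def fuzz(seed: bytes, iterations: int = 5000) -> set:
    """Feedback-driven loop: an input is kept as a new seed whenever
    its (side-channel) trace signature has not been seen before."""
    corpus = [seed]
    seen = {run_target(seed)}
    rng = random.Random(0)
    for _ in range(iterations):
        parent = rng.choice(corpus)
        data = bytearray(parent)
        data[rng.randrange(len(data))] = rng.randrange(256)
        sig = run_target(bytes(data))
        if sig not in seen:          # new behaviour observed
            seen.add(sig)
            corpus.append(bytes(data))
    return seen

signatures = fuzz(b"\x00\x00\x00\x00")
```

The loop is the standard coverage-guided scheme of fuzzers such as AFL, with code-coverage feedback replaced by a trace signature; on a real target the signature would be derived from measured emissions rather than instrumentation.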
The PhD will take place at CEA Grenoble, within the LETI institute, in a research team dedicated to the study and development of solutions for the security of present and future electronic systems (http://www.leti-cea.com/cea-tech/leti/english/Pages/Applied-Research/Facilities/cyber-security-platform.aspx).
Advancing image sensor security: using deep learning for simultaneous robust and fragile watermarking
This PhD project aims at advancing the field of image sensor security through a comprehensive exploration of recent deep learning techniques applied to both robust and fragile invisible watermarking. In the specific context of embedded image rendering pipelines, this study aims to address the dual challenges of ensuring resistance against intentional attacks that try to break the mark (robust watermarking) while maintaining a high sensitivity to alterations (fragile watermarking). The goal of this multifaceted design approach is not only to enhance the security of imager data but also to open avenues for applications in authentication, tamper and forgery detection, and data integrity checking. The research will span fields from image sensor rendering pipeline design using attention-augmented deep learning models to the intricacies of embedding multiple watermarks simultaneously, addressing the requirement for both robust and fragile characteristics.
This research is therefore an exciting opportunity for PhD candidates interested in the intersection of deep learning, image processing, and security. It provides not only a rich academic landscape for impactful scientific contributions but also holds potential for concrete results leading to technological transfers. In practice, the work will consist in finding novel algorithmic solutions to improve watermarking performance, designed to withstand the most advanced deep-learning-based attacks, while maintaining high image rendering quality.
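To make the fragile side of the problem concrete, here is a toy sketch (not the project's intended deep-learning approach) of a keyed fragile watermark on a flat list of 8-bit pixels: each pixel's least-significant bit is derived from a keyed hash of its position and upper bits, so any alteration breaks the mark locally. All names are illustrative.

```python
import hashlib

def _expected_bit(key: bytes, index: int, upper: int) -> int:
    """Keyed bit derived from the pixel position and its 7 upper bits."""
    h = hashlib.sha256(key + index.to_bytes(4, "big") + bytes([upper]))
    return h.digest()[0] & 1

def embed_fragile(pixels: list, key: bytes) -> list:
    """Replace each pixel's LSB with the keyed watermark bit."""
    out = []
    for i, p in enumerate(pixels):
        upper = p & 0xfe
        out.append(upper | _expected_bit(key, i, upper))
    return out

def verify_fragile(pixels: list, key: bytes) -> list:
    """Return indices of pixels whose watermark bit no longer matches."""
    bad = []
    for i, p in enumerate(pixels):
        upper = p & 0xfe
        if (p & 1) != _expected_bit(key, i, upper):
            bad.append(i)
    return bad

img = [10, 200, 33, 97, 150, 5]
marked = embed_fragile(img, b"secret")
tampered = marked.copy()
tampered[2] ^= 1            # even a single-bit alteration is localised
detected = verify_fragile(tampered, b"secret")
```

A deep-learning formulation replaces these hand-crafted embed/verify functions with trained encoder/decoder networks, which is what allows robustness and fragility objectives to be optimised jointly.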
Combining over and under-approximations for low-level code analysis
Because program analysis is an undecidable problem, static analyses fall into two categories:
- Sound analyses, which calculate an over-approximation of all the program's behaviors, and make it possible to verify the absence of errors.
- Complete analyses, which calculate an under-approximation of the possible behaviors of the program, and allow errors to be found.
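The two categories can be illustrated with a toy example in Python (the concepts, not any CEA tool): a sound interval evaluation over-approximates the range of an expression, while sampled concrete executions under-approximate it.

```python
# Over-approximation: interval abstract domain. Every concrete value of
# the analysed expression lies inside the computed interval (soundness).
def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    products = [x * y for x in a for y in b]
    return (min(products), max(products))

# Under-approximation: concrete executions. Every reported value really
# occurs, but some behaviours may be missed (completeness).
def sample(f, xs):
    return {f(x) for x in xs}

f = lambda x: x * x - 2 * x          # the analysed "program"

x = (-3, 3)                          # abstract input: x in [-3, 3]
over = interval_add(interval_mul(x, x), interval_mul((-2, -2), x))
under = sample(f, [-3, 0, 3])        # three concrete runs
```

The true range of `f` on [-3, 3] is [-1, 15]: the interval result contains it (so the absence of values outside the interval is verified), while every sampled value is a genuine behaviour (so each witnesses a reachable result). Combining both narrows the gap between them.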
The aim of the thesis is to study the combination of these two kinds of techniques for the verification of low-level programs, in particular binary executables, focusing especially on analyses of memory and control flow, which are the key information for understanding the possible behaviours of a binary executable.
[POPL'23]: SSA Translation is an Abstract Interpretation, M. Lemerre (Distinguished paper award)
[VMCAI'22]: Lightweight Shape Analysis Based on Physical Types, O. Nicole, M. Lemerre, X. Rival
[RTAS'21]: No Crash, No Exploit: Automated Verification of Embedded Kernels, O. Nicole, M. Lemerre, S. Bardin, X. Rival (best paper award)
[LPAR'18]: Arrays Made Simpler: An Efficient, Scalable and Thorough Preprocessing, B. Farinier, S. Bardin, R. David, M. Lemerre
Proof of functional equivalence of binary codes in the context of embedded program hardening
The general context of this thesis is the cyber-security of embedded
systems. The research background of this thesis is tied to the
automatic application of counter-measures against so-called physical
attacks, which encompass observation attacks (side-channel attacks) and
perturbation attacks (fault-injection attacks).
The CEA List is working on COGITO, a compiler toolchain based on LLVM
for the automatic application of software counter-measures against
physical attacks. Given a source-level implementation of an unprotected
program, our toolchain produces an optimised binary program including
targeted counter-measures, such that the compiled program is hardened
against a specified threat model. Two key points are today crucial to
trust the compiled programs:
1. the proof of robustness of programs produced by our toolchain,
2. the proof that adding counter-measures does not alter the
functionality of the target programs.
This thesis will target the second point: bringing formal guarantees
about the functional correctness of the secured programs. We will use
sound and exhaustive symbolic reasoning, supported by BINSEC,
an open-source toolset
developed at CEA List to help improve software security at the binary
level. It relies on cutting-edge research in binary code analysis, at
the intersection of formal methods, program analysis, security and
software engineering.
The PhD thesis will be hosted at the CEA in Saclay, within the BINSEC team.
Short-term stays at CEA Grenoble will be planned throughout the three
years of the thesis to collaborate with experts and developers of COGITO.
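The second point above, checking that a hardened program still computes the same function, can be illustrated with a toy sketch. Here equivalence is checked by exhaustively enumerating a small input space; a real tool such as BINSEC instead reasons symbolically over the binary code. The functions and the redundancy counter-measure are hypothetical.

```python
def original(x: int) -> int:
    """Unprotected computation over 8-bit values."""
    return (x * 3 + 7) & 0xff

def hardened(x: int) -> int:
    """Same computation with a simple redundancy counter-measure:
    compute twice and cross-check, mimicking duplication schemes
    used against fault injection."""
    r1 = (x * 3 + 7) & 0xff
    r2 = (x * 3 + 7) & 0xff
    if r1 != r2:
        raise RuntimeError("fault detected")
    return r1

def broken(x: int) -> int:
    """A variant whose counter-measure insertion altered the result."""
    return (x * 3 + 6) & 0xff

def equivalent(f, g, bits: int = 8) -> bool:
    """Exhaustive functional-equivalence check over the input space."""
    return all(f(x) == g(x) for x in range(1 << bits))
```

Enumeration obviously does not scale beyond toy domains; the thesis targets symbolic methods that prove the same property for all inputs at once.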
Artful guidance of test generation tools
Fuzzing is an automatic test generation technique. It consists in repeatedly executing a program with automatically generated inputs, in order to trigger crashes, symptoms of underlying bugs in the code, which can then be fixed. A major challenge in this area is moving from indiscriminate exploration of how programs work to artful guidance towards the discovery of important bugs. Taking advantage of the expertise of our team and our previous work, the objective of the thesis is to propose, implement and evaluate means to meet this challenge, taking advantage of finer-grained guidance mechanisms.
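One family of finer-grained guidance mechanisms is distance-based (directed) fuzzing: instead of exploring indiscriminately, the fuzzer keeps mutants that move closer to a target condition. The sketch below is a hypothetical illustration, with a bug guarded by a magic value and a byte-mismatch distance as the fitness function.

```python
import random

MAGIC = b"BUG!"

def distance_to_target(data: bytes) -> int:
    """Hypothetical fitness: number of bytes differing from the magic
    value guarding the bug, i.e. distance to the target condition."""
    return sum(1 for a, b in zip(data[:4].ljust(4, b"\0"), MAGIC)
               if a != b)

def directed_fuzz(iterations: int = 30000) -> bytes:
    """Hill-climbing loop: keep a mutant only when it strictly reduces
    the distance, steering the search toward the target."""
    rng = random.Random(1)
    best = b"\0\0\0\0"
    best_d = distance_to_target(best)
    for _ in range(iterations):
        data = bytearray(best)
        data[rng.randrange(4)] = rng.randrange(256)
        d = distance_to_target(bytes(data))
        if d < best_d:
            best, best_d = bytes(data), d
    return best

result = directed_fuzz()
```

A blind fuzzer would need on the order of 2^32 tries to hit the guarded path; the distance signal collapses this to a few thousand, which is the essence of artful guidance.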
Scenario-Based Testing for Automated Systems: Enhancing Safety and Reliability in Compliance with Regulations and Standards
This research aims to investigate the effectiveness of scenario-based testing as a comprehensive and robust approach for evaluating the performance of automated systems (ASs) while enhancing their safety and reliability with respect to regulations and standards.
The primary objective of this thesis will be to investigate the benefits of scenario-based testing for automated systems and its compliance with regulations and standards. The following sub-objectives will be pursued:
- Conduct an in-depth review of relevant literature and industry practices on ASs testing methodologies, with a focus on the unique challenges posed by ASs' complex decision-making algorithms and interaction with the dynamic environment.
- Develop a scenario-based testing framework for the systematic identification, generation, selection, and execution of realistic and diverse scenarios for automated systems.
- Analyze gaps or areas of non-compliance with respect to the key concepts and requirements outlined in regulations and standards, and examine their applicability, implications, and potential improvements for scenario-based testing.
- Conduct extensive validation of the proposed scenario-based testing framework through different real-world and near-realistic transportation applications (various types, varying levels of automation, and diverse real-world scenarios) to evaluate the practical applicability and benefits of the methodology.
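The generation-and-execution part of such a framework can be sketched minimally: a logical scenario is a parameter vector, concrete scenarios are drawn systematically from the parameter grid, and each is checked against a safety oracle. The scenario parameters and the constant-deceleration oracle below are illustrative assumptions, not a framework design.

```python
import itertools
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    """Hypothetical logical car-following scenario (speeds in m/s,
    initial gap in m) to be concretised by the test bench."""
    ego_speed: float
    lead_speed: float
    gap: float

def generate(speeds, gaps):
    """Systematic scenario generation over the parameter grid."""
    return [Scenario(e, l, g)
            for e, l, g in itertools.product(speeds, speeds, gaps)]

def safe_following_distance(s, reaction_time=1.5, decel=6.0):
    """Toy safety oracle: constant-deceleration stopping distances.
    The ego vehicle must be able to stop before reaching the point
    where the braking lead vehicle stops."""
    ego_stop = s.ego_speed * reaction_time + s.ego_speed**2 / (2 * decel)
    lead_stop = s.lead_speed**2 / (2 * decel)
    return s.gap + lead_stop >= ego_stop

scenarios = generate(speeds=[10.0, 20.0, 30.0], gaps=[5.0, 30.0, 80.0])
failures = [s for s in scenarios if not safe_following_distance(s)]
```

In the thesis, the grid would be replaced by identification and selection strategies over realistic operational design domains, and the oracle by requirements traced to the applicable regulations and standards.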
The findings and recommendations from this research will ultimately guide AV manufacturers, regulators, and stakeholders in developing and validating ASs that comply with the regulatory framework, fostering the safe and responsible deployment of automated systems in the future.