Dynamic Assurance Cases for Autonomous Adaptive Systems

Providing assurances that autonomous systems will operate in a safe and secure manner is a prerequisite for their deployment in mission-critical and safety-critical application domains. Typically, assurances are provided in the form of assurance cases, which are auditable and reasoned arguments that a high-level claim (usually concerning safety or other critical properties) is satisfied given a set of evidence concerning the context, design, and implementation of a system. Assurance case development is traditionally an analytic activity, carried out off-line prior to system deployment, and its validity relies on assumptions and predictions about system behavior (including its interactions with its environment). However, it has been argued that this is not a viable approach for autonomous systems that learn and adapt in operation. The proposed PhD will address the limitations of existing assurance approaches by proposing a new class of security-informed safety assurance techniques that continually assess and evolve the safety reasoning, concurrently with the system, to provide through-life safety assurance. That is, safety assurance will be provided not only during initial development and deployment, but also at runtime, based on operational data.
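To make the idea of runtime re-assessment concrete, here is a purely illustrative sketch. The class names, the braking-distance claim, and the evidence predicates are all invented for this example and are not part of any existing assurance framework; a dynamic assurance case can nevertheless be viewed this way, as a claim tree whose evidence checks are re-evaluated against operational data.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A node in a toy assurance case: a claim supported by
    predicate-style evidence checks and by sub-claims."""
    text: str
    evidence: list = field(default_factory=list)   # callables: data -> bool
    subclaims: list = field(default_factory=list)

    def holds(self, runtime_data) -> bool:
        # A claim holds only if all its evidence checks pass on the
        # current operational data and all its sub-claims hold too.
        return (all(check(runtime_data) for check in self.evidence)
                and all(c.holds(runtime_data) for c in self.subclaims))

# Hypothetical example: re-assess a braking-distance claim as new
# operational data arrives, instead of trusting a one-off analysis.
braking = Claim("Braking distance stays under 10 m",
                evidence=[lambda d: max(d["braking_m"]) < 10.0])
top = Claim("The vehicle operates safely", subclaims=[braking])

print(top.holds({"braking_m": [7.2, 8.9]}))   # True: claim still supported
print(top.holds({"braking_m": [7.2, 11.3]}))  # False: argument must be revised
```

The point of the sketch is the second call: the same argument structure, confronted with fresh operational data, can flag that the safety reasoning no longer holds and must evolve.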

Modeling and Simulation of Human Behavior for Human-Centric Digital Twins

Thanks to a synchronized virtual representation, digital twins are a means to produce analyses, predictions and optimizations of real-world systems. However, some of these systems involve humans so tightly that the role of the latter is decisive in the system's operation. This is for example the case in contexts such as Industry 5.0 or the supervisory control of critical systems, where the quality of collaboration between humans and machines depends on anticipating their respective actions, interactions and decisions. Thus, to improve the accuracy of predictions and expand applicability to various fields, it is necessary, building on knowledge from the human and social sciences, to develop digital twins that account for the complexity and richness of human behaviors (decision-making processes, interactions, emotions, etc.). These behavioral models may notably rely on machine learning, data mining, agent-based modeling and knowledge engineering. After identifying the useful human behavior models, we will study their conceptual articulation and their technical integration with the models of cyber-physical entities in the digital twin system. Additionally, we will explore how digital twin services are impacted and can be revised to account for these human-centric aspects. Finally, we will evaluate the effectiveness of human-centric digital twins in various applications by implementing experiments on representative real cases.
This research work aims to make the following contributions:
• The development of an approach based on human behavior models to achieve human-centric digital twins.
• New knowledge on the impact of human behavior on the control of a system and vice versa.
• Practical applications and guidelines for using human-centric digital twins in real-world scenarios.
This PhD will be carried out at Grenoble.
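As a deliberately simple sketch of the agent-based modeling ingredient, the toy model below couples a human-operator agent (with a reaction delay and a small error probability; all names and parameter values are invented and uncalibrated) to a trivial machine loop, so the twin can predict how many alarms get acknowledged:

```python
import random

class HumanOperatorModel:
    """Toy agent-based model of a human operator: a fixed reaction
    delay plus a small error probability (illustrative, not calibrated)."""
    def __init__(self, reaction_delay=2, error_rate=0.05, seed=0):
        self.delay = reaction_delay
        self.error_rate = error_rate
        self.rng = random.Random(seed)
        self.pending = []  # alarms waiting for the operator's attention

    def step(self, alarm_raised: bool) -> bool:
        """Advance one tick; return True if the operator acknowledges."""
        if alarm_raised:
            self.pending.append(self.delay)
        self.pending = [t - 1 for t in self.pending]
        if self.pending and self.pending[0] <= 0:
            self.pending.pop(0)
            return self.rng.random() > self.error_rate  # may still miss it
        return False

# Couple the human model with a (trivial) twin of the machine:
# an alarm is raised every 5 ticks over a 50-tick shift.
operator = HumanOperatorModel()
acks = sum(operator.step(alarm_raised=(t % 5 == 0)) for t in range(50))
print(f"acknowledged {acks} alarms")
```

A richer version of this loop is where the human-science knowledge would enter: the delay and error parameters would become behavioral sub-models (fatigue, workload, emotion), and the twin's prediction services would query them.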

Advanced type-based static analysis for operating system verification

In recent work [RTAS 2021], we demonstrated the benefits of a static analysis guided by types
for analyzing low-level system programs, to the point of being able to automatically verify the
absence of privilege escalation in an embedded operating system kernel as a consequence of the
type-safety of the kernel code. Memory management is a particularly difficult problem when
analyzing system programs, or more broadly, programs written in C, and type-based analysis
provides a satisfactory solution with wide applicability (e.g. data structure management code
[VMCAI 2022], dynamic language runtimes, performance-oriented application code, etc.).

The aim of this thesis is to extend the range of applications of type-based static analysis.
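The flavor of such an analysis can be conveyed with a toy checker. The memory layout and the single rule below are invented for illustration (the actual RTAS 2021 analysis works on C and binary kernel code, not on this mini-language): each abstract address carries a declared type, and the analysis rejects any access whose static type disagrees with it.

```python
# Toy illustration of type-based memory analysis: a hypothetical typed
# memory layout, and a check that every load/store respects it.

HEAP_TYPES = {
    "kernel_flag": "int",
    "user_buf":    "char*",
}

def check(program):
    """program: list of (op, addr, value_type) triples."""
    errors = []
    for i, (op, addr, vtype) in enumerate(program):
        declared = HEAP_TYPES.get(addr)
        if declared is None:
            errors.append(f"{i}: access to unknown address {addr!r}")
        elif vtype != declared:
            errors.append(f"{i}: {op} of {vtype} at {addr!r} typed {declared}")
    return errors

ok  = [("store", "kernel_flag", "int")]
bad = [("store", "kernel_flag", "char*")]   # would corrupt a kernel int
print(check(ok))    # []
print(check(bad))   # one type error reported
```

Verifying type-safety of every memory access is exactly the kind of invariant from which higher-level properties, such as the absence of privilege escalation, can then be derived.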

A formal framework for the specification and verification of distributed processes communication flows in clouds

Clouds consist of servers interconnected via the Internet, on which systems can be implemented, making use of applications and databases deployed on the servers. Cloud-based computing is gaining in popularity, including in the context of critical systems. As a result, it is useful to define formal frameworks for reasoning about cloud-based systems. One requirement for such a framework is that it enable reasoning about the concepts manipulated in a cloud, which naturally includes the ability to reason about distributed systems, composed of subsystems deployed on different machines and interacting through message passing to implement services. In this context, the ability to reason about communication flows is central. The aim of this thesis is to define a formal framework dedicated to the specification and verification of systems deployed on clouds. This framework will capitalize on the formal framework of "interactions". Interactions are models dedicated to the specification of communication flows between the different actors of a system. The thesis work will study how to define structuring (enrichment, composition) and refinement operators to enable the implementation of classical software engineering processes based on interactions.
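As a minimal sketch of what an interaction model looks like, the code below specifies the expected communication flow of a hypothetical cloud service request (the actor and message names are invented) and checks that an observed message trace conforms to it:

```python
# An "interaction": the specified order of messages between actors.
SPEC = [
    ("client",  "gateway", "request"),
    ("gateway", "backend", "query"),
    ("backend", "gateway", "result"),
    ("gateway", "client",  "response"),
]

def conforms(trace, spec=SPEC):
    """True iff the observed trace is a prefix-faithful realization
    of the specified interaction (same messages, same order)."""
    return trace == spec[:len(trace)]

good = SPEC[:3]                               # exchange still in progress: ok
bad  = [("client", "backend", "request")]     # client bypasses the gateway
print(conforms(good), conforms(bad))          # True False
```

The structuring and refinement operators studied in the thesis would act on objects like SPEC: composing interactions, enriching them with new actors or messages, and proving that a refined flow still conforms to the abstract one.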

Tool supported model completion with support for design pattern application

Generative AI and large language models (LLMs), such as Copilot and ChatGPT, can complete code based on initial fragments written by a developer. They are integrated into software development environments such as VS Code. Many papers analyse the advantages and limitations of these approaches for code generation. Despite some deficiencies, the produced code is often correct, and the results are getting increasingly better.

However, surprisingly little work has been done in the context of completing software models (for instance based on UML). Early studies conclude that while the performance of current LLMs for software modeling is still limited (in contrast to code generation), we should adapt our model-based engineering practices to these new assistants and integrate them into MBSE methods and tools.

The integration of design patterns is a complementary part of this work. Originally coming from building architecture, the term design pattern has been adopted in the software domain to capture a proven solution to a recurring problem, along with its advantages and disadvantages. Later, the term anti-pattern was proposed to identify patterns that are known not to work or to have severe disadvantages. Thus, when proposing a completion, the assistant could explicitly reference an existing design pattern with its implications. The completion proposal can be based either on identified model fragments (including modeled requirements) or on an explicit pattern selection. This thesis will explore the state of the art of AI-based model completion, design patterns, and the associated tool support. Up to now, little work is available on pattern formalization and its use in modeling tools. The thesis will propose to identify the modeler's intention based on partial models; this task could be rule-based but should also explore machine-learning approaches. A completion proposal will be implemented in the context of a design tool, notably Papyrus SW Designer, and the solution will be evaluated.
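A rule-based variant of the intention-identification step can be sketched as follows. The model encoding and the single recognition rule are invented for illustration (a real assistant would operate on UML models inside Papyrus): from a partial class model, the assistant spots an Observer-like structure and proposes a pattern-grounded completion.

```python
# Partial class model: class name -> attributes and operations.
model = {
    "Sensor":  {"attrs": ["listeners: List[Display]"],
                "ops":   ["add_listener", "read"]},
    "Display": {"attrs": [], "ops": ["update"]},
}

def suggest_patterns(model):
    """Rule-based sketch: detect an Observer opportunity when a class
    keeps a listener list and a registration op but lacks notify()."""
    suggestions = []
    for cls, m in model.items():
        has_listener_list = any("listeners" in a for a in m["attrs"])
        can_register = any("add_listener" in o for o in m["ops"])
        if has_listener_list and can_register and "notify" not in m["ops"]:
            suggestions.append(
                (cls, "Observer",
                 "add a notify() op that calls update() on each listener"))
    return suggestions

print(suggest_patterns(model))
```

Because the proposal names the pattern explicitly, the modeler also inherits its documented trade-offs, which is precisely the advantage over an opaque, purely learned completion.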

Combining over and under-approximations for low-level code analysis

Because program analysis is an undecidable problem, static analyses fall into two categories:
- Sound analyses, which calculate an over-approximation of all the program's behaviors, and make it possible to verify the absence of errors.
- Complete analyses, which calculate an under-approximation of the possible behaviors of the program, and allow errors to be found.

The aim of the thesis is to study the combination of these two kinds of techniques for the verification of low-level programs, in particular binary executables, targeting especially memory and control-flow analyses, which provide the key information for understanding the possible behaviors of a binary executable.
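A toy version of the combination, on a division-by-zero check rather than on binaries, looks like this (the interval domain and the random-testing fallback are standard textbook instances of over- and under-approximation; everything else is invented for the sketch):

```python
import random

def interval_of(ranges):
    """Sound over-approximation of x - y for x in [xlo,xhi], y in [ylo,yhi]."""
    (xlo, xhi), (ylo, yhi) = ranges
    return (xlo - yhi, xhi - ylo)

def verify_or_test(x_range, y_range, trials=1000, seed=0):
    """Check the divisor x - y: prove it nonzero, or find a real bug."""
    lo, hi = interval_of((x_range, y_range))
    if not (lo <= 0 <= hi):
        return "proved safe"              # sound: 0 is outside the interval
    rng = random.Random(seed)
    for _ in range(trials):               # under-approximation: try inputs
        x = rng.randint(*x_range); y = rng.randint(*y_range)
        if x - y == 0:
            return f"bug found: x={x}, y={y}"   # every reported bug is real
    return "unknown"                      # neither proved nor refuted

print(verify_or_test((10, 20), (1, 5)))   # divisor interval [5,19]: safe
print(verify_or_test((0, 5), (0, 5)))     # 0 reachable: a concrete witness
```

The interesting cases for the thesis are exactly the "unknown" outcomes, where the two approximations must exchange information (e.g. the over-approximation narrowing the search space of the under-approximation) instead of running independently.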

[POPL'23] SSA Translation is an Abstract Interpretation, M. Lemerre (distinguished paper award)
[VMCAI'22] Lightweight Shape Analysis Based on Physical Types, O. Nicole, M. Lemerre, X. Rival
[RTAS'21] No Crash, No Exploit: Automated Verification of Embedded Kernels, O. Nicole, M. Lemerre, S. Bardin, X. Rival (best paper award)
[LPAR'18] Arrays Made Simpler: An Efficient, Scalable and Thorough Preprocessing, B. Farinier, S. Bardin, R. David, M. Lemerre

Proof of functional equivalence of binary codes in the context of embedded program hardening

The general context of this thesis is the cyber-security of embedded
systems. The research background of this thesis is tied to the
automatic application of counter-measures against the so-called physical
attacks, which encompass observation attacks (side-channel attacks) and
perturbation attacks (fault-injection attacks).

The CEA List is working on COGITO, a compiler toolchain based on LLVM
for the automatic application of software counter-measures against
physical attacks. Given a source-level implementation of an unprotected
program, our toolchain produces an optimised binary program including
targeted counter-measures, such that the compiled program is hardened
against a specified threat model. Two key points are today crucial to
trust the compiled programs:
1. the proof of robustness of programs produced by our toolchain,
2. the proof that adding counter-measures does not alter the
functionality of the target programs.

This thesis will target the second point: bringing formal guarantees
about the functional correctness of the secured programs. We will use
sound and exhaustive symbolic reasoning, supported by BINSEC.
BINSEC is an open-source toolset
developed at CEA List to help improve software security at the binary
level. It relies on cutting-edge research in binary code analysis, at
the intersection of formal methods, program analysis, security and
software engineering.
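The equivalence question itself can be stated on a toy example. The two functions below are Python stand-ins for an original and a hardened binary (the duplicated computation mimics a fault-detection counter-measure), and the check is exhaustive over 8-bit inputs; BINSEC would instead reason symbolically on the real machine code.

```python
def original(x):
    return (x * 3 + 1) & 0xFF

def hardened(x):
    # Toy counter-measure: duplicate the computation and compare, so a
    # single injected fault in either copy is detected at runtime.
    r1 = (x * 3 + 1) & 0xFF
    r2 = (x * 3 + 1) & 0xFF
    assert r1 == r2, "fault detected"
    return r1

def equivalent(f, g, bits=8):
    """Exhaustive functional-equivalence check over all bit-patterns."""
    return all(f(x) == g(x) for x in range(1 << bits))

print(equivalent(original, hardened))   # True: semantics preserved
```

Exhaustive enumeration obviously does not scale past a few input bits; the thesis replaces it with symbolic reasoning that covers all inputs at once, which is what makes the guarantee both sound and exhaustive.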

The PhD thesis will be hosted at the CEA in Saclay, within the BINSEC team. Short-term stays at CEA Grenoble will be planned throughout the three years of the thesis to collaborate with experts and developers of COGITO.

Artful guidance of test generation tools

Fuzzing is an automatic test generation technique. It consists in repeatedly executing a program with automatically generated inputs, in order to trigger crashes that reveal underlying bugs in the code, which can then be fixed. A major challenge in this area is moving from indiscriminate exploration of program behaviors to artful guidance towards the discovery of important bugs. Taking advantage of the expertise of our team and our previous work, the objective of the thesis is to propose, implement and evaluate means to meet this challenge, relying on finer-grained guidance mechanisms.
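The baseline the thesis starts from, coverage-guided (grey-box) fuzzing, fits in a few lines. The target program and its planted bug are invented for the sketch; the guidance rule is the classic one: keep an input in the corpus whenever it reaches a branch never seen before, and mutate corpus members to dig deeper.

```python
import random

def target(data: bytes):
    """Toy program under test: crashes only on inputs starting with FUZ."""
    cov = set()
    if len(data) > 0 and data[0] == ord('F'):
        cov.add("b1")
        if len(data) > 1 and data[1] == ord('U'):
            cov.add("b2")
            if len(data) > 2 and data[2] == ord('Z'):
                raise RuntimeError("crash")      # the bug we want to find
    return cov

def fuzz(rounds=100000, seed=0):
    rng = random.Random(seed)
    corpus, seen = [b"A"], set()
    for _ in range(rounds):
        child = bytearray(rng.choice(corpus))
        if rng.random() < 0.5:                   # mutate one byte...
            child[rng.randrange(len(child))] = rng.randrange(256)
        else:                                    # ...or grow the input
            child.append(rng.randrange(256))
        child = bytes(child)
        try:
            cov = target(child)
        except RuntimeError:
            return child                         # crashing input found
        if not cov <= seen:                      # new coverage: keep input
            seen |= cov
            corpus.append(child)
    return None

print(fuzz())   # the guided search assembles the crashing prefix step by step
```

Purely random inputs would almost never hit the three-byte prefix, while the coverage feedback finds it quickly; the thesis aims at guidance signals finer than branch coverage, steering the search towards the bugs that matter rather than those that are merely reachable.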

Deep Neural Network Uncertainty Estimation on Embedded Targets

Over the last decade, Deep Neural Networks (DNNs) have become a popular choice to implement Learning-Enabled Components (LECs) in automated systems thanks to their effectiveness in processing complex sensory inputs, and their powerful representation learning that surpasses the performance of traditional methods. Despite the remarkable progress in representation learning, DNNs should also represent the confidence in their predictions before being deployed in safety-critical systems. Bayesian Neural Networks (BNNs) offer a principled framework to model and capture uncertainty in LECs. However, exact inference in BNNs is intractable. Thus, we rely on sampling techniques to approximate the true posterior over the weights when computing the posterior predictive distribution (inference). In this regard, relatively simple though computationally and memory-expensive sample-based methods have been proposed for approximate Bayesian inference to quantify uncertainty in DNNs, e.g., Monte-Carlo dropout or Deep Ensembles. Efficient DNN uncertainty estimation on resource-constrained hardware platforms remains an open problem, limiting adoption in highly automated systems that have strict computation and memory budgets, tight time constraints, and safety requirements. This thesis aims to develop novel methods and hardware optimizations for efficient and reliable uncertainty estimation in modern DNN architectures deployed on hardware platforms with limited computation resources.
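Monte-Carlo dropout, the simplest of the sample-based methods mentioned above, can be sketched on a toy one-layer regressor (the weights and the dropout rate are arbitrary, chosen only to show where the uncertainty estimate comes from): keep dropout active at inference time, run T stochastic forward passes, and read off the predictive mean and variance.

```python
import random, math

W = [0.5, -1.2, 0.8, 0.3]          # one linear layer with 4 "units"

def forward(x, rng, p_drop=0.5):
    # Each unit is dropped with probability p_drop (inverted scaling
    # keeps the expected output unchanged).
    kept = [(w if rng.random() > p_drop else 0.0) / (1 - p_drop) for w in W]
    return sum(k * x for k in kept)

def mc_dropout_predict(x, T=200, seed=0):
    """T stochastic passes -> predictive mean and variance."""
    rng = random.Random(seed)
    samples = [forward(x, rng) for _ in range(T)]
    mean = sum(samples) / T
    var = sum((s - mean) ** 2 for s in samples) / T
    return mean, var

mean, var = mc_dropout_predict(2.0)
print(f"prediction {mean:.2f} +/- {math.sqrt(var):.2f}")
```

The T forward passes are exactly the cost the thesis targets: on an embedded platform they multiply latency and memory traffic, which motivates hardware-aware approximations that preserve the quality of the variance estimate.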

Scenario-Based Testing for Automated Systems: Enhancing Safety and Reliability in Compliance with Regulations and Standards

This research aims to investigate the effectiveness of scenario-based testing as a comprehensive and robust approach for evaluating the performance of automated systems (ASs) while enhancing their safety and reliability with respect to regulations and standards.
The primary objective of this thesis will be to investigate the benefits of scenario-based testing for automated systems and its compliance with regulations and standards. The following sub-objectives will be pursued:
- Conduct an in-depth review of relevant literature and industry practices on ASs testing methodologies, with a focus on the unique challenges posed by ASs' complex decision-making algorithms and interaction with the dynamic environment.
- Develop a scenario-based testing framework for the systematic identification, generation, selection, and execution of realistic and diverse scenarios for automated systems.
- Analyze gaps or areas of non-compliance with respect to the key concepts and requirements outlined in regulations and standards, and examine their applicability, implications and improvements for scenario-based testing.
- Conduct extensive validation of the proposed scenario-based testing framework through different real-world and near-realistic transportation applications (various types, varying levels of automation, and diverse real-world scenarios) to evaluate the practical applicability and benefits of the methodology.
The findings and recommendations from this research will ultimately guide AV manufacturers, regulators, and stakeholders in developing and validating ASs that comply with the regulatory framework, fostering the safe and responsible deployment of automated systems in the future.
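The generation and selection steps of such a framework can be sketched on a toy logical scenario, a cut-in maneuver parameterized by speed, gap and weather (the parameter names, ranges and the surrogate criticality rule are illustrative only):

```python
import itertools

# Logical scenario: parameter space of a hypothetical cut-in maneuver.
PARAMS = {
    "ego_speed_kmh": [30, 50, 80, 120],
    "cut_in_gap_m":  [5, 10, 20],
    "weather":       ["dry", "rain", "fog"],
}

def generate_concrete_scenarios():
    """Generation step: enumerate all concrete parameter combinations."""
    keys = list(PARAMS)
    for combo in itertools.product(*PARAMS.values()):
        yield dict(zip(keys, combo))

def select_critical(scenarios):
    """Selection step: keep scenarios flagged by a (hypothetical)
    surrogate criticality rule, e.g. small gap at high speed."""
    return [s for s in scenarios
            if s["cut_in_gap_m"] <= 10 and s["ego_speed_kmh"] >= 80]

concrete = list(generate_concrete_scenarios())
critical = select_critical(concrete)
print(len(concrete), len(critical))   # 36 concrete scenarios, 12 critical
```

Regulatory compliance enters at the selection step: standards typically require coverage arguments over the parameter space, so the criticality rule above would be replaced by criteria traceable to the applicable regulation.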
