Software support for sparse computation
The performance of computers in AI, HPC and embedded computing has become limited by data movement. Hardware accelerators exist to handle data movement in an energy-efficient way, but no programming language allows them to be exploited directly from the code that expresses the computation.
It is therefore up to the programmer to explicitly configure DMAs, insert function calls for data transfers, and perform program analysis to identify memory bottlenecks.
In addition, compilers were designed in the 1980s, when memories operated at the same frequency as the computing cores.
The aim of this thesis will be to integrate into a compiler the ability to perform optimizations based on data transfers.
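To make the problem concrete, the following minimal C sketch shows the kind of hand-written data movement the thesis would like the compiler to generate and optimise automatically; the dma_transfer call stands in for a vendor DMA driver and is only a placeholder emulated with memcpy here.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical DMA interface (placeholder, not a real vendor API).
 * The stub emulates the copy so the sketch is self-contained. */
static void dma_transfer(int channel, const void *src, void *dst, size_t bytes)
{
    (void)channel;
    memcpy(dst, src, bytes);   /* real hardware would move the data asynchronously */
}

#define TILE 256

/* Sum a large array held in slow off-chip memory by streaming it
 * tile by tile into a small fast local (scratchpad) buffer. */
static int64_t sum_offchip(const int32_t *offchip, size_t n, int32_t *scratchpad)
{
    int64_t acc = 0;
    for (size_t i = 0; i < n; i += TILE) {
        size_t chunk = (n - i < TILE) ? (n - i) : TILE;

        /* The programmer, not the compiler, orchestrates every transfer. */
        dma_transfer(/*channel=*/0, &offchip[i], scratchpad,
                     chunk * sizeof(int32_t));

        for (size_t j = 0; j < chunk; j++)
            acc += scratchpad[j];
    }
    return acc;
}

int main(void)
{
    static int32_t data[1000];
    static int32_t scratch[TILE];
    for (int i = 0; i < 1000; i++) data[i] = i;
    printf("%lld\n", (long long)sum_offchip(data, 1000, scratch));
    return 0;
}
```

Overlapping the transfer with the accumulation loop through double buffering, or choosing the tile size from the scratchpad capacity, are examples of the data-transfer optimizations such a compiler could take over from the programmer.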
Scalable thermodynamic computing architectures
Large-scale optimisation problems are increasingly prevalent in industries such as finance, materials development, logistics and artificial intelligence. The corresponding algorithms are typically run on hardware solutions comprising clusters of CPUs and GPUs. At scale, however, this quickly translates into latencies, energies and financial costs that are not sustainable. Thermodynamic computing is a new computing paradigm in which analogue components are coupled together in a physical network. It promises extremely efficient implementations of algorithms such as simulated annealing, stochastic gradient descent and Markov chain Monte Carlo by exploiting the intrinsic physics of the system. However, no clear vision yet exists of how to build a realistic, programmable and scalable thermodynamic computer. It is this ambitious challenge that will be addressed in this PhD topic. Aspects ranging from the development of computing macroblocks, their partitioning and their interfacing to a digital system, to the adaptation and compilation of algorithms for thermodynamic hardware may be considered. Particular emphasis will be put on understanding the trade-offs required to maximise the scalability and programmability of thermodynamic computers on large-scale optimisation benchmarks, and on their comparison to implementations on conventional digital hardware.
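For reference, the sketch below implements in C the kind of plain digital simulated-annealing loop, on a small Ising-like quadratic cost, that would serve as a conventional baseline; the cost model, problem size and cooling schedule are illustrative choices, not part of the topic description.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 16   /* number of binary spins (illustrative size) */

/* Quadratic (Ising-like) cost: E(s) = -sum_{i<j} J[i][j] * s_i * s_j */
static double energy(const int s[N], double J[N][N])
{
    double e = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++)
            e -= J[i][j] * s[i] * s[j];
    return e;
}

int main(void)
{
    double J[N][N];
    int s[N];
    srand(42);

    /* Random couplings and a random initial spin configuration. */
    for (int i = 0; i < N; i++) {
        s[i] = (rand() % 2) ? 1 : -1;
        for (int j = 0; j < N; j++)
            J[i][j] = (double)rand() / RAND_MAX - 0.5;
    }

    double T = 2.0;                 /* initial temperature */
    double e = energy(s, J);
    for (int step = 0; step < 20000; step++) {
        int k = rand() % N;         /* propose flipping one spin */
        s[k] = -s[k];
        double e_new = energy(s, J);
        /* Metropolis acceptance: always accept downhill, sometimes uphill. */
        if (e_new <= e || exp((e - e_new) / T) > (double)rand() / RAND_MAX)
            e = e_new;
        else
            s[k] = -s[k];           /* reject: undo the flip */
        T *= 0.9995;                /* geometric cooling schedule */
    }
    printf("final energy: %f\n", e);
    return 0;
}
```

In a thermodynamic implementation, the role of the Metropolis test and the explicit cooling schedule would be played by the intrinsic noise and relaxation dynamics of the coupled analogue components, which is where the expected gains in energy and latency come from.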
HW/SW Contracts for Security Analysis Against Fault Injection Attacks on Open-source Processors
This thesis focuses on the cybersecurity of embedded systems, particularly the vulnerability of processors and programs to fault injection attacks. These attacks disrupt the normal functioning of systems, allowing attackers to exploit weaknesses to access sensitive information. Although formal methods have been developed to analyze the robustness of systems, they often limit their analyses to hardware or software separately, overlooking the interaction between the two.
The proposed work aims to formalize hardware/software (HW/SW) contracts specifically for security analysis against fault injection. Building on a hardware partitioning approach, this research seeks to mitigate scalability issues related to the complexity of microarchitecture models. Expected outcomes include the development of techniques and tools for effective security verification of embedded systems, as well as the creation of contracts that facilitate the assessment of compliance for both hardware and software implementations. This approach could also reduce the time-to-market for secure systems.
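To illustrate the kind of software-level assumption such a HW/SW contract would need to capture, the C fragment below shows a textbook duplicated-test countermeasure against a single fault that skips or inverts a comparison; the fault model and the countermeasure are standard examples chosen for illustration, not elements of the contracts to be defined.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

/* Classic duplicated-test countermeasure: a lone `if` could be bypassed
 * by a single injected fault; repeating the test forces the attacker to
 * fault both instances consistently. */
static bool pin_ok(const char *entered, const char *reference)
{
    volatile int ok = (strcmp(entered, reference) == 0);

    if (ok) {
        if (ok) {               /* redundant second check */
            return true;
        }
        /* The two checks disagreed: a fault was likely injected,
         * so fail safely. */
        abort();
    }
    return false;
}

int main(void)
{
    printf("%d\n", pin_ok("1234", "1234"));   /* 1 */
    printf("%d\n", pin_ok("0000", "1234"));   /* 0 */
    return 0;
}
```

A contract in the sense of this work would then state, on the hardware side, which faults the microarchitecture may let through (for instance, at most one corrupted comparison), and, on the software side, under which of those assumptions a pattern such as this one remains safe.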
Scalable NoC-based Programmable Cluster Architecture for future AI applications
Context
Artificial Intelligence (AI) has emerged as a major field impacting various sectors, including healthcare, automotive, robotics, and more. Hardware architectures must now meet increasingly demanding requirements in terms of computational power, low latency, and flexibility. Network-on-Chip (NoC) technology is a key enabler in addressing these challenges, providing efficient and scalable interconnections within multiprocessor systems. However, despite its benefits, designing NoCs poses significant challenges, particularly in optimizing latency, energy consumption, and scalability.
Programmable cluster architectures hold great promise for AI as they enable resource adaptation to meet the specific needs of deep learning algorithms and other compute-intensive AI applications. By combining the modularity of clusters with the advantages of NoCs, it becomes possible to design systems capable of handling ever-increasing AI workloads while ensuring maximum energy efficiency and flexibility.
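To fix ideas on the decisions a NoC router takes at every hop, the sketch below implements dimension-ordered (XY) routing on a 2D mesh, one of the simplest deterministic and deadlock-free schemes; the mesh topology and port naming are illustrative assumptions rather than design choices of the thesis.

```c
#include <stdio.h>

/* Output ports of a mesh router (illustrative naming). */
typedef enum { PORT_LOCAL, PORT_EAST, PORT_WEST, PORT_NORTH, PORT_SOUTH } port_t;

/* Dimension-ordered (XY) routing on a 2D mesh: route along X first,
 * then along Y. Deterministic and deadlock-free on a mesh. */
static port_t xy_route(int cur_x, int cur_y, int dst_x, int dst_y)
{
    if (dst_x > cur_x) return PORT_EAST;
    if (dst_x < cur_x) return PORT_WEST;
    if (dst_y > cur_y) return PORT_NORTH;
    if (dst_y < cur_y) return PORT_SOUTH;
    return PORT_LOCAL;   /* packet has reached its destination router */
}

int main(void)
{
    /* A packet travelling from router (0,0) to router (2,1) in a 3x3 mesh. */
    int x = 0, y = 0;
    const char *names[] = { "LOCAL", "EAST", "WEST", "NORTH", "SOUTH" };
    for (;;) {
        port_t p = xy_route(x, y, 2, 1);
        printf("at (%d,%d) -> %s\n", x, y, names[p]);
        if (p == PORT_LOCAL) break;
        if (p == PORT_EAST)  x++;
        if (p == PORT_WEST)  x--;
        if (p == PORT_NORTH) y++;
        if (p == PORT_SOUTH) y--;
    }
    return 0;
}
```

Latency- and energy-aware adaptive variants of this routing decision are one entry point into the design space explored in the research areas listed below.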
Summary of the Thesis Topic
This PhD project aims to design a scalable, programmable cluster architecture based on a Network-on-Chip tailored to future AI applications. The primary objective is to design and optimize a NoC architecture capable of meeting the demands of AI applications in terms of intensive computation and efficient data transfer between processing clusters.
The research will focus on the following key areas:
1. NoC Architecture Design: Developing a scalable and programmable NoC to effectively connect various AI processing clusters.
2. Performance and Energy Efficiency Optimization: Defining mechanisms to optimize system latency and energy consumption based on the nature of AI workloads.
3. Cluster Flexibility and Programmability: Proposing a modular and programmable architecture that dynamically allocates resources based on the specific needs of each AI application.
4. Experimental Evaluation: Implementing and testing prototypes of the proposed architecture to validate its performance on real-world use cases, such as image classification, object detection, and real-time data processing.
The outcomes of this research may contribute to the development of cutting-edge embedded systems and AI solutions optimized for the next generation of AI applications and algorithms.
The work performed during this thesis will be presented at international conferences and published in scientific journals. Certain results may be patented.
CORTEX: Container Orchestration for Real-Time, Embedded/edge, miXed-critical applications
This PhD proposal will develop a container orchestration scheme for applications deployed on a continuum of heterogeneous computing resources spanning the embedded, edge and cloud domains, with a specific focus on applications that require real-time guarantees.
Applications from domains such as autonomous vehicles, environment monitoring, or industrial automation traditionally require high predictability with real-time guarantees, but they increasingly call for more runtime flexibility as well as a minimization of their overall environmental footprint.
For these applications, a novel adaptive runtime strategy is required, one that can dynamically optimize at runtime the deployment of software payloads onto hardware nodes, under a mixed-criticality objective that combines real-time guarantees with the minimization of the environmental footprint.
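As a minimal illustration of a mixed-critical placement decision, the C sketch below selects, for a single payload, the node with the lowest energy estimate among those that can still meet the payload's deadline; the node model, the cost figures and the greedy strategy are assumptions made for this example, not the orchestration scheme the thesis will develop.

```c
#include <stdio.h>

/* Simplified model of a compute node in the embedded-edge-cloud continuum
 * (illustrative fields only). */
typedef struct {
    const char *name;
    double wcet_ms;      /* worst-case execution time of the payload on this node */
    double latency_ms;   /* network latency to reach this node */
    double energy_mj;    /* estimated energy for one activation */
} node_t;

/* Greedy mixed-critical placement: among nodes that can meet the deadline
 * (hard constraint), choose the one with the lowest energy (soft objective). */
static int place(const node_t *nodes, int n, double deadline_ms)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (nodes[i].wcet_ms + nodes[i].latency_ms > deadline_ms)
            continue;                       /* real-time guarantee not satisfiable */
        if (best < 0 || nodes[i].energy_mj < nodes[best].energy_mj)
            best = i;
    }
    return best;   /* -1 means the payload cannot be placed within its deadline */
}

int main(void)
{
    node_t nodes[] = {
        { "embedded", 8.0,  0.1, 12.0 },
        { "edge",     3.0,  2.0,  6.0 },
        { "cloud",    1.0, 25.0,  2.0 },
    };
    int choice = place(nodes, 3, 10.0);   /* 10 ms deadline */
    printf("selected node: %s\n", choice >= 0 ? nodes[choice].name : "none");
    return 0;
}
```

A real orchestrator would take this decision continuously, for many payloads at once, and with a proper schedulability analysis rather than a single worst-case figure per node, which is precisely the adaptive runtime strategy targeted here.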