



For energy-efficiency reasons, future computers will have to rely on accelerators for both computation and memory access (GPUs, TPUs, NPUs, smart DMA engines). AI applications place heavy demands on both computing power and memory throughput.
These accelerators are not driven by a simple instruction set architecture (ISA) and break with the von Neumann model: they require specialized code to be written by hand.
Moreover, it is difficult to compare code running on these accelerators with code running on a general-purpose processor, because the source codes involved are very different.
HybroLang is a hardware-close programming language that lets programs exploit all of a processor's computing capabilities, while also allowing code to be specialized according to data known only at run time.
The HybroGen compiler has already demonstrated its ability to program in-memory computing accelerators, as well as to optimize code on conventional CPUs through novel optimizations.
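To make the idea of run-time specialization concrete, here is a minimal, purely illustrative Python sketch (this is not HybroLang syntax, and the function name is hypothetical): a kernel is regenerated once a parameter's value is known at run time, so that the constant is folded into the code.

```python
def make_scaled_add(alpha):
    """Return a version of alpha*x + y specialized for the run-time value of alpha."""
    if alpha == 0:
        # alpha known to be zero: the multiply-add collapses to a copy of y
        return lambda x, y: list(y)
    if alpha == 1:
        # alpha known to be one: the multiplication disappears entirely
        return lambda x, y: [a + b for a, b in zip(x, y)]
    # general case: alpha is baked into the generated code as a constant
    return lambda x, y: [alpha * a + b for a, b in zip(x, y)]
```

A run-time code generator applies the same principle at the machine-code level: the specialized kernel avoids loads, branches, and multiplications that the generic version must keep.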
This thesis proposes to extend the HybroLang language in order to:
- facilitate the programming of AI applications by providing support for complex data patterns: stencils, convolutions, sparse computations
- enable code generation both on CPUs and on the hardware accelerators currently under development at the CEA (sparse computing, in-memory computing, memory access)
- benchmark different computing architectures from the same initial source code
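As an illustration of the kernel class listed above, here is a tiny 3-point 1-D stencil in plain Python (purely illustrative; the coefficients and boundary handling are assumptions, not a specification of the target workloads). Each output point is a weighted sum of a fixed neighborhood of the input, which is exactly the memory-access pattern that stencil and convolution accelerators are designed to exploit.

```python
def stencil_1d(u, c=(0.25, 0.5, 0.25)):
    """Apply a 3-point weighted stencil; boundary points are copied unchanged."""
    out = list(u)
    for i in range(1, len(u) - 1):
        # each interior point reads a fixed window of three neighbors
        out[i] = c[0] * u[i - 1] + c[1] * u[i] + c[2] * u[i + 1]
    return out
```

The same access pattern generalizes to 2-D stencils and convolutions, where the regular, predictable neighborhood reads are what specialized memory-access hardware can accelerate.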
Ideally, candidates should have knowledge of computer architecture, programming-language implementation, code optimization, and compilation.

