



This PhD project aims to develop Hopfield-type associative neural networks that perform inference through energy-minimizing dynamics.
The goal is to exploit these dynamics for image denoising and reconstruction near the sensor, under strict energy and latency constraints.
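
For reference, a classical binary Hopfield network stores patterns in a symmetric weight matrix and retrieves them through an update rule that monotonically decreases the energy E(x) = -1/2 x^T W x. The PyTorch sketch below is a minimal illustration of this retrieval dynamic; the network size, number of stored patterns and corruption level are placeholder choices, not project specifications.

```python
import torch

def hopfield_retrieve(W: torch.Tensor, x0: torch.Tensor, sweeps: int = 20) -> torch.Tensor:
    """Asynchronous Hopfield retrieval: update one neuron at a time.

    With a symmetric, zero-diagonal W, each single-neuron update can only
    lower (or keep) the energy E(x) = -0.5 * x^T W x, so the state settles
    into a local minimum, ideally the stored pattern closest to x0.
    """
    x = x0.clone()
    n = x.numel()
    for _ in range(sweeps):
        changed = False
        for i in torch.randperm(n):
            new_val = 1.0 if torch.dot(W[i], x) >= 0 else -1.0
            if new_val != x[i]:
                x[i] = new_val
                changed = True
        if not changed:          # fixed point: no neuron wants to flip
            break
    return x

# Toy usage: store two random bipolar patterns with the Hebbian outer-product
# rule, corrupt one of them, and let the dynamics clean it up.
n = 64
patterns = torch.sign(torch.randn(2, n))
W = patterns.T @ patterns / n
W.fill_diagonal_(0.0)

noisy = patterns[0] * torch.sign(torch.rand(n) - 0.1)   # flip roughly 10% of the bits
recovered = hopfield_retrieve(W, noisy)
print("overlap with stored pattern:", (recovered @ patterns[0]).item() / n)
```
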
The network synapses will be implemented in ReRAM crossbar arrays, enabling analog in-memory matrix-vector operations.
The work will focus on architecture dimensioning while accounting for array size, weight quantization, device variability and endurance limits.
Reference models will be developed in PyTorch to evaluate alternative neural dynamics and hardware mapping strategies.
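
One way such a hardware mapping might enter the reference models is to quantize the weights to a handful of conductance levels and perturb each device before the matrix-vector product; the number of levels, variability spread and read-noise figure below are illustrative assumptions, not measured device parameters.

```python
import torch

def map_to_crossbar(W: torch.Tensor, n_levels: int = 16, sigma: float = 0.05) -> torch.Tensor:
    """Emulate programming W onto a ReRAM crossbar: uniform quantization to a
    small number of conductance levels, plus multiplicative device-to-device
    variability. n_levels and sigma are illustrative assumptions, not device data.
    """
    w_max = W.abs().max()
    step = 2 * w_max / (n_levels - 1)
    W_q = torch.round(W / step) * step                   # weight quantization
    return W_q * (1.0 + sigma * torch.randn_like(W_q))   # per-device conductance spread

def crossbar_mvm(W_q: torch.Tensor, x: torch.Tensor, read_noise: float = 0.01) -> torch.Tensor:
    """One analog in-memory matrix-vector product, with additive read noise
    on the accumulated column currents."""
    y = W_q @ x
    return y + read_noise * y.abs().mean() * torch.randn_like(y)
```
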
Patch-wise image denoising will serve as the main use case to quantify trade-offs between reconstruction quality, latency and energy consumption.
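
A plausible prototype of this use case splits the image into overlapping patches, denoises each patch independently with the associative retrieval step, and averages the overlapping results back into an image. In the sketch below, the patch size, stride and the denoise_patch routine are hypothetical placeholders for the actual network.

```python
import torch
import torch.nn.functional as F

def denoise_image(img: torch.Tensor, denoise_patch, patch: int = 8, stride: int = 4) -> torch.Tensor:
    """Patch-wise denoising: unfold the image into overlapping patches, run
    each patch through the associative retrieval step, fold the results back
    and average where patches overlap.

    img: (1, 1, H, W) grayscale tensor whose size is compatible with the
    patch grid; denoise_patch maps (N, patch*patch) -> (N, patch*patch) and
    stands in for the Hopfield retrieval on one patch.
    """
    patches = F.unfold(img, kernel_size=patch, stride=stride)       # (1, patch*patch, N)
    cleaned = denoise_patch(patches.squeeze(0).T).T.unsqueeze(0)    # per-patch inference
    out = F.fold(cleaned, img.shape[-2:], kernel_size=patch, stride=stride)
    ones = F.fold(torch.ones_like(cleaned), img.shape[-2:], kernel_size=patch, stride=stride)
    return out / ones.clamp_min(1e-8)                               # normalize overlap counts
```
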
Particular attention will be paid to the robustness of the networks against hardware non-idealities such as noise, variability and memory drift.
The project will also investigate local on-chip learning mechanisms, allowing slow adaptation to changes in the sensor, scene or memory devices.
These learning rules must remain compatible with the endurance constraints of resistive memories.
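
One candidate family of rules compatible with this constraint is Hebbian-style outer-product updates applied only occasionally, and only to the synapses whose correction is large enough to justify a programming pulse. The sketch below is purely illustrative; the learning rate, update threshold and write counter are assumptions, not project decisions.

```python
import torch

def local_update(W, x, lr=0.01, threshold=0.02, write_count=None):
    """Endurance-aware Hebbian step: compute the local outer-product update,
    but only reprogram devices whose change would exceed `threshold`.

    Returns the updated weights and a per-device write counter, so the number
    of programming events can be checked against endurance limits.
    """
    if write_count is None:
        write_count = torch.zeros_like(W, dtype=torch.long)
    delta = lr * torch.outer(x, x)           # purely local: pre * post activity
    mask = delta.abs() > threshold           # skip writes too small to matter
    W = W + delta * mask
    write_count = write_count + mask.long()  # one programming pulse per selected device
    return W, write_count
```
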
Ultimately, the PhD should provide hardware-sizing guidelines and support the design of an experimental test vehicle.
The broader scientific objective is to demonstrate that dynamic associative inference can become an efficient, robust and low-power building block for edge AI.

