Adaptive and Explainable Video Anomaly Detection

Video Anomaly Detection (VAD) aims to automatically identify unusual events in video that deviate from normal patterns. Existing methods often rely on One-Class or Weakly Supervised learning: the former uses only normal data for training, while the latter leverages video-level labels. Recent advances in Vision-Language Models (VLMs) and Large Language Models (LLMs) have improved both the performance and explainability of VAD systems. Despite progress on public benchmarks, challenges remain. Most methods are limited to a single domain, leading to performance drops when applied to new datasets with different anomaly definitions. Additionally, they assume all training data is available upfront, which is unrealistic for real-world deployment, where models must adapt to new data over time. Finally, few approaches explore multimodal adaptation, in which natural-language rules defining normal and abnormal events provide a more intuitive and flexible way to update a VAD system without collecting new video samples.

This PhD research aims to develop adaptable Video Anomaly Detection methods capable of handling new domains or anomaly types using few video examples and/or textual rules.

The main lines of research will be the following:
• Cross-Domain Adaptation in VAD: improving robustness against domain gaps through Few-Shot adaptation;
• Continual Learning in VAD: continually enriching the model to deal with new types of anomalies;
• Multimodal Few-Shot Learning: facilitating the model adaptation process through rules in natural language.
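As a purely illustrative sketch of the third research line, the snippet below scores a video clip against natural-language rules by comparing embeddings, in the spirit of a CLIP-style joint text/video space. The embeddings here are toy vectors standing in for encoder outputs, and the scoring function is a hypothetical baseline, not a method proposed by this thesis topic.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rule_based_anomaly_score(clip_emb, normal_rule_embs, abnormal_rule_embs):
    """Compare a clip embedding with embeddings of natural-language rules
    (e.g. "people walking" vs. "a person climbing a fence").
    Positive score = closer to an abnormal rule than to any normal rule."""
    sim_normal = max(cosine(clip_emb, r) for r in normal_rule_embs)
    sim_abnormal = max(cosine(clip_emb, r) for r in abnormal_rule_embs)
    return sim_abnormal - sim_normal

# Toy embeddings standing in for the outputs of a text/video encoder
normal_rules = [np.array([1.0, 0.0, 0.0]), np.array([0.8, 0.2, 0.0])]
abnormal_rules = [np.array([0.0, 1.0, 0.0])]

anomalous_clip = np.array([0.1, 0.9, 0.1])  # close to the abnormal rule
normal_clip = np.array([0.9, 0.1, 0.0])     # close to the normal rules

score_anom = rule_based_anomaly_score(anomalous_clip, normal_rules, abnormal_rules)
score_norm = rule_based_anomaly_score(normal_clip, normal_rules, abnormal_rules)
```

The point of the sketch is that adding or editing a rule changes the detector's behavior without any new video samples, which is exactly the adaptation mechanism the research line targets.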

AI-Driven Network Management with Large Language Models (LLMs)

The increasing complexity of heterogeneous networks (satellite, 5G, IoT, TSN) requires an evolution in network management. Intent-Based Networking (IBN), while advanced, still faces challenges in unambiguously translating high-level intentions into technical configurations. This work proposes to overcome this limitation by leveraging Large Language Models (LLMs) as a cognitive interface for complete and reliable automation.
This thesis aims to design and develop an IBN-LLM framework to create the cognitive brain of a closed control loop on top of an SDN architecture. The work will focus on three major challenges: 1) developing a reliable semantic translator from natural language to network configurations; 2) designing a deterministic Verification Engine (via simulations or digital twins) to prevent LLM "hallucinations"; and 3) integrating real-time analysis capabilities (RAG) for Root Cause Analysis (RCA) and the proactive generation of optimization intents.
We anticipate the design of an IBN-LLM architecture integrated with SDN controllers, along with methodologies for the formal verification of configurations. The core contribution will be the creation of an LLM-based model capable of performing RCA and generating optimization intents in real time. The approach will be validated through a functional prototype (PoC), whose experimental evaluation will measure performance in terms of accuracy, latency, and resilience.
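To make the closed control loop concrete, here is a minimal, entirely hypothetical sketch of its three stages: translate an intent, verify the candidate configuration deterministically, then apply or reject it. The translator is a hard-coded stand-in for an LLM call, and the configuration fields, constraint values, and function names are illustrative assumptions, not part of the proposed framework.

```python
def translate_intent(intent):
    """Stand-in for the LLM semantic translator: maps a natural-language
    intent to a candidate configuration. A real system would prompt an LLM
    and parse its structured output."""
    if "low latency" in intent:
        return {"queue_policy": "strict-priority", "reserved_mbps": 200}
    return {"queue_policy": "fifo", "reserved_mbps": 50}

def verify(config, link_capacity_mbps=100):
    """Deterministic Verification Engine: checks candidate configurations
    against known invariants (here, reserved bandwidth vs. link capacity),
    catching translator 'hallucinations' before deployment."""
    return config["reserved_mbps"] <= link_capacity_mbps

def control_loop(intent):
    """One iteration of the closed loop: translate, verify, apply/reject."""
    config = translate_intent(intent)
    if verify(config):
        return ("applied", config)
    return ("rejected", config)  # would be fed back to the translator for repair

status_lat, cfg_lat = control_loop("low latency for video traffic")
status_be, cfg_be = control_loop("best-effort file transfer")
```

The key design point the sketch illustrates is that the verifier, not the LLM, has the final say: a generated configuration that violates an invariant never reaches the network.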

Hybrid Compression of Neural Networks for Embedded AI: Balancing Efficiency and Accuracy

Convolutional Neural Networks (CNNs) have become a cornerstone of computer vision, yet deploying them on embedded devices (robots, IoT systems, mobile hardware) remains challenging due to their large size and energy requirements. Model compression is a key solution to make these networks more efficient without severely impacting accuracy. Existing methods (such as weight quantization, low-rank factorization, and sparsity) show promising results but quickly reach their limits when used independently. This PhD will focus on designing a unified optimization framework that combines these techniques in a synergistic way. The work will involve both theoretical aspects (optimization methods, adaptive rank selection) and experimental validation (on benchmark CNNs like ResNet or MobileNet, and on embedded platforms such as Jetson, Raspberry Pi, and FPGA). An optional extension to transformer architectures will also be considered. The project benefits from complementary supervision: academic expertise in tensor decompositions and an industrial partner specializing in hardware-aware compression.
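As a small illustration of one of the techniques mentioned above, the sketch below applies low-rank factorization to a dense weight matrix via truncated SVD: the matrix, its intrinsic rank, and the chosen rank are synthetic assumptions, and real layers would require fine-tuning after factorization as well as combination with quantization or sparsity, which is precisely the gap the thesis addresses.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Replace a dense weight matrix W (m x n) by two factors A (m x r)
    and B (r x n) via truncated SVD, so that W ~ A @ B. This saves
    parameters whenever r * (m + n) < m * n."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * S[:rank], Vt[:rank]

rng = np.random.default_rng(0)
# Synthetic weights with intrinsic rank 64 (a stand-in for a trained layer)
W = rng.standard_normal((256, 64)) @ rng.standard_normal((64, 512))

A, B = low_rank_factorize(W, rank=64)
rel_error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
compression = (A.size + B.size) / W.size  # fraction of original parameters kept
```

Because the synthetic matrix has exact rank 64, the factorization is lossless here while keeping only 37.5% of the parameters; on trained networks the rank must be chosen per layer to trade accuracy against size, which motivates the adaptive rank selection mentioned in the topic.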
