



Embedded systems are increasingly used in critical infrastructures (e.g., energy production networks) and are therefore prime targets for malicious actors. Deploying intrusion detection systems (IDS) that dynamically analyze a system's state is becoming necessary to detect an attack before its impact becomes harmful.
The IDS considered here rely on machine-learning anomaly detection: they learn the normal behavior of a system and raise an alert at the slightest deviation. However, this normal behavior is learned only once, beforehand, on a static dataset, even though the embedded systems in question can evolve over time: updates may alter their nominal behavior, and new behaviors deemed legitimate may be added.
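To make the anomaly-detection idea concrete, here is a minimal, hypothetical sketch (not a method from this thesis): per-feature means and standard deviations are learned once from normal traces, and any sample whose z-score exceeds a threshold on some feature raises an alert. The feature names and values are fabricated for illustration.

```python
import statistics

class ZScoreAnomalyDetector:
    """Toy model of normal behavior: per-feature mean/std plus a z-score test."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold  # deviations beyond this many sigmas raise an alert
        self.means = []
        self.stds = []

    def fit(self, normal_samples):
        """Learn normal behavior once, beforehand, from a static dataset."""
        columns = list(zip(*normal_samples))
        self.means = [statistics.fmean(c) for c in columns]
        self.stds = [statistics.pstdev(c) or 1e-9 for c in columns]

    def is_anomalous(self, sample):
        """Alert at the slightest deviation from the learned normal behavior."""
        return any(abs(x - m) / s > self.threshold
                   for x, m, s in zip(sample, self.means, self.stds))

# Example features (hypothetical): CPU load and packet rate of an embedded device.
normal = [(0.20, 100.0), (0.22, 104.0), (0.19, 98.0), (0.21, 102.0)]
det = ZScoreAnomalyDetector()
det.fit(normal)
print(det.is_anomalous((0.21, 101.0)))  # nominal sample -> False
print(det.is_anomalous((0.95, 450.0)))  # large deviation -> True (alert)
```

The weakness this sketch exposes is exactly the one the thesis targets: after a legitimate software update shifts the nominal feature ranges, this frozen model would flag normal behavior as an attack.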
This thesis therefore focuses on re-learning mechanisms for anomaly detection models, so that the model's knowledge of normal behavior can be updated without losing what it has already learned. Other learning paradigms, such as reinforcement learning or federated learning, may also be studied to improve IDS performance and to enable learning from the behavior of multiple systems.
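One common family of re-learning mechanisms is experience replay; the following self-contained sketch (an assumption for illustration, not the thesis's method) refits a toy normal-behavior model on a mix of new samples and a bounded replay buffer of past normal samples, so the updated model accepts the new nominal behavior without forgetting the old one. All class names and data are hypothetical.

```python
import random
import statistics

class MeanStdModel:
    """Toy normal-behavior model: per-feature mean/std with a z-score alert."""
    def fit(self, samples):
        cols = list(zip(*samples))
        self.means = [statistics.fmean(c) for c in cols]
        self.stds = [statistics.pstdev(c) or 1e-9 for c in cols]
    def is_anomalous(self, sample, threshold=3.0):
        return any(abs(x - m) / s > threshold
                   for x, m, s in zip(sample, self.means, self.stds))

class ReplayBuffer:
    """Fixed-size memory of past normal samples, filled by reservoir sampling."""
    def __init__(self, capacity=100, seed=0):
        self.capacity = capacity
        self.samples = []
        self.seen = 0
        self.rng = random.Random(seed)
    def add(self, sample):
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append(sample)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.samples[j] = sample

def refit_with_replay(model, buffer, new_samples):
    """Re-learn on new behavior mixed with replayed old behavior."""
    model.fit(buffer.samples + list(new_samples))
    for s in new_samples:
        buffer.add(s)

# Initial training on old nominal behavior (fabricated CPU load / packet rate).
model, buf = MeanStdModel(), ReplayBuffer(capacity=100)
old_normal = [(0.20 + 0.01 * (i % 3), 100.0 + (i % 5)) for i in range(50)]
for s in old_normal:
    buf.add(s)
model.fit(old_normal)

# A legitimate update shifts the nominal behavior; re-learn with replay.
new_normal = [(0.50 + 0.01 * (i % 3), 200.0 + (i % 5)) for i in range(50)]
refit_with_replay(model, buf, new_normal)
print(model.is_anomalous((0.20, 100.0)))  # old nominal behavior still accepted
print(model.is_anomalous((0.51, 203.0)))  # new nominal behavior accepted
print(model.is_anomalous((0.95, 450.0)))  # attack-like sample still flagged
```

A design note: replay is only one option; regularization-based approaches (e.g., penalizing drift of important parameters) address the same forgetting problem without storing past data, which matters on memory-constrained embedded targets.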

