



This PhD topic lies within the field of symbolic artificial intelligence. Unlike approaches based on neural networks, symbolic methods rely on explicit rules, often provided by experts or learned from limited data, which makes them interpretable but also potentially imperfect.
The central problem is therefore the validation of fuzzy rule bases: the goal is to ensure that the rules produce consistent, useful, and reliable results. Existing methods rely on global metrics (overall system performance) and local metrics (the quality of each individual rule), but they do not sufficiently account for important characteristics of these systems. In particular, interactions between rules can strongly influence the final behavior.
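For concreteness, the sketch below illustrates what a global and a local metric might look like for a tiny zero-order Takagi-Sugeno rule base: a mean squared error for the whole system, and a per-rule average firing strength as a local quality indicator. The rule labels, membership functions, and dataset are assumptions made purely for illustration; note that neither metric says anything about how the rules interact.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Toy rule base: antecedent membership function and crisp consequent value.
# Labels, breakpoints, and outputs are invented for the example.
RULES = [
    ("R1: x is LOW",    lambda x: tri(x, -0.5, 0.0, 0.5), 0.0),
    ("R2: x is MEDIUM", lambda x: tri(x,  0.0, 0.5, 1.0), 0.5),
    ("R3: x is HIGH",   lambda x: tri(x,  0.5, 1.0, 1.5), 1.0),
]

def infer(x):
    """Zero-order Takagi-Sugeno inference: firing-strength-weighted average."""
    weights = [mu(x) for _, mu, _ in RULES]
    outputs = [out for _, _, out in RULES]
    total = sum(weights) or 1e-12          # avoid division by zero
    return sum(w * o for w, o in zip(weights, outputs)) / total

# Tiny labelled dataset, also invented for illustration.
DATA = [(0.05, 0.0), (0.2, 0.15), (0.4, 0.45), (0.6, 0.6), (0.8, 0.85), (0.95, 1.0)]

# Global metric: mean squared error of the complete system.
mse = sum((infer(x) - y) ** 2 for x, y in DATA) / len(DATA)
print(f"global MSE = {mse:.4f}")

# Local metric: average firing strength (coverage) of each rule in isolation.
for name, mu, _ in RULES:
    coverage = sum(mu(x) for x, _ in DATA) / len(DATA)
    print(f"{name}: mean firing strength = {coverage:.2f}")
```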
The thesis proposes to develop a comprehensive and systematic approach to validating these rule bases, whether or not data is available. In particular, it aims to design new metrics capable of capturing these interactions, drawing inspiration, for example, from graph-based approaches such as FinGrams or reputation systems.
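As one possible illustration of such a graph-based metric, the sketch below builds a co-firing graph loosely inspired by FinGrams: rules are nodes, edge weights measure how often two rules fire on the same samples, and a rule's weighted degree acts as a simple centrality- or reputation-like interaction score. The construction, threshold, and firing-strength values are assumptions for the example, not the method the thesis will necessarily adopt.

```python
import numpy as np

# Firing strengths of four hypothetical rules (rows) on five samples (columns).
# In practice these would be computed by evaluating the fuzzy system on data;
# the values here are invented for the example.
firing = np.array([
    [0.9, 0.7, 0.1, 0.0, 0.0],   # R1
    [0.2, 0.6, 0.8, 0.5, 0.1],   # R2
    [0.0, 0.0, 0.3, 0.7, 0.9],   # R3
    [0.1, 0.5, 0.7, 0.6, 0.2],   # R4
])

def interaction_graph(firing, threshold=0.2):
    """Edge weight (i, j): fraction of samples on which rules i and j both
    fire above the threshold, i.e. a simple co-firing measure."""
    fired = firing > threshold
    n = firing.shape[0]
    weights = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            weights[i, j] = weights[j, i] = np.mean(fired[i] & fired[j])
    return weights

W = interaction_graph(firing)

# Per-rule interaction score: weighted degree in the co-firing graph.
# A high score flags rules whose behaviour is strongly entangled with others
# (possible redundancy or conflict), which per-rule metrics alone do not reveal.
for i, score in enumerate(W.sum(axis=1), start=1):
    print(f"R{i}: interaction score = {score:.2f}")
```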
The work will include the definition of a methodological framework, the proposal of new validation measures, as well as their implementation and experimental evaluation.
The expected outcomes are more precise tools for detecting problematic rules, and an overall improvement in the performance and reliability of fuzzy inference systems.

