The CEA welcomes 1,600 PhD students to its laboratories each year.
Explainable observers and interpretable AI for superconducting accelerators and radioactive isotope identification
Accelerator physics · Corpuscular physics and outer space
Abstract
GANIL’s SPIRAL1 and SPIRAL2 facilities produce complex data that remain hard to interpret. SPIRAL2 faces instabilities in its superconducting cavities, while SPIRAL1 requires reliable isotope identification under noisy conditions.
This PhD project will develop observer-based, interpretable AI that combines physics models with machine learning to detect, explain, and predict anomalies. By embedding causal reasoning and explainability tools such as SHAP and LIME, it aims to improve the reliability and transparency of accelerator operations.
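As a rough illustration of the observer-based residual idea, the sketch below simulates a hypothetical discrete-time second-order plant standing in for a cavity control loop, runs a Luenberger observer alongside it, and flags samples whose innovation residual exceeds a noise-calibrated threshold. All matrices, gains, noise levels, and the injected fault are illustrative assumptions, not the project's actual models.

```python
# Minimal sketch of observer-based anomaly detection.
# The plant, observer gain, and fault are hypothetical stand-ins.
import numpy as np

# Plant: x[k+1] = A x[k] + B u[k], measurement y[k] = C x[k] + noise
A = np.array([[0.99, 0.10],
              [-0.10, 0.95]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])

# Luenberger gain chosen by hand so that (A - L C) is stable;
# a real design would use pole placement or a Kalman filter.
L = np.array([[0.5],
              [0.2]])

rng = np.random.default_rng(0)
n_steps = 400
x = np.zeros((2, 1))       # true plant state
x_hat = np.zeros((2, 1))   # observer estimate
residuals = np.zeros(n_steps)

for k in range(n_steps):
    u = np.array([[np.sin(0.05 * k)]])              # nominal drive
    fault = 0.5 if 250 <= k < 300 else 0.0          # injected actuator fault
    y = C @ x + 0.01 * rng.standard_normal((1, 1))  # noisy measurement

    innovation = y - C @ x_hat                      # residual signal
    x_hat = A @ x_hat + B @ u + L @ innovation      # observer update
    residuals[k] = innovation.item()

    x = A @ x + B @ (u + fault)                     # only the plant sees the fault

# Calibrate a threshold on the healthy prefix and flag excursions
threshold = 5.0 * residuals[:200].std()
alarms = np.flatnonzero(np.abs(residuals) > threshold)
if alarms.size:
    print(f"residual exceeds threshold from step {alarms.min()} to {alarms.max()}")
else:
    print("no anomaly detected")
```

In a fuller pipeline along the lines the abstract sketches, such residuals could feed a learned classifier whose decisions are then attributed to input features with explainability tools such as SHAP or LIME.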
Laboratory
Institut de recherche sur les lois fondamentales de l’univers
Département Grand Accélérateur National d’Ions Lourds