The position is related to safety assessment and assurance of AI (Artificial Intelligence)-based systems that use machine-learning components at run time to perform autonomy functions. Currently, for non-AI systems, safety is assessed prior to deployment and the assessment results are compiled into a safety case that remains valid throughout the system's life. For novel systems integrating AI components, particularly self-learning systems, such an engineering and assurance approach is not applicable, as the system can exhibit new behaviour when facing unknown situations during operation.
The goal of the postdoc will be to define an engineering approach to perform accurate safety assessment of AI systems. A second objective is to define assurance case artefacts (claims, evidence, etc.) to obtain & preserve justified confidence in the safety of the system throughout its lifetime, particularly for AI systems with operational learning. The approach will be implemented in an open-source framework and evaluated on industry-relevant applications.
The position holder will join a research and development team in a highly stimulating environment, with unique opportunities to develop a strong technical and research portfolio. They will be expected to collaborate with LSEA academic & industry partners, contribute to and manage national & EU projects, prepare and submit scientific material for publication, and provide guidance to PhD students.