
Enabling efficient federated learning and fine-tuning for heterogeneous and resource-constrained devices

Artificial intelligence & Data intelligence · Technological challenges

Abstract

The goal of this PhD thesis is to develop methods that enhance resource efficiency in federated learning (FL), with particular attention to the constraints and heterogeneity of client resources. The work will first focus on the classical client-server FL architecture, before extending the investigation to decentralised FL settings. The proposed methods will be studied in the context of both federated model training and distributed fine-tuning of large models, such as large language models (LLMs).
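The classical client-server architecture mentioned above can be sketched in a few lines: the server broadcasts a global model, clients with heterogeneous dataset sizes train locally, and the server aggregates the results weighted by how much data each client holds (the FedAvg scheme). This is a minimal illustrative sketch, not the thesis's method; the model is a single scalar fitted by gradient steps on squared error, and all function names are hypothetical.

```python
def local_step(w, data, lr=0.5):
    """One gradient step on the client's local mean-squared-error loss."""
    grad = sum(2 * (w - x) for x in data) / len(data)
    return w - lr * grad

def fed_avg_round(w_global, clients):
    """Broadcast the global model, train locally on each client,
    then aggregate the local models weighted by client dataset size."""
    total = sum(len(d) for d in clients)
    return sum(len(d) / total * local_step(w_global, d) for d in clients)

# Heterogeneous clients: different data distributions and dataset sizes.
clients = [[1.0, 2.0, 3.0], [10.0] * 7]
w = 0.0
for _ in range(50):
    w = fed_avg_round(w, clients)
# w converges to the size-weighted global mean, (1 + 2 + 3 + 10*7) / 10 = 7.6
```

In decentralised FL, the thesis's second setting, the central `fed_avg_round` aggregation would instead be replaced by peer-to-peer averaging among neighbouring clients, with no server holding `w_global`.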

Laboratory

Département d’Instrumentation Numérique
Service Monitoring, Contrôle et Diagnostic
Laboratoire Instrumentation Intelligente, Distribuée et Embarquée
Paris-Saclay