The CEA welcomes 1,600 PhD students to its laboratories each year.
Thesis
Enabling efficient federated learning and fine-tuning for heterogeneous and resource-constrained devices
Artificial intelligence & Data intelligence | Technological challenges
Abstract
The goal of this PhD thesis is to develop methods that enhance resource efficiency in federated learning (FL), with particular attention to the constraints and heterogeneity of client resources. The work will first focus on the classical client-server FL architecture, before extending the investigation to decentralised FL settings. The proposed methods will be studied in the context of both federated model training and distributed fine-tuning of large models, such as large language models (LLMs).
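To illustrate the classical client-server architecture the thesis starts from, the sketch below shows one round of federated averaging (FedAvg-style): clients compute local updates on private data and the server aggregates them, weighted by local dataset size. All names, values, and the simple gradient-step update are hypothetical illustrations, not the methods proposed in the thesis.

```python
# Minimal sketch of one client-server federated learning round
# (FedAvg-style aggregation). Values and names are illustrative only.

def local_update(weights, gradient, lr=0.1):
    # Each client takes one gradient step on its own private data;
    # the raw data never leaves the device, only the updated weights do.
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(client_weights, client_sizes):
    # The server averages client models, weighted by local dataset size,
    # so clients with more data contribute more to the global model.
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two heterogeneous clients: different local gradients and data volumes.
global_model = [0.0, 0.0]
updates = [
    local_update(global_model, gradient=[1.0, 2.0]),   # client A, 100 samples
    local_update(global_model, gradient=[3.0, -1.0]),  # client B, 300 samples
]
new_global = federated_average(updates, client_sizes=[100, 300])
print(new_global)
```

With these illustrative values, client B's larger dataset pulls the aggregate toward its update (roughly [-0.25, 0.025]). Resource-efficient FL methods of the kind the thesis targets typically reduce the cost of the two expensive steps here: the local computation on constrained devices and the communication of model updates.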
Laboratory
Département d’Instrumentation Numérique
Service Monitoring, Contrôle et Diagnostic
Laboratoire Instrumentation Intelligente, Distribuée et Embarquée