
Privacy-preserving federated learning over vertically partitioned data from heterogeneous participants

Artificial intelligence & Data intelligence · Computer science and software · Engineering sciences · Technological challenges


Federated learning enables multiple participants to collaboratively train a global model without sharing their data: only model parameters are exchanged between the participants and the server. In vertical federated learning (VFL), the participants' datasets share similar samples but hold different features. For instance, companies and institutions from different fields that own different features of overlapping samples can collaborate to solve a machine learning task. Although raw data remain private, VFL is still vulnerable to attacks such as label and feature inference attacks. Various privacy measures (e.g., differential privacy, homomorphic encryption) have been investigated to prevent privacy leakage. Choosing the appropriate measures is challenging, as it depends on the VFL architecture and on the desired level of privacy for each asset (e.g., local models, intermediate results, learned models). The variability of each participant's system can also result in high latency and asynchronous updates, affecting training efficiency and model effectiveness.
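To make the setting concrete, the following is a minimal toy sketch (not part of the offer) of VFL for logistic regression: two hypothetical participants hold disjoint feature blocks of the same samples, each sends only its partial score to a coordinating server, and Gaussian noise is added to those intermediate results as a simple stand-in for a differential-privacy mechanism. A real DP guarantee would additionally require norm clipping and a privacy accountant; the data, split sizes, and noise level here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 100 shared samples, features vertically partitioned.
# Participant A holds 3 features, participant B holds 2 (assumed split).
n = 100
XA = rng.normal(size=(n, 3))
XB = rng.normal(size=(n, 2))
true_w = np.array([1.0, -2.0, 0.5, 1.5, -1.0])
y = (np.concatenate([XA, XB], axis=1) @ true_w > 0).astype(float)

wA = np.zeros(3)          # A's local model (never leaves A)
wB = np.zeros(2)          # B's local model (never leaves B)
lr, sigma = 0.1, 0.01     # learning rate; noise std on intermediate results

for step in range(200):
    # Each participant computes a partial score on its own features only.
    zA = XA @ wA
    zB = XB @ wB
    # Noise the intermediate results before sending them to the server.
    logits = (zA + rng.normal(0, sigma, n)) + (zB + rng.normal(0, sigma, n))
    p = 1 / (1 + np.exp(-logits))   # server-side sigmoid
    g = (p - y) / n                 # gradient of logistic loss w.r.t. logits
    # The server broadcasts g; each participant updates its weights locally.
    wA -= lr * XA.T @ g
    wB -= lr * XB.T @ g

pred = (XA @ wA + XB @ wB) > 0
accuracy = (pred == y).mean()
```

Note that raw features never leave a participant; what leaks (and what an inference attack would target) are the exchanged partial scores and the gradient signal, which is exactly where the privacy measures discussed above apply.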

The aim of this thesis is to propose methods that enable privacy-preserving VFL while taking into account the heterogeneity of the participants. First, the candidate will study VFL model architectures and privacy measures in order to propose privacy-preserving VFL protocols. Second, the candidate will investigate the impact of the heterogeneity of the participants' systems, such as computation and communication resources, to devise solutions that make the VFL protocols robust to such heterogeneity. Third, the trade-offs among effectiveness, privacy, and efficiency in VFL will be explored, with the goal of a practical framework for adjusting the protocols to the requirements of a given machine learning problem.
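The system-heterogeneity issue can be illustrated in the same toy logistic-regression setting: below, a hypothetical slow participant B refreshes its partial score only every few rounds, and the server reuses the stale cached value in between. This is a deliberately simplified model of asynchronous updates; all names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
XA = rng.normal(size=(n, 3))
XB = rng.normal(size=(n, 2))
y = ((XA @ [1.0, -2.0, 0.5] + XB @ [1.5, -1.0]) > 0).astype(float)

def train(b_period):
    """VFL training where participant B only refreshes its partial score
    every `b_period` rounds; the server reuses the stale cached value."""
    wA, wB = np.zeros(3), np.zeros(2)
    zB = np.zeros(n)                   # cached (possibly stale) score from B
    for step in range(200):
        zA = XA @ wA
        if step % b_period == 0:       # B is slow: contributes intermittently
            zB = XB @ wB
        p = 1 / (1 + np.exp(-(zA + zB)))
        g = (p - y) / n
        wA -= 0.1 * XA.T @ g
        if step % b_period == 0:       # B also updates only when it syncs
            wB -= 0.1 * XB.T @ g
    pred = (XA @ wA + XB @ wB) > 0
    return (pred == y).mean()

acc_sync = train(1)    # B participates every round
acc_stale = train(10)  # B participates every 10th round
```

Comparing `acc_sync` and `acc_stale` shows how staleness induced by heterogeneous resources can degrade model effectiveness, which is precisely the trade-off the third research axis would need to quantify and mitigate.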


Département d’Instrumentation Numérique
Service Monitoring, Contrôle et Diagnostic
Laboratoire Instrumentation Intelligente, Distribuée et Embarquée