Author

Chen, Xi

Other authors

Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors

Velasco Esteban, Luis Domingo

Publication date

2026-01-28



Abstract

Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy, but it suffers from substantial communication overhead and sensitivity to heterogeneous client data and participation. In this thesis, we systematically investigate the federated learning process from both server-side orchestration and client-side training perspectives, with particular emphasis on how client participation patterns and data heterogeneity influence convergence behavior and communication efficiency. To address the high communication cost inherent in FL, we propose FedDB, a distance-based client selection strategy. FedDB evaluates the relative contribution of individual clients using model distance metrics computed at the server and selectively regulates client participation across training rounds. Importantly, the proposed method operates entirely at the server side and requires no modification to client-side training procedures or model architectures. We evaluate FedDB across multiple learning scenarios, including convolutional neural network (CNN) training and large language model (LLM) fine-tuning. Experimental results demonstrate that FedDB achieves significant communication savings, reducing client participation frequency by up to 60% compared to baseline federated learning strategies, while maintaining competitive predictive performance and stable convergence. Overall, this work highlights the effectiveness of dynamic, distance-based client selection for communication-efficient federated learning in distributed systems.
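The abstract describes server-side client selection driven by model distance metrics. The thesis itself defines FedDB's exact metric and selection rule; as a purely illustrative sketch under assumed choices (L2 distance between each client update and the current global model, keeping the most divergent fraction of clients), the general pattern might look like:

```python
import numpy as np

def select_clients(global_weights, client_updates, keep_fraction=0.4):
    """Illustrative server-side, distance-based client selection.

    Ranks clients by the L2 distance between their submitted update
    and the global model, then keeps only a fraction of them for the
    next round. The metric and the 'keep most divergent' criterion
    are assumptions for this sketch, not FedDB's actual definition.
    """
    distances = {
        cid: float(np.linalg.norm(update - global_weights))
        for cid, update in client_updates.items()
    }
    k = max(1, int(len(client_updates) * keep_fraction))
    # Clients whose updates diverge most from the global model are kept.
    ranked = sorted(distances, key=distances.get, reverse=True)
    return ranked[:k], distances

# Toy round: 5 clients submit updates for a 4-parameter model.
rng = np.random.default_rng(0)
global_w = np.zeros(4)
updates = {f"client{i}": rng.normal(scale=i + 1, size=4) for i in range(5)}
selected, dists = select_clients(global_w, updates, keep_fraction=0.4)
```

Because selection runs entirely on quantities the server already holds (the global model and received updates), this style of filtering needs no change to client-side training, matching the server-only property the abstract emphasizes.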

Document type

Master thesis

Language

English

Published by

Universitat Politècnica de Catalunya


Rights

Open Access
