Research Field

Federated Learning (FL) is a game-changing paradigm in distributed learning. It uses edge devices as clients to train a global model by leveraging the diversity of local datasets without sharing raw data. Unlike prior distributed approaches, FL enables a scalable, heterogeneous, privacy-preserving, and secure ecosystem for building comprehensive models. In practice, formal guarantees often rely on secure aggregation and/or differential privacy.

In FL, clients share their locally trained model updates with a server, which aggregates them at each communication round. The server then broadcasts the aggregated model as the initialization for the next round. This continues until a stopping criterion is met. Raw data always stays on-device, protecting client privacy. In effect, FL brings training to the data, bridging the gap between user-sensitive data and models that generalize across diverse sources. Microcontrollers (MCUs), SoCs, smartphones, tablets, PCs, wearables, and IoT/IIoT nodes can all serve as FL clients, making FL well suited for domains such as healthcare, human activity analysis, and vehicles, especially under embedded constraints (limited memory/compute, energy, and real-time requirements).
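As a concrete illustration of this round structure, the following Python sketch shows server-side aggregation in the style of FedAvg, where client updates are weighted by local dataset size. All names are illustrative and not part of any specific framework.

```python
import numpy as np

def aggregate(client_updates, client_sizes):
    """Average per-layer client updates, weighting each client by its dataset size."""
    total = float(sum(client_sizes))
    agg = [np.zeros_like(layer) for layer in client_updates[0]]
    for update, n in zip(client_updates, client_sizes):
        for i, layer in enumerate(update):
            agg[i] += (n / total) * layer
    return agg

# Example: three clients, each holding a small two-layer model.
rng = np.random.default_rng(0)
updates = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
sizes = [100, 250, 50]                    # local dataset sizes
global_model = aggregate(updates, sizes)  # broadcast as the next round's init
```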

However, FL faces several key challenges: communication overhead (caused by unreliable, bandwidth-limited client-side links such as Wi-Fi/BLE/LPWAN), severe heterogeneity in client hardware and local data, and ongoing security and privacy risks. FL is vulnerable to membership-inference and gradient-inversion attacks, which extract information from shared models or updates, as well as adversarial attacks on the training process; extensive research aims to improve robustness against them. Moreover, the inherently non-IID nature of distributed data can degrade convergence and final model accuracy, and many methods have been proposed to address this.
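To make the non-IID issue concrete, FL experiments often simulate skewed client data by drawing per-client label proportions from a Dirichlet distribution, where a small alpha produces highly imbalanced splits. The Python sketch below illustrates this; the function and parameter names are illustrative, not part of FedWork.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Assign sample indices to clients with Dirichlet-skewed label proportions."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        # Share of this class that each client receives; small alpha -> strong skew.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

# Example: 10,000 samples with 10 classes split across 20 clients.
labels = np.random.default_rng(1).integers(0, 10, size=10_000)
splits = dirichlet_partition(labels, num_clients=20, alpha=0.3)
print([len(s) for s in splits[:5]])
```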

Hardware diversity also leads to straggler clients that delay the overall system. This can be mitigated with client-selection policies for each communication round, as proposed in prior work. In addition, edge devices connect over unreliable and bandwidth-restricted links, which complicates model uploads and downloads. Approaches such as quantization and pruning reduce payload sizes to alleviate these issues.
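As an illustration of payload reduction, the sketch below applies simple 8-bit linear quantization to a model update before upload, cutting the transferred size roughly fourfold compared to float32. The helper names are hypothetical and not tied to any particular framework or to FedWork.

```python
import numpy as np

def quantize_uint8(tensor):
    """Map a float32 tensor to uint8 values plus (scale, offset) metadata."""
    t_min, t_max = float(tensor.min()), float(tensor.max())
    span = t_max - t_min
    scale = span / 255.0 if span > 0 else 1.0
    q = np.round((tensor - t_min) / scale).astype(np.uint8)
    return q, scale, t_min

def dequantize(q, scale, t_min):
    """Reconstruct an approximate float32 tensor from its quantized form."""
    return q.astype(np.float32) * scale + t_min

# Example: a 256x256 float32 update shrinks from 262,144 to 65,536 bytes.
update = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, scale, t_min = quantize_uint8(update)   # payload sent over the uplink
recovered = dequantize(q, scale, t_min)    # approximate update at the server
print(update.nbytes, q.nbytes)
```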

At the Chair of Embedded Systems, we are working to address these challenges. Our current focus is on reducing communication overhead and supporting resource-constrained devices: decreasing uplink/downlink payload size via model quantization and pruning, lowering client-side memory and compute requirements, and improving privacy. We have also developed an in-house FL framework, FedWork, which lets us rapidly implement ideas and compare methods by configuring benchmarks via a simple XML file.
