Author Name : P. Krishnamoorthy
Copyright: ©2025 | Pages: 34
DOI: 10.71443/9788197933608-13
Received: 24/10/2024 Accepted: 26/12/2024 Published: 17/02/2025
Federated Learning (FL) has emerged as a promising approach for decentralized and privacy-preserving anomaly detection across distributed systems, particularly in large-scale networks. This chapter examines the integration of FL for scalable and efficient anomaly detection, addressing key challenges such as network and communication constraints, edge device limitations, and the scalability of machine learning models. Emphasis is placed on optimizing model aggregation strategies, reducing communication overhead, and leveraging local training to enhance performance. The chapter explores advanced techniques such as asynchronous updates, model compression, and hierarchical aggregation to overcome data synchronization issues. Additionally, it discusses dynamic federation strategies that adapt to system load and the importance of data management for improved scalability. By addressing these critical aspects, the chapter provides a comprehensive framework for implementing FL-based anomaly detection in real-world, resource-constrained environments. The combination of innovative methodologies and practical insights presented here paves the way for deploying FL in diverse applications, ranging from IoT systems to large-scale industrial networks, ensuring robust and efficient anomaly detection without compromising security or scalability.
Federated Learning (FL) has gained significant attention in recent years due to its ability to enable decentralized machine learning across distributed systems [1]. In traditional machine learning paradigms, data is collected and stored in a centralized location before models are trained [2]. With the advent of technologies such as IoT, edge computing, and industrial sensor networks, data is now often distributed across numerous devices [3]. This creates challenges in terms of data privacy and security, particularly when dealing with sensitive or proprietary information [4]. FL addresses these challenges by allowing models to be trained collaboratively across edge devices without the need to share raw data [5]. This is particularly useful in applications like anomaly detection, where timely identification of outliers or system failures is critical to maintaining the integrity of large-scale systems [6].
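To make the collaborative training workflow concrete, the following is a minimal sketch of one federated averaging round, assuming a simple linear model trained locally with NumPy; the function names (local_update, federated_round) and the toy data are illustrative assumptions, not an implementation from this chapter.

```python
# Minimal federated averaging (FedAvg-style) sketch using NumPy.
# Function names and data are illustrative; raw data never leaves the client.
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """Train a linear model locally with gradient descent on the client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_weights, client_data):
    """One communication round: each client trains locally, the server averages the updates."""
    updates = [local_update(global_weights, X, y) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    # Weighted average of client models, proportional to local dataset size
    return np.average(updates, axis=0, weights=sizes)

# Example: three clients, each holding a private local dataset
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, clients)
print("global model after 10 rounds:", w)
```

Only model parameters cross the network in this sketch; each client's observations stay on the device, which is the property that makes FL attractive for the privacy-sensitive settings discussed above.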
Anomaly detection is a crucial task across various domains, including cybersecurity, industrial systems, healthcare, and finance [7]. Detecting deviations from normal behavior can help identify potential threats, faults, or inefficiencies [8]. In traditional anomaly detection methods, data from different sources is collected and centralized, which may expose the system to privacy risks and increased latency [9]. Federated Learning offers a decentralized approach to anomaly detection, ensuring that sensitive data remains on local devices while models are updated collaboratively [10]. As the number of devices in these systems grows, significant challenges arise, such as computational constraints, communication overhead, and difficulties in model synchronization [11]. These factors need to be carefully managed to ensure that FL-based anomaly detection remains effective and scalable in large-scale systems [12].
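As a simple illustration of how anomaly detection can be decentralized in this way, the sketch below has each client share only summary statistics (counts, sums, and sums of squares) with a server, which aggregates them into a global baseline used to flag outliers locally. The statistic-based detector and all function names are assumptions chosen for brevity; the chapter's later sections consider richer models and aggregation strategies.

```python
# Illustrative sketch: privacy-preserving anomaly detection via federated statistics.
# Clients share only summary statistics, never raw observations.
import numpy as np

def local_summary(X):
    """Each client reports the count, sum, and sum of squares of its local data."""
    return len(X), X.sum(axis=0), (X ** 2).sum(axis=0)

def aggregate_summaries(summaries):
    """Server combines client summaries into a global mean and standard deviation."""
    n = sum(s[0] for s in summaries)
    total = sum(s[1] for s in summaries)
    total_sq = sum(s[2] for s in summaries)
    mean = total / n
    std = np.sqrt(total_sq / n - mean ** 2)
    return mean, std

def detect_anomalies(X, mean, std, threshold=3.0):
    """Flag samples whose z-score exceeds the threshold in any feature."""
    z = np.abs((X - mean) / std)
    return np.where(z.max(axis=1) > threshold)[0]

rng = np.random.default_rng(1)
clients = [rng.normal(size=(200, 3)) for _ in range(4)]
clients[0][10] = [8.0, -7.5, 9.0]          # inject an obvious outlier on client 0

mean, std = aggregate_summaries([local_summary(X) for X in clients])
print("anomalies on client 0:", detect_anomalies(clients[0], mean, std))
```

Even in this toy form, the communication cost per round is a few numbers per client rather than the full dataset, which previews the scalability and communication-overhead trade-offs addressed in the remainder of the chapter.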