Peer Reviewed Chapter
Chapter Name : Integrating Edge Computing with Advanced Machine Learning Models in IoT

Author Name : Manas Ranjan Mohapatra

Copyright: © 2024 | Pages: 30

DOI: 10.71443/9788197282102-14


Abstract

The integration of edge computing with advanced machine learning models is transforming the landscape of IoT applications, driving significant advancements in real-time data processing and decision-making. This book chapter delves into the critical aspects of this integration, focusing on the optimization of machine learning models for edge environments, the development of edge-specific model architectures, and strategies for enhancing energy efficiency and reducing latency. Key areas of exploration include the deployment of lightweight models, leveraging hardware accelerators, and implementing flexible deployment strategies to address the constraints and requirements of edge devices. By examining the comparative performance of edge-specific versus traditional architectures, as well as benchmarking and evaluating model efficiency, this chapter provides a comprehensive framework for understanding and optimizing the interplay between edge computing and machine learning. The discussion is supported by practical case studies and real-world applications, offering valuable insights for researchers and practitioners seeking to enhance the capabilities and efficiency of edge-based IoT systems.

Introduction

The rapid expansion of the Internet of Things (IoT) has led to a significant increase in the volume and complexity of data generated by interconnected devices [1]. As IoT applications become more prevalent, the demand for real-time data processing and analysis has intensified [2]. Traditional cloud-based computing models, while robust, often encounter challenges related to latency, bandwidth limitations, and scalability when handling vast quantities of data [3]. In response, edge computing has emerged as a paradigm shift that brings computational resources closer to the data source [4]. This proximity reduces latency and alleviates bandwidth issues, enabling more efficient and immediate data processing [5]. Integrating advanced machine learning models with edge computing offers the potential to address these challenges by enabling on-device analytics and decision-making [6].

One of the key challenges in integrating machine learning with edge computing is optimizing models to function effectively within the constraints of edge devices [7,8]. Unlike cloud-based environments with abundant computational resources, edge devices often have limited processing power, memory, and storage [9-11]. To address these constraints, machine learning models must be optimized for efficiency [12]. This includes techniques such as model compression, which reduces the size of models without significantly impacting accuracy, and quantization, which lowers the precision of computations to save computational resources [13-15]. These optimizations are critical for ensuring that models can operate effectively on resource-constrained edge devices while maintaining performance standards [16].
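To make the quantization point concrete, the following is a minimal sketch of post-training dynamic quantization using PyTorch. The small fully connected network, its layer sizes, and the size-measuring helper are illustrative assumptions rather than a model from any deployment discussed in this chapter; the sketch only demonstrates how weights of selected layers can be stored as 8-bit integers to shrink the model for CPU-only edge hardware.

```python
import io

import torch
import torch.nn as nn

# Hypothetical stand-in for an edge-deployed model; the layer sizes are illustrative only.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Post-training dynamic quantization: weights of nn.Linear layers are stored as int8,
# reducing model size and computation without any retraining.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def serialized_size_bytes(m: nn.Module) -> int:
    """Size of the serialized state_dict, a rough proxy for on-device footprint."""
    buffer = io.BytesIO()
    torch.save(m.state_dict(), buffer)
    return buffer.getbuffer().nbytes

print(f"float32 model:        {serialized_size_bytes(model)} bytes")
print(f"int8-quantized model: {serialized_size_bytes(quantized_model)} bytes")
```

The same idea extends to convolutional and transformer layers, and static quantization with a calibration dataset typically yields further gains; the trade-off to evaluate on the target device is the usually small accuracy loss against the reduction in memory footprint and inference latency.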