Smart D2D Resource Management in H-CRANs using Asynchronous Federated Reinforcement Learning
DOI:
https://doi.org/10.56042/ijpap.v64i1.22158

Keywords:
Device-to-device (D2D) communication, Heterogeneous cloud radio access networks (H-CRANs), 5G-V2X systems, Federated deep reinforcement learning (FDRL), Resource allocation, Privacy-preserving learning

Abstract
Efficient mode selection and resource allocation remain critical challenges for device-to-device (D2D) communication in heterogeneous cloud radio access networks (H-CRANs), especially in 5G vehicle-to-everything (5G-V2X) systems. Conventional centralized approaches suffer from high communication overhead, synchronization delays, sensitivity to non-independent and identically distributed (non-IID) data, and significant privacy concerns. To address these limitations, this paper proposes a novel framework that combines asynchronous federated deep reinforcement learning (AF-DRL) with Federated Averaging. In the proposed approach, D2D agents learn optimal transmission policies locally without exchanging raw data and asynchronously transmit model updates to a central server. The server aggregates these updates into a global model, which is redistributed to the agents iteratively. This decentralized learning paradigm ensures scalability, privacy preservation, and robustness to non-IID data. Extensive simulations validate the effectiveness of the framework, demonstrating a 20% improvement in user satisfaction, a 15% improvement in resource utilization, a 20% reduction in latency, and a 25% improvement in throughput compared with state-of-the-art methods. These results highlight the potential of the proposed method for enhancing resource management in next-generation vehicular communication systems.
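The asynchronous aggregation loop described above (agents push local model updates at their own pace; the server blends each arriving update into the global model and redistributes it) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the staleness-weighted mixing rule, the function name `async_fedavg`, and the decay constant are assumptions for the sake of the example.

```python
import numpy as np

def async_fedavg(global_model, update, staleness, base_lr=0.5):
    """Blend one asynchronously arriving client update into the global model.

    Stale updates are down-weighted, a common heuristic in asynchronous
    federated learning; the exact rule used by the paper is an assumption here.
    """
    alpha = base_lr / (1.0 + staleness)  # hypothetical staleness decay
    return (1.0 - alpha) * global_model + alpha * update

# Toy run: three D2D agents push policy-weight updates with different staleness.
global_w = np.zeros(4)                       # global policy weights
arrivals = [(np.ones(4), 0),                 # fresh update
            (2.0 * np.ones(4), 3),           # stale update
            (np.full(4, -1.0), 1)]           # mildly stale update
for local_w, staleness in arrivals:
    global_w = async_fedavg(global_w, local_w, staleness)
print(global_w)  # global model after three asynchronous merges
```

Because each update is merged as it arrives, no agent ever blocks on a synchronization barrier, which is the property the abstract credits for the latency reduction.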
License
Copyright (c) 2026 Indian Journal of Pure & Applied Physics (IJPAP)

This work is licensed under a Creative Commons Attribution 4.0 International License.