Reinforcement Learning-Based Secure Training for Adversarial Defense in Graph Neural Networks
Neurocomputing, 2025
Dongdong An, Yi Yang, Xin Gao, Hongda Qi, Yang Yang, Xin Ye, Maozhen Li, and Qin Zhao

The security of Graph Neural Networks (GNNs) is crucial to the reliability and protection of the real-world systems into which they are integrated. However, current approaches cannot prevent GNNs from learning high-risk information such as adversarial edges, nodes, and convolution operations. In this paper, we propose a secure GNN learning framework called the Reinforcement Learning-based Secure Training Algorithm. We first introduce a model conversion technique that transforms the GNN training process into a verifiable Markov Decision Process (MDP) model. To preserve model security, we employ a Deep Q-Learning algorithm to block high-risk messages during training. Additionally, to verify that the strategy derived from Deep Q-Learning meets safety requirements, we design a model transformation algorithm that converts MDPs into probabilistic verification models, so that our method's security can be established with formal verification tools. The effectiveness and feasibility of the proposed method are demonstrated by a 6.4% improvement in average accuracy on open-source datasets under adversarial attack graphs.
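To illustrate the idea of framing edge screening as an MDP solved with Q-learning, the following is a minimal sketch, not the paper's actual Deep Q-Learning implementation. It substitutes a tabular Q-learner for the deep network, treats each candidate edge as a state with KEEP/DROP actions, and uses a hypothetical `edge_risk` function and an assumed `"adversarial"` flag in place of the paper's risk signal.

```python
import random

KEEP, DROP = 0, 1

def edge_risk(edge):
    # Hypothetical risk score: stands in for whatever signal flags
    # high-risk information (adversarial edges) in the real framework.
    return 1.0 if edge.get("adversarial") else 0.0

def train_policy(edges, episodes=500, alpha=0.2, eps=0.1, seed=0):
    """Tabular Q-learning over a toy MDP: state = edge index,
    action = KEEP or DROP. Rewards favour dropping risky edges."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(len(edges)) for a in (KEEP, DROP)}
    for _ in range(episodes):
        for s, edge in enumerate(edges):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice((KEEP, DROP))
            else:
                a = max((KEEP, DROP), key=lambda x: q[(s, x)])
            risky = edge_risk(edge) > 0.5
            # +1 for screening correctly (drop risky / keep benign), else -1
            r = 1.0 if (a == DROP) == risky else -1.0
            q[(s, a)] += alpha * (r - q[(s, a)])
    # Greedy policy: the learned keep/drop decision per edge
    return {s: max((KEEP, DROP), key=lambda a: q[(s, a)])
            for s in range(len(edges))}

edges = [{"src": 0, "dst": 1},
         {"src": 1, "dst": 2, "adversarial": True}]
policy = train_policy(edges)
```

In the paper's setting the state would encode the GNN training context and a deep network would approximate the Q-function; the learned policy is then exported as a probabilistic model and checked against safety properties with a formal verification tool.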
