Regret of Multi-Channel Bandit Game in Cognitive Radio Networks

This paper studies how to evaluate the rate of convergence to Nash equilibrium solutions during channel selection under incomplete information. The notion of regret is used to characterize the convergence rates of online algorithms. The process by which each secondary user (SU) selects an idle channel is modeled as a multi-channel bandit game, and the maximal averaged regret is defined. Two existing online learning algorithms are used to drive each SU to a Nash equilibrium, and the maximal averaged regrets are used to evaluate their performance. When the multi-channel bandit game has a pure-strategy Nash equilibrium, the maximal averaged regrets are finite. A cooperation mechanism is also needed to compute the maximal averaged regrets. Simulation results show that the maximal averaged regrets are finite and that the online algorithm with the faster convergence rate attains the smaller maximal averaged regret.
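
The maximal averaged regret mentioned in the abstract is not defined in this record. As a sketch only, under assumed notation not taken from the paper ($A_i$ the channel set of secondary user $i$, $a_i^t$ the channel it selects in round $t$, $a_{-i}^t$ the other users' selections, $u_i$ its utility), a standard averaged external-regret formulation for such a bandit game, maximized over users, would be:

\[
  \bar{R}_i(T) \;=\; \frac{1}{T}\left(
    \max_{a \in A_i} \sum_{t=1}^{T} u_i\!\left(a,\, a_{-i}^{t}\right)
    \;-\; \sum_{t=1}^{T} u_i\!\left(a_i^{t},\, a_{-i}^{t}\right)
  \right),
  \qquad
  \bar{R}_{\max}(T) \;=\; \max_{i}\, \bar{R}_i(T).
\]

The paper's own definition may differ in detail; the sketch only illustrates the type of quantity used to compare the convergence rates of the two online learning algorithms.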

Bibliographic Details
Main Authors: Ma Jun, Zhang Yonghong
Format: Article
Language: English
Published: EDP Sciences 2016-01-01
Series:MATEC Web of Conferences
Subjects: cognitive radio networks, adversarial bandit problem, congestion game, online learning, dynamic spectrum access
Online Access: http://dx.doi.org/10.1051/matecconf/20165605002
id doaj-61d45cdbc32047df9143b509846f5505
record_format Article
spelling doaj-61d45cdbc32047df9143b509846f5505 | 2021-02-02T00:57:37Z | eng | EDP Sciences | MATEC Web of Conferences | 2261-236X | 2016-01-01 | 56 | 05002 | 10.1051/matecconf/20165605002 | matecconf_iccae2016_05002 | Regret of Multi-Channel Bandit Game in Cognitive Radio Networks | Ma Jun | Zhang Yonghong | This paper studies how to evaluate the rate of convergence to Nash equilibrium solutions during channel selection under incomplete information. The notion of regret is used to characterize the convergence rates of online algorithms. The process by which each secondary user (SU) selects an idle channel is modeled as a multi-channel bandit game, and the maximal averaged regret is defined. Two existing online learning algorithms are used to drive each SU to a Nash equilibrium, and the maximal averaged regrets are used to evaluate their performance. When the multi-channel bandit game has a pure-strategy Nash equilibrium, the maximal averaged regrets are finite. A cooperation mechanism is also needed to compute the maximal averaged regrets. Simulation results show that the maximal averaged regrets are finite and that the online algorithm with the faster convergence rate attains the smaller maximal averaged regret. | http://dx.doi.org/10.1051/matecconf/20165605002 | cognitive radio networks | adversarial bandit problem | congestion game | online learning | dynamic spectrum access
collection DOAJ
language English
format Article
sources DOAJ
author Ma Jun
Zhang Yonghong
spellingShingle Ma Jun
Zhang Yonghong
Regret of Multi-Channel Bandit Game in Cognitive Radio Networks
MATEC Web of Conferences
cognitive radio networks
adversarial bandit problem
congestion game
online learning
dynamic spectrum access
author_facet Ma Jun
Zhang Yonghong
author_sort Ma Jun
title Regret of Multi-Channel Bandit Game in Cognitive Radio Networks
title_short Regret of Multi-Channel Bandit Game in Cognitive Radio Networks
title_full Regret of Multi-Channel Bandit Game in Cognitive Radio Networks
title_fullStr Regret of Multi-Channel Bandit Game in Cognitive Radio Networks
title_full_unstemmed Regret of Multi-Channel Bandit Game in Cognitive Radio Networks
title_sort regret of multi-channel bandit game in cognitive radio networks
publisher EDP Sciences
series MATEC Web of Conferences
issn 2261-236X
publishDate 2016-01-01
description This paper studies how to evaluate the rate of convergence to Nash equilibrium solutions during channel selection under incomplete information. The notion of regret is used to characterize the convergence rates of online algorithms. The process by which each secondary user (SU) selects an idle channel is modeled as a multi-channel bandit game, and the maximal averaged regret is defined. Two existing online learning algorithms are used to drive each SU to a Nash equilibrium, and the maximal averaged regrets are used to evaluate their performance. When the multi-channel bandit game has a pure-strategy Nash equilibrium, the maximal averaged regrets are finite. A cooperation mechanism is also needed to compute the maximal averaged regrets. Simulation results show that the maximal averaged regrets are finite and that the online algorithm with the faster convergence rate attains the smaller maximal averaged regret.
topic cognitive radio networks
adversarial bandit problem
congestion game
online learning
dynamic spectrum access
url http://dx.doi.org/10.1051/matecconf/20165605002
work_keys_str_mv AT majun regretofmultichannelbanditgameincognitiveradionetworks
AT zhangyonghong regretofmultichannelbanditgameincognitiveradionetworks
_version_ 1724312609893646336