A Learning-on-Cloud Power Management Policy for Multiprocessor Smart Devices

Doctoral dissertation === National Chiao Tung University === Department of Electronics Engineering, Institute of Electronics === 104 === As smart devices become ubiquitous, the development of system-level power management policies is crucial for battery-powered embedded systems: low-utilized processors are slowed down using dynamic voltage and frequency scaling (...

Full description

Bibliographic Details
Main Authors: Pan, Gung-Yu, 潘畊宇
Other Authors: Jou, Jing-Yang
Format: Others
Language: en_US
Published: 2015
Online Access: http://ndltd.ncl.edu.tw/handle/44112023323478151257
id ndltd-TW-104NCTU5428006
record_format oai_dc
spelling ndltd-TW-104NCTU54280062017-09-15T04:40:08Z http://ndltd.ncl.edu.tw/handle/44112023323478151257 A Learning-on-Cloud Power Management Policy for Multiprocessor Smart Devices 在多核心智慧型裝置上結合雲端運算及機器學習演算法所實現的電源管理策略 Pan, Gung-Yu (潘畊宇). Doctoral dissertation, National Chiao Tung University, Department of Electronics Engineering, Institute of Electronics, academic year 104. As smart devices become ubiquitous, the development of system-level power management policies is crucial for battery-powered embedded systems: low-utilized processors are slowed down using dynamic voltage and frequency scaling (DVFS), and idle components are turned off using dynamic power management (DPM). Owing to the growing number of components and the diversity of input contexts, both efficiency and effectiveness must be carefully considered when designing power management policies for future smart devices. Moreover, the power managers should adapt to their environment and act autonomously for their users. This dissertation develops a comprehensive power management policy for future smart devices that keeps the multiprocessors and components energy-efficient while the power managers remain autonomous and lightweight. The proposed policy focuses on the multiprocessors first and then extends to the whole system of the smart device. As the number of cores in a system grows, policy scalability becomes critical because the search space expands exponentially; two highly scalable algorithms are therefore proposed for multiprocessors. The DVFS-driven combinatorial algorithm first constructs an optimal mode-combination table in pseudo-polynomial time and then assigns modes to cores with minimum transition cost in linear time. The DPM-driven learning engine exploits a multi-level paradigm to decide and update in linearithmic time, and raises the convergence rate by compressing the redundant search space.
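A mode-combination table of the kind the abstract describes can be pictured as a bounded-knapsack dynamic program: for every integer power budget, record the best total performance achievable by giving each core one frequency mode, in time pseudo-polynomial in the budget. The sketch below is illustrative only, not the dissertation's algorithm; the four (power, performance) modes, the budget, and the core count are hypothetical values chosen for the example.

```python
# Hypothetical per-core modes: (power in mW, performance in arbitrary units).
# Mode (0, 0) models a core that is switched off.
MODES = [(0, 0), (100, 40), (200, 70), (350, 100)]

def mode_combination_table(n_cores, max_budget):
    """Build best[b] = maximum total performance of n_cores cores whose
    summed power is at most b. Runs in O(n_cores * max_budget * |MODES|),
    i.e. pseudo-polynomial in the power budget."""
    NEG = float("-inf")
    best = [0] + [NEG] * max_budget   # zero cores assigned: 0 perf at 0 power
    for _ in range(n_cores):
        nxt = [NEG] * (max_budget + 1)
        for b in range(max_budget + 1):
            if best[b] == NEG:
                continue
            for power, perf in MODES:  # pick a mode for the next core
                if b + power <= max_budget:
                    nxt[b + power] = max(nxt[b + power], best[b] + perf)
        best = nxt
    # Make the table monotone: budget b inherits the best cheaper combination.
    for b in range(1, max_budget + 1):
        best[b] = max(best[b], best[b - 1])
    return best

table = mode_combination_table(n_cores=4, max_budget=800)
print(table[800])  # 280: all four cores in the (200 mW, 70) mode
```

Once the table exists, answering "best performance under budget b" is a single lookup, which is what makes a fast linear-time assignment step possible afterwards.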
Compared with state-of-the-art policies, the combinatorial optimization policy achieves better performance for any given power budget with up to a 125X speedup, while the multi-level reinforcement learning policy runs 53% faster and achieves 13.6% energy savings with only a 2.7% latency penalty on average. The growing capability of smart devices burdens the power manager with more inputs and outputs and a larger search space. Since most smart devices are connected to the Internet, the sophisticated learning engine is offloaded to the cloud to reduce on-device overhead, and training samples are shared among different devices to accelerate the learning process. As a result, when one thousand same-model devices are connected to the cloud, the proposed policy converges within a few iterations; moreover, the measured overhead is only 0.01% of the system time when the policy is implemented as an Android app. The policy in this dissertation is not restricted to current systems: it applies to any future Internet-connected smart device, with further considerations such as heterogeneous architectures, thermal effects, and process variation. The framework can also be applied to the Internet of Things (IoT) and home automation in the near future. Jou, Jing-Yang; Lai, Bo-Cheng (周景揚; 賴伯承). 2015. Thesis; 125; en_US.
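The sample-sharing idea can be pictured with a toy cloud-side Q-learning aggregator: each device uploads (state, action, reward, next-state) tuples, the cloud folds them into one shared value table, and every device downloads the resulting greedy policy, so each device benefits from every other device's experience. This is a minimal sketch of the sharing mechanism under assumed states, actions, rewards, and learning constants; it is not the dissertation's multi-level engine.

```python
from collections import defaultdict

ALPHA = 0.1   # learning rate (hypothetical)
GAMMA = 0.9   # discount factor (hypothetical)
ACTIONS = ["sleep", "stay_on"]

class CloudLearner:
    def __init__(self):
        self.q = defaultdict(float)  # shared Q-table keyed by (state, action)

    def absorb(self, samples):
        """Fold a batch of (state, action, reward, next_state) samples,
        possibly gathered from many devices, into the shared Q-table."""
        for s, a, r, s2 in samples:
            best_next = max((self.q[(s2, a2)] for a2 in ACTIONS), default=0.0)
            self.q[(s, a)] += ALPHA * (r + GAMMA * best_next - self.q[(s, a)])

    def policy(self, state):
        """What a device downloads: the greedy action for its current state."""
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

cloud = CloudLearner()
# Two devices observed that sleeping pays off when idle but not when busy.
device_a = [("idle", "sleep", +1.0, "idle"), ("busy", "sleep", -1.0, "busy")]
device_b = [("idle", "sleep", +1.0, "idle"), ("busy", "stay_on", +0.5, "busy")]
for _ in range(50):              # shared samples replayed over many iterations
    cloud.absorb(device_a + device_b)

print(cloud.policy("idle"))      # -> sleep
print(cloud.policy("busy"))      # -> stay_on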
collection NDLTD
language en_US
format Others
sources NDLTD
author2 Jou, Jing-Yang
author_facet Jou, Jing-Yang
Pan, Gung-Yu
潘畊宇
author Pan, Gung-Yu
潘畊宇
spellingShingle Pan, Gung-Yu
潘畊宇
A Learning-on-Cloud Power Management Policy for Multiprocessor Smart Devices
author_sort Pan, Gung-Yu
title A Learning-on-Cloud Power Management Policy for Multiprocessor Smart Devices
title_short A Learning-on-Cloud Power Management Policy for Multiprocessor Smart Devices
title_full A Learning-on-Cloud Power Management Policy for Multiprocessor Smart Devices
title_fullStr A Learning-on-Cloud Power Management Policy for Multiprocessor Smart Devices
title_full_unstemmed A Learning-on-Cloud Power Management Policy for Multiprocessor Smart Devices
title_sort learning-on-cloud power management policy for multiprocessor smart devices
publishDate 2015
url http://ndltd.ncl.edu.tw/handle/44112023323478151257
work_keys_str_mv AT pangungyu alearningoncloudpowermanagementpolicyformultiprocessorsmartdevices
AT pāngēngyǔ alearningoncloudpowermanagementpolicyformultiprocessorsmartdevices
AT pangungyu zàiduōhéxīnzhìhuìxíngzhuāngzhìshàngjiéhéyúnduānyùnsuànjíjīqìxuéxíyǎnsuànfǎsuǒshíxiàndediànyuánguǎnlǐcèlüè
AT pāngēngyǔ zàiduōhéxīnzhìhuìxíngzhuāngzhìshàngjiéhéyúnduānyùnsuànjíjīqìxuéxíyǎnsuànfǎsuǒshíxiàndediànyuánguǎnlǐcèlüè
AT pangungyu learningoncloudpowermanagementpolicyformultiprocessorsmartdevices
AT pāngēngyǔ learningoncloudpowermanagementpolicyformultiprocessorsmartdevices
_version_ 1718533732605886464