Analysis of Vector Code Offloading Framework in Heterogeneous Cloud and Edge Architectures

Bibliographic Details
Main Authors: Junaid Shuja, Saad Mustafa, Raja Wasim Ahmad, Sajjad A. Madani, Abdullah Gani, Muhammad Khurram Khan
Format: Article
Language: English
Published: IEEE 2017-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8107487/
Description
Summary: Smartphones are computationally constrained compared with server devices because of their small size and limited battery power. Compute-intensive tasks are therefore often offloaded from smartphones to nearby high-end cloud and edge servers. ARM architectures dominate smartphones, while x86 dominates server devices. This difference in architectures requires dynamic binary translation (DBT) of the migrated compiled code, which increases task execution time on the cloud servers. Multimedia applications contain a large number of vector (single instruction, multiple data) instructions that are compute and resource intensive. Vector instructions optimize application execution by processing multiple data points in parallel within a single instruction. However, DBT of vector instructions loses this parallelism and optimization because of vector-to-scalar translation. We present and analyze a framework for pre-compiled vector instruction translation and offloading in heterogeneous compute architectures that avoids the execution overhead of compiled code offloading. The framework maps and translates ARM vector intrinsics to x86 vector intrinsics so that an application programmed for the ARM architecture can execute on the x86 architecture without any modification. We analyze the code offloading framework with static code analysis to determine the optimal compilers and corresponding compilation parameters. Moreover, we analyze the overhead of the vector instruction translator and the application profiler. Furthermore, a comparative analysis over increasing computational sizes reveals that our framework provides 78.8% energy efficiency compared with existing code translation and offloading frameworks.
ISSN: 2169-3536
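
Illustrative sketch (not part of the published record): the abstract describes mapping ARM vector intrinsics to their x86 counterparts so that SIMD code written for a smartphone can run on an offload server without per-instruction binary translation. The C fragment below is a minimal sketch of that idea, assuming a simple macro-based mapping layer over the standard NEON intrinsics (arm_neon.h) and SSE intrinsics (xmmintrin.h); the vec4f type, the vec4f_* macros, and the add_arrays function are hypothetical names chosen for illustration and are not taken from the paper.

/* Hypothetical mapping layer: one portable name per vector operation,
 * resolved to the native intrinsic of whichever architecture compiles it. */
#include <stddef.h>

#if defined(__aarch64__) || defined(__arm__)
  #include <arm_neon.h>
  typedef float32x4_t vec4f;
  #define vec4f_load(p)      vld1q_f32(p)        /* load 4 packed floats  */
  #define vec4f_add(a, b)    vaddq_f32((a), (b)) /* SIMD add, 4 lanes     */
  #define vec4f_store(p, v)  vst1q_f32((p), (v)) /* store 4 packed floats */
#else  /* x86 / x86-64 */
  #include <xmmintrin.h>
  typedef __m128 vec4f;
  #define vec4f_load(p)      _mm_loadu_ps(p)
  #define vec4f_add(a, b)    _mm_add_ps((a), (b))
  #define vec4f_store(p, v)  _mm_storeu_ps((p), (v))
#endif

/* Element-wise addition of two float arrays using native vector
 * instructions on either architecture, four lanes at a time. */
void add_arrays(const float *a, const float *b, float *out, size_t n)
{
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        vec4f va = vec4f_load(a + i);
        vec4f vb = vec4f_load(b + i);
        vec4f_store(out + i, vec4f_add(va, vb));
    }
    for (; i < n; ++i)                 /* scalar tail for leftover elements */
        out[i] = a[i] + b[i];
}

Because each NEON operation is resolved to an equivalent x86 vector operation rather than being translated instruction by instruction at run time, the data parallelism is preserved on both sides, which is the property the abstract says vector-to-scalar DBT destroys.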