Comparison of High Performance Parallel Implementations of TLBO and Jaya Optimization Methods on Manycore GPU

The use of optimization algorithms in engineering problems has risen markedly in recent years, leading to the proliferation of many new algorithms for solving optimization problems. In addition, the emergence of new parallelization techniques that can shorten these algorithms' convergence time has made their parallelization a subject of study for many authors. Two optimization algorithms have recently been developed: Teaching-Learning Based Optimization (TLBO) and Jaya. One of the main advantages of both algorithms over other optimization methods is that they do not require algorithm-specific parameters to be tuned for the particular problem to which they are applied. In this paper, the parallel implementations of TLBO and Jaya are compared. Both algorithms are parallelized using manycore GPU techniques. Different scenarios are created involving functions frequently used to evaluate optimization algorithms. The results make it possible to compare the two parallel algorithms with regard to the number of iterations, and the time needed to perform them, required to reach a predefined error level. The GPU resource occupation in each case is also analyzed.
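As an illustration of the Jaya method the abstract refers to (not taken from this record), the following is a minimal sequential Python sketch of the Jaya update rule published by R. V. Rao, applied to the Sphere benchmark, one of the functions commonly used to evaluate optimization algorithms. The function names and all parameter values here are illustrative assumptions; the paper's actual implementations are parallel CUDA kernels.

```python
import random

def sphere(x):
    # Sphere benchmark: f(x) = sum(x_i^2), global minimum 0 at the origin.
    return sum(v * v for v in x)

def jaya(fn, dim=2, pop=20, iters=300, lo=-5.0, hi=5.0, seed=1):
    """Sequential sketch of the Jaya update rule: each candidate moves
    toward the current best solution and away from the current worst.
    Like TLBO, Jaya needs no algorithm-specific tuning parameters."""
    rnd = random.Random(seed)
    X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    f = [fn(x) for x in X]
    for _ in range(iters):
        best = X[f.index(min(f))]
        worst = X[f.index(max(f))]
        for k in range(pop):
            cand = []
            for j in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                # Jaya update: attract toward best, repel from worst.
                v = (X[k][j]
                     + r1 * (best[j] - abs(X[k][j]))
                     - r2 * (worst[j] - abs(X[k][j])))
                cand.append(min(max(v, lo), hi))  # clamp to the search bounds
            fc = fn(cand)
            if fc < f[k]:  # greedy replacement: keep the better candidate
                X[k], f[k] = cand, fc
    return min(f)

best_f = jaya(sphere)  # converges toward the Sphere minimum at 0
```

In a GPU parallelization of the kind the paper compares, the per-candidate inner loop is the natural unit of parallel work, with one thread (or thread block) evaluating each candidate.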

Bibliographic Details
Main Authors: H. Rico-Garcia, Jose-Luis Sanchez-Romero, A. Jimeno-Morenilla, H. Migallon-Gomis, H. Mora-Mora, R. V. Rao
Format: Article
Language: English
Published: IEEE, 2019-01-01
Series: IEEE Access
Subjects: CUDA, GPU, Jaya, TLBO, optimization, parallelism
Online Access: https://ieeexplore.ieee.org/document/8834768/
DOAJ record id: doaj-0a8df63116ab4c31bbbf859aaf03b475
DOI: 10.1109/ACCESS.2019.2941086 (IEEE document 8834768)
ISSN: 2169-3536
Published in: IEEE Access, vol. 7, pp. 133822-133831, 2019
Author affiliations:
H. Rico-Garcia, Department of Computer Technology, University of Alicante, Alicante, Spain
Jose-Luis Sanchez-Romero (https://orcid.org/0000-0001-8766-2813), Department of Computer Technology, University of Alicante, Alicante, Spain
A. Jimeno-Morenilla (https://orcid.org/0000-0002-3789-6475), Department of Computer Technology, University of Alicante, Alicante, Spain
H. Migallon-Gomis, Department of Computer Engineering, Miguel Hernández University, Elche, Spain
H. Mora-Mora, Department of Computer Technology, University of Alicante, Alicante, Spain
R. V. Rao (https://orcid.org/0000-0002-9957-1086), Sardar Vallabhbhai National Institute of Technology, Surat, India