Extending reference architecture of big data systems towards machine learning in edge computing environments

Abstract Background: Augmented reality, computer vision, and other use cases (e.g. network functions, Internet-of-Things (IoT)) can be realised in edge computing environments with machine learning (ML) techniques. To realise these use cases, it must be understood how data is collected, stored, processed, analysed, and visualised in big data systems. To provide services with low latency to end users, the utilisation of ML techniques often has to be optimised. Software/service developers also have to understand how to develop and deploy ML models in edge computing environments. Therefore, the architecture design of big data systems for edge computing environments may be challenging. Findings: The contribution of this paper is a reference architecture (RA) design of a big data system utilising ML techniques in edge computing environments. An earlier version of the RA has been extended based on 16 realised implementation architectures developed for edge/distributed computing environments. The deployment of architectural elements in different environments is also described. Finally, a system view of the software engineering aspects of ML model development and deployment is provided. Conclusions: The presented RA may facilitate the concrete architecture design of use cases in edge computing environments. The value of RAs lies in reducing the development and maintenance costs of systems, reducing risks, and facilitating communication between different stakeholders.


Bibliographic Details
Main Authors: P. Pääkkönen, D. Pakkala (VTT Technical Research Centre of Finland)
Format: Article
Language: English
Published: SpringerOpen 2020-04-01
Series: Journal of Big Data
Subjects: Neural networks; ArchiMate; Edge computing; DevOps; Inference; Machine learning
Online Access: http://link.springer.com/article/10.1186/s40537-020-00303-y
id doaj-7846e4e88dc448a9bbfdb1908250c184
record_format Article
collection DOAJ
language English
format Article
sources DOAJ
author P. Pääkkönen
D. Pakkala
title Extending reference architecture of big data systems towards machine learning in edge computing environments
publisher SpringerOpen
series Journal of Big Data
issn 2196-1115
publishDate 2020-04-01
topic Neural networks
ArchiMate
Edge computing
DevOps
Inference
Machine learning
url http://link.springer.com/article/10.1186/s40537-020-00303-y