Testing coverage criteria for optimized deep belief network with search and rescue

Abstract Deep learning (DL) defines a new data-driven programming paradigm in which the internal structure of the resulting neural system is built from a set of training data, so DL testing depends largely on data labeling and manual grouping. Many coverage criteria have been proposed, but they essentially count the number of neurons whose activations satisfy certain properties during the execution of a DL system, and existing criteria are not fine-grained enough to capture subtle behaviors. This paper develops an optimized deep belief network (DBN) with a search and rescue (SAR) algorithm for testing coverage criteria; the SAR algorithm is introduced to select the optimal DBN structure. The main objective is to test the DL structure against different criteria so as to improve coverage accuracy. Coverage criteria such as KMNC, NBC, SNAC, TKNC, and TKNP are used to test the DBN. The criteria are validated on generated test inputs and are able to capture undesired behaviors in the DBN structure. The approach is implemented in Python on three standard datasets: MNIST, CIFAR-10, and ImageNet. For comparison, it is evaluated against three LeNet models (LeNet-1, LeNet-4, and LeNet-5) on MNIST, VGG-16 and ResNet-20 on CIFAR-10, and VGG-19 and ResNet-50 on ImageNet. These models are tested with four adversarial test input generation approaches (BIM, JSMA, FGSM, and CW) and one DL testing method (DeepGauge) to validate the efficiency of the proposed approach. The simulation results show that the proposed approach achieves higher coverage accuracy for each criterion on the four adversarial test inputs and the DL testing method than the other compared models.
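
The coverage criteria named in the abstract (KMNC, NBC, SNAC, TKNC, TKNP) follow the DeepGauge style of counting neurons whose activations fall into particular regions of their training-time value ranges. As a rough illustration only, the sketch below computes KMNC and NBC from pre-recorded activation matrices; the `kmnc_nbc` helper, the array shapes, and the k=10 default are assumptions made for this sketch and are not taken from the paper.

```python
import numpy as np

def kmnc_nbc(train_acts, test_acts, k=10):
    """Illustrative k-multisection neuron coverage (KMNC) and neuron
    boundary coverage (NBC) over pre-recorded activations.

    train_acts, test_acts: float arrays of shape (num_inputs, num_neurons),
    one activation value per neuron per input (assumed already extracted
    from the network under test).
    """
    low = train_acts.min(axis=0)    # per-neuron lower bound from training data
    high = train_acts.max(axis=0)   # per-neuron upper bound from training data
    n_neurons = train_acts.shape[1]

    # KMNC: split each neuron's [low, high] range into k sections and count
    # the fraction of sections hit by at least one test input.
    covered = np.zeros((k, n_neurons), dtype=bool)
    span = np.maximum(high - low, 1e-12)
    for acts in test_acts:
        in_range = (acts >= low) & (acts <= high)
        section = np.clip(((acts - low) / span * k).astype(int), 0, k - 1)
        covered[section[in_range], np.arange(n_neurons)[in_range]] = True
    kmnc = covered.sum() / (k * n_neurons)

    # NBC: fraction of the 2 * n_neurons corner regions (above high or below
    # low) reached by at least one test input.
    upper_hit = (test_acts > high).any(axis=0)
    lower_hit = (test_acts < low).any(axis=0)
    nbc = (upper_hit.sum() + lower_hit.sum()) / (2 * n_neurons)
    return kmnc, nbc

# Toy usage with random activations standing in for a hidden layer's outputs.
rng = np.random.default_rng(0)
kmnc, nbc = kmnc_nbc(rng.normal(size=(1000, 50)), rng.normal(size=(200, 50)))
print(f"KMNC={kmnc:.3f}  NBC={nbc:.3f}")
```

In the usual DeepGauge formulation, SNAC would follow the same pattern as NBC but count only the upper corner regions, while TKNC and TKNP track the top-k most active neurons per layer rather than value ranges; the paper's exact definitions and thresholds may differ.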

Bibliographic Details
Main Authors: Kiran Jammalamadaka, Nikhat Parveen (Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation)
Format: Article
Language: English
Published: SpringerOpen, 2021-04-01
Series: Journal of Big Data
ISSN: 2196-1115
Subjects: Deep learning; Coverage criteria; Optimization algorithm; Adversarial example; Software testing
Online Access: https://doi.org/10.1186/s40537-021-00453-7