Reducing computational costs in deep learning on almost linearly separable training data

Previous research in deep learning indicates that the iterates of gradient descent on separable data converge toward the L2 maximum-margin solution. Even in the absence of explicit regularization, the decision boundary continues to change after the classification error on the training set has reached zero....
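The implicit-bias behavior described in the abstract can be observed directly. Below is a minimal illustrative sketch (not the paper's method) using plain gradient descent with the logistic loss on synthetic linearly separable 2-D data: the training error reaches zero early, yet the weight norm keeps growing as the direction drifts toward the max-margin separator. All names and parameter values here are hypothetical choices for illustration.

```python
import numpy as np

# Synthetic, clearly separable 2-D data (illustrative assumption).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.3, (20, 2)),
               rng.normal([-2, -2], 0.3, (20, 2))])
y = np.hstack([np.ones(20), -np.ones(20)])

w = np.zeros(2)
lr = 0.1
snapshots = []
for t in range(1, 5001):
    margins = y * (X @ w)
    # Gradient of the mean logistic loss log(1 + exp(-y * x.w)).
    grad = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= lr * grad
    if t in (100, 1000, 5000):
        err = np.mean(np.sign(X @ w) != y)
        snapshots.append((t, err, np.linalg.norm(w)))

# Training error is zero at every checkpoint, but ||w|| keeps growing:
# the classifier is still moving even though the loss gradient shrinks.
for t, err, norm in snapshots:
    print(f"step {t:5d}: train error {err:.2f}, ||w|| = {norm:.3f}")
```

The growing norm with unchanged (zero) training error is exactly the regime the abstract refers to: without explicit regularization, gradient descent continues to adjust the decision boundary long after perfect classification is achieved.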


Bibliographic Details
Main Author: Ilona Kulikovskikh
Format: Article
Language: English
Published: Samara National Research University 2020-04-01
Series: Компьютерная оптика (Computer Optics)
Online Access: http://computeroptics.smr.ru/KO/PDF/KO44-2/440219.pdf