Building Constraints, Geometric Invariants and Interpretability in Deep Learning: Applications in Computational Imaging and Vision

Abstract: Over the last decade, deep neural networks, also known as deep learning, combined with large databases and specialized hardware for computation, have made major strides in important areas such as computer vision, computational imaging, and natural language processing. However, such frameworks currently suffer from some drawbacks. For example, it is generally not clear how architectures should be designed for different applications or how neural networks behave under different input perturbations, and it is not easy to make the internal representations and parameters more interpretable. In this dissertation, I propose building constraints into the feature maps, parameters, and design of algorithms involving neural networks, for applications in low-level vision problems such as compressive imaging and multi-spectral image fusion, and in high-level inference problems including activity and face recognition. Depending on the application, such constraints can be used to design architectures that are invariant or robust to certain nuisance factors, more efficient, and, in some cases, more interpretable. Through extensive experiments on real-world datasets, I demonstrate these advantages of the proposed methods over conventional frameworks.


Bibliographic Details
Other Authors: Lohit, Suhas Anand (Author)
Advisor: Turaga, Pavan
Committee Members: Spanias, Andreas; Li, Baoxin; Jayasuriya, Suren
Publisher: Arizona State University
Format: Doctoral Thesis (Doctoral Dissertation, Electrical Engineering), 169 pages
Language: English
Published: 2019
Subjects: Electrical engineering; Computer science; Compressive sensing; Computer vision; Deep learning; Differential geometry; Machine learning; Signal processing
Rights: http://rightsstatements.org/vocab/InC/1.0/
Online Access: http://hdl.handle.net/2286/R.I.55542
Record ID: ndltd-asu.edu-item-55542