Compressing deep graph convolution network with multi-staged knowledge distillation.

Given a trained deep graph convolution network (GCN), how can we effectively compress it into a compact network without significant loss of accuracy? Compressing a trained deep GCN into a compact GCN is of great importance for deploying the model in environments such as mobile or embedded systems...
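The abstract describes compressing a trained deep GCN into a compact student network via knowledge distillation. As a rough illustration of the general setup only, and not the paper's specific multi-staged method, here is a minimal PyTorch sketch of standard logit distillation from a deep teacher GCN into a shallow student; `SimpleGCN`, `distillation_loss`, and all hyperparameters are hypothetical placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    """Minimal GCN: each layer propagates node features over a
    normalized adjacency matrix A_hat, then applies a linear map."""
    def __init__(self, in_dim, hidden_dim, out_dim, num_layers):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * (num_layers - 1) + [out_dim]
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers)
        )

    def forward(self, x, a_hat):
        for i, layer in enumerate(self.layers):
            x = layer(a_hat @ x)
            if i < len(self.layers) - 1:
                x = F.relu(x)
        return x

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Standard KD objective: KL divergence between softened
    teacher/student distributions, blended with cross-entropy
    on the ground-truth labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: distill a deep teacher GCN into a compact student.
n, d, c = 100, 16, 7                    # nodes, feature dim, classes
a_hat = torch.eye(n)                    # placeholder normalized adjacency
x = torch.randn(n, d)
labels = torch.randint(0, c, (n,))

teacher = SimpleGCN(d, 64, c, num_layers=8)  # deep; pretrained in practice
student = SimpleGCN(d, 16, c, num_layers=2)  # compact
opt = torch.optim.Adam(student.parameters(), lr=1e-2)

with torch.no_grad():
    t_logits = teacher(x, a_hat)        # teacher is frozen during distillation
for _ in range(10):
    opt.zero_grad()
    loss = distillation_loss(student(x, a_hat), t_logits, labels)
    loss.backward()
    opt.step()
```

Note that this sketch distills only the final logits; the multi-staged distillation the title refers to involves transferring knowledge from intermediate stages as well, which is not reproduced here.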


Bibliographic Details
Main Authors: Junghun Kim, Jinhong Jung, U Kang
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2021-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0256187