Summary: | In recent years, deep convolutional neural networks (CNNs) have been widely used in computer vision and have significantly improved performance on image recognition tasks. Most works use the softmax loss to supervise the training of the CNN and then adopt the output of the last layer as features. However, the discriminative capability of the softmax loss is limited. Here, the authors analyse and improve the softmax loss by manipulating the cosine value and the input feature length. As the approach does not change the principle of the softmax loss, the network can still be easily optimised by typical stochastic gradient descent. The MNIST handwritten digits dataset is employed to visualise the features learned by the improved softmax loss. The CASIA-WebFace and FER2013 training sets are adopted to train deep CNNs for face and expression recognition, respectively. Results on both the LFW dataset and the FER2013 test set show that the proposed softmax loss can learn more discriminative features and achieve better performance.
|
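The summary does not give the paper's exact formulation, but the general idea it describes — rewriting the softmax logit as a product of feature length and a cosine term, then manipulating those factors — can be sketched as follows. This is a minimal, assumed illustration using normalised features and class weights with a hypothetical scale parameter `s`, not the authors' actual loss:

```python
import numpy as np

def cosine_softmax_loss(x, W, y, s=16.0):
    """Softmax cross-entropy on scaled cosine logits (illustrative sketch).

    x : (N, d) input features
    W : (d, C) class weight vectors
    y : (N,) integer class labels
    s : assumed scale replacing the feature length

    Normalising both the features and the class weights removes the
    length term, so each logit depends only on the angle between the
    feature and its class weight: logit = s * cos(theta). The paper's
    exact manipulation of the cosine and feature length may differ.
    """
    x_n = x / np.linalg.norm(x, axis=1, keepdims=True)
    W_n = W / np.linalg.norm(W, axis=0, keepdims=True)
    logits = s * (x_n @ W_n)                       # s * cos(theta)
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].mean()
```

Because the length term is normalised away, scaling an input feature leaves the loss unchanged, and a feature aligned with its class weight incurs a much smaller loss than a misaligned one — the angular discrimination the abstract alludes to.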