Convolutional neural networks (CNNs) have shown extraordinary performance in a number of applications, but they are usually heavily designed in pursuit of accuracy. Beyond compressing the filters in CNNs, this paper focuses on the redundancy in the feature maps derived from the large number of filters in a layer. We propose to extract an intrinsic representation of the feature maps while preserving the discriminability of the features. A circulant matrix is employed to formulate the feature map transformation, which requires only O(d log d) computational complexity to embed a d-dimensional feature map. The filters are then reconfigured to establish the mapping from the original input to the new compact feature maps, so the resulting network preserves the intrinsic information of the original network with significantly fewer parameters, which not only decreases the online memory needed to launch the CNN but also accelerates computation. Experiments on benchmark image datasets demonstrate the superiority of the proposed algorithm over state-of-the-art methods.
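
The O(d log d) cost comes from a standard property of circulant matrices: they are diagonalized by the discrete Fourier transform, so a circulant matrix-vector product reduces to an FFT, an element-wise multiply, and an inverse FFT. The sketch below is only an illustration of that generic trick, not the paper's actual transformation; the function name `circulant_matvec` and the NumPy-based setup are assumptions for demonstration.

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix whose first column is `c` by vector `x`
    via the FFT, in O(d log d) rather than the O(d^2) dense product."""
    # A circulant matrix is diagonalized by the DFT, so the product is a
    # circular convolution: element-wise multiplication in frequency domain.
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Sanity check against the explicit dense circulant matrix C[i, j] = c[(i - j) mod d].
d = 8
c = np.random.randn(d)
x = np.random.randn(d)
dense = np.array([[c[(i - j) % d] for j in range(d)] for i in range(d)])
assert np.allclose(dense @ x, circulant_matvec(c, x))
```

Because the transformation is defined by a single length-d vector rather than a full d-by-d matrix, it also needs only O(d) parameters, which is consistent with the memory savings claimed above.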