Author:
Lin R, Li JCL, Zhou J, Huang B, Ran J, Wong N
Language:
English
Source:
IEEE Transactions on Neural Networks and Learning Systems [IEEE Trans Neural Netw Learn Syst] 2023 Nov 28; Vol. PP. Date of Electronic Publication: 2023 Nov 28.
DOI:
10.1109/TNNLS.2023.3333562
Abstract:
Most deep neural networks (DNNs) consist fundamentally of convolutional and/or fully connected layers, wherein the linear transform can be cast as the product between a filter matrix and a data matrix obtained by arranging feature tensors into columns. The recently proposed deformable butterfly (DeBut) decomposition factors the filter matrix into generalized, butterfly-like factors, thus achieving network compression orthogonal to the traditional approaches of pruning or low-rank decomposition. This work reveals an intimate link between DeBut and a systematic hierarchy of depthwise and pointwise convolutions, which explains the empirically good performance of DeBut layers. By developing an automated DeBut chain generator, we show for the first time the viability of homogenizing a DNN into all DeBut layers, thus achieving extreme sparsity and compression. Various examples and hardware benchmarks verify the advantages of All-DeBut networks. In particular, we show it is possible to compress a PointNet to 5% of its parameters with a 5% accuracy drop, a record not achievable by other compression schemes.
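The abstract's opening claim, that a convolution can be cast as a filter matrix times a data matrix whose columns are rearranged feature patches, is the standard im2col view. The following is a minimal NumPy sketch of that view for a single-channel 2-D convolution; the function names and shapes are illustrative assumptions, not code from the paper.

```python
import numpy as np

def im2col(x, k):
    """Arrange every k x k patch of a 2-D array x into a column."""
    H, W = x.shape
    cols = [x[i:i + k, j:j + k].reshape(-1)
            for i in range(H - k + 1)
            for j in range(W - k + 1)]
    return np.stack(cols, axis=1)  # shape (k*k, num_patches)

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))   # input feature map
w = rng.standard_normal((3, 3))   # single 3x3 filter

# Filter matrix (one row per output channel) times data matrix:
data = im2col(x, 3)               # (9, 9) data matrix, patches as columns
filt = w.reshape(1, -1)           # (1, 9) filter matrix
y_mat = (filt @ data).reshape(3, 3)

# Direct sliding-window reference (cross-correlation, as in DNNs):
y_ref = np.array([[np.sum(x[i:i + 3, j:j + 3] * w) for j in range(3)]
                  for i in range(3)])
assert np.allclose(y_mat, y_ref)
```

DeBut then replaces the dense filter matrix in this product with a chain of sparse, butterfly-like factors, which is where the compression comes from.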
Database:
MEDLINE