Author: Diana Resmerita, Rodrigo Cabral Farias, Lionel Fillatre, Benoît Dupont de Dinechin
Year of publication: 2021
Source: SSP (IEEE Statistical Signal Processing Workshop)
DOI: 10.1109/ssp49050.2021.9513733
Description: Deep neural networks need to be compressed because of their high memory requirements and computational complexity. Numerous compression methods have been proposed to address this issue, but we still do not fully understand how the compression error impacts a network's accuracy. We take inspiration from rate-distortion theory to propose a new distortion function that measures the gap between the Bayes risk of a classifier before and after compression. Since this distortion is intractable, we derive a theoretical closed-form approximation for the case where the last fully connected layer of a deep neural network is compressed with a uniform quantizer. The approximation provides insight into the relationship between the accuracy loss and key characteristics of the network. Numerical simulations show that the approximation is reasonably accurate.
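The record contains no code, but the setup the abstract describes is easy to illustrate. Below is a minimal NumPy sketch, not the authors' implementation: it uniformly quantizes the weights of a synthetic stand-in for a last fully connected layer and measures the resulting gap in empirical classification risk, a Monte Carlo proxy for the Bayes risk gap used as the distortion. The function names, bit widths, and data are all hypothetical.

```python
import numpy as np

def uniform_quantize(w, n_bits=4):
    """Uniformly quantize an array to 2**n_bits levels over its range."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (2 ** n_bits - 1)   # quantization step size
    return lo + np.round((w - lo) / step) * step

def empirical_risk(W, X, y):
    """0-1 loss of a linear softmax classifier with weight matrix W."""
    logits = X @ W                         # (n_samples, n_classes)
    return np.mean(np.argmax(logits, axis=1) != y)

rng = np.random.default_rng(0)

# Synthetic stand-in for the features feeding the last fully connected
# layer and for its trained weights (illustrative only).
n, d, c = 1000, 64, 10
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, c))
y = np.argmax(X @ W + 0.5 * rng.normal(size=(n, c)), axis=1)

for n_bits in (2, 4, 8):
    W_q = uniform_quantize(W, n_bits)
    # Distortion proxy: risk after quantization minus risk before.
    gap = empirical_risk(W_q, X, y) - empirical_risk(W, X, y)
    print(f"{n_bits}-bit quantization: risk gap = {gap:+.4f}")
```

As the bit width grows, the quantization step shrinks and the empirical risk gap tends toward zero, which is the qualitative behavior the paper's closed-form approximation characterizes analytically.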
Database: OpenAIRE