Showing 1 - 10 of 19 results for the search query: '"Flamich, Gergely"'
Current methods for compressing neural network weights, such as decomposition, pruning, quantization, and channel simulation, often overlook the inherent symmetries within these networks and thus waste bits on encoding redundant information. In this …
External link:
http://arxiv.org/abs/2410.01309
Relative entropy coding (REC) algorithms encode a random sample following a target distribution $Q$, using a coding distribution $P$ shared between the sender and receiver. Sadly, general REC algorithms suffer from prohibitive encoding times, at least …
External link:
http://arxiv.org/abs/2405.12203
Author:
Flamich, Gergely, Wells, Lennie
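The REC setting described in the entry above (encoding a sample from a target $Q$ using a coding distribution $P$ shared between sender and receiver) can be illustrated with a minimal-random-coding-style sketch. The Gaussian choices, function names, and candidate count below are illustrative assumptions, not the algorithm of any listed paper:

```python
import math
import random

def rec_encode(mu, n_candidates, seed):
    """Sketch of index-based relative entropy coding.

    Target Q = N(mu, 1), shared coding distribution P = N(0, 1).
    Draw shared candidates from P, then pick an index with probability
    proportional to the importance weight Q(x)/P(x). Only the index
    needs to be transmitted.
    """
    rng = random.Random(seed)  # seed plays the role of the shared P samples
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_candidates)]
    # log Q(x)/P(x) = mu*x - mu^2/2 for unit-variance Gaussians
    log_w = [mu * x - 0.5 * mu * mu for x in xs]
    m = max(log_w)
    weights = [math.exp(lw - m) for lw in log_w]  # stabilised weights
    return rng.choices(range(n_candidates), weights=weights, k=1)[0]

def rec_decode(index, n_candidates, seed):
    """Receiver regenerates the same shared candidates and reads one off."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_candidates)]
    return xs[index]
```

With roughly $\exp(D_{\mathrm{KL}}(Q\|P))$ or more candidates, the decoded values are approximately distributed like $Q$, and the message is just an index of about $\log_2 n$ bits; the naive candidate count growing exponentially in the KL divergence is exactly the runtime obstacle several of the listed papers address.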
Channel simulation algorithms can efficiently encode random samples from a prescribed target distribution $Q$ and find applications in machine learning-based lossy data compression. However, algorithms that encode exact samples usually have random …
External link:
http://arxiv.org/abs/2405.04363
Author:
Goc, Daniel, Flamich, Gergely
One-shot channel simulation has recently emerged as a promising alternative to quantization and entropy coding in machine-learning-based lossy data compression schemes. However, while there are several potential applications of channel simulation …
External link:
http://arxiv.org/abs/2401.16579
An important yet underexplored question in the PAC-Bayes literature is how much tightness we lose by restricting the posterior family to factorized Gaussian distributions when optimizing a PAC-Bayes bound. We investigate this issue by estimating data …
External link:
http://arxiv.org/abs/2310.20053
COMpression with Bayesian Implicit NEural Representations (COMBINER) is a recent data compression method that addresses a key inefficiency of previous Implicit Neural Representation (INR)-based approaches: it avoids quantization and enables direct …
External link:
http://arxiv.org/abs/2309.17182
Relative entropy coding (REC) algorithms encode a sample from a target distribution $Q$ using a proposal distribution $P$ using as few bits as possible. Unlike entropy coding, REC does not assume discrete distributions or require quantisation. As such, …
External link:
http://arxiv.org/abs/2309.15746
This paper studies the qualitative behavior and robustness of two variants of Minimal Random Code Learning (MIRACLE) used to compress variational Bayesian neural networks. MIRACLE implements a powerful, conditionally Gaussian variational approximation …
External link:
http://arxiv.org/abs/2307.07816
Many common types of data can be represented as functions that map coordinates to signal values, such as pixel locations to RGB values in the case of an image. Based on this view, data can be compressed by overfitting a compact neural network to its …
External link:
http://arxiv.org/abs/2305.19185
Author:
Flamich, Gergely
One-shot channel simulation is a fundamental data compression problem concerned with encoding a single sample from a target distribution $Q$ using a coding distribution $P$ using as few bits as possible on average. Algorithms that solve this problem …
External link:
http://arxiv.org/abs/2305.15313