Showing 1 - 10 of 20 for search: '"Cazenavette, George"'
Published in:
CVPR 2024
Due to the high potential for abuse of GenAI systems, the task of detecting synthetic images has recently become of great interest to the research community. Unfortunately, existing image-space detectors quickly become obsolete as new high-fidelity …
External link:
http://arxiv.org/abs/2406.08603
The ultimate goal of Dataset Distillation is to synthesize a small synthetic dataset such that a model trained on this synthetic set performs as well as a model trained on the full, real dataset. Until now, no method of Dataset Distillation …
External link:
http://arxiv.org/abs/2310.05773
Author:
Tewari, Ayush, Yin, Tianwei, Cazenavette, George, Rezchikov, Semon, Tenenbaum, Joshua B., Durand, Frédo, Freeman, William T., Sitzmann, Vincent
Denoising diffusion models are a powerful class of generative models used to capture complex distributions of real-world signals. However, their applicability is limited to scenarios where training samples are readily available, which is not always …
External link:
http://arxiv.org/abs/2306.11719
Dataset Distillation aims to distill an entire dataset's knowledge into a few synthetic images. The idea is to synthesize a small number of synthetic data points that, when given to a learning algorithm as training data, result in a model …
External link:
http://arxiv.org/abs/2305.01649
Dataset distillation is the task of synthesizing a small dataset such that a model trained on the synthetic set will match the test accuracy of the model trained on the full dataset. In this paper, we propose a new formulation that optimizes our …
External link:
http://arxiv.org/abs/2203.11932
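The dataset-distillation snippets above all describe the same bilevel idea: an inner loop trains a model on the learnable synthetic set, and an outer loop updates the synthetic set so that the resulting model performs well on the real data. The following is a minimal toy sketch of that structure, not the method of any of the papers listed here: a single-parameter model, one distilled scalar, and a finite-difference outer gradient stand in for the real algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(loc=3.0, scale=1.0, size=100)  # the full "real" dataset

def inner_train(syn, steps=20, lr=0.5):
    """Inner loop: fit a single parameter theta to the synthetic set
    by gradient descent on squared error."""
    theta = 0.0
    for _ in range(steps):
        theta -= lr * 2.0 * (theta - syn.mean())  # d/dtheta of mean((theta - syn)^2)
    return theta

syn = np.array([0.0])  # one learnable synthetic data point
outer_lr, eps = 0.1, 1e-4
for _ in range(200):
    theta = inner_train(syn)
    loss = np.mean((theta - real) ** 2)           # outer loss on real data
    theta_eps = inner_train(syn + eps)            # finite-difference probe
    loss_eps = np.mean((theta_eps - real) ** 2)
    syn -= outer_lr * (loss_eps - loss) / eps     # outer update of the synthetic point

# The distilled point converges toward the real-data mean: training on
# the single synthetic point now mimics training on all 100 real points.
print(f"distilled point: {syn[0]:.3f}  real mean: {real.mean():.3f}")
```

Real methods replace the finite-difference outer gradient with backpropagation through the inner training run (or with surrogate objectives such as gradient or trajectory matching), but the nested optimization structure is the same.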
While attention-based transformer networks achieve unparalleled success in nearly all language tasks, the large number of tokens (pixels) found in images coupled with the quadratic activation memory usage makes them prohibitive for problems in …
External link:
http://arxiv.org/abs/2105.14110
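The quadratic memory cost mentioned in the snippet above comes from the n x n score matrix that self-attention materializes for n tokens. A toy sketch (identity Q/K/V projections, single head, all names illustrative) makes the bottleneck concrete:

```python
import numpy as np

def self_attention(x):
    """Single-head self-attention with identity projections (toy sketch).
    The score matrix x @ x.T is (n, n) for n tokens -- the quadratic
    activation-memory term that makes pixel-level attention expensive."""
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)                  # (n, n) attention scores
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)             # softmax over keys
    return w @ x                                   # (n, d) output

# Treating every pixel of a 224x224 image as one token:
n = 224 * 224                 # 50,176 tokens
score_bytes = n * n * 4       # one float32 score matrix, per head, per layer
print(f"{score_bytes / 1e9:.1f} GB")  # ~10 GB for a single score matrix
```

At pixel granularity a single attention map already needs on the order of 10 GB, which is why vision transformers work on patches or use approximate attention instead.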
Author:
Cazenavette, George, Lucey, Simon
Borrowing from the transformer models that revolutionized the field of natural language processing, self-supervised feature learning for visual tasks has also seen state-of-the-art success using these extremely deep, isotropic networks. However, the …
External link:
http://arxiv.org/abs/2105.14077
In comparison to classical shallow representation learning techniques, deep neural networks have achieved superior performance in nearly every application benchmark. But despite their clear empirical advantages, it is still not well understood what …
External link:
http://arxiv.org/abs/2103.05804
Despite their unmatched performance, deep neural networks remain susceptible to targeted attacks by nearly imperceptible levels of adversarial noise. While the underlying cause of this sensitivity is not well understood, theoretical analyses can be …
External link:
http://arxiv.org/abs/2011.14427