Showing 1 - 10 of 18 for search: '"Hosseinzadeh, Rasa"'
Author:
Vouitsis, Noël, Hosseinzadeh, Rasa, Ross, Brendan Leigh, Villecroze, Valentin, Gorti, Satya Krishna, Cresswell, Jesse C., Loaiza-Ganem, Gabriel
Although diffusion models can generate remarkably high-quality samples, they are intrinsically bottlenecked by their expensive iterative sampling procedure. Consistency models (CMs) have recently emerged as a promising diffusion model distillation method…
External link:
http://arxiv.org/abs/2411.08954
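The snippet above frames consistency models as a way to distill an expensive iterative sampler into a one-step map. Below is a minimal, hypothetical sketch of one consistency-distillation training step; `student`, `ema_student`, and `teacher_dxdt` (returning the teacher's probability-flow ODE drift) are illustrative names and assumptions, not the paper's API, and the squared-error loss stands in for whatever distance the paper actually uses.

```python
import torch

def consistency_distillation_step(student, ema_student, teacher_dxdt,
                                  x0, t, t_next, opt):
    """Pull the student's outputs at two adjacent noise levels on the same
    ODE trajectory together (the self-consistency property)."""
    noise = torch.randn_like(x0)
    x_t = x0 + t.view(-1, 1) * noise                 # noised sample at level t
    # One Euler step of the teacher's probability-flow ODE, from t down to t_next.
    x_next = x_t + (t_next - t).view(-1, 1) * teacher_dxdt(x_t, t)
    pred = student(x_t, t)                           # f_theta(x_t, t)
    with torch.no_grad():
        target = ema_student(x_next, t_next)         # f_theta_ema(x_next, t_next)
    loss = torch.mean((pred - target) ** 2)          # enforce self-consistency
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Once trained, such a model samples in a single step: draw x_T from the terminal noise distribution and return student(x_T, T), avoiding the iterative procedure the snippet calls a bottleneck.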
Author:
Ross, Brendan Leigh, Kamkari, Hamidreza, Wu, Tongzi, Hosseinzadeh, Rasa, Liu, Zhaoyan, Stein, George, Cresswell, Jesse C., Loaiza-Ganem, Gabriel
As deep generative models have progressed, recent work has shown them to be capable of memorizing and reproducing training datapoints when deployed. These findings call into question the usability of generative models, especially in light of the legal…
External link:
http://arxiv.org/abs/2411.00113
Author:
Ma, Junwei, Thomas, Valentin, Hosseinzadeh, Rasa, Kamkari, Hamidreza, Labach, Alex, Cresswell, Jesse C., Golestan, Keyvan, Yu, Guangwei, Volkovs, Maksims, Caterini, Anthony L.
The challenges faced by neural networks on tabular data are well-documented and have hampered the progress of tabular foundation models. Techniques leveraging in-context learning (ICL) have shown promise here, allowing for dynamic adaptation to unseen…
External link:
http://arxiv.org/abs/2410.18164
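The entry above credits in-context learning for progress on tabular foundation models: a pre-trained transformer reads labeled rows as context tokens and predicts labels for query rows in one forward pass, with no per-dataset gradient updates. A rough sketch of that interface, with assumed shapes and illustrative names (not the papers' architecture; proper attention masking to prevent label leakage is omitted for brevity):

```python
import torch
import torch.nn as nn

class TabularICL(nn.Module):
    """Reads (x, y) pairs as context and predicts y for query rows."""
    def __init__(self, num_features, num_classes, d_model=128):
        super().__init__()
        self.num_classes = num_classes
        self.embed_x = nn.Linear(num_features, d_model)
        self.embed_y = nn.Embedding(num_classes + 1, d_model)  # last id = "unknown"
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x_ctx, y_ctx, x_qry):
        # x_ctx: (B, n_ctx, F), y_ctx: (B, n_ctx), x_qry: (B, n_qry, F)
        unk = torch.full(x_qry.shape[:2], self.num_classes,
                         dtype=torch.long, device=x_qry.device)
        tokens = torch.cat([self.embed_x(x_ctx) + self.embed_y(y_ctx),
                            self.embed_x(x_qry) + self.embed_y(unk)], dim=1)
        h = self.encoder(tokens)                  # queries attend to the context
        return self.head(h[:, x_ctx.shape[1]:])  # logits at the query positions

model = TabularICL(num_features=8, num_classes=3)
x_ctx, y_ctx = torch.randn(1, 100, 8), torch.randint(0, 3, (1, 100))
logits = model(x_ctx, y_ctx, torch.randn(1, 20, 8))   # shape (1, 20, 3)
```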
Author:
Gorti, Satya Krishna, Gofman, Ilan, Liu, Zhaoyan, Wu, Jiapeng, Vouitsis, Noël, Yu, Guangwei, Cresswell, Jesse C., Hosseinzadeh, Rasa
Text-to-SQL generation enables non-experts to interact with databases via natural language. Recent advances rely on large closed-source models like GPT-4 that present challenges in accessibility, privacy, and latency. To address these issues, we focus…
External link:
http://arxiv.org/abs/2410.12916
Author:
Thomas, Valentin, Ma, Junwei, Hosseinzadeh, Rasa, Golestan, Keyvan, Yu, Guangwei, Volkovs, Maksims, Caterini, Anthony
Tabular data is a pervasive modality spanning a wide range of domains, and the inherent diversity poses a considerable challenge for deep learning. Recent advancements using transformer-based in-context learning have shown promise on smaller and less…
External link:
http://arxiv.org/abs/2406.05207
Author:
Kamkari, Hamidreza, Ross, Brendan Leigh, Hosseinzadeh, Rasa, Cresswell, Jesse C., Loaiza-Ganem, Gabriel
High-dimensional data commonly lies on low-dimensional submanifolds, and estimating the local intrinsic dimension (LID) of a datum -- i.e. the dimension of the submanifold it belongs to -- is a longstanding problem. LID can be understood as the number…
External link:
http://arxiv.org/abs/2406.03537
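For intuition about what LID measures, here is a classical local-PCA baseline, not the paper's estimator (the paper builds on diffusion models): the estimate is the number of principal directions needed to explain most of the variance among a point's nearest neighbours.

```python
import numpy as np

def lid_local_pca(data, idx, k=50, var_threshold=0.95):
    """Estimate LID at data[idx] as the number of principal components
    explaining `var_threshold` of the variance of its k nearest neighbours."""
    dists = np.linalg.norm(data - data[idx], axis=1)
    nbrs = data[np.argsort(dists)[1:k + 1]]          # k nearest neighbours
    centered = nbrs - nbrs.mean(axis=0)
    eigvals = np.linalg.svd(centered, compute_uv=False) ** 2
    ratios = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratios, var_threshold) + 1)

# Points on a 2-D plane embedded in 10-D ambient space: LID should be 2.
rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.normal(size=(10, 2)))    # orthonormal 2-frame
points = rng.normal(size=(1000, 2)) @ basis.T
print(lid_local_pca(points, idx=0))                  # -> 2
```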
Contrastive learning is a model pre-training technique that works by first creating similar views of the original data, and then encouraging the data and its corresponding views to be close in the embedding space. Contrastive learning has witnessed success in…
External link:
http://arxiv.org/abs/2404.17489
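As a concrete instance of the mechanism the snippet describes -- pulling a datum and its views together in embedding space -- here is the standard SimCLR-style InfoNCE loss; this is the generic formulation, not necessarily the variant this paper studies.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1[i] and z2[i] embed two views of example i; each view must
    identify its partner among all 2N - 1 other embeddings."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.T / temperature                          # scaled cosine similarity
    sim.fill_diagonal_(float('-inf'))                    # exclude self-similarity
    n = z1.shape[0]
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 32), torch.randn(8, 32)          # two views, batch of 8
print(info_nce(z1, z2).item())
```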
Author:
Loaiza-Ganem, Gabriel, Ross, Brendan Leigh, Hosseinzadeh, Rasa, Caterini, Anthony L., Cresswell, Jesse C.
In recent years there has been increased interest in understanding the interplay between deep generative models (DGMs) and the manifold hypothesis. Research in this area focuses on understanding the reasons why commonly-used DGMs succeed or fail at learning…
External link:
http://arxiv.org/abs/2404.02954
Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models
Author:
Stein, George, Cresswell, Jesse C., Hosseinzadeh, Rasa, Sui, Yi, Ross, Brendan Leigh, Villecroze, Valentin, Liu, Zhaoyan, Caterini, Anthony L., Taylor, J. Eric T., Loaiza-Ganem, Gabriel
Published in:
Thirty-seventh Conference on Neural Information Processing Systems (2023)
We systematically study a wide variety of generative models spanning semantically-diverse image datasets to understand and improve the feature extractors and metrics used to evaluate them. Using best practices in psychophysics, we measure human perception…
External link:
http://arxiv.org/abs/2306.04675
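For context on the metrics under study: FID-style scores fit a Gaussian to extracted features of real and generated samples and compare the fits with the Fréchet distance, so the verdict depends heavily on the feature extractor, which is precisely what the paper interrogates. A generic sketch of the distance itself (assuming features are already extracted):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Frechet distance between Gaussians fitted to two feature arrays,
    each of shape (num_samples, feature_dim)."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov1 = np.cov(feats_real, rowvar=False)
    cov2 = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(cov1 @ cov2)                  # matrix square root
    if np.iscomplexobj(covmean):
        covmean = covmean.real                           # drop numerical noise
    diff = mu1 - mu2
    return diff @ diff + np.trace(cov1 + cov2 - 2 * covmean)
```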
DiMS: Distilling Multiple Steps of Iterative Non-Autoregressive Transformers for Machine Translation
The computational benefits of iterative non-autoregressive transformers decrease as the number of decoding steps increases. As a remedy, we introduce Distill Multiple Steps (DiMS), a simple yet effective distillation technique to decrease the number…
External link:
http://arxiv.org/abs/2206.02999
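Going only by the snippet above, the idea is to compress several refinement passes of an iterative non-autoregressive decoder into fewer student passes. A hypothetical sketch of such a step-distillation objective, with illustrative names throughout (see the paper for the actual DiMS losses):

```python
import torch
import torch.nn.functional as F

def step_distillation(student, teacher, src, tgt_noisy, n_teacher_steps, opt):
    """One training step: the teacher refines the target tokens over several
    passes; the student must reach the same tokens in a single pass."""
    with torch.no_grad():
        refined = tgt_noisy
        for _ in range(n_teacher_steps):          # multiple teacher refinements
            logits = teacher(src, refined)        # (B, T, V) logits
            refined = logits.argmax(dim=-1)       # re-decode and iterate
    student_logits = student(src, tgt_noisy)      # a single student pass
    loss = F.cross_entropy(student_logits.transpose(1, 2), refined)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```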