Showing 1 - 10 of 10 results for the search: '"Kim, Jungeum"'
We develop a multivariate posterior sampling procedure through deep generative quantile learning. Simulation proceeds implicitly through a push-forward mapping that can transform i.i.d. random vector samples from the posterior. We utilize Monge-Kanto…
External link:
http://arxiv.org/abs/2410.08378
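The abstract above describes sampling via a push-forward map applied to i.i.d. base draws. A minimal sketch of that idea, with a known affine map standing in for the learned deep generative quantile network (the map, target moments, and base distribution here are illustrative assumptions, not the paper's construction):

```python
import numpy as np

# Push-forward sampling sketch: "posterior" draws are produced by applying
# a deterministic map T to i.i.d. base samples z ~ N(0, I). In the paper's
# setting T is learned; here it is a fixed affine map (assumption) pushing
# N(0, I) forward to N(mu, A @ A.T).
rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0])
A = np.array([[2.0, 0.0],
              [0.5, 1.0]])  # target covariance is A @ A.T

def push_forward(z):
    """Map base samples z ~ N(0, I) to samples from N(mu, A @ A.T)."""
    return z @ A.T + mu

z = rng.standard_normal((100_000, 2))  # i.i.d. base draws
samples = push_forward(z)              # implicit draws from the target

print(samples.mean(axis=0))   # close to mu
print(np.cov(samples.T))      # close to A @ A.T
```

Because sampling is a single deterministic transform of cheap base draws, it parallelizes trivially and avoids Markov-chain mixing concerns.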
This work is concerned with conformal prediction in contemporary applications (including generative AI) where a black-box model has been trained on data that are not accessible to the user. Mirroring split-conformal inference, we design a wrapper aro…
External link:
http://arxiv.org/abs/2408.08990
Author:
Kim, Jungeum, Wang, Xiao
Nonlinear dimensional reduction with the manifold assumption, often called manifold learning, has proven its usefulness in a wide range of high-dimensional data analysis. The significant impact of t-SNE and UMAP has catalyzed intense research interes…
External link:
http://arxiv.org/abs/2406.08097
In generative models with obscured likelihood, Approximate Bayesian Computation (ABC) is often the tool of last resort for inference. However, ABC demands many prior parameter trials to keep only a small fraction that passes an acceptance test. To ac…
External link:
http://arxiv.org/abs/2404.10436
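The abstract above describes the rejection-ABC bottleneck: many prior draws, few acceptances. A minimal rejection-ABC sketch on a toy model (the Gaussian likelihood, uniform prior, mean summary statistic, and tolerance are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Rejection ABC: simulate data from prior draws of theta and keep only
# draws whose synthetic summary lands within eps of the observed summary.
# Toy model (assumption): y ~ N(theta, 1), 50 observations.
rng = np.random.default_rng(2)

theta_true = 3.0
y_obs = rng.normal(theta_true, 1.0, 50)
s_obs = y_obs.mean()  # summary statistic

def simulate(theta):
    """Synthetic summary from one forward simulation of the model."""
    return rng.normal(theta, 1.0, 50).mean()

n_trials, eps = 20_000, 0.1
theta_prior = rng.uniform(0, 6, n_trials)  # prior parameter trials
accepted = np.array([t for t in theta_prior
                     if abs(simulate(t) - s_obs) < eps])

print(len(accepted) / n_trials)  # small acceptance fraction
print(accepted.mean())           # approximate posterior mean
```

The accepted fraction shrinks rapidly as `eps` tightens or the summary dimension grows, which is exactly the inefficiency the abstract says motivates the proposed acceleration.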
Author:
Kim, Jungeum, Rockova, Veronika
There is no other model or hypothesis verification tool in Bayesian statistics that is as widely used as the Bayes factor. We focus on generative models that are likelihood-free and, therefore, render the computation of Bayes factors (marginal likeliho…
External link:
http://arxiv.org/abs/2312.05411
Author:
Kim, Jungeum, Rockova, Veronika
The success of Bayesian inference with MCMC depends critically on Markov chains rapidly reaching the posterior distribution. Despite the plenitude of inferential theory for posteriors in Bayesian non-parametrics, convergence properties of MCMC algor…
External link:
http://arxiv.org/abs/2306.00126
Author:
Kim, Jungeum, Wang, Xiao
The idea of robustness is central and critical to modern statistical analysis. However, despite the recent advances of deep neural networks (DNNs), many studies have shown that DNNs are vulnerable to adversarial attacks. Making imperceptible changes…
External link:
http://arxiv.org/abs/2205.10457
Academic article
This result cannot be displayed to users who are not signed in; signing in is required to view it.
Author:
Kim, Jungeum
In this dissertation, we study three important problems in modern deep learning: adversarial robustness, visualization, and partially monotonic function modeling. In the first part, we study the trade-off between robustness and standard accuracy in d…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::58d68b77ee5d7ee823b48fe752be3732
Academic article
This result cannot be displayed to users who are not signed in; signing in is required to view it.