Showing 1 - 10 of 20 for search: '"Jin, Jikai"'
Author:
Jin, Jikai, Syrgkanis, Vasilis
Average treatment effect estimation is a central problem in causal inference, with applications across numerous disciplines. While many estimation strategies have been proposed in the literature, the statistical optimality of these methods has still … (a minimal doubly robust estimator is sketched after the link below)
External link:
http://arxiv.org/abs/2402.14264
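Below is a minimal doubly robust (AIPW) sketch on synthetic data; it illustrates the standard estimator family the paper analyzes, not the paper's specific construction or its optimality guarantees. The data-generating process, the sklearn nuisance models, and all parameters are illustrative assumptions.

```python
# A minimal AIPW (doubly robust) sketch for average treatment effect (ATE)
# estimation on synthetic data. Nuisance models and the DGP are toy choices.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n, d = 5000, 5
X = rng.normal(size=(n, d))
propensity = 1 / (1 + np.exp(-X[:, 0]))          # true P(T=1 | X)
T = rng.binomial(1, propensity)                  # binary treatment
Y = 2.0 * T + X[:, 1] + rng.normal(size=n)       # true ATE = 2.0

# Nuisance models: outcome regressions mu_t(x) and propensity e(x).
mu1 = LinearRegression().fit(X[T == 1], Y[T == 1])
mu0 = LinearRegression().fit(X[T == 0], Y[T == 0])
e = LogisticRegression().fit(X, T)

m1, m0 = mu1.predict(X), mu0.predict(X)
ehat = np.clip(e.predict_proba(X)[:, 1], 1e-3, 1 - 1e-3)

# AIPW score: plug-in difference plus inverse-propensity-weighted residuals.
psi = (m1 - m0
       + T * (Y - m1) / ehat
       - (1 - T) * (Y - m0) / (1 - ehat))
print("AIPW ATE estimate:", psi.mean())          # should be close to 2.0
```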
Recent work by Power et al. (2022) highlighted a surprising "grokking" phenomenon in learning arithmetic tasks: a neural net first "memorizes" the training set, resulting in perfect training accuracy but near-random test accuracy, and after training … (a toy grokking setup is sketched after the link below)
External link:
http://arxiv.org/abs/2311.18817
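For context, here is a toy version of the kind of arithmetic task where grokking is reported: a small MLP trained on modular addition with strong weight decay, tracking train vs. test accuracy. The modulus, architecture, and hyperparameters are illustrative; actual grokking runs typically need far more steps, and this is not the paper's experimental setup.

```python
# Toy grokking-style experiment: modular addition with a small MLP and
# strong weight decay, monitoring the train/test accuracy gap over time.
import torch
import torch.nn as nn

torch.manual_seed(0)
p = 31
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
labels = (pairs[:, 0] + pairs[:, 1]) % p
perm = torch.randperm(len(pairs))
split = len(pairs) // 2
tr, te = perm[:split], perm[split:]

def encode(idx):
    # One-hot encode the two operands side by side.
    x = torch.zeros(len(idx), 2 * p)
    x[torch.arange(len(idx)), pairs[idx, 0]] = 1
    x[torch.arange(len(idx)), p + pairs[idx, 1]] = 1
    return x

Xtr, Xte, ytr, yte = encode(tr), encode(te), labels[tr], labels[te]
model = nn.Sequential(nn.Linear(2 * p, 128), nn.ReLU(), nn.Linear(128, p))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)

for step in range(20001):
    loss = nn.functional.cross_entropy(model(Xtr), ytr)
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 2000 == 0:
        with torch.no_grad():
            tr_acc = (model(Xtr).argmax(1) == ytr).float().mean()
            te_acc = (model(Xte).argmax(1) == yte).float().mean()
        print(f"step {step}: train acc {tr_acc:.2f}, test acc {te_acc:.2f}")
```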
Author:
Jin, Jikai, Syrgkanis, Vasilis
We study causal representation learning, the task of recovering high-level latent variables and their causal relationships in the form of a causal graph from low-level observed data (such as text and images), assuming access to observations generated … (the data-generating setup is sketched after the link below)
External link:
http://arxiv.org/abs/2311.12267
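To make the setup concrete, a minimal data-generating sketch follows: latents obey a linear SCM whose graph is a strictly lower-triangular matrix, and the learner only observes a linear mixing of them. The linearity of both the SCM and the mixing is an assumption for illustration; no identification algorithm from the paper is implemented here.

```python
# Minimal causal representation learning setup: latent Z follows a linear
# SCM with DAG adjacency A; only a mixed view X = G @ Z is observed.
# Recovering Z and A from X alone is the (hard) learning problem.
import numpy as np

rng = np.random.default_rng(0)
k, d, n = 3, 10, 1000

# Latent SCM: Z = A @ Z + noise, with A strictly lower triangular (a DAG).
A = np.tril(rng.normal(size=(k, k)), k=-1)
noise = rng.normal(size=(k, n))
Z = np.linalg.solve(np.eye(k) - A, noise)        # solve the structural equations

# Low-level observations: an unknown linear mixing of the latents.
G = rng.normal(size=(d, k))
X = G @ Z                                        # all the learner ever sees

print("observed data:", X.shape, "| hidden latents:", Z.shape)
```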
It is believed that Gradient Descent (GD) induces an implicit bias towards good generalization when training machine learning models. This paper provides a fine-grained analysis of the dynamics of GD for the matrix sensing problem, whose goal is to recover … (a factorized-GD sketch follows the link below)
External link:
http://arxiv.org/abs/2301.11500
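Here is a minimal factorized gradient descent sketch for matrix sensing on synthetic Gaussian measurements: recover a low-rank PSD matrix M* from linear measurements y_i = <A_i, M*> by running GD on the factorized least-squares objective from a small random initialization. Dimensions, step size, and initialization scale are illustrative assumptions; the paper's fine-grained dynamics are not reproduced.

```python
# Factorized GD for matrix sensing: minimize over U the objective
# f(U) = (1/2m) * sum_i (<A_i, U U^T> - y_i)^2 on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
d, r, m = 20, 2, 400

Ustar = rng.normal(size=(d, r))
Mstar = Ustar @ Ustar.T                          # ground-truth low-rank PSD matrix
A = rng.normal(size=(m, d, d))
A = (A + A.transpose(0, 2, 1)) / 2               # symmetrize sensing matrices
y = np.einsum('mij,ij->m', A, Mstar)             # measurements <A_i, M*>

U = 1e-3 * rng.normal(size=(d, r))               # small random initialization
lr = 0.01
for t in range(1001):
    res = np.einsum('mij,ij->m', A, U @ U.T) - y
    grad = (2 / m) * np.einsum('m,mij->ij', res, A) @ U
    U -= lr * grad
    if t % 200 == 0:
        err = np.linalg.norm(U @ U.T - Mstar) / np.linalg.norm(Mstar)
        print(f"iter {t}: relative error {err:.3e}")
```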
Learning mappings between infinite-dimensional function spaces has achieved empirical success in many disciplines of machine learning, including generative modeling, functional data analysis, causal inference, and multi-agent reinforcement learning. (A toy discretize-then-regress sketch follows the link below.)
External link:
http://arxiv.org/abs/2209.14430
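As a toy illustration of learning maps between function spaces, the sketch below discretizes functions on a grid and fits the antiderivative operator with plain least squares; real neural-operator architectures (e.g. DeepONet or FNO) replace the linear map with a network. Everything here, including the random-Fourier input distribution, is an illustrative assumption.

```python
# Toy operator learning: fit the antiderivative operator u -> \int_0^x u
# from discretized input/output pairs with a linear least-squares map.
import numpy as np

rng = np.random.default_rng(0)
s, n = 64, 500
x = np.linspace(0, 1, s)
k = np.arange(1, 6)[:, None]                     # Fourier frequencies

def sample_fn():
    # Random smooth function: truncated sine series with random coefficients.
    c = rng.normal(size=(5, 1))
    return (c * np.sin(np.pi * k * x)).sum(0)

U = np.stack([sample_fn() for _ in range(n)])    # inputs u_i on the grid
V = np.cumsum(U, axis=1) / s                     # crude Riemann-sum antiderivatives

W, *_ = np.linalg.lstsq(U, V, rcond=None)        # learned linear "operator"

u_test = sample_fn()
v_true = np.cumsum(u_test) / s
v_pred = u_test @ W
print("relative test error:",
      np.linalg.norm(v_pred - v_true) / np.linalg.norm(v_true))
```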
It is well known that modern neural networks are vulnerable to adversarial examples. To mitigate this problem, a series of robust learning algorithms have been proposed. However, although the robust training error can be driven near zero by some of these methods, a … (an FGSM attack sketch follows the link below)
External link:
http://arxiv.org/abs/2205.13863
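A minimal FGSM sketch follows, showing how an adversarial example is formed by stepping along the sign of the input gradient under an L-infinity budget. The random model and data are stand-ins; this is the generic attack, not the robust-training methods or benchmarks discussed in the paper.

```python
# FGSM: perturb the input in the direction of the sign of the loss gradient,
# within an L-infinity budget eps, and compare clean vs. adversarial accuracy.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.randn(32, 100)
y = torch.randint(0, 10, (32,))

def fgsm(model, x, y, eps=0.05):
    x_adv = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

x_adv = fgsm(model, x, y)
clean = (model(x).argmax(1) == y).float().mean()
robust = (model(x_adv).argmax(1) == y).float().mean()
print(f"clean acc {clean:.2f} vs accuracy under FGSM {robust:.2f}")
```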
Published in:
Energy Reports, December 2024, 12:2232-2243
Author:
Jin, Jikai, Sra, Suvrit
We contribute to advancing the understanding of Riemannian accelerated gradient methods. In particular, we revisit Accelerated Hybrid Proximal Extragradient (A-HPE), a powerful framework for obtaining Euclidean accelerated methods \citep{monteiro2013a… (a plain Riemannian GD sketch follows the link below)
External link:
http://arxiv.org/abs/2111.02763
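To fix the geometric ingredients, here is plain (non-accelerated) Riemannian gradient descent on the unit sphere for the leading-eigenvector problem: project the Euclidean gradient onto the tangent space, step, then retract by renormalizing. The A-HPE-based accelerated schemes the paper studies are considerably more involved and are not shown; the step size below is a crude illustrative choice.

```python
# Riemannian GD on the sphere for f(x) = -x^T M x: project the Euclidean
# gradient onto the tangent space at x, step, then retract (renormalize).
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.normal(size=(n, n)); M = (M + M.T) / 2   # symmetric matrix

x = rng.normal(size=n); x /= np.linalg.norm(x)   # start on the unit sphere
lr = 0.1 / np.abs(np.linalg.eigvalsh(M)).max()   # crude step size
for t in range(500):
    egrad = -2 * M @ x                           # Euclidean gradient of f
    rgrad = egrad - (x @ egrad) * x              # project onto tangent space
    x = x - lr * rgrad
    x /= np.linalg.norm(x)                       # retraction back to the sphere

top = np.linalg.eigvalsh(M)[-1]
print("Rayleigh quotient:", x @ M @ x, "vs top eigenvalue:", top)
```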
Distributionally robust optimization (DRO) is a widely used approach to learning models that are robust against distribution shift. Compared with the standard optimization setting, the objective function in DRO is more difficult to optimize, and most of … (a group-DRO sketch follows the link below)
External link:
http://arxiv.org/abs/2110.12459
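A minimal group-DRO sketch follows: at each step, take a gradient step on the worst-performing group's loss, i.e. a subgradient of max_g L_g(w). The two-group linear-regression setup and step size are illustrative assumptions; the point is only that the DRO objective, a max over distributions, is non-smooth and harder to optimize than an average loss.

```python
# Group DRO by subgradient descent: step on the gradient of the currently
# worst group's loss, which is a subgradient of max_g L_g(w).
import numpy as np

rng = np.random.default_rng(0)
d = 5
# Two groups with different input distributions and a shared labeling rule.
Xs = [rng.normal(size=(200, d)), 3.0 * rng.normal(size=(200, d))]
w_true = rng.normal(size=d)
Ys = [X @ w_true + 0.1 * rng.normal(size=200) for X in Xs]

w = np.zeros(d)
for t in range(2000):
    losses = [np.mean((X @ w - y) ** 2) for X, y in zip(Xs, Ys)]
    g = int(np.argmax(losses))                   # pick the worst group
    X, y = Xs[g], Ys[g]
    grad = 2 * X.T @ (X @ w - y) / len(y)        # gradient on that group only
    w -= 1e-3 * grad

print("final per-group losses:",
      [round(np.mean((X @ w - y) ** 2), 4) for X, y in zip(Xs, Ys)])
```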
Author:
Jin, Jikai
In recent years, the success of deep learning has inspired many researchers to study the optimization of general smooth non-convex functions. However, recent works have established pessimistic worst-case complexities for this class of functions, which i… (the classical rate is sketched after the link below)
External link:
http://arxiv.org/abs/2010.04937
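For reference, the sketch below illustrates the classical worst-case guarantee that such complexity results refine: for an L-smooth (possibly non-convex) f, gradient descent with step size 1/L drives the smallest observed gradient norm below eps within O(1/eps^2) steps. The test function and constants are illustrative assumptions, not taken from the paper.

```python
# GD on a smooth non-convex function: with step 1/L, the best gradient norm
# seen so far decays at the classical O(1/sqrt(T)) worst-case rate.
import numpy as np

def f(x):
    return x[0] ** 2 + 5 * np.sin(x[1]) ** 2     # non-convex in x[1]

def grad(x):
    return np.array([2 * x[0], 10 * np.sin(x[1]) * np.cos(x[1])])

L = 10.0                                         # a valid smoothness constant for f
x = np.array([3.0, 2.0])
best = np.inf
for t in range(1, 201):
    g = grad(x)
    best = min(best, np.linalg.norm(g))
    x = x - g / L                                # step size 1/L
    if t in (10, 50, 100, 200):
        print(f"after {t} steps: min gradient norm so far = {best:.2e}")
```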