Showing 1 - 10 of 45 for search: '"Bačkurs, Artūrs"'
Author:
Backurs, Arturs
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
External link:
http://hdl.handle.net/1721.1/120376
Author:
Backurs, Arturs
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 43-44).
We consider the problem …
External link:
http://hdl.handle.net/1721.1/91098
Many methods in differentially private model training rely on computing the similarity between a query point (such as public or synthetic data) and private data. We abstract out this common subroutine and study the following fundamental algorithmic problem …
External link:
http://arxiv.org/abs/2403.08917
Author:
Xie, Chulin, Lin, Zinan, Backurs, Arturs, Gopi, Sivakanth, Yu, Da, Inan, Huseyin A, Nori, Harsha, Jiang, Haotian, Zhang, Huishuai, Lee, Yin Tat, Li, Bo, Yekhanin, Sergey
Text data has become extremely valuable due to the emergence of machine learning algorithms that learn from it. A lot of high-quality text data generated in the real world is private and therefore cannot be shared or used freely due to privacy concerns …
External link:
http://arxiv.org/abs/2403.01749
Author:
Wu, Fan, Inan, Huseyin A., Backurs, Arturs, Chandrasekaran, Varun, Kulkarni, Janardhan, Sim, Robert
Positioned between pre-training and user deployment, aligning large language models (LLMs) through reinforcement learning (RL) has emerged as a prevailing strategy for training instruction-following models such as ChatGPT. In this work, we initiate the …
External link:
http://arxiv.org/abs/2310.16960
Author:
He, Jiyan, Li, Xuechen, Yu, Da, Zhang, Huishuai, Kulkarni, Janardhan, Lee, Yin Tat, Backurs, Arturs, Yu, Nenghai, Bian, Jiang
Differentially private deep learning has recently witnessed advances in computational efficiency and the privacy-utility trade-off. We explore whether further improvements along the two axes are possible and provide affirmative answers leveraging two insights …
External link:
http://arxiv.org/abs/2212.01539
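Several of the differentially private training papers in this list build on the DP-SGD recipe (Abadi et al.): clip each per-example gradient to an L2 bound C, average, and add Gaussian noise scaled to C. A minimal sketch of one such step; C and noise_multiplier are illustrative hyperparameters, not values from this paper:

```python
import numpy as np

def dp_sgd_step(per_example_grads, C=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD update from a (batch, dim) array of per-example grads:
    clip each row to L2 norm at most C, sum, add Gaussian noise, average."""
    rng = np.random.default_rng() if rng is None else rng
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    n = per_example_grads.shape[0]
    noise = rng.normal(0.0, noise_multiplier * C, size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / n
```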
We propose a synthetic reasoning task, LEGO (Learning Equality and Group Operations), that encapsulates the problem of following a chain of reasoning, and we study how the Transformer architecture learns this task. We pay special attention to data effects …
External link:
http://arxiv.org/abs/2206.04301
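Concretely, a LEGO instance can be pictured as a chain of signed variable assignments over the group {+1, -1}: the model sees a sentence like "a = +1; b = -a; c = +b" and must resolve each variable by propagating signs along the chain. A hedged generator sketch in the spirit of that description (the paper's exact token format may differ):

```python
import random

def lego_chain(n_vars=6, rng=random):
    """Generate one LEGO-style instance over the group {+1, -1}.

    Returns the clause string and the ground-truth value of each
    variable, obtained by propagating signs along the chain.
    """
    names = [chr(ord("a") + i) for i in range(n_vars)]
    signs = [rng.choice([+1, -1]) for _ in range(n_vars)]
    clauses = [f"{names[0]} = {'+' if signs[0] > 0 else '-'}1"]
    values = [signs[0]]
    for i in range(1, n_vars):
        sign = '+' if signs[i] > 0 else '-'
        clauses.append(f"{names[i]} = {sign}{names[i - 1]}")
        values.append(signs[i] * values[-1])
    return "; ".join(clauses), dict(zip(names, values))
```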
Author:
Mireshghallah, Fatemehsadat, Backurs, Arturs, Inan, Huseyin A, Wutschitz, Lukas, Kulkarni, Janardhan
Recent papers have shown that large pre-trained language models (LLMs) such as BERT and GPT-2 can be fine-tuned on private data to achieve performance comparable to non-private models for many downstream Natural Language Processing (NLP) tasks while simultaneously …
External link:
http://arxiv.org/abs/2206.01838
Author:
Yu, Da, Naik, Saurabh, Backurs, Arturs, Gopi, Sivakanth, Inan, Huseyin A., Kamath, Gautam, Kulkarni, Janardhan, Lee, Yin Tat, Manoel, Andre, Wutschitz, Lukas, Yekhanin, Sergey, Zhang, Huishuai
We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-scale pre-trained language models, which achieve the state-of-the-art privacy versus utility tradeoffs on many standard NLP tasks. We propose a meta-framework …
External link:
http://arxiv.org/abs/2110.06500
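One way to read the "simpler, sparser" direction: keep the pre-trained weights frozen and privatize only a low-dimensional set of trainable parameters (for instance a low-rank adapter W + A @ B), so the clipping and noise act in a much smaller space. A hedged sketch of that idea, not the paper's exact meta-framework:

```python
import numpy as np

def dp_adapter_update(grad_A, grad_B, C=1.0, noise_multiplier=1.0, rng=None):
    """Clip and noise only the adapter gradients (grad_A, grad_B) of a
    layer reparameterized as W + A @ B with W frozen. In full DP-SGD
    the clipping would be applied per example before averaging."""
    rng = np.random.default_rng() if rng is None else rng
    flat = np.concatenate([grad_A.ravel(), grad_B.ravel()])
    flat = flat * min(1.0, C / max(np.linalg.norm(flat), 1e-12))
    flat = flat + rng.normal(0.0, noise_multiplier * C, size=flat.shape)
    g_A, g_B = np.split(flat, [grad_A.size])
    return g_A.reshape(grad_A.shape), g_B.reshape(grad_B.shape)
```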
We study fast algorithms for computing fundamental properties of a positive semidefinite kernel matrix $K \in \mathbb{R}^{n \times n}$ corresponding to $n$ points $x_1,\ldots,x_n \in \mathbb{R}^d$. In particular, we consider estimating the sum of kernel matrix entries …
External link:
http://arxiv.org/abs/2102.08341
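For orientation, the first quantity here is $S = \sum_{i,j} K_{ij}$ with $K_{ij} = k(x_i, x_j)$; computing it exactly costs $O(n^2 d)$, and the simplest sublinear baseline is uniform sampling of index pairs. A minimal sketch of that baseline for a Gaussian kernel (the paper's density-estimation-based algorithms are more sophisticated):

```python
import numpy as np

def kernel_sum_estimate(X, m=1000, rng=None):
    """Unbiased estimate of S = sum_{i,j} exp(-||x_i - x_j||^2) from
    m uniformly sampled index pairs instead of all n^2 entries."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    i = rng.integers(0, n, size=m)
    j = rng.integers(0, n, size=m)
    vals = np.exp(-np.sum((X[i] - X[j]) ** 2, axis=1))
    return float(n * n * vals.mean())
```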