Showing 1 - 10 of 181 for search: '"A, Sivakanth"'
Matrix concentration inequalities, intimately connected to the Non-Commutative Khintchine inequality, have been an important tool in both applied and pure mathematics. We study tensor versions of these inequalities, and establish non-asymptotic inequalities…
External link:
http://arxiv.org/abs/2411.10633
Author:
Devanur, Nikhil R., Gopi, Sivakanth
In search and advertisement ranking, it is often required to simultaneously maximize multiple objectives. For example, the objectives can correspond to multiple intents of a search query, or in the context of advertising, they can be relevance and re…
External link:
http://arxiv.org/abs/2410.12139
We establish a connection between problems studied in rigidity theory and matroids arising from linear-algebraic constructions such as tensor products and symmetric products. A special case of this correspondence identifies the problem of giving a description…
External link:
http://arxiv.org/abs/2405.00778
Author:
Xie, Chulin, Lin, Zinan, Backurs, Arturs, Gopi, Sivakanth, Yu, Da, Inan, Huseyin A., Nori, Harsha, Jiang, Haotian, Zhang, Huishuai, Lee, Yin Tat, Li, Bo, Yekhanin, Sergey
Text data has become extremely valuable due to the emergence of machine learning algorithms that learn from it. A lot of high-quality text data generated in the real world is private and therefore cannot be shared or used freely due to privacy concerns…
External link:
http://arxiv.org/abs/2403.01749
The recently emerging field of higher order MDS codes has sought to unify a number of concepts in coding theory. Areas captured by higher order MDS codes include maximally recoverable (MR) tensor codes and codes with optimal list-decoding guarantees…
External link:
http://arxiv.org/abs/2310.12898
The GM-MDS theorem, conjectured by Dau-Song-Dong-Yuen and proved by Lovett and Yildiz-Hassibi, shows that the generator matrices of Reed-Solomon codes can attain every possible configuration of zeros for an MDS code. The recently emerging theory of higher order MDS codes…
External link:
http://arxiv.org/abs/2310.12888
Author:
Tang, Xinyu, Shin, Richard, Inan, Huseyin A., Manoel, Andre, Mireshghallah, Fatemehsadat, Lin, Zinan, Gopi, Sivakanth, Kulkarni, Janardhan, Sim, Robert
We study the problem of in-context learning (ICL) with large language models (LLMs) on private datasets. This scenario poses privacy risks, as LLMs may leak or regurgitate the private examples demonstrated in the prompt. We propose a novel algorithm…
External link:
http://arxiv.org/abs/2309.11765
Author:
Gunasekar, Suriya, Zhang, Yi, Aneja, Jyoti, Mendes, Caio César Teodoro, Del Giorno, Allie, Gopi, Sivakanth, Javaheripi, Mojan, Kauffmann, Piero, de Rosa, Gustavo, Saarikivi, Olli, Salim, Adil, Shah, Shital, Behl, Harkirat Singh, Wang, Xin, Bubeck, Sébastien, Eldan, Ronen, Kalai, Adam Tauman, Lee, Yin Tat, Li, Yuanzhi
We introduce phi-1, a new large language model for code that is significantly smaller than competing models: phi-1 is a Transformer-based model with 1.3B parameters, trained for 4 days on 8 A100s, using a selection of "textbook quality" data from…
External link:
http://arxiv.org/abs/2306.11644
Generating differentially private (DP) synthetic data that closely resembles the original private data is a scalable way to mitigate privacy concerns in the current data-driven world. In contrast to current practices that train customized models for…
External link:
http://arxiv.org/abs/2305.15560
Author:
Yu, Da, Gopi, Sivakanth, Kulkarni, Janardhan, Lin, Zinan, Naik, Saurabh, Religa, Tomasz Lukasz, Yin, Jian, Zhang, Huishuai
Text prediction models, when used in applications like email clients or word processors, must protect user data privacy and adhere to model size constraints. These constraints are crucial for meeting memory and inference-time requirements, as well as to…
External link:
http://arxiv.org/abs/2305.13865