Showing 1 - 7
of 7
for search: '"Dima Kuzmin"'
Author:
Jon Effrat, Ayooluwakunmi Jeje, Moustafa Alzantot, Heng-Tze Cheng, Tameen Khan, Tushar Deepak Chandra, Ellie Ka-In Chio, Ajit Apte, Tarush Bali, Dima Kuzmin, Santiago Ontañón, Sukhdeep Sodhi, Allen Wu, Amol Wankhede, Senqiang Zhou, Harry Fung, Ankit Kumar, Ambarish Jash, Sarvjeet Singh, Pei Cao, Nitin Jindal
Published in:
KDD
As more and more online search queries come from voice, automatic speech recognition (ASR) becomes a key component in delivering relevant search results. Errors introduced by ASR lead to irrelevant search results returned to the
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::abe885d12d0e1e2c88deda719a347baf
Author:
Xiang Ma, Li Zhang, Tao Wu, Heng-Tze Cheng, Ritesh Agarwal, Yu Du, Steffen Rendle, Ankit Kumar, John Anderson, Sarvjeet Singh, Ed H. Chi, Ellie Ka-In Chio, Wen Li, Alex Soares, Pei Cao, Nitin Jindal, Dima Kuzmin, Tushar Deepak Chandra
Published in:
CIKM
Many recent advances in neural information retrieval models, which predict top-K items given a query, learn directly from a large training set of (query, item) pairs. However, they are often insufficient when there are many previously unseen (query,
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::fa815879e4f2196fc8b175ff3dfa5177
Author:
Dima Kuzmin, Manfred K. Warmuth
Published in:
Machine Learning. 87:1-32
We consider the following type of online variance minimization problem: In every trial t our algorithms get a covariance matrix $C_t$ and try to select a parameter vector $\mathbf{w}^{t-1}$ such that the total variance over a sequence of trials $\sum_{t=1}^{T} (\mathbf{w}^{t-1})^\top C_t\, \mathbf{w}^{t-1}$
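The protocol in this abstract can be sketched in a few lines. This is a minimal illustration, assuming the parameter vector lives on the probability simplex and is updated with a multiplicative, exponentiated-gradient-style step; the function name and learning rate are illustrative, not taken from the paper.

```python
import numpy as np

def exponentiated_gradient_variance(covariances, eta=0.1):
    """Online variance minimization on the probability simplex.

    At each trial t we commit to a weight vector w, then receive a
    covariance matrix C_t and incur loss (variance) w^T C_t w.
    The weights are updated multiplicatively using the gradient 2 C_t w.
    """
    n = covariances[0].shape[0]
    w = np.full(n, 1.0 / n)          # start at the uniform distribution
    total_variance = 0.0
    for C in covariances:
        total_variance += w @ C @ w  # loss suffered on this trial
        grad = 2.0 * C @ w
        w = w * np.exp(-eta * grad)  # exponentiated-gradient step
        w /= w.sum()                 # renormalize onto the simplex
    return w, total_variance
```

The multiplicative update keeps the weights nonnegative and normalized, so the iterate always remains a valid probability vector.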
Author:
Dima Kuzmin, Manfred K. Warmuth
Published in:
Warmuth, Manfred K., & Kuzmin, Dima. (2010). Bayesian generalized probability calculus for density matrices. Machine Learning, 78(1), 63-101. doi: 10.1007/s10994-009-5133-7. Retrieved from: http://www.escholarship.org/uc/item/4hx9r95h
One of the main concepts in quantum physics is a density matrix, which is a symmetric positive definite matrix of trace one. Finite probability distributions can be seen as a special case when the density matrix is restricted to be diagonal. We devel
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2ce3b552151213f8254d565dbe66a448
http://arxiv.org/abs/0901.1273
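The abstract's two defining properties are easy to check numerically: a density matrix is symmetric positive semidefinite with trace one, and a diagonal density matrix is exactly a finite probability distribution. A small sketch (helper names are illustrative):

```python
import numpy as np

def random_density_matrix(n, rng):
    """Sample a symmetric positive semidefinite matrix with trace one."""
    a = rng.normal(size=(n, n))
    rho = a @ a.T                 # symmetric PSD by construction
    return rho / np.trace(rho)    # normalize the trace to one

def is_density_matrix(rho, tol=1e-9):
    """Check symmetry, trace one, and nonnegative eigenvalues."""
    symmetric = np.allclose(rho, rho.T, atol=tol)
    trace_one = abs(np.trace(rho) - 1.0) < tol
    psd = np.all(np.linalg.eigvalsh(rho) >= -tol)
    return symmetric and trace_one and psd

# The diagonal special case: a finite probability distribution.
p = np.array([0.2, 0.3, 0.5])
assert is_density_matrix(np.diag(p))
```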
Author:
Manfred K. Warmuth, Dima Kuzmin
Published in:
ICML
A number of updates for density matrices have been developed recently that are motivated by relative entropy minimization problems. The updates involve a softmin calculation based on matrix logs and matrix exponentials. We show that these updates can
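The "softmin calculation based on matrix logs and matrix exponentials" mentioned in this abstract can be sketched as a matrix exponentiated-gradient-style step: update in the matrix-log domain, exponentiate back, and renormalize to trace one. A minimal NumPy version, assuming symmetric inputs and a strictly positive definite current density matrix W (function name is illustrative):

```python
import numpy as np

def matrix_softmin_update(W, C, eta):
    """One density-matrix update: W_new ∝ exp(log W - eta * C),
    normalized to trace one.

    For symmetric matrices, matrix log and matrix exp both act on
    the eigenvalues in an eigendecomposition.
    """
    def sym_fun(M, f):
        vals, vecs = np.linalg.eigh(M)
        return (vecs * f(vals)) @ vecs.T   # vecs @ diag(f(vals)) @ vecs.T

    logW = sym_fun(W, np.log)              # requires W strictly PD
    W_new = sym_fun(logW - eta * C, np.exp)
    return W_new / np.trace(W_new)
```

Because the update exponentiates a symmetric matrix, the result is automatically symmetric positive definite, and the final division restores trace one, so the iterate stays a density matrix.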
Author:
Dima Kuzmin, Manfred K. Warmuth
Published in:
Learning Theory ISBN: 9783540352945
COLT
We design algorithms for two online variance minimization problems. Specifically, in every trial t our algorithms get a covariance matrix $\mathcal{C}_t$ and try to select a parameter vector $\mathbf{w}_t$ such that the total variance over a sequence of trials $\sum_{t=1}^{T} \mathbf{w}_t^\top \mathcal{C}_t\, \mathbf{w}_t$
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::8e3f8136ab2725c0bf3a0a61f246b61a
https://doi.org/10.1007/11776420_38
Author:
Manfred K. Warmuth, Dima Kuzmin
Published in:
Learning Theory ISBN: 9783540265566
COLT
Maximum concept classes of VC dimension d over n domain points have size $\binom{n}{\le d}$, and this is an upper bound on the size of any concept class of VC dimension d over n points. We give a compression scheme for any maximum class that represents each co
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::57df2c2f6f26987bc5dbcbe22d78170b
https://doi.org/10.1007/11503415_40
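The size bound in this last abstract, $\binom{n}{\le d} = \sum_{i=0}^{d}\binom{n}{i}$, is just a binomial sum and can be computed directly (function name is illustrative):

```python
from math import comb

def max_class_size(n, d):
    """Size of a maximum concept class of VC dimension d over n points:
    the binomial sum 'n choose at most d'."""
    return sum(comb(n, i) for i in range(d + 1))

# Sanity check: with d = n, every subset of the domain is a concept,
# so the bound equals 2^n.
assert max_class_size(5, 5) == 2 ** 5
```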