Showing 1 - 10 of 32 for search: '"Mike Schuster"'
Author:
Mike Schuster, John T. Holden
Published in:
Columbia Business Law Review. 2020
Video game streaming on sites like YouTube and Twitch is now a billion-dollar industry. Top streaming personalities make tens of millions of dollars annually, as viewership of video game play continues to expand. While video game companies’ control …
Author:
Greg S. Corrado, Mike Schuster, Fernanda B. Viégas, Zhifeng Chen, Melvin Johnson, Nikhil Thorat, Jeffrey Dean, Quoc V. Le, Macduff Hughes, Martin Wattenberg, Yonghui Wu, Maxim Krikun
Published in:
Transactions of the Association for Computational Linguistics. 5:339-351
We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token …
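The artificial-token idea in the abstract can be illustrated in a few lines. The `<2xx>` token format below is an assumption for illustration, not necessarily the exact tokens used in the paper; the point is only that the target language is selected by prepending a token to the source, with the model itself unchanged.

```python
def add_target_token(source_sentence: str, target_lang: str) -> str:
    # Prepend an artificial token telling the single shared NMT model
    # which target language to produce. The "<2xx>" format is illustrative;
    # the rest of the system stays a standard NMT model.
    return f"<2{target_lang}> {source_sentence}"

# The same model, steered to different target languages by the token:
print(add_target_token("How are you?", "es"))  # <2es> How are you?
print(add_target_token("How are you?", "ja"))  # <2ja> How are you?
```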
Author:
Ashish Vaswani, Zhifeng Chen, Mia Xu Chen, Noam Shazeer, Llion Jones, Jakob Uszkoreit, Ankur Bapna, Mike Schuster, Macduff Hughes, Yonghui Wu, George Foster, Melvin Johnson, Niki Parmar, Lukasz Kaiser, Orhan Firat, Wolfgang Macherey
Published in:
ACL (1)
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then out-performed by the …
Author:
Yannis Agiomvrgiannakis, Jonathan Shen, Ron Weiss, Zhifeng Chen, Rif A. Saurous, RJ Skerry-Ryan, Navdeep Jaitly, Zongheng Yang, Mike Schuster, Ruoming Pang, Yu Zhang, Yonghui Wu, Yuxuan Wang
Published in:
ICASSP
This paper describes Tacotron 2, a neural network architecture for speech synthesis directly from text. The system is composed of a recurrent sequence-to-sequence feature prediction network that maps character embeddings to mel-scale spectrograms, followed …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b9e02821f484a25d7ba2af7e55a1e674
http://arxiv.org/abs/1712.05884
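The two-stage pipeline the abstract describes (text to mel-scale spectrogram via a seq2seq feature predictor, then a vocoder producing the waveform) can be sketched at the shape level. Everything below is an illustrative assumption: random projections stand in for the actual Tacotron 2 networks, and only the data flow and tensor shapes mirror the description.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_chars(text, dim=16):
    # Illustrative character-embedding lookup: one fixed random vector per character id.
    table = rng.normal(size=(256, dim))
    return np.stack([table[ord(c) % 256] for c in text])   # (chars, dim)

def feature_predictor(char_embeddings, n_mels=80, frames_per_char=5):
    # Stand-in for the recurrent seq2seq network: a random projection that
    # expands the character sequence into mel-scale spectrogram frames.
    proj = rng.normal(size=(char_embeddings.shape[1], n_mels))
    base = char_embeddings @ proj                           # (chars, n_mels)
    return np.repeat(base, frames_per_char, axis=0)         # (frames, n_mels)

def vocoder(mel, samples_per_frame=200):
    # Stand-in for the neural vocoder: expands each mel frame into audio samples.
    return rng.normal(size=(mel.shape[0] * samples_per_frame,))

mel = feature_predictor(embed_chars("hello"))
audio = vocoder(mel)
print(mel.shape, audio.shape)  # (25, 80) (5000,)
```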
Author:
Shiyin Kang, Zhen-Hua Ling, Helen Meng, Andrew W. Senior, Li Deng, Mike Schuster, Heiga Zen, Xiaojun Qian
Published in:
IEEE Signal Processing Magazine. 32:35-52
Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) are the two most common types of acoustic models used in statistical parametric approaches for generating low-level speech waveforms from high-level symbolic inputs via intermediate acoustic …
Author:
Mike Schuster, Phillipp Koehn, Tomas Mikolov, Tony Robinson, Ciprian Chelba, Thorsten Brants, Qi Ge
Published in:
INTERSPEECH
We propose a new benchmark corpus to be used for measuring progress in statistical language modeling. With almost one billion words of training data, we hope this benchmark will be useful to quickly evaluate novel language modeling techniques, and to …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::3bc492f160545fc21b1fee551867f368
http://arxiv.org/abs/1312.3005
Author:
Mike Schuster
Published in:
Computer Speech & Language. 14:47-77
This paper describes the details of a fast, memory-efficient one-pass stack decoder for efficient evaluation of the search space for large vocabulary continuous speech recognition. A modern, efficient search engine is not based on a single idea, but …
Published in:
Systems and Computers in Japan. 30:20-30
This paper describes a phoneme boundary estimation method based on bidirectional recurrent neural networks (BRNNs). Experimental results showed that the proposed method could estimate segment boundaries significantly better than an HMM or a multilayer …
Author:
Jackie Roberts, Ruth Schofield, Gina Browne, Carolyn Byrne, Mike Schuster, Barbara Brown, Amiram Gafni, Nancy Voorberg, Heather Hoxby, Susan Watt
Published in:
Psychiatric Rehabilitation Journal. 22:368-380
Author:
Kuldip K. Paliwal, Mike Schuster
Published in:
IEEE Transactions on Signal Processing. 45:2673-2681
In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This …
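The bidirectional construction the abstract describes can be sketched with a plain tanh RNN cell in NumPy. This is a sketch of the idea under toy dimensions and random weights, not the paper's trained model: one pass reads the input forward, a second reads it backward, and their hidden states are concatenated at each time step, so each output depends on the full past and future context rather than input only up to a preset future frame.

```python
import numpy as np

def rnn_pass(xs, Wx, Wh, b):
    # Run a simple tanh RNN over a sequence, returning one hidden state per step.
    h = np.zeros(Wh.shape[0])
    states = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
        states.append(h)
    return states

def brnn(xs, params_fwd, params_bwd):
    # Forward states, plus backward states re-reversed into forward order,
    # concatenated per time step: each output sees both past and future input.
    fwd = rnn_pass(xs, *params_fwd)
    bwd = rnn_pass(xs[::-1], *params_bwd)[::-1]
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

# Toy setup: input dim 3, hidden dim 4 per direction, sequence length 5.
rng = np.random.default_rng(0)
def init(d_in, d_h):
    return (rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)), np.zeros(d_h))

xs = [rng.normal(size=3) for _ in range(5)]
outs = brnn(xs, init(3, 4), init(3, 4))
print(len(outs), outs[0].shape)  # 5 (8,)
```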