Showing 1 - 7 of 7
for search: '"Shyam, Pranav"'
Author:
Neelakantan, Arvind, Xu, Tao, Puri, Raul, Radford, Alec, Han, Jesse Michael, Tworek, Jerry, Yuan, Qiming, Tezak, Nikolas, Kim, Jong Wook, Hallacy, Chris, Heidecke, Johannes, Shyam, Pranav, Power, Boris, Nekoul, Tyna Eloundou, Sastry, Girish, Krueger, Gretchen, Schnurr, David, Such, Felipe Petroski, Hsu, Kenny, Thompson, Madeleine, Khan, Tabarak, Sherbakov, Toki, Jang, Joanne, Welinder, Peter, Weng, Lilian
Text embeddings are useful features in many applications such as semantic search and computing text similarity. Previous work typically trains models customized for different use cases, varying in dataset choice, training objective and model architecture …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::5e11482b48953975702981670e027948
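As a loose illustration of the "computing text similarity" use case named in the abstract above, embeddings are commonly compared with cosine similarity. The vectors below are made-up placeholders, not outputs of the paper's model:

```python
import math

# Cosine similarity between two embedding vectors (toy, hand-written values;
# a real system would obtain these from a trained embedding model).
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query_embedding = [0.1, 0.7, 0.2]       # placeholder vector for a query
document_embedding = [0.05, 0.65, 0.3]  # placeholder vector for a document
score = cosine_similarity(query_embedding, document_embedding)
```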
Autor:
Nichol, Alex, Dhariwal, Prafulla, Ramesh, Aditya, Shyam, Pranav, Mishkin, Pamela, McGrew, Bob, Sutskever, Ilya, Chen, Mark
Diffusion models have recently been shown to generate high-quality synthetic images, especially when paired with a guidance technique to trade off diversity for fidelity. We explore diffusion models for the problem of text-conditional image synthesis …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b0e9a6645de04963b730176ef6acc934
http://arxiv.org/abs/2112.10741
Author:
Han, Jesse Michael, Babuschkin, Igor, Edwards, Harrison, Neelakantan, Arvind, Xu, Tao, Polu, Stanislas, Ray, Alex, Shyam, Pranav, Ramesh, Aditya, Radford, Alec, Sutskever, Ilya
We show how to derive state-of-the-art unsupervised neural machine translation systems from generatively pre-trained language models. Our method consists of three steps: few-shot amplification, distillation, and backtranslation. We first use the zero-shot …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4467ac49eceb755c656109735dc19a9f
http://arxiv.org/abs/2110.05448
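Of the three steps named in the abstract above, backtranslation is the easiest to sketch: translate monolingual target-language text back into the source language and treat the result as synthetic parallel data. The dictionary "models" below are toy stand-ins, not the paper's learned translators:

```python
# Toy stand-in for a learned translation model: a word-for-word dictionary.
toy_fw = {"hello": "bonjour", "world": "monde"}  # EN -> FR
toy_bw = {v: k for k, v in toy_fw.items()}       # FR -> EN

def translate(sentence, table):
    return " ".join(table.get(w, w) for w in sentence.split())

def backtranslation_pairs(monolingual_fr):
    """Build synthetic (source, target) training pairs: back-translate each
    French sentence to English, then pair the synthetic English with the
    original French as supervised data."""
    return [(translate(fr, toy_bw), fr) for fr in monolingual_fr]
```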
Author:
Brown, Tom B., Mann, Benjamin, Ryder, Nick, Subbiah, Melanie, Kaplan, Jared, Dhariwal, Prafulla, Neelakantan, Arvind, Shyam, Pranav, Sastry, Girish, Askell, Amanda, Agarwal, Sandhini, Herbert-Voss, Ariel, Krueger, Gretchen, Henighan, Tom, Child, Rewon, Ramesh, Aditya, Ziegler, Daniel M., Wu, Jeffrey, Winter, Clemens, Hesse, Christopher, Chen, Mark, Sigler, Eric, Litwin, Mateusz, Gray, Scott, Chess, Benjamin, Clark, Jack, Berner, Christopher, McCandlish, Sam, Radford, Alec, Sutskever, Ilya, Amodei, Dario
Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::158daba5ed53a680e5f0823389e0235f
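The alternative to task-specific fine-tuning discussed in the abstract above is few-shot prompting: the task is specified entirely in the input text, with no gradient updates. A minimal sketch of assembling such a prompt, with a made-up Q/A format:

```python
# Build a few-shot prompt: demonstration pairs followed by the new query.
# The Q:/A: format is an illustrative choice, not the paper's exact template.
def few_shot_prompt(examples, query):
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)
```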
Author:
Srivastava, Rupesh Kumar, Shyam, Pranav, Mutz, Filipe, Jaśkowski, Wojciech, Schmidhuber, Jürgen
We develop Upside-Down Reinforcement Learning (UDRL), a method for learning to act using only supervised learning techniques. Unlike traditional algorithms, UDRL does not use reward prediction or search for an optimal policy. Instead, it trains agents …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::70deaac0f7ebdc0df6b5af76cc2dc3c7
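The core UDRL idea in the abstract above can be sketched in a few lines: inputs are a state plus a command (desired return, desired horizon) and the supervised target is an action. The toy environment, the recorded experience, and the nearest-match "regressor" below are all illustrative stand-ins for the paper's neural-network setup:

```python
# Hypothetical replay of past outcomes:
# (state, action, achieved return, steps taken).
experience = [
    (0, +1, 2.0, 2),
    (0, -1, 0.5, 2),
    (1, +1, 1.0, 1),
    (1, -1, 0.0, 1),
]

def act(state, desired_return, desired_horizon):
    """Pick the action whose recorded outcome best matches the command,
    i.e. supervised learning conditioned on (return, horizon)."""
    candidates = [e for e in experience if e[0] == state]
    best = min(candidates,
               key=lambda e: abs(e[2] - desired_return) + abs(e[3] - desired_horizon))
    return best[1]
```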
Efficient exploration is an unsolved problem in Reinforcement Learning which is usually addressed by reactively rewarding the agent for fortuitously encountering novel situations. This paper introduces an efficient active exploration algorithm, Model-Based Active Exploration (MAX) …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::6a3aa1eb0753d2912defc9017e43c5e8
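One common way to make exploration active rather than reactive, in the spirit of the abstract above, is to pick the action on which an ensemble of dynamics models disagrees most, since disagreement marks unexplored dynamics. The ensemble members and actions below are illustrative placeholders, not the paper's models:

```python
# Toy ensemble of "dynamics models": each predicts the next state.
ensemble = [lambda s, a: s + a, lambda s, a: s + 2 * a, lambda s, a: s - a]

def disagreement(state, action):
    """Variance of the ensemble's next-state predictions."""
    preds = [m(state, action) for m in ensemble]
    mean = sum(preds) / len(preds)
    return sum((p - mean) ** 2 for p in preds) / len(preds)

def explore_action(state, actions):
    """Actively choose the action with the most model disagreement."""
    return max(actions, key=lambda a: disagreement(state, a))
```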
Rapid learning requires flexible representations to quickly adapt to new evidence. We develop a novel class of models called Attentive Recurrent Comparators (ARCs) that form representations of objects by cycling through them and making observations.
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4ac962091644c5a89489fe551ec5cb0f
http://arxiv.org/abs/1703.00767
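The "cycling through them and making observations" idea in the abstract above can be caricatured as a loop that alternates glimpses of two objects while updating a running state. Real ARCs attend over images with an RNN controller; here the objects are flat lists and each "observation" is one element, purely as an illustration:

```python
# Toy comparator: alternate observations of two objects, accumulating a
# signed running state; a small final value means the objects look alike.
def compare(obj_a, obj_b, steps):
    state = 0.0
    for t in range(steps):
        src = obj_a if t % 2 == 0 else obj_b   # alternate between objects
        sign = 1.0 if t % 2 == 0 else -1.0
        state += sign * src[t // 2 % len(src)]  # one "glimpse" per step
    return abs(state)
```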