Showing 1 - 5 of 5
for search: '"Kuno, Noboru"'
Author:
Huang, Qiuyuan, Wake, Naoki, Sarkar, Bidipta, Durante, Zane, Gong, Ran, Taori, Rohan, Noda, Yusuke, Terzopoulos, Demetri, Kuno, Noboru, Famoti, Ade, Llorens, Ashley, Langford, John, Vo, Hoi, Fei-Fei, Li, Ikeuchi, Katsu, Gao, Jianfeng
Recent advancements in large foundation models have remarkably enhanced our understanding of sensory information in open-world environments. In leveraging the power of foundation models, it is crucial for AI research to pivot away from excessive reductionism…
External link:
http://arxiv.org/abs/2403.00833
Author:
Durante, Zane, Sarkar, Bidipta, Gong, Ran, Taori, Rohan, Noda, Yusuke, Tang, Paul, Adeli, Ehsan, Lakshmikanth, Shrinidhi Kowshika, Schulman, Kevin, Milstein, Arnold, Terzopoulos, Demetri, Famoti, Ade, Kuno, Noboru, Llorens, Ashley, Vo, Hoi, Ikeuchi, Katsu, Fei-Fei, Li, Gao, Jianfeng, Wake, Naoki, Huang, Qiuyuan
The development of artificial intelligence systems is transitioning from creating static, task-specific models to dynamic, agent-based systems capable of performing well in a wide range of applications. We propose an Interactive Agent Foundation Model…
External link:
http://arxiv.org/abs/2402.05929
Author:
Guss, William H., Castro, Mario Ynocente, Devlin, Sam, Houghton, Brandon, Kuno, Noboru Sean, Loomis, Crissman, Milani, Stephanie, Mohanty, Sharada, Nakata, Keisuke, Salakhutdinov, Ruslan, Schulman, John, Shiroshita, Shinya, Topin, Nicholay, Ummadisingu, Avinash, Vinyals, Oriol
Although deep reinforcement learning has led to breakthroughs in many difficult domains, these successes have required an ever-increasing number of samples, affording only a shrinking segment of the AI community access to their development. Resolution…
External link:
http://arxiv.org/abs/2101.11071
Author:
Milani, Stephanie, Topin, Nicholay, Houghton, Brandon, Guss, William H., Mohanty, Sharada P., Nakata, Keisuke, Vinyals, Oriol, Kuno, Noboru Sean
To facilitate research in the direction of sample efficient reinforcement learning, we held the MineRL Competition on Sample Efficient Reinforcement Learning Using Human Priors at the Thirty-third Conference on Neural Information Processing Systems (NeurIPS)…
External link:
http://arxiv.org/abs/2003.05012
Author:
Guss, William H., Codel, Cayden, Hofmann, Katja, Houghton, Brandon, Kuno, Noboru, Milani, Stephanie, Mohanty, Sharada, Liebana, Diego Perez, Salakhutdinov, Ruslan, Topin, Nicholay, Veloso, Manuela, Wang, Phillip
Though deep reinforcement learning has led to breakthroughs in many difficult domains, these successes have required an ever-increasing number of samples. As state-of-the-art reinforcement learning (RL) systems require an exponentially increasing number…
External link:
http://arxiv.org/abs/1904.10079