Showing 1 - 10 of 110 for search: '"Wang, Jane X"'
Traditionally, cognitive and computer scientists have viewed intelligence solipsistically, as a property of unitary agents devoid of social context. Given the success of contemporary learning algorithms, we argue that the bottleneck in artificial intelligence…
External link:
http://arxiv.org/abs/2405.15815
Author:
SIMA Team, Raad, Maria Abi, Ahuja, Arun, Barros, Catarina, Besse, Frederic, Bolt, Andrew, Bolton, Adrian, Brownfield, Bethanie, Buttimore, Gavin, Cant, Max, Chakera, Sarah, Chan, Stephanie C. Y., Clune, Jeff, Collister, Adrian, Copeman, Vikki, Cullum, Alex, Dasgupta, Ishita, de Cesare, Dario, Di Trapani, Julia, Donchev, Yani, Dunleavy, Emma, Engelcke, Martin, Faulkner, Ryan, Garcia, Frankie, Gbadamosi, Charles, Gong, Zhitao, Gonzales, Lucy, Gupta, Kshitij, Gregor, Karol, Hallingstad, Arne Olav, Harley, Tim, Haves, Sam, Hill, Felix, Hirst, Ed, Hudson, Drew A., Hudson, Jony, Hughes-Fitt, Steph, Rezende, Danilo J., Jasarevic, Mimi, Kampis, Laura, Ke, Rosemary, Keck, Thomas, Kim, Junkyung, Knagg, Oscar, Kopparapu, Kavya, Lawton, Rory, Lampinen, Andrew, Legg, Shane, Lerchner, Alexander, Limont, Marjorie, Liu, Yulan, Loks-Thompson, Maria, Marino, Joseph, Cussons, Kathryn Martin, Matthey, Loic, Mcloughlin, Siobhan, Mendolicchio, Piermaria, Merzic, Hamza, Mitenkova, Anna, Moufarek, Alexandre, Oliveira, Valeria, Oliveira, Yanko, Openshaw, Hannah, Pan, Renke, Pappu, Aneesh, Platonov, Alex, Purkiss, Ollie, Reichert, David, Reid, John, Richemond, Pierre Harvey, Roberts, Tyson, Ruscoe, Giles, Elias, Jaume Sanchez, Sandars, Tasha, Sawyer, Daniel P., Scholtes, Tim, Simmons, Guy, Slater, Daniel, Soyer, Hubert, Strathmann, Heiko, Stys, Peter, Tam, Allison C., Teplyashin, Denis, Terzi, Tayfun, Vercelli, Davide, Vujatovic, Bojan, Wainwright, Marcus, Wang, Jane X., Wang, Zhengdong, Wierstra, Daan, Williams, Duncan, Wong, Nathaniel, York, Sarah, Young, Nick
Building embodied AI systems that can follow arbitrary language instructions in any 3D environment is a key challenge for creating general AI. Accomplishing this goal requires learning to ground language in perception and embodied actions, in order to…
External link:
http://arxiv.org/abs/2404.10179
Large language models (LLMs) have significantly advanced the field of artificial intelligence. Yet, evaluating them comprehensively remains challenging. We argue that this is partly due to the predominant focus on performance metrics in most benchmarks…
External link:
http://arxiv.org/abs/2402.18225
What can be learned about causality and experimentation from passive data? This question is salient given recent successes of passively-trained language models in interactive domains such as tool use. Passive learning is inherently limited. However, …
External link:
http://arxiv.org/abs/2305.16183
Author:
Coda-Forno, Julian, Binz, Marcel, Akata, Zeynep, Botvinick, Matthew, Wang, Jane X., Schulz, Eric
Large language models have shown tremendous performance in a variety of tasks. In-context learning -- the ability to improve at a task after being provided with a number of demonstrations -- is seen as one of the main contributors to their success. …
External link:
http://arxiv.org/abs/2305.12907
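The snippet above defines in-context learning: the model is never fine-tuned, it improves simply because demonstrations are prepended to the query. A minimal sketch of that idea, assuming a generic text-completion interface (the complete() call below is a hypothetical stand-in, not any particular API):

```python
# In-context (few-shot) learning as a prompt-construction exercise.
def build_few_shot_prompt(demonstrations, query):
    """Concatenate labeled demonstrations followed by an unlabeled query."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

demos = [("2 + 2", "4"), ("7 + 5", "12"), ("3 + 9", "12")]
prompt = build_few_shot_prompt(demos, "6 + 8")
# answer = complete(prompt)  # hypothetical LLM call; expected completion: "14"
print(prompt)
```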
Author:
Binz, Marcel, Dasgupta, Ishita, Jagadish, Akshay, Botvinick, Matthew, Wang, Jane X., Schulz, Eric
Meta-learning is a framework for learning learning algorithms through repeated interactions with an environment as opposed to designing them by hand. In recent years, this framework has established itself as a promising tool for building models of human…
External link:
http://arxiv.org/abs/2304.06729
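The snippet above describes meta-learning's bilevel structure: an outer loop tunes a learner through repeated interaction with a distribution of tasks, instead of the learner being designed by hand. A toy Reptile-style sketch of that structure on 1-D quadratic tasks (illustrative only, not the authors' model):

```python
import random

def inner_loop(w, target, lr=0.1, steps=5):
    """The 'base' learner: plain gradient descent on loss (w - target)**2."""
    for _ in range(steps):
        w -= lr * 2 * (w - target)
    return w

meta_w = 0.0   # the meta-learned quantity: an initialization
meta_lr = 0.05
for _ in range(1000):                       # outer loop over sampled tasks
    target = random.gauss(3.0, 0.5)         # each task has a new optimum
    adapted = inner_loop(meta_w, target)    # inner loop adapts to the task
    meta_w += meta_lr * (adapted - meta_w)  # move init toward adapted weights

print(f"meta-learned init = {meta_w:.2f} (converges near the task mean, 3.0)")
```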
Author:
Hagendorff, Thilo, Dasgupta, Ishita, Binz, Marcel, Chan, Stephanie C. Y., Lampinen, Andrew, Wang, Jane X., Akata, Zeynep, Schulz, Eric
Large language models (LLMs) show increasingly advanced emergent capabilities and are being incorporated across various societal domains. Understanding their behavior and reasoning abilities therefore holds significant importance. We argue that a fruitful…
External link:
http://arxiv.org/abs/2303.13988
Author:
Chan, Stephanie C. Y., Santoro, Adam, Lampinen, Andrew K., Wang, Jane X., Singh, Aaditya, Richemond, Pierre H., McClelland, Jay, Hill, Felix
Large transformer-based models are able to perform in-context few-shot learning, without being explicitly trained for it. This observation raises the question: what aspects of the training regime lead to this emergent behavior? Here, we show that this…
External link:
http://arxiv.org/abs/2205.05055
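The snippet above asks which training-regime properties give rise to in-context learning; one data property studied in this line of work is "burstiness", where items recur within a training sequence rather than being drawn i.i.d. A rough sketch of generating bursty versus i.i.d. sequences; the parameters are illustrative, not the paper's exact protocol:

```python
import random

def bursty_sequence(classes, seq_len=8, n_bursty=2, copies=3):
    """A sequence where a few classes repeat ('burst'), padded with i.i.d. draws."""
    bursty = random.sample(classes, n_bursty)
    seq = [c for c in bursty for _ in range(copies)]
    seq += random.choices(classes, k=seq_len - len(seq))
    random.shuffle(seq)
    return seq

labels = list(range(100))
print(bursty_sequence(labels))       # e.g. [17, 64, 17, 88, 64, 17, 64, 3]
print(random.choices(labels, k=8))   # i.i.d. baseline for comparison
```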
Author:
Tam, Allison C., Rabinowitz, Neil C., Lampinen, Andrew K., Roy, Nicholas A., Chan, Stephanie C. Y., Strouse, DJ, Wang, Jane X., Banino, Andrea, Hill, Felix
Effective exploration is a challenge in reinforcement learning (RL). Novelty-based exploration methods can suffer in high-dimensional state spaces, such as continuous partially-observable 3D environments. We address this challenge by defining novelty…
External link:
http://arxiv.org/abs/2204.05080
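The snippet above refers to the count-based family of novelty bonuses, which break down when raw high-dimensional observations are all unique. A minimal sketch of such a bonus computed over an abstraction of the state; abstract_fn is a hypothetical stand-in for whatever abstraction a method chooses (this paper builds on language abstractions and pretrained representations):

```python
from collections import Counter
from math import sqrt

class NoveltyBonus:
    """Count-based exploration bonus over abstracted states."""
    def __init__(self, abstract_fn):
        self.abstract_fn = abstract_fn
        self.counts = Counter()

    def bonus(self, observation):
        key = self.abstract_fn(observation)
        self.counts[key] += 1
        return 1.0 / sqrt(self.counts[key])   # classic 1/sqrt(N) bonus

# Toy usage: abstract a continuous 2-D position by coarse discretization.
explorer = NoveltyBonus(lambda obs: (round(obs[0]), round(obs[1])))
print(explorer.bonus((0.1, 0.2)))  # 1.0   (first visit to cell (0, 0))
print(explorer.bonus((0.3, 0.1)))  # ~0.71 (second visit to the same cell)
```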
Author:
Lampinen, Andrew K., Dasgupta, Ishita, Chan, Stephanie C. Y., Matthewson, Kory, Tessler, Michael Henry, Creswell, Antonia, McClelland, James L., Wang, Jane X., Hill, Felix
Language Models (LMs) can perform new tasks by adapting to a few in-context examples. For humans, explanations that connect examples to task principles can improve learning. We therefore investigate whether explanations of few-shot examples can help…
External link:
http://arxiv.org/abs/2204.02329
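The snippet above investigates prompts in which each few-shot example carries an explanation connecting it to the task principle. A minimal sketch of that prompt format; the field names and layout are illustrative, not the paper's exact templates:

```python
def prompt_with_explanations(examples, query):
    """Each demonstration is followed by an explanation of why its answer holds."""
    blocks = [
        f"Input: {x}\nOutput: {y}\nExplanation: {why}"
        for x, y, why in examples
    ]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

examples = [
    ("3, 5, 7", "9", "The sequence increases by 2, so the next term is 7 + 2."),
    ("10, 8, 6", "4", "The sequence decreases by 2, so the next term is 6 - 2."),
]
print(prompt_with_explanations(examples, "1, 4, 7"))
```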