Showing 1 - 5 of 5
for the search: '"Nam, Andrew J."'
What can be learned about causality and experimentation from passive data? This question is salient given recent successes of passively-trained language models in interactive domains such as tool use. Passive learning is inherently limited. However, …
External link:
http://arxiv.org/abs/2305.16183
Out-of-distribution generalization (OODG) is a longstanding challenge for neural networks. This challenge is quite apparent in tasks with well-defined variables and rules, where explicit use of the rules could solve problems independently of the part …
External link:
http://arxiv.org/abs/2210.03275
Large language models have recently shown promising progress in mathematical reasoning when fine-tuned with human-generated sequences walking through a sequence of solution steps. However, the solution sequences are not formally structured and the re …
External link:
http://arxiv.org/abs/2210.02615
Author:
Nam, Andrew J., McClelland, James L.
Neural networks have long been used to model human intelligence, capturing elements of behavior and cognition, and their neural basis. Recent advancements in deep learning have enabled neural network models to reach and even surpass human levels of i …
External link:
http://arxiv.org/abs/2107.06994
Author:
Nam AJ; Department of Psychology, Stanford University, Stanford, CA, USA., McClelland JL; Department of Psychology, Stanford University, Stanford, CA, USA.
Published in:
Open Mind: Discoveries in Cognitive Science [Open Mind (Camb)], 2024 Mar 01; Vol. 8, pp. 148-176. Date of electronic publication: 2024 Mar 01 (print publication: 2024).