Showing 1 - 10 of 2,220 for search: '"P, Grigsby"'
Author:
Grigsby, J. Elisenda, Lindsey, Kathryn
For any fixed feedforward ReLU neural network architecture, it is well-known that many different parameter settings can determine the same function. It is less well-known that the degree of this redundancy is inhomogeneous across parameter space. In…
External link:
http://arxiv.org/abs/2410.17191
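The redundancy noted in this abstract includes, for instance, the well-known positive scaling invariance of ReLU units: multiplying a hidden unit's incoming weights and bias by c > 0 and dividing its outgoing weights by c leaves the network function unchanged. A minimal NumPy sketch (illustrative shapes and values of my own, not code from the paper):

```python
import numpy as np

# Positive scaling invariance of ReLU units: rescale hidden unit 0's
# incoming weights and bias by c > 0 and its outgoing weights by 1/c;
# the network function is unchanged because relu(c*z) = c*relu(z) for c > 0.
rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # hidden layer
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)   # output layer

def net(W1, b1, W2, b2, x):
    return W2 @ relu(W1 @ x + b1) + b2

c = 2.5
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[0] *= c
b1s[0] *= c
W2s[:, 0] /= c   # a different parameter setting, same function

x = rng.normal(size=3)
assert np.allclose(net(W1, b1, W2, b2, x), net(W1s, b1s, W2s, b2, x))
```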
Author:
Jenniskens, Peter, Estrada, Paul R., Pilorz, Stuart, Gural, Peter S., Samuels, Dave, Rau, Steve, Abbott, Timothy M. C., Albers, Jim, Austin, Scott, Avner, Dan, Baggaley, Jack W., Beck, Tim, Blomquist, Solvay, Boyukata, Mustafa, Breukers, Martin, Cooney, Walt, Cooper, Tim, De Cicco, Marcelo, Devillepoix, Hadrien, Egland, Eric, Fahl, Elize, Gialluca, Megan, Grigsby, Bryant, Hanke, Toni, Harris, Barbara, Heathcote, Steve, Hemmelgarn, Samantha, Howell, Andy, Jehin, Emmanuel, Johannink, Carl, Juneau, Luke, Kisvarsanyi, Erika, Mey, Philip, Moskovitz, Nick, Odeh, Mohammad, Rachford, Brian, Rollinson, David, Scott, James M., Towner, Martin C., Unsalan, Ozan, van Wyk, Rynault, Wood, Jeff, Wray, James D., Pavao, C., Lauretta, Dante S.
Published in:
Icarus, 2024
In the late stages of accretion leading up to the formation of planetesimals, particles grew to pebbles ranging in size from 1 mm to tens of cm. That is the same size range that dominates present-day comet mass loss. Meteoroids of that size cause visible met…
External link:
http://arxiv.org/abs/2408.11945
The challenge of visual grounding and masking in multimodal machine translation (MMT) systems has encouraged varying approaches to the detection and selection of visually-grounded text tokens for masking. We introduce new methods for detection of vis…
External link:
http://arxiv.org/abs/2403.03075
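As a generic illustration only (not the detection or selection method proposed in this paper), visually-grounded token masking can be sketched as replacing source-sentence tokens that match object labels detected in the paired image:

```python
# Toy stand-in for visually-grounded token masking: tokens matching labels
# returned by an object detector for the paired image are replaced with a
# [MASK] placeholder before translation.

def mask_grounded_tokens(tokens, detected_labels, mask_token="[MASK]"):
    grounded = {label.lower() for label in detected_labels}
    return [mask_token if t.lower() in grounded else t for t in tokens]

sentence = "a brown dog chases a red ball".split()
labels = ["dog", "ball"]   # e.g. from an off-the-shelf object detector
print(mask_grounded_tokens(sentence, labels))
# ['a', 'brown', '[MASK]', 'chases', 'a', 'red', '[MASK]']
```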
While most current work in multimodal machine translation (MMT) uses the Multi30k dataset for training and evaluation, we find that the resulting models overfit to the Multi30k dataset to an extreme degree. Consequently, these models perform very bad…
External link:
http://arxiv.org/abs/2403.03045
A good evaluation framework should evaluate multimodal machine translation (MMT) models by measuring 1) their use of visual information to aid in the translation task and 2) their ability to translate complex sentences as is done for text-only mach…
External link:
http://arxiv.org/abs/2403.03014
Author:
Grigsby, Travis, Richmond, Edward
In this paper, we give a formula for the number of permutations that avoid the split patterns $3|12$ and $23|1$ with respect to a position $r$. Such permutations count the number of Schubert varieties for which the projection map from the flag variet…
External link:
http://arxiv.org/abs/2402.17654
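For experimentation, a brute-force counter is sketched below under one common reading of split-pattern containment; this reading is my assumption and should be checked against the paper: $w$ contains $3|12$ with respect to $r$ if some occurrence of the pattern 312 has its "3" within the first $r$ positions and its "12" after them, and contains $23|1$ with respect to $r$ if some occurrence of 231 has its "23" within the first $r$ positions and its "1" after them.

```python
from itertools import combinations, permutations

# Brute-force count of permutations of {1,...,n} avoiding the split
# patterns 3|12 and 23|1 with respect to position r, under the assumed
# definitions stated above (verify against the paper).

def avoids_split_patterns(w, r):
    n = len(w)
    # 3|12: the "3" in the first r positions, the ascent "12" after them.
    for i in range(r):
        for j, k in combinations(range(r, n), 2):
            if w[j] < w[k] < w[i]:
                return False
    # 23|1: the ascent "23" in the first r positions, the "1" after them.
    for i, j in combinations(range(r), 2):
        for k in range(r, n):
            if w[k] < w[i] < w[j]:
                return False
    return True

def count_avoiders(n, r):
    return sum(avoids_split_patterns(w, r) for w in permutations(range(1, n + 1)))

print([count_avoiders(4, r) for r in range(1, 4)])
```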
We introduce AMAGO, an in-context Reinforcement Learning (RL) agent that uses sequence models to tackle the challenges of generalization, long-term memory, and meta-learning. Recent works have shown that off-policy learning can make in-context RL wit…
External link:
http://arxiv.org/abs/2310.09971
We present a new algorithm, Cross-Episodic Curriculum (CEC), to boost the learning efficiency and generalization of Transformer agents. Central to CEC is the placement of cross-episodic experiences into a Transformer's context, which forms the basis…
External link:
http://arxiv.org/abs/2310.08549
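The core idea gestured at here, placing experiences from several episodes into a single Transformer context, can be sketched generically as follows (a toy stand-in, not the CEC implementation from the paper):

```python
# Toy sketch of a cross-episodic context: several ordered episodes
# (assumed sorted by increasing quality, e.g. return) are concatenated
# back-to-back into one token sequence, separated by a marker token.

SEP = "<ep>"

def build_cross_episodic_context(episodes, max_len):
    """episodes: list of token lists, assumed sorted by increasing quality."""
    context = []
    for ep in episodes:
        context.append(SEP)
        context.extend(ep)
    # Truncate from the left so the highest-quality (latest) episodes survive.
    return context[-max_len:]

episodes = [["s0", "a0", "r0"], ["s0", "a1", "r1"], ["s0", "a2", "r2"]]
print(build_cross_episodic_context(episodes, max_len=10))
```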
Author:
Jonathan C. Ho, Erinn M. Grigsby, Arianna Damiani, Lucy Liang, Josep-Maria Balaguer, Sridula Kallakuri, Lilly W. Tang, Jessica Barrios-Martinez, Vahagn Karapetyan, Daryl Fields, Peter C. Gerszten, T. Kevin Hitchens, Theodora Constantine, Gregory M. Adams, Donald J. Crammond, Marco Capogrosso, Jorge A. Gonzalez-Martinez, Elvira Pirondini
Published in:
Nature Communications, Vol. 15, Iss. 1, pp. 1-21 (2024)
Cerebral white matter lesions prevent cortico-spinal descending inputs from effectively activating spinal motoneurons, leading to loss of motor control. However, in most cases, the damage to cortico-spinal axons is incomplete, offering a pote…
External link:
https://doaj.org/article/d4927e0f83af4c0dbcccba5823b0dc73
The parameter space for any fixed architecture of feedforward ReLU neural networks serves as a proxy during training for the associated class of functions - but how faithful is this representation? It is known that many different parameter settings c…
External link:
http://arxiv.org/abs/2306.06179