Showing 1 - 10 of 2,776 results for the search: '"Harrison James"'
Fractional gradient descent has been studied extensively, with a focus on its ability to extend traditional gradient descent methods by incorporating fractional-order derivatives. This approach allows for more flexibility in navigating complex optimization… (a rough code sketch follows this entry)
External link:
http://arxiv.org/abs/2411.14855
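The snippet above only names the idea; as a rough, generic illustration (not the algorithm of the linked paper), a fractional-order gradient step is often approximated by a Grünwald–Letnikov-style coefficient-weighted memory of past gradients. The objective, learning rate, order alpha, and memory length below are arbitrary toy choices.

```python
# Illustrative sketch only: a Grunwald-Letnikov-style fractional gradient step
# that mixes the current gradient with a decaying memory of past gradients.
# Toy objective and hyperparameters; not the method of the paper linked above.
import numpy as np

def gl_coefficients(alpha: float, n: int) -> np.ndarray:
    """Coefficients c_k = (-1)^k * C(alpha, k), k = 0..n-1, via the standard recurrence."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    return c

def fractional_gradient_descent(grad, x0, alpha=0.8, lr=0.1, steps=50, memory=20):
    x = np.asarray(x0, dtype=float)
    history = []  # most recent gradient first
    coeffs = gl_coefficients(alpha, memory)
    for _ in range(steps):
        history.insert(0, grad(x))
        history = history[:memory]
        # Fractional-derivative proxy: coefficient-weighted sum over the gradient memory.
        frac_grad = sum(c * g for c, g in zip(coeffs, history))
        x = x - lr * frac_grad
    return x

if __name__ == "__main__":
    # Minimize f(x) = ||x - 3||^2, whose gradient is 2 * (x - 3).
    grad = lambda x: 2.0 * (x - 3.0)
    print(fractional_gradient_descent(grad, x0=np.zeros(2)))
```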
Hierarchical policies enable strong performance in many sequential decision-making problems, such as those with high-dimensional action spaces, those requiring long-horizon planning, and settings with sparse rewards. However, learning hierarchical policies… (a schematic example follows this entry)
External link:
http://arxiv.org/abs/2410.07933
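To make the term concrete, here is a purely schematic two-level policy, with random placeholders standing in for learned high- and low-level policies; it illustrates only what "hierarchical policy" means and is not the approach of the linked paper.

```python
# Minimal illustration of a two-level (hierarchical) policy: a high-level policy
# picks an option every `option_len` steps and a low-level policy chooses
# primitive actions conditioned on that option. Random stand-ins for learned policies.
import random

class HierarchicalPolicy:
    def __init__(self, options, actions, option_len=5):
        self.options = options
        self.actions = actions
        self.option_len = option_len
        self._current = None
        self._steps_left = 0

    def high_level(self, obs):
        # Placeholder for a learned option-selection policy.
        return random.choice(self.options)

    def low_level(self, obs, option):
        # Placeholder for a learned option-conditioned action policy.
        return random.choice(self.actions)

    def act(self, obs):
        if self._steps_left == 0:            # re-plan at the high level
            self._current = self.high_level(obs)
            self._steps_left = self.option_len
        self._steps_left -= 1
        return self.low_level(obs, self._current)

policy = HierarchicalPolicy(options=["go_to_key", "go_to_door"],
                            actions=["left", "right", "forward"])
print([policy.act(obs=None) for _ in range(12)])
```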
Author:
Federici, Fabio, Reinke, Matthew L., Lipschultz, Bruce, Lovell, Jack J., Verhaegh, Kevin, Cowley, Cyd, Kryjak, Mike, Ryan, Peter, Thornton, Andrew J., Harrison, James R., Peterson, Byron J., Lomanowski, Bartosz, Lore, Jeremy D., Damizia, Yacopo
Plasma detachment involves interactions of the plasma with impurities and neutral particles, leading to significant losses of plasma power, momentum, and particles. Accurate mapping of plasma emissivity in the divertor and X-point region is essential…
External link:
http://arxiv.org/abs/2409.02837
Author:
Nayak, Siddharth, Orozco, Adelmo Morrison, Have, Marina Ten, Thirumalai, Vittal, Zhang, Jackson, Chen, Darren, Kapoor, Aditya, Robinson, Eric, Gopalakrishnan, Karthik, Harrison, James, Ichter, Brian, Mahajan, Anuj, Balakrishnan, Hamsa
The ability of Language Models (LMs) to understand natural language makes them a powerful tool for parsing human instructions into task plans for autonomous robots. Unlike traditional planning methods that rely on domain-specific knowledge and handcrafted…
External link:
http://arxiv.org/abs/2407.10031
Author:
Alice Laschuk Herlinger, Fábio Luís Lima Monteiro, Mirela D’arc, Filipe Romero Rebello Moreira, Harrison James Westgarth, Rafael Mello Galliez, Diana Mariani, Luciana Jesus da Costa, Luiz Gonzaga Paula de Almeida, Carolina Moreira Voloch, Covid19-UFRJ Workgroup, Adriana Suely de Oliveira Melo, Renato Santana de Aguiar, André Felipe Andrade dos Santos, Terezinha Marta Pereira Pinto Castiñeiras, Ana Tereza Ribeiro de Vasconcelos, Esaú Custódio João Filho, Claudia Caminha Escosteguy, Orlando da Costa Ferreira Junior, Amilcar Tanuri, Luiza Mendonça Higa
Published in:
Memorias do Instituto Oswaldo Cruz, Vol 116 (2022)
BACKGROUND During routine Coronavirus disease 2019 (COVID-19) diagnosis, an unusually high viral load was detected by reverse transcription real-time polymerase chain reaction (RT-qPCR) in a nasopharyngeal swab sample collected from a patient with re…
External link:
https://doaj.org/article/08745818449044bc84b356d804c5848f
We introduce a deterministic variational formulation for training Bayesian last layer neural networks. This yields a sampling-free, single-pass model and loss that effectively improves uncertainty estimation. Our variational Bayesian last layer (VBLL)… (a generic sketch follows this entry)
External link:
http://arxiv.org/abs/2404.11599
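For context only: in its simplest non-variational form, a Bayesian last layer amounts to exact Bayesian linear regression over the final layer's weights given fixed penultimate features, which already yields a sampling-free, single-pass predictive mean and variance. The sketch below shows that generic idea, not the VBLL objective of the linked paper; all shapes and hyperparameters are made up.

```python
# Generic Bayesian-last-layer sketch: exact Gaussian posterior over the weights of a
# final linear layer given fixed penultimate features (standard Bayesian linear
# regression). Illustrates the general idea only, not the paper's variational objective.
import numpy as np

def bayesian_last_layer(Phi, y, prior_var=1.0, noise_var=0.1):
    """Posterior mean and covariance of last-layer weights, features Phi (N x D), targets y (N,)."""
    D = Phi.shape[1]
    precision = np.eye(D) / prior_var + Phi.T @ Phi / noise_var
    cov = np.linalg.inv(precision)
    mean = cov @ Phi.T @ y / noise_var
    return mean, cov

def predict(phi_star, mean, cov, noise_var=0.1):
    """Single-pass predictive mean and variance for one feature vector (no sampling)."""
    mu = phi_star @ mean
    var = phi_star @ cov @ phi_star + noise_var
    return mu, var

rng = np.random.default_rng(0)
Phi = rng.normal(size=(100, 4))            # stand-in for penultimate-layer features
y = Phi @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.3 * rng.normal(size=100)
mean, cov = bayesian_last_layer(Phi, y)
print(predict(rng.normal(size=4), mean, cov))
```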
Author:
Greenhouse, Daniel, Bowman, Chris, Lipschultz, Bruce, Verhaegh, Kevin, Harrison, James, Fil, Alexandre
An integrated data analysis system based on Bayesian inference has been developed for application to data from multiple diagnostics over the two-dimensional cross-section of tokamak divertors. Tests of the divertor multi-instrument Bayesian analysis…
External link:
http://arxiv.org/abs/2403.12819
We study the robustness of deep reinforcement learning algorithms against distribution shifts within contextual multi-stage stochastic combinatorial optimization problems from the operations research domain. In this context, risk-sensitive algorithms…
External link:
http://arxiv.org/abs/2402.09992
A challenging problem in many modern machine learning tasks is to process weight-space features, i.e., to transform or extract information from the weights and gradients of a neural network. Recent works have developed promising weight-space models… (a toy example follows this entry)
External link:
http://arxiv.org/abs/2402.05232
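As a toy illustration of what a "weight-space feature" can be (far simpler than the structured, e.g. permutation-aware, models in this line of work), the sketch below maps a list of weight tensors to a fixed-size vector of per-tensor statistics; the network size and the choice of statistics are arbitrary.

```python
# Toy illustration of weight-space features: summarizing a network's weight tensors
# as a fixed-size feature vector of simple statistics. Only shows the kind of
# input/output that weight-space models operate on; not the linked paper's model.
import numpy as np

def weight_space_features(weights):
    """Map a list of weight tensors to per-tensor summary statistics."""
    feats = []
    for w in weights:
        w = np.asarray(w).ravel()
        feats.extend([w.mean(), w.std(), np.abs(w).max(),
                      np.linalg.norm(w) / np.sqrt(w.size)])
    return np.array(feats)

# Stand-in for the weights of a small two-layer network.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(2, 8))]
print(weight_space_features(weights))
```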
Author:
Singh, Avi, Co-Reyes, John D., Agarwal, Rishabh, Anand, Ankesh, Patil, Piyush, Garcia, Xavier, Liu, Peter J., Harrison, James, Lee, Jaehoon, Xu, Kelvin, Parisi, Aaron, Kumar, Abhishek, Alemi, Alex, Rizkowsky, Alex, Nova, Azade, Adlam, Ben, Bohnet, Bernd, Elsayed, Gamaleldin, Sedghi, Hanie, Mordatch, Igor, Simpson, Isabelle, Gur, Izzeddin, Snoek, Jasper, Pennington, Jeffrey, Hron, Jiri, Kenealy, Kathleen, Swersky, Kevin, Mahajan, Kshiteej, Culp, Laura, Xiao, Lechao, Bileschi, Maxwell L., Constant, Noah, Novak, Roman, Liu, Rosanne, Warkentin, Tris, Qian, Yundi, Bansal, Yamini, Dyer, Ethan, Neyshabur, Behnam, Sohl-Dickstein, Jascha, Fiedel, Noah
Fine-tuning language models (LMs) on human-generated data remains a prevalent practice. However, the performance of such models is often limited by the quantity and diversity of high-quality human data. In this paper, we explore whether we can go beyond…
External link:
http://arxiv.org/abs/2312.06585