Showing 1 - 10 of 2,651 for search: '"Havens P"'
Author:
Awad, Ali, Saleem, Ashraf, Paheding, Sidike, Lucas, Evan, Al-Ratrout, Serein, Havens, Timothy C.
Underwater imagery often suffers from severe degradation that results in low visual quality and poor object detection performance. This work aims to evaluate state-of-the-art image enhancement models and investigate their impact on underwater object detection …
External link:
http://arxiv.org/abs/2411.14626
Retrieval Augmented Generation (RAG) has emerged as a crucial technique for enhancing the accuracy of Large Language Models (LLMs) by incorporating external information. With the advent of LLMs that support increasingly longer context lengths, there …
External link:
http://arxiv.org/abs/2411.03538
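The retrieval step that RAG is built on can be sketched minimally: rank candidate documents against the query and prepend the best matches to the model's prompt. The bag-of-words scoring below is a toy stand-in for the dense encoders real systems use; all names and the example documents are illustrative, not from the paper.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; real RAG systems use learned dense encoders.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors (Counter maps missing terms to 0).
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "LoRA trains low-rank perturbations to weight matrices.",
    "Underwater imagery suffers from severe degradation.",
    "Retrieval augments a language model prompt with external text.",
]
context = retrieve("How does retrieval help a language model?", docs, k=1)
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: ..."
```

The retrieved passages are simply concatenated into the prompt; how many fit is exactly the context-length question the abstract alludes to.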
In this paper, we propose an extension to Longformer Encoder-Decoder, a popular sparse transformer architecture. One common challenge with sparse transformers is that they can struggle with encoding of long-range context, such as connections between …
External link:
http://arxiv.org/abs/2410.08971
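The sparsity pattern behind architectures like Longformer is a local (sliding-window) attention mask: each token attends only to neighbors within a fixed window, which is what makes very long-range connections hard to encode. A minimal illustrative sketch, not the paper's implementation:

```python
def sliding_window_mask(n, w):
    # Boolean attention mask for n tokens: token i may attend to token j
    # only when |i - j| <= w (Longformer-style local attention).
    return [[abs(i - j) <= w for j in range(n)] for i in range(n)]

mask = sliding_window_mask(5, 1)
```

With window w, information between tokens that are d apart must hop through roughly d / w attention layers, which is why global or auxiliary connections are typically added on top.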
In the current work we revisit the pair potential recently proposed by Wang et al. (Phys. Chem. Chem. Phys., 2020, 22, 10624) as a well-defined finite-range alternative to the widely used Lennard-Jones interaction model. The advantage of their proposed …
External link:
http://arxiv.org/abs/2407.14688
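For context, the abstract contrasts the Wang et al. potential with the classic 12-6 Lennard-Jones model, which decays toward zero but never strictly vanishes at any finite separation — hence the interest in finite-range alternatives. A minimal sketch of the standard Lennard-Jones form only (the Wang et al. potential itself is not reproduced here):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    # Classic 12-6 Lennard-Jones pair potential:
    #   U(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6)
    # epsilon sets the well depth, sigma the zero-crossing distance.
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)
```

The minimum sits at r = 2**(1/6) * sigma with depth -epsilon; in simulations the tail is usually truncated (and often shifted) at a cutoff, which is the ad-hoc step a genuinely finite-range potential avoids.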
Author:
Biderman, Dan, Portes, Jacob, Ortiz, Jose Javier Gonzalez, Paul, Mansheej, Greengard, Philip, Jennings, Connor, King, Daniel, Havens, Sam, Chiley, Vitaliy, Frankle, Jonathan, Blakeney, Cody, Cunningham, John P.
Low-Rank Adaptation (LoRA) is a widely used parameter-efficient finetuning method for large language models. LoRA saves memory by training only low-rank perturbations to selected weight matrices. In this work, we compare the performance of LoRA and full finetuning …
External link:
http://arxiv.org/abs/2405.09673
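The memory saving the abstract describes follows from parameter counts: instead of updating a full d_out x d_in weight matrix, LoRA trains two factors B (d_out x r) and A (r x d_in) with r small, and the forward pass never materializes the full update. A minimal sketch with illustrative names, not the paper's code:

```python
def lora_param_counts(d_in, d_out, r):
    # Trainable parameters: full finetuning updates the whole matrix;
    # LoRA trains only the factors B (d_out x r) and A (r x d_in).
    full = d_out * d_in
    lora = d_out * r + r * d_in
    return full, lora

def lora_forward(x, W, A, B, alpha=1.0):
    # y = (W + alpha * B @ A) x, computed as Wx + alpha * B(Ax),
    # so the rank-r update is applied without forming a full matrix.
    d_out, d_in, r = len(W), len(W[0]), len(A)
    Wx = [sum(W[i][j] * x[j] for j in range(d_in)) for i in range(d_out)]
    Ax = [sum(A[k][j] * x[j] for j in range(d_in)) for k in range(r)]
    BAx = [sum(B[i][k] * Ax[k] for k in range(r)) for i in range(d_out)]
    return [Wx[i] + alpha * BAx[i] for i in range(d_out)]
```

For a 1024 x 1024 matrix with rank r = 8, full finetuning trains 1,048,576 parameters while LoRA trains 16,384 — the frozen W still occupies memory, but optimizer state is kept only for A and B.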
Author:
Kevian, Darioush, Syed, Usman, Guo, Xingang, Havens, Aaron, Dullerud, Geir, Seiler, Peter, Qin, Lianhui, Hu, Bin
In this paper, we explore the capabilities of state-of-the-art large language models (LLMs) such as GPT-4, Claude 3 Opus, and Gemini 1.0 Ultra in solving undergraduate-level control problems. Controls provides an interesting case study for LLM reasoning …
External link:
http://arxiv.org/abs/2404.03647
Author:
Pauli, Patricia, Havens, Aaron, Araujo, Alexandre, Garg, Siddharth, Khorrami, Farshad, Allgöwer, Frank, Hu, Bin
Recently, semidefinite programming (SDP) techniques have shown great promise in providing accurate Lipschitz bounds for neural networks. Specifically, the LipSDP approach (Fazlyab et al., 2019) has received much attention and provides the least conservative …
External link:
http://arxiv.org/abs/2401.14033
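The baseline that SDP methods such as LipSDP improve on is the naive bound: for 1-Lipschitz activations like ReLU, a feedforward network's Lipschitz constant is at most the product of its layers' spectral norms. This product is typically loose, which is what the "least conservative" comparison refers to. A minimal sketch of the naive bound (illustrative pure-Python power iteration, assuming nonzero matrices):

```python
import math

def spectral_norm(M, iters=100):
    # Power iteration on M^T M to estimate the largest singular value of M.
    n = len(M[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(row[j] * v[j] for j in range(n)) for row in M]          # u = M v
        w = [sum(M[i][j] * u[i] for i in range(len(M))) for j in range(n)]  # w = M^T u
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    u = [sum(row[j] * v[j] for j in range(n)) for row in M]
    return math.sqrt(sum(c * c for c in u))

def naive_lipschitz_bound(weights):
    # For 1-Lipschitz activations, L(network) <= prod of layer spectral norms.
    # LipSDP tightens this by solving a semidefinite program instead.
    out = 1.0
    for W in weights:
        out *= spectral_norm(W)
    return out
```

LipSDP itself requires an SDP solver and is not reproduced here; the point of the sketch is only the quantity being improved upon.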
Author:
Sethuraman, Advaith V., Sheppard, Anja, Bagoren, Onur, Pinnow, Christopher, Anderson, Jamey, Havens, Timothy C., Skinner, Katherine A.
Published in:
The International Journal of Robotics Research. 2024;0(0)
Open-source benchmark datasets have been a critical component for advancing machine learning for robot perception in terrestrial applications. Benchmark datasets enable the widespread development of state-of-the-art machine learning methods, which re…
External link:
http://arxiv.org/abs/2401.14546
Author:
Portes, Jacob, Trott, Alex, Havens, Sam, King, Daniel, Venigalla, Abhinav, Nadeem, Moin, Sardana, Nikhil, Khudia, Daya, Frankle, Jonathan
Published in:
NeurIPS 2023
Although BERT-style encoder models are heavily used in NLP research, many researchers do not pretrain their own BERTs from scratch due to the high cost of training. In the past half-decade since BERT first rose to prominence, many advances have been …
External link:
http://arxiv.org/abs/2312.17482
Large Language Models are traditionally finetuned on large instruction datasets. However, recent studies suggest that small, high-quality datasets can suffice for general-purpose instruction following. This lack of consensus surrounding finetuning best practices …
External link:
http://arxiv.org/abs/2311.13133