Showing 1 - 10 of 7,005 for search: '"P Dhingra"'
This paper presents an innovative method for predicting shape errors in 5-axis machining using graph neural networks. The graph structure is defined with nodes representing workpiece surface points and edges denoting the neighboring relationships. …
External link:
http://arxiv.org/abs/2412.10341
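The node/edge definition in this snippet (nodes are surface sample points, edges link neighboring points) can be illustrated with a small graph-construction sketch. The k-nearest-neighbour rule below is an illustrative assumption, not necessarily the construction used in the paper:

```python
import math

def build_surface_graph(points, k=3):
    """Build an undirected k-nearest-neighbour graph over 3-D surface points.

    Nodes are the sample points themselves; each point is linked to its k
    nearest neighbours. This mirrors the node/edge description in the
    abstract, but the exact neighbourhood rule here is a hypothetical choice.
    """
    edges = set()
    for i, p in enumerate(points):
        # sort the other points by Euclidean distance and keep the k closest
        nearest = sorted(
            (j for j in range(len(points)) if j != i),
            key=lambda j: math.dist(p, points[j]),
        )[:k]
        for j in nearest:
            edges.add((min(i, j), max(i, j)))  # store each edge once, undirected
    return edges

# five toy surface samples: a 2x2 patch plus one outlying point
points = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (5, 5, 1)]
graph = build_surface_graph(points, k=2)
```

A GNN for shape-error prediction would then attach per-node features (e.g., tool-path or curvature descriptors) to these nodes and message-pass along the edge set.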
Regulatory documents are rich in nuanced terminology and specialized semantics. FRAG systems (frozen retrieval-augmented generators, built from pre-trained, i.e. frozen, components) consequently face challenges with both retriever and answering performance. …
External link:
http://arxiv.org/abs/2412.10313
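The frozen retrieval step that this snippet refers to can be sketched with a toy stand-in: a fixed (untrained) bag-of-words encoder plus cosine ranking. The regulation-style documents and the query below are invented examples, and the bag-of-words encoder is only a placeholder for a real frozen dense retriever:

```python
import math
import re
from collections import Counter

def bow_vector(text):
    """Bag-of-words term counts: a frozen, training-free stand-in encoder."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Frozen retrieval step: rank documents by similarity to the query."""
    q = bow_vector(query)
    ranked = sorted(docs, key=lambda d: cosine(q, bow_vector(d)), reverse=True)
    return ranked[:k]

# invented regulatory snippets for illustration only
docs = [
    "Article 12: data controllers must provide transparent information.",
    "Article 33: breaches must be notified within 72 hours.",
]
context = retrieve("When must a breach be notified?", docs)
# a frozen generator would then answer from a prompt such as:
prompt = f"Context: {context[0]}\nQuestion: When must a breach be notified?"
```

Because neither component is fine-tuned, mismatches between the query vocabulary and the specialized regulatory terminology degrade both retrieval and answering, which is the failure mode the abstract points at.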
Author:
Khalighinejad, Ghazal, Scott, Sharon, Liu, Ollie, Anderson, Kelly L., Stureborg, Rickard, Tyagi, Aman, Dhingra, Bhuwan
Multimodal information extraction (MIE) is crucial for scientific literature, where valuable data is often spread across text, figures, and tables. In materials science, extracting structured information from research articles can accelerate the discovery …
External link:
http://arxiv.org/abs/2410.20494
Author:
Dhingra, Aviral
Gradient descent is a widely used iterative algorithm for finding local minima in multivariate functions. However, the final iterations often either overshoot the minima or make minimal progress, making it challenging to determine an optimal stopping …
External link:
http://arxiv.org/abs/2410.19448
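The baseline this snippet describes can be sketched in a few lines: plain gradient descent with the textbook gradient-norm stopping rule. This is the conventional criterion whose limitations the abstract discusses, not the paper's proposed one:

```python
def gradient_descent(grad, x0, lr=0.1, tol=1e-6, max_iter=10_000):
    """Plain 1-D gradient descent with a gradient-norm stopping rule.

    Stops once |grad(x)| < tol, the standard heuristic; near a flat
    minimum this can mean many iterations of minimal progress, and a
    too-large learning rate can overshoot before the rule ever fires.
    """
    x = x0
    for step in range(max_iter):
        g = grad(x)
        if abs(g) < tol:   # near-stationary point: stop
            return x, step
        x -= lr * g        # move against the gradient
    return x, max_iter

# minimise f(x) = (x - 3)^2, whose gradient is 2 * (x - 3)
x_min, steps = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

On this quadratic the iterates contract toward x = 3 by a factor of 0.8 per step, so the tolerance is reached long before `max_iter`; on flatter or noisier objectives the same rule is far less decisive, which motivates better stopping criteria.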
Large Language Models (LLMs) are often augmented with external information as contexts, but this external information can sometimes be inaccurate or even intentionally misleading. We argue that robust LLMs should demonstrate situated faithfulness, …
External link:
http://arxiv.org/abs/2410.14675
We show that existing evaluations for fake news detection based on conventional sources, such as claims on fact-checking websites, result in an increasing accuracy over time for LLM-based detectors -- even after their knowledge cutoffs. This suggests …
External link:
http://arxiv.org/abs/2410.14651
Training-free embedding methods directly leverage pretrained large language models (LLMs) to embed text, bypassing the costly and complex procedure of contrastive learning. Previous training-free embedding methods have mainly focused on optimizing …
External link:
http://arxiv.org/abs/2410.14635
Author:
Ismayilzada, Mete, Circi, Defne, Sälevä, Jonne, Sirin, Hale, Köksal, Abdullatif, Dhingra, Bhuwan, Bosselut, Antoine, van der Plas, Lonneke, Ataman, Duygu
Large language models (LLMs) have demonstrated significant progress in various natural language generation and understanding tasks. However, their linguistic generalization capabilities remain questionable, raising doubts about whether these models …
External link:
http://arxiv.org/abs/2410.12656
The study of low regularity Cauchy data for nonlinear dispersive PDEs has successfully been achieved using modulation spaces $M^{p,q}$ in recent years. In this paper, we study the inhomogeneous nonlinear Schrödinger equation (INLS) $iu_t + \Delta \cdots$ …
External link:
http://arxiv.org/abs/2410.00869
Transformers have revolutionized deep learning and generative modeling to enable unprecedented advancements in natural language processing tasks and beyond. However, designing hardware accelerators for executing transformer models is challenging …
External link:
http://arxiv.org/abs/2408.03397