Showing 1 - 10 of 1,946 for search: '"P., Hima"'
Author:
Lalengmawia, Celestine, Zosiamliana, R., Lalroliana, Bernard, Hima, Lalhum, Gurung, Shivraj, Zuala, Lalhriat, Vanchhawng, Lalmuanpuia, Laref, Amel, Yvaz, A., Rai, D. P.
Pb-based perovskites are considered to be the most efficient materials for energy harvesting. However, their real-world application is limited because of their toxicity. As a result, lead-free perovskites that offer similar advantages are potential alternatives…
External link:
http://arxiv.org/abs/2412.05395
Automating end-to-end Exploratory Data Analysis (AutoEDA) is a challenging open problem, often tackled through Reinforcement Learning (RL) by learning to predict a sequence of analysis operations (FILTER, GROUP, etc.). Defining rewards for each operation… (An illustrative sketch follows the link below.)
External link:
http://arxiv.org/abs/2410.11276
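Since the abstract is cut off, the following is only an illustrative framing of AutoEDA as RL: a small environment over a DataFrame where actions are analysis operations and a per-operation reward must be defined. The operation set and the reward function are hypothetical stand-ins, not the paper's design.

```python
# Illustrative AutoEDA-as-RL sketch: an agent picks a sequence of analysis
# operations on a DataFrame. The operation set and the reward (a crude
# "view changed usefully" proxy) are hypothetical, not the paper's design.
import random
import pandas as pd

df = pd.DataFrame({
    "country": ["US", "US", "DE", "DE", "FR"],
    "sales":   [100, 150, 80, 90, 120],
})

def op_filter(d):
    return d[d["sales"] > d["sales"].median()]

def op_group(d):
    return d.groupby("country", as_index=False)["sales"].mean()

ACTIONS = {"FILTER": op_filter, "GROUP": op_group}

def reward(before, after):
    # Hypothetical per-operation reward: favor operations that narrow the
    # view without discarding everything (defining such rewards is the
    # difficulty the truncated abstract hints at).
    if after.empty:
        return -1.0
    return 1.0 - len(after) / max(len(before), 1)

state, total = df, 0.0
for _ in range(3):                       # a length-3 analysis "episode"
    name = random.choice(list(ACTIONS))  # random policy stands in for a learned one
    nxt = ACTIONS[name](state)
    total += reward(state, nxt)
    state = nxt
print(f"episode return: {total:.2f}")
```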
Author:
Ghrear, Majd, McLean, Alasdair G., Korandla, Hima B., Dastgiri, Ferdos, Spooner, Neil J. C., Vahsen, Sven E.
Detecting the topology and direction of low-energy nuclear and electronic recoils is broadly desirable in nuclear and particle physics, with applications in coherent elastic neutrino-nucleus scattering, astrophysical neutrino measurements, probing dark matter…
External link:
http://arxiv.org/abs/2410.00048
Author:
Wood, David, Lublinsky, Boris, Roytman, Alexy, Singh, Shivdeep, Adam, Constantin, Adebayo, Abdulhamid, An, Sungeun, Chang, Yuan Chi, Dang, Xuan-Hong, Desai, Nirmit, Dolfi, Michele, Emami-Gohari, Hajar, Eres, Revital, Goto, Takuya, Joshi, Dhiraj, Koyfman, Yan, Nassar, Mohammad, Patel, Hima, Selvam, Paramesvaran, Shah, Yousaf, Surendran, Saptha, Tsuzuku, Daiki, Zerfos, Petros, Daijavad, Shahrokh
Data preparation is the first, and a very important, step in any Large Language Model (LLM) development. This paper introduces an easy-to-use, extensible, and scale-flexible open-source data preparation toolkit called Data Prep Kit (DPK). DPK is a… (A hedged illustration of the general pipeline pattern follows the link below.)
External link:
http://arxiv.org/abs/2409.18164
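DPK's real API lives in its repository; the sketch below deliberately uses made-up names to illustrate only the general pattern such toolkits share: composable transforms streamed over a document collection.

```python
# Generic data-preparation pipeline pattern (hypothetical names throughout;
# this is NOT Data Prep Kit's real API, just the composable-transform idea).
from typing import Callable, Iterable

Doc = dict  # a record with at least a "text" field

def dedup(docs: Iterable[Doc]) -> Iterable[Doc]:
    seen = set()
    for d in docs:
        if d["text"] not in seen:
            seen.add(d["text"])
            yield d

def filter_short(min_chars: int) -> Callable[[Iterable[Doc]], Iterable[Doc]]:
    def step(docs):
        return (d for d in docs if len(d["text"]) >= min_chars)
    return step

def run(docs: Iterable[Doc], steps) -> list[Doc]:
    # Each step consumes and yields documents, so steps compose freely
    # and the pipeline streams instead of materializing intermediates.
    for step in steps:
        docs = step(docs)
    return list(docs)

corpus = [{"text": "hello world"}, {"text": "hello world"}, {"text": "hi"}]
print(run(corpus, [dedup, filter_short(5)]))  # -> one surviving document
```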
Animals can accomplish many incredible behavioral feats across a wide range of operational environments and scales that current robots struggle to match. One explanation for this performance gap is the extraordinary properties of the biological materials…
External link:
http://arxiv.org/abs/2408.16069
Author:
Stallone, Matt, Saxena, Vaibhav, Karlinsky, Leonid, McGinn, Bridget, Bula, Tim, Mishra, Mayank, Soria, Adriana Meza, Zhang, Gaoyuan, Prasad, Aditya, Shen, Yikang, Surendran, Saptha, Guttula, Shanmukha, Patel, Hima, Selvam, Parameswaran, Dang, Xuan-Hong, Koyfman, Yan, Sood, Atin, Feris, Rogerio, Desai, Nirmit, Cox, David D., Puri, Ruchir, Panda, Rameswar
This paper introduces long-context Granite code models that support effective context windows of up to 128K tokens. Our solution for scaling the context length of Granite 3B/8B code models from 2K/4K to 128K consists of a light-weight continual pretraining… (A sketch of one common context-extension ingredient follows the link below.)
External link:
http://arxiv.org/abs/2407.13739
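The abstract truncates before the recipe, so the following shows just one ingredient commonly paired with continual pretraining for context extension: enlarging the RoPE frequency base. The values are illustrative, not Granite's actual configuration.

```python
# Minimal RoPE sketch: raising the frequency base `theta` stretches the
# rotary position encodings so a model can be continually pretrained to
# longer contexts. Numbers here are illustrative, not Granite's recipe.
import numpy as np

def rope_angles(position: int, head_dim: int, theta: float) -> np.ndarray:
    # Per-dimension-pair frequencies: theta^(-2i/d), i = 0, 1, ..., d/2 - 1.
    freqs = theta ** (-np.arange(0, head_dim, 2) / head_dim)
    return position * freqs  # rotation angle for each dimension pair

short = rope_angles(4096,   head_dim=128, theta=10_000.0)
long_ = rope_angles(131072, head_dim=128, theta=10_000_000.0)
# With the larger base, a distant position rotates through angles comparable
# to what the shorter context saw, which eases length extrapolation.
print(short[:3], long_[:3])
```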
Author:
Abdelaziz, Ibrahim, Basu, Kinjal, Agarwal, Mayank, Kumaravel, Sadhana, Stallone, Matthew, Panda, Rameswar, Rizk, Yara, Bhargav, GP, Crouse, Maxwell, Gunasekara, Chulaka, Ikbal, Shajith, Joshi, Sachin, Karanam, Hima, Kumar, Vineet, Munawar, Asim, Neelam, Sumit, Raghu, Dinesh, Sharma, Udit, Soria, Adriana Meza, Sreedhar, Dheeraj, Venkateswaran, Praveen, Unuvar, Merve, Cox, David, Roukos, Salim, Lastras, Luis, Kapanipathi, Pavan
Large language models (LLMs) have recently shown tremendous promise in serving as the backbone of agentic systems, as demonstrated by their performance on multi-faceted, challenging benchmarks like SWE-Bench and Agent-Bench. However, to realize the… (A minimal tool-calling loop is sketched after the link below.)
External link:
http://arxiv.org/abs/2407.00121
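As a rough picture of what "backbone of an agentic system" means in practice, here is a minimal tool-calling loop. The fake_llm stub and the get_weather tool are hypothetical; a real system parses structured function calls from an actual model.

```python
# Minimal agentic tool-calling loop (hypothetical plumbing; real systems
# parse structured function calls from the model instead of this stub).
import json

TOOLS = {
    "get_weather": lambda city: {"city": city, "temp_c": 21},
}

def fake_llm(messages):
    # Stand-in for a model call: asks for the weather tool once, then
    # answers. A real backbone LLM would make this decision itself.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Prague"}}
    return {"answer": "It is 21 C in Prague."}

messages = [{"role": "user", "content": "Weather in Prague?"}]
while True:
    out = fake_llm(messages)
    if "answer" in out:
        print(out["answer"])
        break
    # Execute the requested tool and feed the result back to the model.
    result = TOOLS[out["tool"]](**out["args"])
    messages.append({"role": "tool", "content": json.dumps(result)})
```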
Author:
Bhargav, G P Shrivatsa, Neelam, Sumit, Sharma, Udit, Ikbal, Shajith, Sreedhar, Dheeraj, Karanam, Hima, Joshi, Sachindra, Dhoolia, Pankaj, Garg, Dinesh, Croutwater, Kyle, Qi, Haode, Wayne, Eric, Murdock, J William
We present an approach to building a Large Language Model (LLM) based slot-filling system to perform Dialogue State Tracking in conversational assistants serving a wide variety of industry-grade applications. Key requirements of this system include… (An illustrative slot-extraction sketch follows the link below.)
External link:
http://arxiv.org/abs/2406.08848
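One way such a slot-filling system can be framed is as constrained JSON extraction per dialogue turn. The slot schema, prompt, and stubbed model below are hypothetical examples of the setup, not the paper's exact design.

```python
# Slot-filling for dialogue state tracking, framed as constrained JSON
# extraction. Schema, prompt, and the stubbed model are hypothetical.
import json

SLOTS = {"destination": None, "date": None, "passengers": None}

PROMPT = """Extract the slots {slots} from the user turn.
Return JSON with null for slots the turn does not mention.
User: {turn}
JSON:"""

def track_state(turn: str, state: dict, llm) -> dict:
    raw = llm(PROMPT.format(slots=list(state), turn=turn))
    update = json.loads(raw)
    # Merge: a turn only overwrites the slots it actually mentions.
    return {k: update.get(k) or v for k, v in state.items()}

# Stubbed model response for demonstration.
stub = lambda _: '{"destination": "Brno", "date": "Friday", "passengers": 2}'
print(track_state("Two of us to Brno on Friday", SLOTS, stub))
```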
Efficient processing of tabular data is important in various industries, especially when working with datasets containing a large number of columns. Large language models (LLMs) have demonstrated their ability on several tasks through carefully crafted… (A sketch of wide-table prompt serialization follows the link below.)
External link:
http://arxiv.org/abs/2405.05618
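The abstract cuts off at the prompting details, so the sketch below shows only one common tactic for wide tables: prune to question-relevant columns before serializing into the prompt, since hundreds of columns blow the context budget. The keyword-overlap scoring is a naive, purely illustrative stand-in.

```python
# Prune a wide table to question-relevant columns, then serialize it into
# a prompt. The keyword-overlap column scoring is purely illustrative.
import pandas as pd

df = pd.DataFrame({
    "customer_id":    [1, 2],
    "region":         ["EU", "US"],
    "monthly_spend":  [120.0, 300.0],
    "signup_channel": ["web", "ads"],
})

def serialize_for_llm(d: pd.DataFrame, question: str, max_cols: int = 2) -> str:
    words = set(question.lower().replace("?", "").split())
    # Rank columns by how many question words their name shares.
    scored = sorted(d.columns, key=lambda c: -len(words & set(c.split("_"))))
    kept = d[scored[:max_cols]]
    header = " | ".join(kept.columns)
    rows = "\n".join(" | ".join(map(str, r))
                     for r in kept.itertuples(index=False))
    return f"Table:\n{header}\n{rows}\n\nQuestion: {question}"

print(serialize_for_llm(df, "What is the monthly spend by region?"))
```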
Author:
Mishra, Mayank, Stallone, Matt, Zhang, Gaoyuan, Shen, Yikang, Prasad, Aditya, Soria, Adriana Meza, Merler, Michele, Selvam, Parameswaran, Surendran, Saptha, Singh, Shivdeep, Sethi, Manish, Dang, Xuan-Hong, Li, Pengyuan, Wu, Kun-Lung, Zawad, Syed, Coleman, Andrew, White, Matthew, Lewis, Mark, Pavuluri, Raju, Koyfman, Yan, Lublinsky, Boris, de Bayser, Maximilien, Abdelaziz, Ibrahim, Basu, Kinjal, Agarwal, Mayank, Zhou, Yi, Johnson, Chris, Goyal, Aanchal, Patel, Hima, Shah, Yousaf, Zerfos, Petros, Ludwig, Heiko, Munawar, Asim, Crouse, Maxwell, Kapanipathi, Pavan, Salaria, Shweta, Calio, Bob, Wen, Sophia, Seelam, Seetharami, Belgodere, Brian, Fonseca, Carlos, Singhee, Amith, Desai, Nirmit, Cox, David D., Puri, Ruchir, Panda, Rameswar
Large Language Models (LLMs) trained on code are revolutionizing the software development process. Increasingly, code LLMs are being integrated into software development environments to improve the productivity of human programmers, and LLM-based agents…
External link:
http://arxiv.org/abs/2405.04324