Showing 1 - 10 of 175 for search: '"Patel, Hima"'
Author:
Stallone, Matt, Saxena, Vaibhav, Karlinsky, Leonid, McGinn, Bridget, Bula, Tim, Mishra, Mayank, Soria, Adriana Meza, Zhang, Gaoyuan, Prasad, Aditya, Shen, Yikang, Surendran, Saptha, Guttula, Shanmukha, Patel, Hima, Selvam, Parameswaran, Dang, Xuan-Hong, Koyfman, Yan, Sood, Atin, Feris, Rogerio, Desai, Nirmit, Cox, David D., Puri, Ruchir, Panda, Rameswar
This paper introduces long-context Granite code models that support effective context windows of up to 128K tokens. Our solution for scaling the context length of Granite 3B/8B code models from 2K/4K to 128K consists of a lightweight continual pretraining…
External link:
http://arxiv.org/abs/2407.13739
Efficient processing of tabular data is important in various industries, especially when working with datasets containing a large number of columns. Large language models (LLMs) have demonstrated their ability on several tasks through carefully crafted…
External link:
http://arxiv.org/abs/2405.05618
Author:
Mishra, Mayank, Stallone, Matt, Zhang, Gaoyuan, Shen, Yikang, Prasad, Aditya, Soria, Adriana Meza, Merler, Michele, Selvam, Parameswaran, Surendran, Saptha, Singh, Shivdeep, Sethi, Manish, Dang, Xuan-Hong, Li, Pengyuan, Wu, Kun-Lung, Zawad, Syed, Coleman, Andrew, White, Matthew, Lewis, Mark, Pavuluri, Raju, Koyfman, Yan, Lublinsky, Boris, de Bayser, Maximilien, Abdelaziz, Ibrahim, Basu, Kinjal, Agarwal, Mayank, Zhou, Yi, Johnson, Chris, Goyal, Aanchal, Patel, Hima, Shah, Yousaf, Zerfos, Petros, Ludwig, Heiko, Munawar, Asim, Crouse, Maxwell, Kapanipathi, Pavan, Salaria, Shweta, Calio, Bob, Wen, Sophia, Seelam, Seetharami, Belgodere, Brian, Fonseca, Carlos, Singhee, Amith, Desai, Nirmit, Cox, David D., Puri, Ruchir, Panda, Rameswar
Large Language Models (LLMs) trained on code are revolutionizing the software development process. Increasingly, code LLMs are being integrated into software development environments to improve the productivity of human programmers, and LLM-based agents…
External link:
http://arxiv.org/abs/2405.04324
Author:
Ganesan, Balaji, Pasha, Matheen Ahmed, Parkala, Srinivasa, Singh, Neeraj R, Mishra, Gayatri, Bhatia, Sumit, Patel, Hima, Naganna, Somashekar, Mehta, Sameep
Explaining neural model predictions to users requires creativity. Especially in enterprise applications, where there are costs associated with users' time, and their trust in the model predictions is critical for adoption. For link prediction in master…
External link:
http://arxiv.org/abs/2403.09806
In mapping enterprise applications, data mapping remains a fundamental part of integration development, but it is time-consuming. An increasing number of applications lack naming standards, and nested field structures further add complexity for the integration…
External link:
http://arxiv.org/abs/2307.03966
Author:
Patel, Hima Milan
The use of BRAF and MEK inhibitors for patients with BRAF-mutant melanoma is limited, as patients relapse on treatment in as little as 6 months due to resistance. We used two approaches to treat resistant melanomas based on known and novel mechanisms of…
In this paper, we investigate the effect of addressing difficult samples from a given text dataset on the downstream text classification task. We define difficult samples as being non-obvious cases for text classification by analysing them in the semantic…
External link:
http://arxiv.org/abs/2302.06155
Author:
Gupta, Nitin, Patel, Hima, Afzal, Shazia, Panwar, Naveen, Mittal, Ruhi Sharma, Guttula, Shanmukha, Jain, Abhinav, Nagalapatti, Lokesh, Mehta, Sameep, Hans, Sandeep, Lohia, Pranay, Aggarwal, Aniya, Saha, Diptikalyan
The quality of training data has a huge impact on the efficiency, accuracy and complexity of machine learning tasks. Various tools and techniques are available that assess data quality with respect to general cleaning and profiling checks. However, the…
External link:
http://arxiv.org/abs/2108.05935
Published in:
In Heliyon 15 May 2024 10(9)
Author:
Patel, Hima, Upadhyay, R.V., Parekh, Kinnari, Reis, Dennys, Oliveira, Cristiano L.P., Figueiredo Neto, A.M.
Published in:
In Journal of Magnetism and Magnetic Materials 15 January 2024 590