Showing 1 - 10 of 4,741 results for the search '"Uhlig, P."'
Reinforcement Learning from Human Feedback (RLHF) and derivative techniques like Direct Preference Optimization (DPO) are task-alignment algorithms used to repurpose general, foundational models for specific tasks. We show that applying task-alignment … (a background sketch of the DPO objective follows the link below).
External link:
http://arxiv.org/abs/2409.17673
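As background only (not a result of the paper above), the standard DPO objective from the broader literature trains a policy $\pi_\theta$ against a frozen reference model $\pi_{\mathrm{ref}}$ on preference pairs $(x, y_w, y_l)$, where $y_w$ is the preferred and $y_l$ the dispreferred response and $\beta$ is a scaling hyperparameter:

\[
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
\]

Minimizing this loss pushes the policy to assign relatively higher likelihood to preferred responses without training a separate reward model, which is what makes DPO a lighter-weight alternative to RLHF.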
ISO/IEC 17000:2020 defines conformity assessment as an "activity to determine whether specified requirements relating to a product, process, system, person or body are fulfilled". JCGM (2012) establishes a framework for accounting for measurement uncertainty … (a standard decision-rule example follows the link below).
External link:
http://arxiv.org/abs/2409.11912
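For readers unfamiliar with the JCGM treatment, a common guard-banded decision rule (standard background, not a claim of this paper) uses the expanded uncertainty $U = k\,u$ (coverage factor $k$, standard uncertainty $u$) as a guard band against an upper tolerance limit $T_U$ for a measured value $y$:

\[
\text{accept the item if } \; y \le T_U - U, \qquad \text{otherwise do not accept.}
\]

Shrinking the acceptance zone by $U$ reduces the risk of accepting a non-conforming item, at the cost of rejecting some conforming ones.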
Authors:
Gill, Sukhpal Singh, Golec, Muhammed, Hu, Jianmin, Xu, Minxian, Du, Junhui, Wu, Huaming, Walia, Guneet Kaur, Murugesan, Subramaniam Subramanian, Ali, Babar, Kumar, Mohit, Ye, Kejiang, Verma, Prabal, Kumar, Surendra, Cuadrado, Felix, Uhlig, Steve
Published in:
Springer Cluster Computing, Volume 28, Article 18, pp. 11953-11981 (2025)
Edge Artificial Intelligence (AI) incorporates a network of interconnected systems and devices that receive, cache, process, and analyze data with AI technology close to where the data is captured. Recent advancements …
External link:
http://arxiv.org/abs/2407.04053
Authors:
Gill, Sukhpal Singh, Cetinkaya, Oktay, Marrone, Stefano, Claudino, Daniel, Haunschild, David, Schlote, Leon, Wu, Huaming, Ottaviani, Carlo, Liu, Xiaoyuan, Machupalli, Sree Pragna, Kaur, Kamalpreet, Arora, Priyansh, Liu, Ji, Farouk, Ahmed, Song, Houbing Herbert, Uhlig, Steve, Ramamohanarao, Kotagiri
Recent developments in quantum computing, which uses entanglement, superposition, and other fundamental quantum concepts, promise substantial processing advantages over traditional computing. These quantum features help solve many complex problems …
External link:
http://arxiv.org/abs/2403.02240
The quantum theory of atoms in molecules (QTAIM) gives access to well-defined local atomic energies. Due to their locality, these energies are potentially interesting for fitting atomistic machine learning models, as they inform about physically relevant … (an illustrative fitting sketch follows the link below).
External link:
http://arxiv.org/abs/2403.00377
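As a generic illustration of why local atomic energies are convenient training targets (a minimal sketch on synthetic stand-in data, not the model used in the paper above), a linear per-atom energy model can be fit so that predicted atomic contributions sum to the molecular total:

```python
import numpy as np

# Synthetic stand-in data: per-atom feature vectors and molecular total energies.
# Real descriptors and QTAIM-partitioned energies would replace these random arrays.
rng = np.random.default_rng(0)
n_molecules, n_features = 200, 16
atom_counts = rng.integers(3, 12, size=n_molecules)
features = [rng.normal(size=(n, n_features)) for n in atom_counts]
true_w = rng.normal(size=n_features)
totals = np.array([(f @ true_w).sum() for f in features])
totals += rng.normal(scale=0.01, size=n_molecules)          # small noise

# Linear model: atomic energy e_i = x_i @ w, molecular energy E = sum_i e_i.
# Summing each molecule's features reduces the fit to ridge regression.
X = np.stack([f.sum(axis=0) for f in features])             # (n_molecules, n_features)
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ totals)

atomic_energies = features[0] @ w                           # per-atom predictions
print(atomic_energies.sum(), totals[0])                     # should nearly agree
```

Locality enters because each atom's contribution depends only on its own features; supervising those contributions directly with QTAIM energies, rather than only the total, is the appeal the abstract alludes to.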
Large language models have gained immense importance in recent years and have demonstrated outstanding results in solving various tasks. However, despite these achievements, many questions remain unanswered in the context of large language models. …
External link:
http://arxiv.org/abs/2401.10580
Authors:
Gill, Sukhpal Singh, Wu, Huaming, Patros, Panos, Ottaviani, Carlo, Arora, Priyansh, Pujol, Victor Casamayor, Haunschild, David, Parlikad, Ajith Kumar, Cetinkaya, Oktay, Lutfiyya, Hanan, Stankovski, Vlado, Li, Ruidong, Ding, Yuemin, Qadir, Junaid, Abraham, Ajith, Ghosh, Soumya K., Song, Houbing Herbert, Sakellariou, Rizos, Rana, Omer, Rodrigues, Joel J. P. C., Kanhere, Salil S., Dustdar, Schahram, Uhlig, Steve, Ramamohanarao, Kotagiri, Buyya, Rajkumar
Published in:
Elsevier Telematics and Informatics Reports, Volume 13, March 2024
Over the past six decades, the computing systems field has experienced significant transformations, profoundly impacting society with transformational developments, such as the Internet and the commodification of computing. Underpinned by technological …
External link:
http://arxiv.org/abs/2401.02469
Authors:
Uhlig, Carsten, Uhlig, Steffen
In this paper, we propose a test procedure based on the LASSO methodology to test the global null hypothesis of no dependence between a response variable and $p$ predictors, where $n$ observations with $n < p$ are available. … (a loosely related permutation-test sketch follows the link below).
External link:
http://arxiv.org/abs/2307.16374
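The snippet above does not describe the test itself. Purely as a loosely related illustration (not the authors' procedure), the sketch below runs a permutation test of the global null using the LASSO entry point $\lambda_{\max}$, the smallest penalty at which any coefficient becomes nonzero, as its test statistic:

```python
import numpy as np

def lambda_max(X, y):
    """Smallest LASSO penalty at which every coefficient is still zero."""
    n = X.shape[0]
    return np.max(np.abs(X.T @ (y - y.mean()))) / n

def global_null_pvalue(X, y, n_perm=1000, seed=0):
    """Permutation p-value for H0: y is independent of all columns of X."""
    rng = np.random.default_rng(seed)
    observed = lambda_max(X, y)
    null_stats = np.array([lambda_max(X, rng.permutation(y)) for _ in range(n_perm)])
    return (1 + np.sum(null_stats >= observed)) / (n_perm + 1)

# n < p setting: 50 observations, 200 predictors, no true dependence.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 200))
y = rng.normal(size=50)
print(global_null_pvalue(X, y))     # roughly uniform on (0, 1) under H0
```

Permuting the response breaks any dependence on the predictors while preserving their correlation structure, which is why the permuted statistics form a valid null reference in this n < p regime.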
Authors:
Golec, Muhammed, Walia, Guneet Kaur, Kumar, Mohit, Cuadrado, Felix, Gill, Sukhpal Singh, Uhlig, Steve
Published in:
ACM Computing Surveys 2024
Recently, academics and the corporate sector have paid attention to serverless computing, which enables dynamic scalability and a pay-per-use economic model. In serverless computing, users only pay for the time they actually use resources, enabling zero scaling … (a toy billing example follows the link below).
External link:
http://arxiv.org/abs/2310.08437
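To make the pay-per-use point concrete, a toy billing calculation (hypothetical rates, not drawn from this survey or any specific provider) charges per request and per GB-second of compute, so an idle function costs nothing:

```python
# Hypothetical serverless billing rates, for illustration only.
PRICE_PER_MILLION_REQUESTS = 0.20     # USD per million invocations
PRICE_PER_GB_SECOND = 0.0000167       # USD per GB-second of compute

def monthly_cost(requests, avg_duration_s, memory_gb):
    """Pay-per-use cost model: zero requests means zero cost (scale to zero)."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

print(monthly_cost(0, 0.2, 0.5))                      # idle month: 0.0
print(round(monthly_cost(3_000_000, 0.2, 0.5), 2))    # a busier month
```

The contrast with always-on servers is that cost tracks actual invocations and duration rather than provisioned capacity.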
The emergent ability of Large Language Models to use a small number of examples to learn to perform in novel domains and tasks is called in-context learning (ICL). In this work, we show that a much smaller model can be trained to perform ICL … (a generic few-shot prompt sketch follows the link below).
External link:
http://arxiv.org/abs/2309.08590
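For readers new to ICL, the sketch below assembles a generic few-shot prompt from labeled demonstrations followed by an unlabeled query; the sentiment task, examples, and template are hypothetical and not taken from the paper above:

```python
# Minimal few-shot (in-context learning) prompt; examples and template are
# invented purely to illustrate the input format a model sees.
demonstrations = [
    ("The film was a delight from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
    ("A serviceable but forgettable thriller.", "negative"),
]
query = "An unexpectedly moving story with superb acting."

prompt = "\n\n".join(f"Review: {text}\nSentiment: {label}"
                     for text, label in demonstrations)
prompt += f"\n\nReview: {query}\nSentiment:"   # the model continues with a label

print(prompt)
```

No parameters are updated at inference time; the demonstrations in the prompt are the only task signal the model receives.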