Showing 1 - 10 of 14 424
for search: '"A, Gouda"'
Author:
Kamizuka, Takafumi, Kawahara, Hajime, Ohsawa, Ryou, Kataza, Hirokazu, Kawata, Daisuke, Yamada, Yoshiyuki, Hirano, Teruyuki, Miyakawa, Kohei, Aizawa, Masataka, Omiya, Masashi, Yano, Taihei, Kano, Ryouhei, Wada, Takehiko, Löffler, Wolfgang, Biermann, Michael, Ramos, Pau, Isobe, Naoki, Usui, Fumihiko, Hattori, Kohei, Yoshioka, Satoshi, Tatekawa, Takayuki, Izumiura, Hideyuki, Fukui, Akihiko, Miyoshi, Makoto, Tatsumi, Daisuke, Gouda, Naoteru
Published in:
Proc. SPIE, 13099, 130992D (2024)
JASMINE is a planned Japanese space mission that aims to reveal the formation history of our Galaxy and discover habitable exoEarths. For these objectives, the JASMINE satellite performs high-precision astrometric observations of the Galactic bulge a…
External link:
http://arxiv.org/abs/2410.03149
This paper addresses the problem of uplink transmit power optimization in distributed massive multiple-input multiple-output systems, where remote radio heads (RRHs) are equipped with 1-bit analog-to-digital converters (ADCs). First, in a scenario wh… (an illustrative sketch of the 1-bit ADC model follows this entry)
External link:
http://arxiv.org/abs/2407.15416
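A minimal sketch of the 1-bit quantization model mentioned in the abstract, assuming a toy single-user, single-RRH setup (the array sizes, channel model, and power value below are illustrative assumptions, not the paper's system model): each receive antenna keeps only the signs of the in-phase and quadrature components of its baseband sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: one single-antenna user, one RRH with M antennas.
M = 8                      # RRH receive antennas
p = 0.5                    # user uplink transmit power (the quantity optimized in the paper)
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # Rayleigh channel
s = (1 + 1j) / np.sqrt(2)  # unit-energy QPSK symbol
n = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # unit-variance noise

# Unquantized received signal at the RRH antennas.
y = np.sqrt(p) * h * s + n

# 1-bit ADCs: only the sign of the real and imaginary parts survives quantization.
y_q = np.sign(y.real) + 1j * np.sign(y.imag)

print(np.round(y[:3], 2))
print(y_q[:3])
```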
Author:
Qian, Haifeng, Gonugondla, Sujan Kumar, Ha, Sungsoo, Shang, Mingyue, Gouda, Sanjay Krishna, Nallapati, Ramesh, Sengupta, Sudipta, Ma, Xiaofei, Deoras, Anoop
Speculative decoding has emerged as a powerful method to improve latency and throughput in hosting large language models. However, most existing implementations focus on generating a single sequence. Real-world generative AI applications often requir… (a generic sketch of the basic speculative decoding loop follows this entry)
External link:
http://arxiv.org/abs/2404.15778
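Since the abstract names speculative decoding, here is a generic, greedy-verification sketch of the basic single-sequence loop it builds on (the toy model interfaces and the acceptance rule are assumptions for illustration; this is not the paper's multi-sequence method): a cheap draft model proposes a block of tokens, and the expensive target model keeps the longest prefix it agrees with.

```python
from typing import Callable, List

def speculative_decode_greedy(
    draft_next: Callable[[List[int]], int],   # cheap draft model: context -> next token
    target_next: Callable[[List[int]], int],  # expensive target model: context -> next token
    prompt: List[int],
    max_new_tokens: int,
    block_size: int = 4,
) -> List[int]:
    """Greedy speculative decoding sketch: the draft proposes `block_size` tokens,
    the target verifies them, and the longest agreeing prefix is kept; at the
    first disagreement the target's own token is used instead."""
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new_tokens:
        # 1) Draft proposes a block of tokens autoregressively (cheap).
        proposal, ctx = [], list(tokens)
        for _ in range(block_size):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2) Target verifies the proposal position by position
        #    (a real system does this in a single batched forward pass).
        for t in proposal:
            expected = target_next(tokens)
            if expected == t:
                tokens.append(t)         # accepted draft token
            else:
                tokens.append(expected)  # first mismatch: keep the target's token
                break
    return tokens[: len(prompt) + max_new_tokens]

# Toy usage: "models" that count upward; the draft agrees with the target most steps.
draft = lambda ctx: ctx[-1] + 1
target = lambda ctx: ctx[-1] + 1 if len(ctx) % 5 else ctx[-1] + 2
print(speculative_decode_greedy(draft, target, [0], max_new_tokens=8))
```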
Author:
Zhang, Yuhao, Wang, Shiqi, Qian, Haifeng, Wang, Zijian, Shang, Mingyue, Liu, Linbo, Gouda, Sanjay Krishna, Ray, Baishakhi, Ramanathan, Murali Krishna, Ma, Xiaofei, Deoras, Anoop
Code generation models are not robust to small perturbations, which often lead to incorrect generations and significantly degrade the performance of these models. Although improving the robustness of code generation models is crucial to enhancing use…
External link:
http://arxiv.org/abs/2405.01567
Foundation models are a strong trend in deep learning and computer vision. These models serve as a base for applications, requiring little or no further fine-tuning by developers to integrate them. Foundation models for zero-sh…
External link:
http://arxiv.org/abs/2404.06277
We consider a cell-free massive multiple-input multiple-output system with multi-antenna access points (APs) and user equipments (UEs), where the UEs can be served in both the downlink (DL) and uplink (UL) within a resource block. We tackle the combi…
External link:
http://arxiv.org/abs/2404.03285
Author:
Athiwaratkun, Ben, Gonugondla, Sujan Kumar, Gouda, Sanjay Krishna, Qian, Haifeng, Ding, Hantian, Sun, Qing, Wang, Jun, Guo, Jiacheng, Chen, Liangfu, Bhatia, Parminder, Nallapati, Ramesh, Sengupta, Sudipta, Xiang, Bing
This study introduces bifurcated attention, a method designed to enhance language model inference in shared-context batch decoding scenarios. Our approach addresses the challenge of redundant memory IO costs, a critical factor contributing to latency… (an illustrative sketch of the shared-context setting follows this entry)
External link:
http://arxiv.org/abs/2403.08845
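As a rough sketch of the shared-context setting the abstract refers to (the single-head shapes and the NumPy formulation below are illustrative assumptions, not the paper's exact method): when every sequence in a batch shares one prompt, attention scores against the shared prefix can be computed once against the single stored copy of its keys/values, and only the small per-sequence caches are handled per sequence, instead of re-reading a replicated shared KV cache for each batch element.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy shapes: B sequences decode in parallel, all sharing one prompt.
B, d = 4, 16          # batch size, head dimension
L_ctx, L_inc = 32, 5  # shared-prompt length, per-sequence generated length

Q     = rng.standard_normal((B, d))         # one query per sequence (current step)
K_ctx = rng.standard_normal((L_ctx, d))     # shared-prefix keys (stored once)
V_ctx = rng.standard_normal((L_ctx, d))
K_inc = rng.standard_normal((B, L_inc, d))  # per-sequence incremental keys
V_inc = rng.standard_normal((B, L_inc, d))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# --- Naive: replicate the shared KV for every sequence, one big attention ---
K_full = np.concatenate([np.broadcast_to(K_ctx, (B, L_ctx, d)), K_inc], axis=1)
V_full = np.concatenate([np.broadcast_to(V_ctx, (B, L_ctx, d)), V_inc], axis=1)
scores = np.einsum('bd,bld->bl', Q, K_full) / np.sqrt(d)
out_naive = np.einsum('bl,bld->bd', softmax(scores), V_full)

# --- Split: one GEMM against the single shared copy, one against the small
#     per-sequence caches; a single softmax over the concatenated scores ---
s_ctx = Q @ K_ctx.T / np.sqrt(d)                        # (B, L_ctx), shared KV read once
s_inc = np.einsum('bd,bld->bl', Q, K_inc) / np.sqrt(d)  # (B, L_inc)
p = softmax(np.concatenate([s_ctx, s_inc], axis=1))
out_split = p[:, :L_ctx] @ V_ctx + np.einsum('bl,bld->bd', p[:, L_ctx:], V_inc)

assert np.allclose(out_naive, out_split)  # identical result, no replicated shared KV
```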
Author:
Athiwaratkun, Ben, Wang, Shiqi, Shang, Mingyue, Tian, Yuchen, Wang, Zijian, Gonugondla, Sujan Kumar, Gouda, Sanjay Krishna, Kwiatowski, Rob, Nallapati, Ramesh, Xiang, Bing
Generative models, widely utilized in various applications, can often struggle with prompts corresponding to partial tokens. This struggle stems from tokenization, where partial tokens fall out of distribution during inference, leading to incorrect o… (a toy tokenizer sketch of this failure mode follows this entry)
External link:
http://arxiv.org/abs/2403.08688
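The partial-token failure mode the abstract describes can be seen with a toy greedy longest-match tokenizer over an assumed vocabulary (the vocabulary and tokenizer below are illustrative assumptions, not the paper's tokenizer or its proposed fix): cutting a prompt in the middle of a common word yields token sequences the model rarely saw at the end of a context during training.

```python
# Toy greedy longest-match tokenizer over an assumed vocabulary, just to show
# how truncating a prompt mid-word changes its token sequence.
VOCAB = ["print", "pri", "p", "r", "i", "n", "t", "(", ")", "hello", "hel", "l", "o", " ", "e", "h"]

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        # Greedily take the longest vocabulary entry that matches at position i.
        match = max((v for v in VOCAB if text.startswith(v, i)), key=len)
        tokens.append(match)
        i += len(match)
    return tokens

print(tokenize("print(hello)"))  # ['print', '(', 'hello', ')']  -- the form seen in training
print(tokenize("print(hel"))     # ['print', '(', 'hel']         -- 'hel' rarely ends a context
print(tokenize("print(he"))      # ['print', '(', 'h', 'e']      -- even less familiar pieces
```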
Large Language Models are powerful tools for program synthesis and advanced auto-completion, but come with no guarantee that their output code is syntactically correct. This paper contributes an incremental parser that allows early rejection of synta… (a simplified sketch of the early-rejection pattern follows this entry)
External link:
http://arxiv.org/abs/2402.17988
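As a much-simplified illustration of the early-rejection idea in the abstract (the bracket-balance check below stands in for a real incremental parser and is an assumption for illustration, not the paper's implementation): during decoding, each candidate continuation is checked as a prefix, and any prefix that can no longer be completed into syntactically valid code is pruned immediately instead of being discovered invalid after generation finishes.

```python
# Simplified stand-in for an incremental parser: a prefix is viable only if its
# brackets are consistent so far. A real implementation tracks full grammar
# state, but the pruning pattern during decoding is the same.
PAIRS = {")": "(", "]": "[", "}": "{"}

def prefix_is_viable(code_prefix):
    stack = []
    for ch in code_prefix:
        if ch in "([{":
            stack.append(ch)
        elif ch in PAIRS:
            if not stack or stack[-1] != PAIRS[ch]:
                return False   # already invalid: no continuation can repair this
            stack.pop()
    return True                # valid so far (closing brackets may still be needed)

def decode_with_early_rejection(step_candidates, prefix=""):
    """At each decoding step, drop candidate continuations whose prefix is
    already unviable, instead of scoring whole invalid programs later."""
    for candidates in step_candidates:
        viable = [prefix + c for c in candidates if prefix_is_viable(prefix + c)]
        if not viable:
            raise RuntimeError("all candidates rejected at this step")
        prefix = viable[0]      # toy policy: keep the first viable candidate
    return prefix

steps = [["print("], ["x))", "x)"], [" + 1", ")"]]
print(decode_with_early_rejection(steps))  # 'print(x) + 1'
```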
Author:
Gouda, Gopal Krushna, Tiwari, Binita
Published in:
Journal of Organizational Effectiveness: People and Performance, 2023, Vol. 11, Issue 4, pp. 807-824.
External link:
http://www.emeraldinsight.com/doi/10.1108/JOEPP-07-2023-0281