Showing 1 - 10 of 437 for search: '"A Mehdad"'
Published in:
Cell Death Discovery, Vol 10, Iss 1, Pp 1-1 (2024)
External link:
https://doaj.org/article/9541c818d6be46f2921bd1d819e3c2d6
Author:
Jeon, Sungho, Yeh, Ching-Feng, Inan, Hakan, Hsu, Wei-Ning, Rungta, Rashi, Mehdad, Yashar, Bikel, Daniel
In this paper, we show that a simple self-supervised pre-trained audio model can achieve comparable inference efficiency to more complicated pre-trained models with speech transformer encoders. These speech transformers rely on mixing convolutional m…
External link:
http://arxiv.org/abs/2311.02772
Published in:
Cell Death Discovery, Vol 2, Iss 1, Pp 1-10 (2016)
Abstract: Proteasome inhibitors are emerging as a new class of chemopreventive agents and have gained huge importance as potential pharmacological tools in breast cancer treatment. Improved understanding of the role played by proteases and their speci…
External link:
https://doaj.org/article/8aff0109379f4e52837d1c29f2cca8c3
Author:
Xiong, Wenhan, Liu, Jingyu, Molybog, Igor, Zhang, Hejia, Bhargava, Prajjwal, Hou, Rui, Martin, Louis, Rungta, Rashi, Sankararaman, Karthik Abinav, Oguz, Barlas, Khabsa, Madian, Fang, Han, Mehdad, Yashar, Narang, Sharan, Malik, Kshitiz, Fan, Angela, Bhosale, Shruti, Edunov, Sergey, Lewis, Mike, Wang, Sinong, Ma, Hao
We present a series of long-context LLMs that support effective context windows of up to 32,768 tokens. Our model series are built through continual pretraining from Llama 2 with longer training sequences and on a dataset where long texts are upsampl…
External link:
http://arxiv.org/abs/2309.16039
Author:
Liu, Zechun, Oguz, Barlas, Zhao, Changsheng, Chang, Ernie, Stock, Pierre, Mehdad, Yashar, Shi, Yangyang, Krishnamoorthi, Raghuraman, Chandra, Vikas
Several post-training quantization methods have been applied to large language models (LLMs), and have been shown to perform well down to 8 bits. We find that these methods break down at lower bit precision, and investigate quantization-aware trainin…
External link:
http://arxiv.org/abs/2305.17888
We propose a new two-stage pre-training framework for video-to-text generation tasks such as video captioning and video question answering: a generative encoder-decoder model is first jointly pre-trained on massive image-text data to learn fundamenta…
External link:
http://arxiv.org/abs/2305.03204
Author:
Zala, Abhay, Cho, Jaemin, Kottur, Satwik, Chen, Xilun, Oğuz, Barlas, Mehdad, Yashar, Bansal, Mohit
There is growing interest in searching for information from large video corpora. Prior works have studied relevant tasks, such as text-based video retrieval, moment retrieval, video summarization, and video captioning in isolation, without an end-to-…
External link:
http://arxiv.org/abs/2303.16406
Author:
Lin, Sheng-Chieh, Asai, Akari, Li, Minghan, Oguz, Barlas, Lin, Jimmy, Mehdad, Yashar, Yih, Wen-tau, Chen, Xilun
Various techniques have been developed in recent years to improve dense retrieval (DR), such as unsupervised contrastive learning and pseudo-query generation. Existing DRs, however, often suffer from effectiveness tradeoffs between supervised and zer…
External link:
http://arxiv.org/abs/2302.07452
Author:
Wang, Borui, Feng, Chengcheng, Nair, Arjun, Mao, Madelyn, Desai, Jai, Celikyilmaz, Asli, Li, Haoran, Mehdad, Yashar, Radev, Dragomir
Abstractive dialogue summarization has long been viewed as an important standalone task in natural language processing, but no previous work has explored the possibility of whether abstractive dialogue summarization can also be used as a means to boo…
External link:
http://arxiv.org/abs/2212.12652
Author:
Ghoshal, Asish, Einolghozati, Arash, Arun, Ankit, Li, Haoran, Yu, Lili, Gor, Vera, Mehdad, Yashar, Yih, Scott Wen-tau, Celikyilmaz, Asli
Lack of factual correctness is an issue that still plagues state-of-the-art summarization systems despite their impressive progress on generating seemingly fluent summaries. In this paper, we show that factual inconsistency can be caused by irrelevan…
External link:
http://arxiv.org/abs/2212.09726