Showing 1 - 5 of 5 for search: '"Jannai, Daniel"'
Author:
Jamba Team, Lenz, Barak, Arazi, Alan, Bergman, Amir, Manevich, Avshalom, Peleg, Barak, Aviram, Ben, Almagor, Chen, Fridman, Clara, Padnos, Dan, Gissin, Daniel, Jannai, Daniel, Muhlgay, Dor, Zimberg, Dor, Gerber, Edden M, Dolev, Elad, Krakovsky, Eran, Safahi, Erez, Schwartz, Erez, Cohen, Gal, Shachaf, Gal, Rozenblum, Haim, Bata, Hofit, Blass, Ido, Magar, Inbal, Dalmedigos, Itay, Osin, Jhonathan, Fadlon, Julie, Rozman, Maria, Danos, Matan, Gokhman, Michael, Zusman, Mor, Gidron, Naama, Ratner, Nir, Gat, Noam, Rozen, Noam, Fried, Oded, Leshno, Ohad, Antverg, Omer, Abend, Omri, Lieber, Opher, Dagan, Or, Cohavi, Orit, Alon, Raz, Belson, Ro'i, Cohen, Roi, Gilad, Rom, Glozman, Roman, Lev, Shahar, Meirom, Shaked, Delbari, Tal, Ness, Tal, Asida, Tomer, Gal, Tom Ben, Braude, Tom, Pumerantz, Uriya, Cohen, Yehoshua, Belinkov, Yonatan, Globerson, Yuval, Levy, Yuval Peleg, Shoham, Yoav
We present Jamba-1.5, new instruction-tuned large language models based on our Jamba architecture. Jamba is a hybrid Transformer-Mamba mixture-of-experts architecture, providing high throughput and low memory usage across context lengths, while retaining …
External link:
http://arxiv.org/abs/2408.12570
We present "Human or Not?", an online game inspired by the Turing test, that measures the capability of AI chatbots to mimic humans in dialog, and of humans to tell bots from other humans. Over the course of a month, the game was played by over 1.5 m
External link:
http://arxiv.org/abs/2305.20010
Author:
Levine, Yoav, Dalmedigos, Itay, Ram, Ori, Zeldes, Yoel, Jannai, Daniel, Muhlgay, Dor, Osin, Yoni, Lieber, Opher, Lenz, Barak, Shalev-Shwartz, Shai, Shashua, Amnon, Leyton-Brown, Kevin, Shoham, Yoav
Huge pretrained language models (LMs) have demonstrated surprisingly good zero-shot capabilities on a wide variety of tasks. This gives rise to the appealing vision of a single, versatile model with a wide range of functionalities across disparate applications …
External link:
http://arxiv.org/abs/2204.10019
Pretraining Neural Language Models (NLMs) over a large corpus involves chunking the text into training examples, which are contiguous text segments of sizes processable by the neural architecture. We highlight a bias introduced by this common practice …
External link:
http://arxiv.org/abs/2110.04541
After their successful debut in natural language processing, Transformer architectures are now becoming the de facto standard in many domains. An obstacle for their deployment over new modalities is the architectural configuration: the optimal depth-to-width ratio …
External link:
http://arxiv.org/abs/2105.03928