Showing 1 - 10 of 21,624 results for the search: '"Biderman A"'
Author:
RAVEH, DANIEL danra@tauex.tau.ac.il
Published in:
Comparative Philosophy. Sep-Dec2023, Vol. 14 Issue 2, p105-118. 14p.
Academic article
This result cannot be displayed for unauthenticated users; sign in to view it.
Academic article
This result cannot be displayed for unauthenticated users; sign in to view it.
Author:
Makauskaitė, Ugnė Marija ugnemakauskaite@gmail.com
Published in:
Acta Academiae Artium Vilnensis. 2024, Issue 113, p207-235. 27p.
Author:
Longpre, Shayne, Singh, Nikhil, Cherep, Manuel, Tiwary, Kushagra, Materzynska, Joanna, Brannon, William, Mahari, Robert, Dey, Manan, Hamdy, Mohammed, Saxena, Nayan, Anis, Ahmad Mustafa, Alghamdi, Emad A., Chien, Vu Minh, Obeng-Marnu, Naana, Yin, Da, Qian, Kun, Li, Yizhi, Liang, Minnie, Dinh, An, Mohanty, Shrestha, Mataciunas, Deividas, South, Tobin, Zhang, Jianguo, Lee, Ariel N., Lund, Campbell S., Klamm, Christopher, Sileo, Damien, Misra, Diganta, Shippole, Enrico, Klyman, Kevin, Miranda, Lester JV, Muennighoff, Niklas, Ye, Seonghyeon, Kim, Seungone, Gupta, Vipul, Sharma, Vivek, Zhou, Xuhui, Xiong, Caiming, Villa, Luis, Biderman, Stella, Pentland, Alex, Hooker, Sara, Kabbara, Jad
Progress in AI is driven largely by the scale and quality of training data. Despite this, there is a deficit of empirical analysis examining the attributes of well-established datasets beyond text. In this work we conduct the largest and first-of-its-kind …
External link:
http://arxiv.org/abs/2412.17847
Author:
Alam, Mohammad Mahmudul, Oberle, Alexander, Raff, Edward, Biderman, Stella, Oates, Tim, Holt, James
Vector Symbolic Architectures (VSAs) are one approach to developing Neuro-symbolic AI, where two vectors in $\mathbb{R}^d$ are `bound' together to produce a new vector in the same space. VSAs support the commutativity and associativity of this binding …
External link:
http://arxiv.org/abs/2410.22669
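The binding operation this abstract describes can be illustrated with circular convolution, a classical VSA binding operator from Holographic Reduced Representations. This is a generic sketch of the idea, not the specific construction of the paper above:

```python
import numpy as np

def bind(a, b):
    # Circular convolution, computed via FFT: a common VSA binding
    # operator that maps two vectors in R^d to a new vector in R^d.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

rng = np.random.default_rng(0)
d = 512
a = rng.standard_normal(d) / np.sqrt(d)
b = rng.standard_normal(d) / np.sqrt(d)
c = rng.standard_normal(d) / np.sqrt(d)

ab = bind(a, b)
# Commutativity: the order of the operands does not matter.
assert np.allclose(ab, bind(b, a))
# Associativity: grouping of successive bindings does not matter.
assert np.allclose(bind(bind(a, b), c), bind(a, bind(b, c)))
```

Circular convolution is only one choice of binding operator; elementwise (Hadamard) products and XOR-based bindings are other commutative, associative options used in the VSA literature.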
Author:
Woolley, Scott
Published in:
Forbes. 3/12/2007, Vol. 179 Issue 5, p64-68. 3p. 1 Color Photograph.
Author:
Longpre, Shayne, Mahari, Robert, Lee, Ariel, Lund, Campbell, Oderinwale, Hamidah, Brannon, William, Saxena, Nayan, Obeng-Marnu, Naana, South, Tobin, Hunter, Cole, Klyman, Kevin, Klamm, Christopher, Schoelkopf, Hailey, Singh, Nikhil, Cherep, Manuel, Anis, Ahmad, Dinh, An, Chitongo, Caroline, Yin, Da, Sileo, Damien, Mataciunas, Deividas, Misra, Diganta, Alghamdi, Emad, Shippole, Enrico, Zhang, Jianguo, Materzynska, Joanna, Qian, Kun, Tiwary, Kush, Miranda, Lester, Dey, Manan, Liang, Minnie, Hamdy, Mohammed, Muennighoff, Niklas, Ye, Seonghyeon, Kim, Seungone, Mohanty, Shrestha, Gupta, Vipul, Sharma, Vivek, Chien, Vu Minh, Zhou, Xuhui, Li, Yizhi, Xiong, Caiming, Villa, Luis, Biderman, Stella, Li, Hanlin, Ippolito, Daphne, Hooker, Sara, Kabbara, Jad, Pentland, Sandy
General-purpose artificial intelligence (AI) systems are built on massive swathes of public web data, assembled into corpora such as C4, RefinedWeb, and Dolma. To our knowledge, we conduct the first, large-scale, longitudinal audit of the consent protocols …
External link:
http://arxiv.org/abs/2407.14933
Most currently deployed large language models (LLMs) undergo continuous training or additional finetuning. By contrast, most research into LLMs' internal mechanisms focuses on models at one snapshot in time (the end of pre-training), raising the question …
External link:
http://arxiv.org/abs/2407.10827
State space models (SSMs) have shown remarkable empirical performance on many long sequence modeling tasks, but a theoretical understanding of these models is still lacking. In this work, we study the learning dynamics of linear SSMs to understand how …
External link:
http://arxiv.org/abs/2407.07279
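The linear SSMs this abstract refers to follow the standard discrete-time state-space recurrence. A minimal sketch with generic parameter names (not the paper's specific setup or parameterization):

```python
import numpy as np

def linear_ssm(A, B, C, u):
    """Run the discrete-time linear SSM
    x_{t+1} = A x_t + B u_t,  y_t = C x_t,
    over an input sequence u, starting from x_0 = 0."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        ys.append(C @ x)      # read out the current state
        x = A @ x + B * u_t   # linear state update
    return np.array(ys)

# Scalar example: A = 0.5 decays the state each step, B = C = 1,
# so an impulse input produces a geometrically decaying output.
y = linear_ssm(np.array([[0.5]]), np.array([1.0]), np.array([1.0]),
               [1.0, 0.0, 0.0])
# y == [0.0, 1.0, 0.5]
```

Unrolling this recurrence shows the output is a convolution of the input with the kernel $(CB, CAB, CA^2B, \dots)$, which is the form whose learning dynamics such theoretical work typically analyzes.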