Showing 1 - 10 of 13,724 results for search: '"Barlas, A."'
Author:
Berges, Vincent-Pierre, Oğuz, Barlas, Haziza, Daniel, Yih, Wen-tau, Zettlemoyer, Luke, Ghosh, Gargi
Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, sparsely activated memory layers complement compute-heavy dense feed-forward layers, providing dedicated capacity to …
External link:
http://arxiv.org/abs/2412.09764
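The abstract above describes a trainable key-value lookup; the following is a minimal PyTorch sketch of that general mechanism, not the paper's implementation (the class name, sizes, and top-k routing are illustrative assumptions, and the product-key indexing tricks of the actual work are omitted):

```python
# Minimal sketch of a sparsely activated key-value memory layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    def __init__(self, d_model: int, n_keys: int = 4096, topk: int = 32):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model)
        self.keys = nn.Parameter(torch.randn(n_keys, d_model) / d_model**0.5)
        self.values = nn.Embedding(n_keys, d_model)  # the "extra parameters"
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.query_proj(x)                   # (batch, seq, d_model)
        scores = q @ self.keys.T                 # (batch, seq, n_keys)
        w, idx = scores.topk(self.topk, dim=-1)  # activate only top-k slots
        w = F.softmax(w, dim=-1)                 # (batch, seq, topk)
        v = self.values(idx)                     # (batch, seq, topk, d_model)
        # Weighted sum over the selected values; per-token compute scales
        # with topk, not with the total number of memory slots.
        return torch.einsum("bsk,bskd->bsd", w, v)

# usage
layer = MemoryLayer(d_model=64)
out = layer(torch.randn(2, 10, 64))  # (2, 10, 64)
```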
Non-analytic Bloch eigenstates at isolated band degeneracy points exhibit singular behavior in the quantum metric. Here, a description of superfluid weight for zero-energy flat bands in proximity to other high-energy bands is presented, where they …
External link:
http://arxiv.org/abs/2407.14919
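For context (not part of the record above): the quantum metric referenced in this abstract is the Fubini-Study metric of the Bloch states. In one standard convention,

$$g_{\mu\nu}(\mathbf{k}) = \operatorname{Re}\left[\langle \partial_{k_\mu} u_{\mathbf{k}} | \partial_{k_\nu} u_{\mathbf{k}} \rangle - \langle \partial_{k_\mu} u_{\mathbf{k}} | u_{\mathbf{k}} \rangle\langle u_{\mathbf{k}} | \partial_{k_\nu} u_{\mathbf{k}} \rangle\right],$$

which can become singular wherever the Bloch states $|u_{\mathbf{k}}\rangle$ fail to be analytic in $\mathbf{k}$, e.g. at isolated band-degeneracy points.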
Author:
Maltsev, Anna V., Barlas, Yasir Z., Hazan, Adina, Zhang, Rui, Ottolia, Michela, Goldhaber, Joshua I.
Biological systems, particularly the brain, are frequently analyzed as networks, conveying mechanistic insights into their function and pathophysiology. This is the first study of a functional network of cardiac tissue. We use calcium imaging to obtain …
External link:
http://arxiv.org/abs/2405.15841
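As a generic illustration of functional-network construction (not the authors' pipeline; the correlation measure and threshold are assumptions), a network can be built from calcium traces by thresholding pairwise correlations:

```python
# Generic sketch: build a functional network from calcium-imaging
# time series by thresholding the pairwise correlation matrix.
import numpy as np
import networkx as nx

def functional_network(traces: np.ndarray, threshold: float = 0.5) -> nx.Graph:
    """traces: (n_cells, n_timepoints) fluorescence time series."""
    corr = np.corrcoef(traces)         # (n_cells, n_cells) Pearson r
    adj = np.abs(corr) >= threshold    # keep strongly coupled cell pairs
    np.fill_diagonal(adj, False)       # no self-loops
    return nx.from_numpy_array(adj.astype(int))

# usage: 50 cells, 1000 frames of synthetic data
g = functional_network(np.random.randn(50, 1000))
print(g.number_of_nodes(), g.number_of_edges())
```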
Author:
Jiang, Guodong, Barlas, Yafis
The superfluid weight of an isolated flat band in multi-orbital superconductors contains contributions from the band's quantum metric and a lattice geometric term that depends on the orbital positions in the lattice. Since the superfluid weight is a …
External link:
http://arxiv.org/abs/2405.11260
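As background (a standard result, with convention-dependent prefactors suppressed), the quantum-metric contribution to the superfluid weight of an isolated flat band scales as

$$D_s^{\mu\nu} \propto \Delta \int_{\mathrm{BZ}} \frac{d^2 k}{(2\pi)^2}\, g^{\mu\nu}(\mathbf{k}),$$

where $\Delta$ is the pairing gap and $g^{\mu\nu}$ is the quantum metric of the flat band; the lattice geometric term mentioned in the abstract is an additional, orbital-position-dependent contribution.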
Author:
Lin, Sheng-Chieh, Gao, Luyu, Oguz, Barlas, Xiong, Wenhan, Lin, Jimmy, Yih, Wen-tau, Chen, Xilun
Alignment is a standard procedure to fine-tune pre-trained large language models (LLMs) to follow natural language instructions and serve as helpful AI assistants. We have observed, however, that the conventional alignment process fails to enhance the …
External link:
http://arxiv.org/abs/2405.01525
Author:
Aguillard, D. P., Albahri, T., Allspach, D., Anisenkov, A., Badgley, K., Baeßler, S., Bailey, I., Bailey, L., Baranov, V. A., Barlas-Yucel, E., Barrett, T., Barzi, E., Bedeschi, F., Berz, M., Bhattacharya, M., Binney, H. P., Bloom, P., Bono, J., Bottalico, E., Bowcock, T., Braun, S., Bressler, M., Cantatore, G., Carey, R. M., Casey, B. C. K., Cauz, D., Chakraborty, R., Chapelain, A., Chappa, S., Charity, S., Chen, C., Cheng, M., Chislett, R., Chu, Z., Chupp, T. E., Claessens, C., Convery, M. E., Corrodi, S., Cotrozzi, L., Crnkovic, J. D., Dabagov, S., Debevec, P. T., Di Falco, S., Di Sciascio, G., Donati, S., Drendel, B., Driutti, A., Duginov, V. N., Eads, M., Edmonds, A., Esquivel, J., Farooq, M., Fatemi, R., Ferrari, C., Fertl, M., Fienberg, A. T., Fioretti, A., Flay, D., Foster, S. B., Friedsam, H., Froemming, N. S., Gabbanini, C., Gaines, I., Galati, M. D., Ganguly, S., Garcia, A., George, J., Gibbons, L. K., Gioiosa, A., Giovanetti, K. L., Girotti, P., Gohn, W., Goodenough, L., Gorringe, T., Grange, J., Grant, S., Gray, F., Haciomeroglu, S., Halewood-Leagas, T., Hampai, D., Han, F., Hempstead, J., Hertzog, D. W., Hesketh, G., Hess, E., Hibbert, A., Hodge, Z., Hong, K. W., Hong, R., Hu, T., Hu, Y., Iacovacci, M., Incagli, M., Kammel, P., Kargiantoulakis, M., Karuza, M., Kaspar, J., Kawall, D., Kelton, L., Keshavarzi, A., Kessler, D. S., Khaw, K. S., Khechadoorian, Z., Khomutov, N. V., Kiburg, B., Kiburg, M., Kim, O., Kinnaird, N., Kraegeloh, E., Krylov, V. A., Kuchinskiy, N. A., Labe, K. R., LaBounty, J., Lancaster, M., Lee, S., Li, B., Li, D., Li, L., Logashenko, I., Campos, A. Lorente, Lu, Z., Lucà, A., Lukicov, G., Lusiani, A., Lyon, A. L., MacCoy, B., Madrak, R., Makino, K., Mastroianni, S., Miller, J. P., Miozzi, S., Mitra, B., Morgan, J. P., Morse, W. M., Mott, J., Nath, A., Ng, J. K., Nguyen, H., Oksuzian, Y., Omarov, Z., Osofsky, R., Park, S., Pauletta, G., Piacentino, G. M., Pilato, R. N., Pitts, K. T., Plaster, B., Počanić, D., Pohlman, N., Polly, C. C., Price, J., Quinn, B., Qureshi, M. U. H., Ramachandran, S., Ramberg, E., Reimann, R., Roberts, B. L., Rubin, D. L., Sakurai, M., Santi, L., Schlesier, C., Schreckenberger, A., Semertzidis, Y. K., Shemyakin, D., Sorbara, M., Stapleton, J., Still, D., Stöckinger, D., Stoughton, C., Stratakis, D., Swanson, H. E., Sweetmore, G., Sweigart, D. A., Syphers, M. J., Tarazona, D. A., Teubner, T., Tewsley-Booth, A. E., Tishchenko, V., Tran, N. H., Turner, W., Valetov, E., Vasilkova, D., Venanzoni, G., Volnykh, V. P., Walton, T., Weisskopf, A., Welty-Rieger, L., Winter, P., Wu, Y., Yu, B., Yucel, M., Zeng, Y., Zhang, C.
We present details on a new measurement of the muon magnetic anomaly, $a_\mu = (g_\mu -2)/2$. The result is based on positive muon data taken at Fermilab's Muon Campus during the 2019 and 2020 accelerator runs. The measurement uses $3.1$ GeV$/c$ polarized …
External link:
http://arxiv.org/abs/2402.15410
Author:
Russell, B. Jordan, Schossler, Matheus, Balgley, Jesse, Kapoor, Yashika, Taniguchi, T., Watanabe, K., Seidel, Alexander, Barlas, Yafis, Henriksen, Erik A.
We perform infrared magneto-spectroscopy of Landau level (LL) transitions in dual-gated bilayer graphene. At $\nu=4$ when the zeroth LL (octet) is filled, two resonances are observed indicating the opening of a gap. At $\nu=0$ when the octet is half-filled …
External link:
http://arxiv.org/abs/2312.02489
The study explores the effectiveness of the Chain-of-Thought approach, known for its proficiency in language tasks by breaking them down into sub-tasks and intermediate steps, in improving vision-language tasks that demand sophisticated perception and …
External link:
http://arxiv.org/abs/2311.09193
In recent years, advances in the large-scale pretraining of language and text-to-image models have revolutionized the field of machine learning. Yet, integrating these two modalities into a single, robust model capable of generating seamless multimodal …
External link:
http://arxiv.org/abs/2309.15564
Author:
Xiong, Wenhan, Liu, Jingyu, Molybog, Igor, Zhang, Hejia, Bhargava, Prajjwal, Hou, Rui, Martin, Louis, Rungta, Rashi, Sankararaman, Karthik Abinav, Oguz, Barlas, Khabsa, Madian, Fang, Han, Mehdad, Yashar, Narang, Sharan, Malik, Kshitiz, Fan, Angela, Bhosale, Shruti, Edunov, Sergey, Lewis, Mike, Wang, Sinong, Ma, Hao
We present a series of long-context LLMs that support effective context windows of up to 32,768 tokens. Our model series are built through continual pretraining from Llama 2 with longer training sequences and on a dataset where long texts are upsampled …
External link:
http://arxiv.org/abs/2309.16039
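The abstract above mentions upsampling long texts in the pretraining mixture; a hypothetical sketch of one way such length-based upsampling can be done (the boost factor and length cutoff are illustrative assumptions, not values from the paper):

```python
# Hypothetical sketch: draw documents with probability proportional
# to a boosted weight for long texts, so long documents are upsampled.
import random

def sample_documents(docs: list[str], n: int, long_cutoff: int = 8192,
                     boost: float = 4.0) -> list[str]:
    weights = [boost if len(d) >= long_cutoff else 1.0 for d in docs]
    return random.choices(docs, weights=weights, k=n)

# usage: 90 short docs, 10 long docs
corpus = ["short text"] * 90 + ["x" * 10_000] * 10
batch = sample_documents(corpus, n=1000)
print(sum(len(d) >= 8192 for d in batch) / len(batch))  # long-doc share
```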