Showing 1 - 10 of 172 for search: '"Luo, Xiaoliang"'
The impressive performance of large language models (LLMs) has led to their consideration as models of human language processing. Instead, we suggest that the success of LLMs arises from the flexibility of the transformer learning architecture. To ev…
External link:
http://arxiv.org/abs/2411.11061
Large language models (LLMs) have emerged as powerful tools in various domains. Recent studies have shown that LLMs can surpass humans in certain tasks, such as predicting the outcomes of neuroscience studies. What role does this leave for humans in…
External link:
http://arxiv.org/abs/2408.08083
Recently, large language models (LLMs) have outperformed human experts in predicting the results of neuroscience experiments (Luo et al., 2024). What is the basis for this performance? One possibility is that statistical patterns in that specific sci…
External link:
http://arxiv.org/abs/2405.09395
Author:
Luo, Xiaoliang, Rechardt, Akilles, Sun, Guangzhi, Nejad, Kevin K., Yáñez, Felipe, Yilmaz, Bati, Lee, Kangjoo, Cohen, Alexandra O., Borghesani, Valentina, Pashkov, Anton, Marinazzo, Daniele, Nicholas, Jonathan, Salatiello, Alessandro, Sucholutsky, Ilia, Minervini, Pasquale, Razavi, Sepehr, Rocca, Roberta, Yusifov, Elkhan, Okalova, Tereza, Gu, Nianlong, Ferianc, Martin, Khona, Mikail, Patil, Kaustubh R., Lee, Pui-Shee, Mata, Rui, Myers, Nicholas E., Bizley, Jennifer K., Musslick, Sebastian, Bilgin, Isil Poyraz, Niso, Guiomar, Ales, Justin M., Gaebler, Michael, Murty, N. Apurva Ratan, Loued-Khenissi, Leyla, Behler, Anna, Hall, Chloe M., Dafflon, Jessica, Bao, Sherry Dongqi, Love, Bradley C.
Scientific discoveries often hinge on synthesizing decades of research, a task that potentially outstrips human information processing capacities. Large language models (LLMs) offer a solution. LLMs trained on the vast scientific literature could pot…
External link:
http://arxiv.org/abs/2403.03230
Author:
Zhang, Yuanying, Liu, Fengyun, Wang, Xiaojing, Liang, Xiubing, Guo, Zheng, Luo, Xiaoliang, Deng, Jinjun, Zhang, Xingxu, Luo, Jian, Ma, Binghe
Published in:
Measurement, Vol. 242, Part E, January 2025
Author:
Xu, Haobo, Wang, Wei, Yuan, Jiansong, Guo, Chao, Hu, Fenghuan, Yang, Weixian, Luo, Xiaoliang, Cui, Jingang, Qiao, Shubin, Wang, Juan
Published in:
Sleep Medicine, Vol. 116, pp. 115-122, April 2024
Top-down attention allows neural networks, both artificial and biological, to focus on the information most relevant for a given task. This is known to enhance performance in visual perception. But it remains unclear how attention brings about its pe…
External link:
http://arxiv.org/abs/2106.11339
Author:
Dagaev, Nikolay, Roads, Brett D., Luo, Xiaoliang, Barry, Daniel N., Patil, Kaustubh R., Love, Bradley C.
Despite their impressive performance in object recognition and other tasks under standard testing conditions, deep networks often fail to generalize to out-of-distribution (o.o.d.) samples. One cause for this shortcoming is that modern architectures…
External link:
http://arxiv.org/abs/2102.06406
Author:
Chen, Yuan (cherrychen_916@hotmail.com), Luo, Xiaoliang
Published in:
NanoEthics, Vol. 18, Issue 1, pp. 1-7, April 2024
Top-down attention allows people to focus on task-relevant visual information. Is the resulting perceptual boost task-dependent in naturalistic settings? We aim to answer this with a large-scale computational experiment. First, we design a collection…
External link:
http://arxiv.org/abs/2003.00882