Showing 1 - 10 of 1,135
for search: '"Yang, Zhilin"'
Author:
Yang, Huanyu, Yang, Zhilin
In this article, we study the weak coupling limit of the following equation in $\mathbb{R}^2$: $$dX_t^\varepsilon=\frac{\hat{\lambda}}{\sqrt{\log\frac1\varepsilon}}\omega^\varepsilon(X_t^\varepsilon)dt+\nu dB_t,\quad X_0^\varepsilon=0. $$ Here $\omega^\varepsilon$ …
External link:
http://arxiv.org/abs/2405.05778
Author:
Zheng, Qinkai, Xia, Xiao, Zou, Xu, Dong, Yuxiao, Wang, Shan, Xue, Yufei, Wang, Zihan, Shen, Lei, Wang, Andi, Li, Yang, Su, Teng, Yang, Zhilin, Tang, Jie
Large pre-trained code generation models, such as OpenAI Codex, can generate syntax- and function-correct code, making the coding of programmers more productive and our pursuit of artificial general intelligence closer. In this paper, we introduce Co…
External link:
http://arxiv.org/abs/2303.17568
Author:
Ren Peiwen, Huang Zhuo, Luo Song, Liu Jia, Dong Xiaoxiang, Zhang Hua, Li Jianfeng, Yang Zhilin
Published in:
Nanophotonics, Vol 13, Iss 18, Pp 3449-3456 (2024)
Quasi-bound states in the continuum (quasi-BICs) offer unique advantages in enhancing nonlinear optical processes and advancing the development of active optical devices. Here, the tunable robust quasi-BICs resonances are experimentally achieved thro…
External link:
https://doaj.org/article/39d7b1fb73554d4192972cd404b11aac
Author:
Wang, Zhihao, Lin, Zongyu, Liu, Peiqi, Zheng, Guidong, Wen, Junjie, Chen, Xianxin, Chen, Yujun, Yang, Zhilin
Label noise is ubiquitous in various machine learning scenarios such as self-labeling with model predictions and erroneous data annotation. Many existing approaches are based on heuristics such as sample losses, which might not be flexible enough to…
External link:
http://arxiv.org/abs/2212.13767
Generative modeling has been the dominant approach for large-scale pretraining and zero-shot generalization. In this work, we challenge this convention by showing that discriminative approaches perform substantially better than generative ones on a l…
External link:
http://arxiv.org/abs/2211.08099
Natural language prompts have been shown to facilitate cross-task generalization for large language models. However, with no or limited labeled examples, the cross-task performance is highly sensitive to the choice of prompts, while selecting a high-…
External link:
http://arxiv.org/abs/2211.04668
Few-shot named entity recognition (NER) targets generalizing to unseen labels and/or domains with few labeled examples. Existing metric learning methods compute token-level similarities between query and support sets, but are not able to fully incorp…
External link:
http://arxiv.org/abs/2211.04337
Prompt-based techniques have demonstrated great potential for improving the few-shot generalization of pretrained language models. However, their performance heavily relies on the manual design of prompts and thus requires a lot of human effort. In t…
External link:
http://arxiv.org/abs/2210.17041
Published in:
European Journal of Marketing, 2023, Vol. 57, Issue 11, pp. 2974-3004.
External link:
http://www.emeraldinsight.com/doi/10.1108/EJM-03-2022-0145
Published in:
Journal of Water Process Engineering, August 2024, Vol. 65