Showing 1 - 10 of 793 for search: '"Yan Bowen"'
We propose a novel construction of the Floquet 3D toric code and Floquet $X$-cube code through the coupling of spin chains. This approach not only recovers the coupling layer construction on foliated lattices in three dimensions but also avoids the …
External link:
http://arxiv.org/abs/2410.18265
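As background for the entry above (standard material, not the paper's Floquet protocol, which is only sketched in the truncated abstract): the static toric code that such Floquet constructions target is the stabilizer Hamiltonian on a lattice with one qubit per edge,

\[
H = -\sum_v A_v - \sum_p B_p, \qquad
A_v = \prod_{e \ni v} X_e, \qquad
B_p = \prod_{e \in \partial p} Z_e,
\]

where $A_v$ is the product of Pauli $X$ operators on the edges meeting vertex $v$ and $B_p$ is the product of Pauli $Z$ operators on the edges bounding plaquette $p$.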
The rapid development of Large Vision-Language Models (LVLMs) often comes with widespread hallucination issues, making cost-effective and comprehensive assessments increasingly vital. Current approaches mainly rely on costly annotations and are not …
External link:
http://arxiv.org/abs/2409.13612
In this study, we investigate the characteristics of three-dimensional turbulent boundary layers influenced by transverse flow and pressure gradients. Our findings reveal that even without assuming an infinite sweep, a fully developed turbulent boundary …
External link:
http://arxiv.org/abs/2407.15469
The attachment-line boundary layer is critical in hypersonic flows because of its significant impact on heat transfer and aerodynamic performance. In this study, high-fidelity numerical simulations are conducted to analyze the subcritical roughness-induced …
External link:
http://arxiv.org/abs/2407.15465
We systematically analyze the representability of toric code ground states by a Restricted Boltzmann Machine with only local connections between hidden and visible neurons. This analysis is pivotal for evaluating the model's capability to represent …
External link:
http://arxiv.org/abs/2407.01451
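The entry above names a concrete ansatz: an RBM whose hidden neurons connect only to nearby visible spins. Below is a minimal, hypothetical sketch (not code from the paper) of such an ansatz, using the standard RBM amplitude psi(sigma) = exp(a·sigma) * prod_j 2 cosh(b_j + sum_i W_ji sigma_i) with the weight matrix masked to a local (banded) connectivity; the function names, window radius, system sizes, and random parameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: an RBM wavefunction amplitude with *local* hidden-visible
# connectivity. Each hidden neuron j couples only to visible spins inside a
# fixed window around it (periodic boundary used purely for illustration).

def local_weight_mask(n_visible, n_hidden, radius=1):
    """Boolean mask: hidden neuron j may connect only to nearby visible spins."""
    mask = np.zeros((n_hidden, n_visible), dtype=bool)
    for j in range(n_hidden):
        center = int(round(j * n_visible / n_hidden))
        for i in range(center - radius, center + radius + 1):
            mask[j, i % n_visible] = True
    return mask

def rbm_amplitude(sigma, a, b, W):
    """Unnormalized RBM amplitude for a spin configuration sigma in {-1,+1}^n."""
    theta = b + W @ sigma
    return np.exp(a @ sigma) * np.prod(2.0 * np.cosh(theta))

rng = np.random.default_rng(0)
n_vis, n_hid = 8, 8
mask = local_weight_mask(n_vis, n_hid, radius=1)
W = rng.normal(scale=0.1, size=(n_hid, n_vis)) * mask  # only local couplings survive
a = rng.normal(scale=0.1, size=n_vis)
b = rng.normal(scale=0.1, size=n_hid)

sigma = rng.choice([-1.0, 1.0], size=n_vis)
print(rbm_amplitude(sigma, a, b, W))
```

Whether such a locally connected ansatz can represent toric code ground states is exactly the question the cited paper analyzes.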
Large Language Models (LLMs) have demonstrated an impressive capability known as In-context Learning (ICL), which enables them to acquire knowledge from textual demonstrations without the need for parameter updates. However, many studies have highlighted …
External link:
http://arxiv.org/abs/2406.01224
Large Language Models (LLMs) have demonstrated impressive capabilities for generalizing in unseen tasks. In the Named Entity Recognition (NER) task, recent advancements have seen the remarkable improvement of LLMs in a broad range of entity domains …
External link:
http://arxiv.org/abs/2402.16602
Author:
Tang, Zecheng, Zhou, Keyan, Li, Juntao, Ding, Yuyang, Wang, Pinzheng, Yan, Bowen, Hua, Rejie, Zhang, Min
Text detoxification aims to minimize the risk of language models producing toxic content. Existing detoxification methods of directly constraining the model output or further training the model on the non-toxic corpus fail to achieve a decent balance …
External link:
http://arxiv.org/abs/2308.08295
The Kitaev spin liquid model on the honeycomb lattice offers an intriguing feature that encapsulates both Abelian and non-Abelian anyons. Recent studies suggest that the comprehensive phase diagram of possible generalized Kitaev models largely depends on …
External link:
http://arxiv.org/abs/2308.06835
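For reference on the Kitaev entry above (standard textbook form, not the generalized model whose phase diagram the truncated abstract refers to): the Kitaev honeycomb Hamiltonian couples spin-1/2 degrees of freedom on the three bond types $\gamma \in \{x, y, z\}$ of the honeycomb lattice,

\[
H = -K_x \sum_{\langle i,j\rangle_x} \sigma_i^x \sigma_j^x
    -K_y \sum_{\langle i,j\rangle_y} \sigma_i^y \sigma_j^y
    -K_z \sum_{\langle i,j\rangle_z} \sigma_i^z \sigma_j^z .
\]

Its gapped phases host Abelian anyons, while the gapless phase acquires non-Abelian Ising anyons once a time-reversal-breaking perturbation such as a magnetic field opens a gap.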
Author:
Yan, Bowen; Zeng, Lin; Lu, Yanyi; Li, Min; Lu, Weiping; Zhou, Bangfu; He, Qinghua (heqinghua@tmmu.edu.cn)
Published in:
BMC Bioinformatics. 11/6/2024, Vol. 25 Issue 1, p1-21. 21p.