Showing 1 - 10 of 182 results for search: '"Liu Xiaoze"'
Published in:
Nanophotonics, Vol 12, Iss 16, Pp 3211-3216 (2023)
A Fano resonance due to the coupling of a plasmon mode and Bragg modes is revealed without strong angular dependence, based on an Au nanoparticle on distributed Bragg reflectors (Au NPoDBRs). This Fano interference involves three-mode coupling: the nanoparticle…
External link:
https://doaj.org/article/65f2b9943bbd44f2b81a2d6ca8c40df8
Author:
Yu, Longxuan, Chen, Delin, Xiong, Siheng, Wu, Qingyang, Liu, Qingzhen, Li, Dawei, Chen, Zhikai, Liu, Xiaoze, Pan, Liangming
Causal reasoning (CR) is a crucial aspect of intelligence, essential for problem-solving, decision-making, and understanding the world. While large language models (LLMs) can generate rationales for their outputs, their ability to reliably perform ca…
External link:
http://arxiv.org/abs/2410.16676
Author:
Ma Xuezhi, Youngblood Nathan, Liu Xiaoze, Cheng Yan, Cunha Preston, Kudtarkar Kaushik, Wang Xiaomu, Lan Shoufeng
Published in:
Nanophotonics, Vol 10, Iss 3, Pp 1031-1058 (2020)
A fascinating photonic platform with a small device scale, fast operating speed, as well as low energy consumption is two-dimensional (2D) materials, thanks to their in-plane crystalline structures and out-of-plane quantum confinement. The key to fur…
External link:
https://doaj.org/article/2fb74d112a5e4227baa89ca965f00060
Reinforcement learning with human feedback (RLHF) fine-tunes a pretrained large language model (LLM) using preference datasets, enabling the LLM to generate outputs that align with human preferences. Given the sensitive nature of these preference dat…
External link:
http://arxiv.org/abs/2407.03038
Large Language Models (LLMs) have transformed machine learning but raised significant legal concerns due to their potential to produce text that infringes on copyrights, resulting in several high-profile lawsuits. The legal landscape is struggling to…
External link:
http://arxiv.org/abs/2406.12975
Large language models (LLMs) often generate inaccurate or fabricated information and generally fail to indicate their confidence, which limits their broader applications. Previous work elicits confidence from LLMs by direct or self-consistency prompt…
External link:
http://arxiv.org/abs/2405.20974
The advent of Large Language Models (LLMs) has significantly transformed the AI landscape, enhancing machine learning and AI capabilities. Factuality is a critical concern for LLMs, as they may generate factually incorrect responses. In this pa…
External link:
http://arxiv.org/abs/2404.00942
Author:
Chen, Zhuo, Zhang, Yichi, Fang, Yin, Geng, Yuxia, Guo, Lingbing, Chen, Xiang, Li, Qian, Zhang, Wen, Chen, Jiaoyan, Zhu, Yushan, Li, Jiaqi, Liu, Xiaoze, Pan, Jeff Z., Zhang, Ningyu, Chen, Huajun
Knowledge Graphs (KGs) play a pivotal role in advancing various AI applications, with the semantic web community's exploration into multi-modal dimensions unlocking new avenues for innovation. In this survey, we carefully review over 300 articles, fo…
External link:
http://arxiv.org/abs/2402.05391
Author:
Wang, Cunxiang, Liu, Xiaoze, Yue, Yuanhao, Tang, Xiangru, Zhang, Tianhang, Jiayang, Cheng, Yao, Yunzhi, Gao, Wenyang, Hu, Xuming, Qi, Zehan, Wang, Yidong, Yang, Linyi, Wang, Jindong, Xie, Xing, Zhang, Zheng, Zhang, Yue
This survey addresses the crucial issue of factuality in Large Language Models (LLMs). As LLMs find applications across diverse domains, the reliability and accuracy of their outputs become vital. We define the Factuality Issue as the probability of…
External link:
http://arxiv.org/abs/2310.07521
The objective of Entity Alignment (EA) is to identify equivalent entity pairs from multiple Knowledge Graphs (KGs) and create a more comprehensive and unified KG. The majority of EA methods have primarily focused on the structural modality of KGs, la…
External link:
http://arxiv.org/abs/2310.05364