Showing 1 - 10 of 267 for the search: '"Wang, Zerui"'
Published in:
IEEE Transactions on Cloud Computing (Volume: 12, Issue: 2, April-June 2024)
This article presents the design of an open-API-based explainable AI (XAI) service to provide feature contribution explanations for cloud AI services. Cloud AI services are widely used to develop domain-specific applications with precise learning metrics …
External link:
http://arxiv.org/abs/2411.03376
Author:
Wang, Zerui, Liu, Yan
Transformer-based models have achieved state-of-the-art performance in various computer vision tasks, including image and video analysis. However, the Transformer's complex architecture and black-box nature pose challenges for explainability, a crucial aspect …
External link:
http://arxiv.org/abs/2411.00630
Author:
Yin, Yanzhen, Zhao, Zhichen, Xu, Junbo, Wang, Zerui, Zhou, Lei, Zhou, Zhou, Yin, Yu, Huang, Di, Zhong, Gang, Ni, Xiang, Wang, Zhanshan, Cheng, Xinbin, Zhu, Jingyuan, Ou, Qingdong, Jiang, Tao
Polaritonic crystals (PoCs) have experienced significant advancements through involving hyperbolic polaritons in anisotropic materials such as $\alpha$-MoO$_{\rm 3}$, offering a promising approach for nanoscale light control and improved light-matter interaction …
External link:
http://arxiv.org/abs/2409.09782
Author:
Xu, Haoran, Liu, Ziqian, Fu, Rong, Su, Zhongling, Wang, Zerui, Cai, Zheng, Pei, Zhilin, Zhang, Xingcheng
With the evolution of large language models, traditional Transformer models become computationally demanding for lengthy sequences due to the quadratic growth in computation with respect to the sequence length. Mamba, emerging as a groundbreaking architecture …
External link:
http://arxiv.org/abs/2408.03865
Author:
Duan, Jiangfei, Zhang, Shuo, Wang, Zerui, Jiang, Lijuan, Qu, Wenwen, Hu, Qinghao, Wang, Guoteng, Weng, Qizhen, Yan, Hang, Zhang, Xingcheng, Qiu, Xipeng, Lin, Dahua, Wen, Yonggang, Jin, Xin, Zhang, Tianwei, Sun, Peng
Large Language Models (LLMs) like GPT and LLaMA are revolutionizing the AI industry with their sophisticated capabilities. Training these models requires vast GPU clusters and significant computing time, posing major challenges in terms of scalability …
External link:
http://arxiv.org/abs/2407.20018
In this study, we propose the early adoption of Explainable AI (XAI) with a focus on three properties: Quality of Explanation, where explanation summaries should be consistent across multiple XAI methods; Architectural Compatibility, for effective integration …
External link:
http://arxiv.org/abs/2403.16858
Author:
Hu, Qinghao, Ye, Zhisheng, Wang, Zerui, Wang, Guoteng, Zhang, Meng, Chen, Qiaoling, Sun, Peng, Lin, Dahua, Wang, Xiaolin, Luo, Yingwei, Wen, Yonggang, Zhang, Tianwei
Large Language Models (LLMs) have presented impressive performance across several transformative tasks. However, it is non-trivial to efficiently utilize large-scale cluster resources to develop LLMs, a process often riddled with numerous challenges …
External link:
http://arxiv.org/abs/2403.07648
Author:
Wang, Zerui, Liu, Yan
The opacity of AI models necessitates both validation and evaluation before their integration into services. To investigate these models, explainable AI (XAI) employs methods that elucidate the relationship between input features and output predictions …
External link:
http://arxiv.org/abs/2401.12261
Accurate segmentation of the surgical instrument tip is an important task for enabling downstream applications in robotic surgery, such as surgical skill assessment, tool-tissue interaction and deformation modeling, as well as surgical autonomy. However, …
External link:
http://arxiv.org/abs/2309.00957
Author:
Wang, Zerui (zxw488@case.edu), Gilliland, Tricia, Kim, Hyun Jo, Gerasimenko, Maria, Sajewski, Kailey, Camacho, Manuel V., Bebek, Gurkan, Chen, Shu G. (sgchen@uabmc.edu), Gunzler, Steven A. (Steven.Gunzler@uhhospitals.org), Kong, Qingzhong (qxk2@case.edu)
Published in:
Acta Neuropathologica Communications, Vol. 12, Issue 1, 22 Oct 2024, pp. 1-12