Showing 1 - 10 of 1,237 for search: '"Zhang, Mian"'
Author:
Song, Yunxiang, Hu, Yaowen, Zhu, Xinrui, Powell, Keith, Magalhães, Letícia, Ye, Fan, Warner, Hana, Lu, Shengyuan, Li, Xudong, Renaud, Dylan, Lippok, Norman, Zhu, Di, Vakoc, Benjamin, Zhang, Mian, Sinclair, Neil, Lončar, Marko
The rapid growth in artificial intelligence and modern communication systems demands innovative solutions for increased computational power and advanced signaling capabilities. Integrated photonics, leveraging the analog nature of electromagnetic waves…
External link:
http://arxiv.org/abs/2411.04395
Author:
Hu, Yaowen, Song, Yunxiang, Zhu, Xinrui, Guo, Xiangwen, Lu, Shengyuan, Zhang, Qihang, He, Lingyan, Franken, C. A. A., Powell, Keith, Warner, Hana, Assumpcao, Daniel, Renaud, Dylan, Wang, Ying, Magalhães, Letícia, Rosborough, Victoria, Shams-Ansari, Amirhassan, Li, Xudong, Cheng, Rebecca, Luke, Kevin, Yang, Kiyoul, Barbastathis, George, Zhang, Mian, Zhu, Di, Johansson, Leif, Beling, Andreas, Sinclair, Neil, Loncar, Marko
Here we show a photonic computing accelerator utilizing a system-level thin-film lithium niobate circuit which overcomes this limitation. Leveraging the strong electro-optic (Pockels) effect and the scalability of this platform, we demonstrate photonic…
External link:
http://arxiv.org/abs/2411.02734
Author:
Zhang, Mian, Yang, Xianjun, Zhang, Xinlu, Labrum, Travis, Chiu, Jamie C., Eack, Shaun M., Fang, Fei, Wang, William Yang, Chen, Zhiyu Zoey
There is a significant gap between patient needs and available mental health support today. In this paper, we aim to thoroughly examine the potential of using Large Language Models (LLMs) to assist professional psychotherapy. To this end, we propose…
External link:
http://arxiv.org/abs/2410.13218
Author:
Zhou, Shuang, Xu, Zidu, Zhang, Mian, Xu, Chunpu, Guo, Yawen, Zhan, Zaifu, Ding, Sirui, Wang, Jiashuo, Xu, Kaishuai, Fang, Yi, Xia, Liqiao, Yeung, Jeremy, Zha, Daochen, Melton, Genevieve B., Lin, Mingquan, Zhang, Rui
Automatic disease diagnosis has become increasingly valuable in clinical practice. The advent of large language models (LLMs) has catalyzed a paradigm shift in artificial intelligence, with growing evidence supporting the efficacy of LLMs in diagnostic…
External link:
http://arxiv.org/abs/2409.00097
While large language models (LLMs) have been thoroughly evaluated for deductive and inductive reasoning, their proficiency in abductive reasoning and holistic rule learning in interactive environments remains less explored. We introduce RULEARN, a novel…
External link:
http://arxiv.org/abs/2408.10455
Author:
He, Yang, Cheng, Long, Wang, Heming, Zhang, Yu, Meade, Roy, Vahala, Kerry, Zhang, Mian, Li, Jiang
Optical frequency division based on bulk or fiber optics provides unprecedented spectral purity for microwave oscillators. To extend the applications of this approach, the big challenges are to develop miniaturized optical frequency division oscillators…
External link:
http://arxiv.org/abs/2402.16229
One critical issue for chat systems is staying consistent about their own preferences, opinions, beliefs, and facts, which has been shown to be a difficult problem. In this work, we study methods to assess and bolster utterance consistency of chat systems.
External link:
http://arxiv.org/abs/2401.10353
Aspect Sentiment Triplet Extraction (ASTE) aims to extract the triplet of an aspect term, an opinion term, and their corresponding sentiment polarity from the review texts. Due to the complexity of language and the existence of multiple aspect terms…
External link:
http://arxiv.org/abs/2306.10042
Frequency comb generation via synchronous pumped $\chi^{(3)}$ resonator on thin-film lithium niobate
Author:
Cheng, Rebecca, Yu, Mengjie, Shams-Ansari, Amirhassan, Hu, Yaowen, Reimer, Christian, Zhang, Mian, Lončar, Marko
Resonator-based optical frequency comb generation is an enabling technology for a myriad of applications ranging from communications to precision spectroscopy. These frequency combs can be generated in nonlinear resonators driven using either continuous…
External link:
http://arxiv.org/abs/2304.12878
Current self-training methods such as standard self-training, co-training, tri-training, and others often focus on improving model performance on a single task, utilizing differences in input features, model architectures, and training processes. However…
External link:
http://arxiv.org/abs/2301.13683