Showing 1 - 10 of 1,489 for the search: '"Liao, An Ting"'
Intra-system entanglement occurs between non-separable modes within the same system. For optical systems, the various degrees of freedom of light represent different modes, and the potential use of light to create higher dimensional classical entanglement…
External link:
http://arxiv.org/abs/2412.16896
Authors:
Martin-Hernandez, Rodrigo, Gui, Guan, Plaja, Luis, Kapteyn, Henry K., Murnane, Margaret M., Liao, Chen-Ting, Porras, Miguel A., Hernandez-Garcia, Carlos
Spatiotemporal optical vortices (STOV) are space-time structured light pulses with a unique topology that couples spatial and temporal domains and carry transverse orbital angular momentum (OAM). Up to now, their generation has been limited to the vi…
External link:
http://arxiv.org/abs/2412.01716
Authors:
Lee, Chia-Ming, Cheng, Ching-Heng, Lin, Yu-Fan, Cheng, Yi-Ching, Liao, Wo-Ting, Hsu, Chih-Chung, Yang, Fu-En, Wang, Yu-Chiang Frank
Recent developments in All-in-One (AiO) RGB image restoration and prompt learning have enabled the representation of distinct degradations through prompts, allowing degraded images to be effectively addressed by a single restoration model. However, t… (see the sketch after this entry)
External link:
http://arxiv.org/abs/2411.15922
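The prompt-learning idea named in the entry above can be illustrated with a minimal, self-contained PyTorch sketch: one learned prompt embedding per degradation type modulates the features of a single restoration network. The module names, the FiLM-style modulation, and the degradation IDs are illustrative assumptions, not the architecture of the cited paper.

```python
# Minimal sketch of prompt-conditioned "All-in-One" restoration (assumed design,
# not the architecture of arXiv:2411.15922).
import torch
import torch.nn as nn

class PromptedRestorer(nn.Module):
    def __init__(self, num_degradations: int = 4, channels: int = 64, prompt_dim: int = 64):
        super().__init__()
        # One learnable prompt vector per degradation type (noise, blur, rain, haze, ...).
        self.prompts = nn.Embedding(num_degradations, prompt_dim)
        self.encoder = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        # The prompt is mapped to per-channel scale/shift that modulates encoder features.
        self.to_film = nn.Linear(prompt_dim, 2 * channels)
        self.decoder = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x: torch.Tensor, degradation_id: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(x)                                     # (B, C, H, W)
        scale, shift = self.to_film(self.prompts(degradation_id)).chunk(2, dim=-1)
        feat = feat * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        return x + self.decoder(feat)                              # residual restoration

model = PromptedRestorer()
degraded = torch.randn(2, 3, 64, 64)
ids = torch.tensor([0, 2])          # e.g. 0 = noise, 2 = rain (hypothetical labels)
restored = model(degraded, ids)     # same shape as the input
```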
In real-world applications with Large Language Models (LLMs), external retrieval mechanisms - such as Search-Augmented Generation (SAG), tool utilization, and Retrieval-Augmented Generation (RAG) - are often employed to enhance the quality of augment… (see the sketch after this entry)
External link:
http://arxiv.org/abs/2409.12558
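As orientation for the retrieval-augmented pattern named in this entry, here is a minimal RAG loop: embed documents, retrieve the nearest ones for a query, and condition a generation call on the retrieved context. The embedder, the tiny document store, and call_llm are stand-ins; only the retrieve-then-generate structure is being illustrated.

```python
# Minimal retrieval-augmented generation (RAG) loop with placeholder components.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedder; swap in a real sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

documents = [
    "Breeze-7B is an open-source language model based on Mistral-7B.",
    "Spatiotemporal optical vortices carry transverse orbital angular momentum.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    scores = doc_vectors @ embed(query)              # cosine similarity (unit vectors)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def call_llm(prompt: str) -> str:
    return "(LLM response conditioned on the retrieved context)"  # stand-in

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return call_llm(prompt)

print(answer("Which base model does Breeze-7B build on?"))
```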
Authors:
Hsu, Chan-Jan, Chen, Yi-Chang, Liao, Feng-Ting, Ho, Pei-Chen, Wang, Yu-Hsiang, Hsu, Po-Chun, Shiu, Da-shan
We introduce "Generative Fusion Decoding" (GFD), a novel shallow fusion framework, utilized to integrate Large Language Models (LLMs) into multi-modal text recognition systems such as automatic speech recognition (ASR) and optical character recognition… (see the sketch after this entry)
External link:
http://arxiv.org/abs/2405.14259
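GFD is described in this entry as a shallow fusion framework. For background, the sketch below shows ordinary shallow fusion at a single decoding step: a weighted sum of recognizer and LLM log-probabilities. It assumes the two models share a vocabulary, which is precisely the restriction GFD is designed to relax, so treat this as context rather than the paper's method.

```python
# Standard shallow fusion at one decoding step (shared vocabulary assumed).
import torch

def shallow_fusion_step(asr_logprobs: torch.Tensor,
                        llm_logprobs: torch.Tensor,
                        lam: float = 0.3) -> int:
    """Pick the next token from log P_asr + lam * log P_llm."""
    fused = asr_logprobs + lam * llm_logprobs
    return int(torch.argmax(fused))

vocab = 8
asr = torch.log_softmax(torch.randn(vocab), dim=-1)   # stand-in recognizer scores
llm = torch.log_softmax(torch.randn(vocab), dim=-1)   # stand-in language-model scores
next_token = shallow_fusion_step(asr, llm)
```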
Authors:
Tanksalvala, Michael, Porter, Christina L., Esashi, Yuka, Wang, Bin, Jenkins, Nicholas W., Zhang, Zhe, Miley, Galen P., Knobloch, Joshua L., McBennett, Brendan, Horiguchi, Naoto, Yazdi, Sadegh, Zhou, Jihan, Jacobs, Matthew N., Bevis, Charles S., Karl Jr., Robert M., Johnsen, Peter, Ren, David, Waller, Laura, Adams, Daniel E., Cousin, Seth L., Liao, Chen-Ting, Miao, Jianwei, Gerrity, Michael, Kapteyn, Henry C., Murnane, Margaret M.
Published in:
Science Advances 7(5), eabd9667 (2021)
Next-generation nano and quantum devices have increasingly complex 3D structure. As the dimensions of these devices shrink to the nanoscale, their performance is often governed by interface quality or precise chemical or dopant composition. Here we p…
External link:
http://arxiv.org/abs/2404.02170
Breeze-7B is an open-source language model based on Mistral-7B, designed to address the need for improved language comprehension and chatbot-oriented capabilities in Traditional Chinese. This technical report provides an overview of the additional pr…
External link:
http://arxiv.org/abs/2403.02712
Authors:
Lu, Xingyuan, Zou, Ji, Pham, Minh, Rana, Arjun, Liao, Chen-Ting, Subramanian, Emma Cating, Wu, Xuefei, Lo, Yuan Hung, Bevis, Charles S., Karl Jr, Robert M., Lepadatu, Serban, Yu, Young-Sang, Tserkovnyak, Yaroslav, Russell, Thomas P., Shapiro, David A., Kapteyn, Henry C., Murnane, Margaret M., Streubel, Robert, Miao, Jianwei
We use soft x-ray vector-ptychographic tomography to determine the three-dimensional magnetization field in superparamagnetic nanoparticles self-assembled at the liquid-liquid interface and reveal the magnetic order induced by layered structure. The…
External link:
http://arxiv.org/abs/2401.01284
The evaluation of large language models is an essential task in the field of language understanding and generation. As language models continue to advance, the need for effective benchmarks to assess their performance has become imperative. In the co…
External link:
http://arxiv.org/abs/2309.08448
In this work, we propose a method to create domain-sensitive speech recognition models that utilize textual domain information by conditioning their generation on a given text prompt. This is accomplished by fine-tuning a pre-trained, end-to-end model… (see the sketch after this entry)
External link:
http://arxiv.org/abs/2307.10274
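This entry describes conditioning an end-to-end recognizer's generation on a domain text prompt via fine-tuning. The sketch below shows only the simpler, related inference-time mechanism: biasing off-the-shelf Whisper decoding with an initial text prompt (the openai-whisper transcribe call accepts initial_prompt). The model size, prompt text, and audio file name are assumptions, and no fine-tuning is performed here.

```python
# Inference-time text-prompt conditioning with off-the-shelf Whisper
# (openai-whisper). Related to, but not the same as, the fine-tuning approach above.
import whisper

model = whisper.load_model("base")

domain_prompt = "Cardiology consultation: arrhythmia, stent, echocardiogram."
result = model.transcribe(
    "clinic_recording.wav",        # hypothetical audio file
    initial_prompt=domain_prompt,  # domain text the decoder is conditioned on
)
print(result["text"])
```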