Showing 1 - 10 of 9,935 results for the search: "LIU, Qin"
The electron dynamics and SiO2 etching profile evolution in capacitively coupled Ar/CHF3 plasmas driven by sawtooth waveforms are investigated based on a one-dimensional fluid/Monte Carlo (MC) model coupled with an etching profile evolution model. … (See the sawtooth-waveform sketch below.)
External link:
http://arxiv.org/abs/2411.07839
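The entry above studies discharges driven by tailored sawtooth waveforms. As a hedged illustration of that driving-voltage idea only (not of the paper's one-dimensional fluid/Monte Carlo model), the sketch below builds a sawtooth-like waveform from a finite sum of harmonics of a 13.56 MHz fundamental; the 1/k amplitude scaling, the harmonic count, and the amplitude are assumptions.

import numpy as np

def sawtooth_waveform(t, f0=13.56e6, n_harmonics=5, v0=100.0, sign=+1):
    # Sawtooth-like tailored voltage built from a finite Fourier series of
    # consecutive harmonics of the fundamental f0. The 1/k amplitudes and the
    # sign convention are illustrative assumptions; the paper's exact
    # normalization may differ.
    v = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        v += (v0 / k) * np.sin(2 * np.pi * k * f0 * t)
    return sign * v

# One RF period sampled at 1000 points; flipping the sign switches between
# "sawtooth-up" and "sawtooth-down" waveforms.
t = np.linspace(0.0, 1.0 / 13.56e6, 1000)
v_up = sawtooth_waveform(t, sign=+1)
v_down = sawtooth_waveform(t, sign=-1)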
Author:
Liu, Qin, Wang, Jianfeng, Yang, Zhengyuan, Li, Linjie, Lin, Kevin, Niethammer, Marc, Wang, Lijuan
Semi-supervised video object segmentation (VOS) has been largely driven by space-time memory (STM) networks, which store past frame features in a spatiotemporal memory to segment the current frame via softmax attention. However, STM networks face memory … (See the attention-read sketch below.)
External link:
http://arxiv.org/abs/2411.02818
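The snippet above summarizes how space-time memory networks operate: features of past frames are stored in a memory, and the current frame is segmented by attending to that memory with softmax attention. Below is a minimal NumPy sketch of such a memory read; the shapes, the scaled dot-product affinity, and the toy data are illustrative assumptions, not the cited paper's exact formulation.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def memory_read(query_key, memory_key, memory_value):
    # Read from a spatiotemporal memory with softmax attention.
    # query_key:    (Nq, C)  keys of the current frame's pixels
    # memory_key:   (Nm, C)  keys of pixels stored from past frames
    # memory_value: (Nm, Cv) stored values (mask/feature information)
    # Returns (Nq, Cv): aggregated features for the current frame.
    affinity = query_key @ memory_key.T / np.sqrt(query_key.shape[1])
    weights = softmax(affinity, axis=1)  # attention over all memory pixels
    return weights @ memory_value

# Toy example: the memory grows with every stored frame, which is the cost
# the truncated sentence above alludes to.
rng = np.random.default_rng(0)
mem_k = rng.normal(size=(4096, 64))   # pixels from several past frames
mem_v = rng.normal(size=(4096, 32))
q_k = rng.normal(size=(1024, 64))     # pixels of the current frame
out = memory_read(q_k, mem_k, mem_v)  # shape (1024, 32)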
Existing preference alignment is a one-size-fits-all alignment mechanism, where the part of the large language model (LLM) parametric knowledge with non-preferred features is uniformly blocked to all the users. However, this part of knowledge can be …
External link:
http://arxiv.org/abs/2410.14676
Author:
Wang, Bin, Choudhuri, Anwesa, Zheng, Meng, Gao, Zhongpai, Planche, Benjamin, Deng, Andong, Liu, Qin, Chen, Terrence, Bagci, Ulas, Wu, Ziyan
Interactive segmentation aims to accurately segment target objects with minimal user interactions. However, current methods often fail to accurately separate target objects from the background, due to a limited understanding of order, the relative depth … (See the click-encoding sketch below.)
External link:
http://arxiv.org/abs/2410.12214
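For context on the interactive-segmentation setting mentioned above, the sketch below shows one common, generic way to encode sparse user clicks as a distance-map channel fed to a segmentation network. It is an illustration of the task's input encoding under assumed conventions, not the order/depth-aware method the entry describes.

import numpy as np

def click_map(clicks, height, width):
    # Encode user clicks as a normalized minimum-distance map; a value near 0
    # means "close to a click". Returns an all-ones map when there are no clicks.
    if not clicks:
        return np.ones((height, width), dtype=np.float32)
    ys, xs = np.mgrid[0:height, 0:width]
    d = np.full((height, width), np.inf, dtype=np.float32)
    for (cy, cx) in clicks:
        d = np.minimum(d, np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2))
    return (d / d.max()).astype(np.float32)

# Two foreground clicks and one background click become two extra input
# channels alongside the RGB image.
fg = click_map([(40, 50), (60, 80)], 128, 128)
bg = click_map([(10, 10)], 128, 128)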
Author:
Liu, Qin, Shang, Chao, Liu, Ling, Pappas, Nikolaos, Ma, Jie, John, Neha Anna, Doss, Srikanth, Marquez, Lluis, Ballesteros, Miguel, Benajiba, Yassine
The safety alignment ability of Vision-Language Models (VLMs) is prone to be degraded by the integration of the vision module compared to its LLM backbone. We investigate this phenomenon, dubbed "safety alignment degradation" in this paper, and …
External link:
http://arxiv.org/abs/2410.09047
Large Vision-Language Models (LVLMs) have demonstrated impressive capabilities for capturing and reasoning over multimodal inputs. However, these models are prone to parametric knowledge conflicts, which arise from inconsistencies of represented knowledge …
External link:
http://arxiv.org/abs/2410.03659
The advancement of Large Language Models (LLMs) has significantly impacted various domains, including Web search, healthcare, and software development. However, as these models scale, they become more vulnerable to cybersecurity risks, particularly …
External link:
http://arxiv.org/abs/2409.19993
Retrieval Augmented Generation (RAG) improves large language models (LMs) by incorporating non-parametric knowledge through evidence retrieval from external sources. However, it often struggles to filter out inconsistent and irrelevant information … (See the minimal RAG sketch below.)
External link:
http://arxiv.org/abs/2409.12468
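The entry above describes the basic Retrieval Augmented Generation pipeline: retrieve evidence from an external source and condition the language model on it. The sketch below is a minimal, self-contained illustration of that pipeline; the hashing bag-of-words embed function and the prompt template are placeholders I introduce for illustration, and the filtering of inconsistent or irrelevant evidence that the paper targets is deliberately left out.

import numpy as np

def embed(text, dim=256):
    # Toy bag-of-words hashing embedding, used only to keep the sketch
    # self-contained; a real system would use a trained embedding model.
    v = np.zeros(dim)
    for token in text.lower().split():
        v[hash(token) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def retrieve(query, corpus, k=2):
    # Return the top-k passages by cosine similarity to the query.
    q = embed(query)
    scored = sorted(corpus, key=lambda p: float(embed(p) @ q), reverse=True)
    return scored[:k]

def build_prompt(query, corpus, k=2):
    # Assemble a RAG-style prompt: retrieved evidence followed by the question.
    evidence = "\n".join(f"- {p}" for p in retrieve(query, corpus, k))
    return f"Use the evidence below to answer.\nEvidence:\n{evidence}\nQuestion: {query}"

corpus = [
    "The Eiffel Tower is located in Paris.",
    "Softmax attention normalizes scores into a distribution.",
    "Paris is the capital of France.",
]
print(build_prompt("Where is the Eiffel Tower?", corpus))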
Large language models (LLMs) have acquired the ability to handle longer context lengths and understand nuances in text, expanding their dialogue capabilities beyond a single utterance. A popular user-facing application of LLMs is the multi-turn chat … (See the chat-history sketch below.)
External link:
http://arxiv.org/abs/2407.04151
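The entry above concerns multi-turn chat over long contexts. As a generic illustration (not the paper's method), the sketch below keeps a role-tagged message history and trims it to a rough context budget; the character-based budget is an assumption standing in for real token counting with the model's tokenizer.

def append_turn(history, role, content):
    # Append one turn to a multi-turn chat history
    # (role is "system", "user", or "assistant").
    history.append({"role": role, "content": content})
    return history

def truncate_to_budget(history, max_chars=4000):
    # Keep the system message plus the most recent turns that fit the budget,
    # preserving chronological order among the kept turns.
    system = [m for m in history if m["role"] == "system"]
    rest = [m for m in history if m["role"] != "system"]
    kept, used = [], sum(len(m["content"]) for m in system)
    for m in reversed(rest):
        if used + len(m["content"]) > max_chars:
            break
        kept.append(m)
        used += len(m["content"])
    return system + list(reversed(kept))

history = []
append_turn(history, "system", "You are a helpful assistant.")
append_turn(history, "user", "Summarize retrieval augmented generation.")
append_turn(history, "assistant", "It retrieves evidence and conditions the answer on it.")
append_turn(history, "user", "And what breaks in long multi-turn chats?")
prompt_messages = truncate_to_budget(history)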
Author:
Du, Jiangshu, Wang, Yibo, Zhao, Wenting, Deng, Zhongfen, Liu, Shuaiqi, Lou, Renze, Zou, Henry Peng, Venkit, Pranav Narayanan, Zhang, Nan, Srinath, Mukund, Zhang, Haoran Ranran, Gupta, Vipul, Li, Yinghui, Li, Tao, Wang, Fei, Liu, Qin, Liu, Tianlin, Gao, Pengzhi, Xia, Congying, Xing, Chen, Cheng, Jiayang, Wang, Zhaowei, Su, Ying, Shah, Raj Sanjay, Guo, Ruohao, Gu, Jing, Li, Haoran, Wei, Kangda, Wang, Zihao, Cheng, Lu, Ranathunga, Surangika, Fang, Meng, Fu, Jie, Liu, Fei, Huang, Ruihong, Blanco, Eduardo, Cao, Yixin, Zhang, Rui, Yu, Philip S., Yin, Wenpeng
This work is motivated by two key trends. On one hand, large language models (LLMs) have shown remarkable versatility in various generative tasks such as writing, drawing, and question answering, significantly reducing the time required for many routine …
External link:
http://arxiv.org/abs/2406.16253