Showing 1 - 10 of 6,540 for search: '"WANG, YIFEI"'
Despite multimodal sentiment analysis being a fertile research ground that merits further investigation, current approaches incur high annotation costs and suffer from label ambiguity, which hinders the acquisition of high-quality labeled data. Furthermore …
External link:
http://arxiv.org/abs/2412.09784
Author:
Ye, Nanyang, Sun, Qiao, Wang, Yifei, Yang, Liujia, Zhou, Jundong, Wang, Lei, Yang, Guang-Zhong, Wang, Xinbing, Zhou, Chenghu, Ren, Wei, Gu, Leilei, Wu, Huaqiang, Gu, Qinying
Analog computing using non-volatile memristors has emerged as a promising solution for energy-efficient deep learning. New materials, such as perovskite-based memristors, have recently attracted attention due to their cost-effectiveness, energy efficiency, and fle…
External link:
http://arxiv.org/abs/2412.02779
Contrastive learning has been a leading paradigm for self-supervised learning, but it is widely observed that it comes at the price of sacrificing useful features (e.g., colors) by being invariant to data augmentations. Given this limitation, there has …
External link:
http://arxiv.org/abs/2411.06508
Enhancing node-level Out-Of-Distribution (OOD) generalization on graphs remains a crucial area of research. In this paper, we develop a Structural Causal Model (SCM) to theoretically dissect the performance of two prominent invariant learning methods …
External link:
http://arxiv.org/abs/2411.02847
Author:
Fang, Lizhe, Wang, Yifei, Liu, Zhaoyang, Zhang, Chenheng, Jegelka, Stefanie, Gao, Jinyang, Ding, Bolin, Wang, Yisen
Handling long-context inputs is crucial for large language models (LLMs) in tasks such as extended conversations, document summarization, and many-shot in-context learning. While recent approaches have extended the context windows of LLMs and employe…
External link:
http://arxiv.org/abs/2410.23771
Deep learning models often suffer from a lack of interpretability due to polysemanticity, where individual neurons are activated by multiple unrelated semantics, resulting in unclear attributions of model behavior. Recent advances in monosemanticity …
External link:
http://arxiv.org/abs/2410.21331
In this work, we explore the mechanism of in-context learning (ICL) on out-of-distribution (OOD) tasks that were not encountered during training. To achieve this, we conduct synthetic experiments where the objective is to learn OOD mathematical funct…
External link:
http://arxiv.org/abs/2410.09695
Author:
Wang, Yifei
Graphene, a monolayer of carbon atoms, has gained prominence for augmenting existing chip-scale photonic and optoelectronic applications, especially for sensing optical radiation, owing to its distinctive electrical properties and bandgap, as well as i…
External link:
http://hdl.handle.net/10919/109686
Author:
Wang, Yifei
This dissertation consists of three chapters on understanding the opportunities and challenges of using individualized data in marketing, in the contexts of the mobile economy, retailing, and education. The first chapter investigates the impact of market…
External link:
https://hdl.handle.net/1721.1/155845
With the advances in Large Language Models (LLMs), an increasing number of open-source software projects are using LLMs as their core functional component. Although research and practice on LLMs are attracting considerable interest, no dedicated s…
External link:
http://arxiv.org/abs/2409.16559