Showing 1 - 10 of 23 for the search: '"Zeng, Belinda"'
Author:
Chen, Changyou, Ding, Han, Sisman, Bunyamin, Xu, Yi, Xie, Ouye, Yao, Benjamin Z., Tran, Son Dinh, Zeng, Belinda
Diffusion-based generative modeling has been achieving state-of-the-art results on various generation tasks. Most diffusion models, however, are limited to single-generation modeling. Can we generalize diffusion models with the ability of multi-modal …
External link:
http://arxiv.org/abs/2407.17571
Author:
Zheng, Da, Song, Xiang, Zhu, Qi, Zhang, Jian, Vasiloudis, Theodore, Ma, Runjie, Zhang, Houyu, Wang, Zichen, Adeshina, Soji, Nisa, Israt, Mottini, Alejandro, Cui, Qingjun, Rangwala, Huzefa, Zeng, Belinda, Faloutsos, Christos, Karypis, George
Published in:
KDD 2024
Graph machine learning (GML) is effective in many business applications. However, making GML easy to use and applicable to industry applications with massive datasets remains challenging. We developed GraphStorm, which provides an end-to-end solution …
External link:
http://arxiv.org/abs/2406.06022
Author:
Rizve, Mamshad Nayeem, Fei, Fan, Unnikrishnan, Jayakrishnan, Tran, Son, Yao, Benjamin Z., Zeng, Belinda, Shah, Mubarak, Chilimbi, Trishul
In this paper, we propose VidLA, an approach for video-language alignment at scale. There are two major limitations of previous video-language alignment approaches. First, they do not capture both short-range and long-range temporal dependencies and …
External link:
http://arxiv.org/abs/2403.14870
Author:
He, Yifei, Zhou, Shiji, Zhang, Guojun, Yun, Hyokun, Xu, Yi, Zeng, Belinda, Chilimbi, Trishul, Zhao, Han
Multi-task learning (MTL) considers learning a joint model for multiple tasks by optimizing a convex combination of all task losses. To solve the optimization problem, existing methods use an adaptive weight updating scheme, where task weights are dynamically … (see the illustrative sketch below this record)
External link:
http://arxiv.org/abs/2402.02009
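The abstract above frames MTL as minimizing a convex combination of per-task losses with adaptively updated weights. As a rough illustration only, here is a minimal PyTorch sketch of that setup; the softmax-based weight update and all names are assumptions made for illustration, not the scheme proposed in the paper.

```python
# Illustrative sketch (assumed, not the paper's method): combine several task
# losses with non-negative weights that sum to 1, updating the weights from
# the current loss magnitudes.
import torch

def combined_loss(task_losses, weights):
    """Convex combination of task losses: weights >= 0 and sum to 1."""
    return sum(w * l for w, l in zip(weights, task_losses))

# toy losses produced by a shared multi-task model
task_losses = [torch.tensor(0.8), torch.tensor(1.3), torch.tensor(0.5)]

# adaptive weighting (illustrative): up-weight tasks with larger loss,
# then renormalize via softmax so the combination stays convex
weights = torch.softmax(torch.stack(task_losses), dim=0)

total = combined_loss(task_losses, weights)  # scalar loss to backpropagate
```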
Pre-training is known to generate universal representations for downstream tasks in large-scale deep learning such as large language models. Existing literature, e.g., \cite{kim2020adversarial}, empirically observes that the downstream tasks can inherit …
External link:
http://arxiv.org/abs/2401.15248
Author:
Liu, Zixuan, Hiranandani, Gaurush, Qian, Kun, Huang, Eddie W., Xu, Yi, Zeng, Belinda, Subbian, Karthik, Wang, Sheng
Developing text mining approaches to mine aspects from customer reviews has been well-studied due to its importance in understanding customer needs and product attributes. In contrast, it remains unclear how to predict the future emerging aspects of …
External link:
http://arxiv.org/abs/2310.04865
Graph-Aware Language Model Pre-Training on a Large Graph Corpus Can Help Multiple Graph Applications
Author:
Xie, Han, Zheng, Da, Ma, Jun, Zhang, Houyu, Ioannidis, Vassilis N., Song, Xiang, Ping, Qing, Wang, Sheng, Yang, Carl, Xu, Yi, Zeng, Belinda, Chilimbi, Trishul
Model pre-training on large text corpora has been demonstrated to be effective for various downstream applications in the NLP domain. In the graph mining domain, a similar analogy can be drawn for pre-training graph models on large graphs in the hope of be…
External link:
http://arxiv.org/abs/2306.02592
Author:
Jiang, Qian, Chen, Changyou, Zhao, Han, Chen, Liqun, Ping, Qing, Tran, Son Dinh, Xu, Yi, Zeng, Belinda, Chilimbi, Trishul
Contrastive loss has been increasingly used in learning representations from multiple modalities. In the limit, the nature of the contrastive loss encourages modalities to exactly match each other in the latent space. Yet it remains an open question … (see the illustrative sketch below this record)
External link:
http://arxiv.org/abs/2303.05952
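The abstract notes that, in the limit, a multimodal contrastive loss drives the two modalities to coincide in the shared latent space. Below is a minimal CLIP-style symmetric InfoNCE sketch, added purely for illustration; the temperature, batch construction, and variable names are assumptions, not the formulation analyzed in the paper.

```python
# Illustrative CLIP-style contrastive loss between two modalities (assumed
# sketch, not the paper's exact objective).
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    # unit-normalize so similarity depends only on direction in latent space
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # pairwise cosine similarities; the diagonal holds the matched pairs
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0))

    # symmetric cross-entropy: pulls matched pairs together and, in the limit,
    # pushes the two modalities toward identical representations
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

loss = contrastive_loss(torch.randn(4, 128), torch.randn(4, 128))
```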
Author:
Ioannidis, Vassilis N., Song, Xiang, Zheng, Da, Zhang, Houyu, Ma, Jun, Xu, Yi, Zeng, Belinda, Chilimbi, Trishul, Karypis, George
Can we combine heterogeneous graph structure with text to learn high-quality semantic and behavioural representations? Graph neural networks (GNNs) encode numerical node attributes and graph structure to achieve impressive performance in a variety of … (see the illustrative sketch below this record)
External link:
http://arxiv.org/abs/2206.10781
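The abstract states that GNNs encode numerical node attributes together with graph structure. The sketch below is a minimal mean-aggregation message-passing layer in plain PyTorch, included only to illustrate that idea; it is not the heterogeneous graph-plus-text model the paper proposes, and the class and variable names are assumptions.

```python
# Illustrative message-passing layer: each node averages its neighbors'
# features (graph structure) and mixes them with its own attributes.
import torch
import torch.nn as nn

class MeanGNNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.self_proj = nn.Linear(in_dim, out_dim)
        self.neigh_proj = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: dense [N, N] adjacency matrix; row-normalize for mean pooling
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = (adj @ x) / deg
        return torch.relu(self.self_proj(x) + self.neigh_proj(neigh))

# toy graph: 3 nodes with 4-dimensional attributes
x = torch.randn(3, 4)
adj = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
out = MeanGNNLayer(4, 8)(x, adj)
```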
Author:
Sun, Xiaodi, Rajagopalan, Sunny, Nigam, Priyanka, Lu, Weiyi, Xu, Yi, Zeng, Belinda, Chilimbi, Trishul
Recent research has shown that large language models pretrained using unsupervised approaches can achieve significant performance improvements on many downstream tasks. Typically, when adapting these language models to downstream tasks, like a classification … (see the illustrative sketch below this record)
External link:
http://arxiv.org/abs/2206.02982
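The abstract describes adapting a pretrained language model to a downstream task such as classification. The snippet below sketches the common recipe of attaching a linear head to a pretrained encoder via Hugging Face transformers; the checkpoint name, head, and example inputs are illustrative assumptions, not necessarily the adaptation method the paper proposes.

```python
# Illustrative sketch: adapt a pretrained language model to a downstream
# classification task by adding a linear head on the [CLS] representation.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
classifier = nn.Linear(encoder.config.hidden_size, 2)  # 2 downstream classes

batch = tokenizer(["great product", "arrived broken"],
                  padding=True, return_tensors="pt")
with torch.no_grad():
    cls = encoder(**batch).last_hidden_state[:, 0]  # [CLS] token embedding
logits = classifier(cls)  # this head (and optionally the encoder) is trained
```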