Showing 1 - 10 of 3,795 for search: '"Meng,Tao"'
Text representation learning is a cornerstone of natural language processing. In recent years, graph contrastive learning (GCL) has been widely used in text representation learning due to its ability to represent and capture complex… (a generic GCL loss sketch follows the link below)
External link:
http://arxiv.org/abs/2412.11652
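The snippet above only names graph contrastive learning in passing. Purely as an illustration of the general GCL idea (not the linked paper's model), here is a minimal sketch of an InfoNCE-style loss between two augmented views of the same node embeddings; it assumes PyTorch, and the encoder and view tensors are hypothetical.

```python
# Minimal sketch of a generic graph-contrastive (InfoNCE-style) loss between
# two augmented "views" of the same node embeddings. General illustration
# only; this is not the method of the linked paper.
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (num_nodes, dim) embeddings of the same nodes under two views."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature                      # pairwise cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device) # node i in view 1 matches node i in view 2
    return F.cross_entropy(sim, targets)

# Usage (hypothetical encoder): loss = info_nce_loss(encoder(view1), encoder(view2))
```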
With recent advances in computer vision, age estimation has significantly improved in overall accuracy. However, because the most common methods do not take the class imbalance problem in age estimation datasets into account, they suffer from a…
External link:
http://arxiv.org/abs/2412.11450
Reinforcement learning from human feedback (RLHF) has been crucial in aligning large language models (LLMs) with human values. Traditionally, RLHF involves generating responses to a query and using a reward model to assign a reward to the entire response… (a toy sequence-level reward sketch follows the link below)
External link:
http://arxiv.org/abs/2412.02685
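The snippet above describes the traditional RLHF setup in which one reward is assigned to a whole response. As a hedged illustration of that sequence-level credit assignment (not the linked paper's contribution), here is a toy sketch where a reward model scores the full response and the same scalar is shared by every token; the function and the reward model are hypothetical.

```python
# Toy sketch of sequence-level reward assignment in RLHF: the reward model
# scores the entire response once, and every token receives that same scalar.
# Illustrative only; not the method proposed in the linked paper.
from typing import Callable, List

def assign_sequence_reward(
    response_tokens: List[str],
    reward_model: Callable[[str], float],  # hypothetical: maps full text -> scalar reward
) -> List[float]:
    full_text = " ".join(response_tokens)
    r = reward_model(full_text)            # one reward for the whole response
    return [r] * len(response_tokens)      # every token credited equally

# Example: assign_sequence_reward(["The", "answer", "is", "42"], lambda s: 0.8)
# -> [0.8, 0.8, 0.8, 0.8]
```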
Multimodal emotion recognition in conversation (MERC) refers to identifying and classifying human emotional states by combining data from multiple modalities (e.g., audio, images, text, video). Most existing multimodal emotion recognition… (a minimal fusion sketch follows the link below)
External link:
http://arxiv.org/abs/2412.02935
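The MERC snippet above describes combining per-utterance features from several modalities. Purely as an illustration of a simple late-fusion baseline (not the linked paper's architecture), here is a minimal PyTorch sketch; the class name and feature dimensions are hypothetical.

```python
# Minimal late-fusion sketch for multimodal emotion classification: project
# each modality, concatenate, and classify per utterance. Baseline
# illustration only; the linked paper's model is more involved.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, d_text=768, d_audio=128, d_visual=512, d_hidden=256, n_emotions=7):
        super().__init__()
        self.proj = nn.ModuleDict({
            "text": nn.Linear(d_text, d_hidden),
            "audio": nn.Linear(d_audio, d_hidden),
            "visual": nn.Linear(d_visual, d_hidden),
        })
        self.classifier = nn.Linear(3 * d_hidden, n_emotions)

    def forward(self, text, audio, visual):
        fused = torch.cat([
            self.proj["text"](text),
            self.proj["audio"](audio),
            self.proj["visual"](visual),
        ], dim=-1)
        return self.classifier(fused)  # per-utterance emotion logits
```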
Multimodal Emotion Recognition in Conversations (MERC) aims to classify utterance emotions using textual, auditory, and visual modal features. Most existing MERC methods assume each utterance has complete modalities, overlooking the common issue of incomplete…
External link:
http://arxiv.org/abs/2411.19822
Graph contrastive learning has been successfully applied in text classification due to its remarkable ability for self-supervised node representation learning. However, explicit graph augmentations may lead to a loss of semantics in the contrastive views…
External link:
http://arxiv.org/abs/2411.16787
Entity alignment is crucial for merging knowledge across knowledge graphs, as it matches entities with identical semantics. The standard method matches these entities based on their embedding similarities using semi-supervised learning. However, dive… (a baseline similarity-matching sketch follows the link below)
External link:
http://arxiv.org/abs/2410.20733
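The snippet above mentions matching entities across knowledge graphs by embedding similarity. As a sketch of that standard baseline step only (not the linked paper's method), here is greedy cosine-similarity matching with NumPy; the function name and array shapes are assumptions.

```python
# Sketch of the standard embedding-similarity step in entity alignment:
# for each entity in KG1, pick the most similar entity in KG2 by cosine
# similarity. Baseline illustration only, not the linked paper's approach.
import numpy as np

def align_by_similarity(emb_kg1: np.ndarray, emb_kg2: np.ndarray) -> np.ndarray:
    """emb_kg1: (n1, d), emb_kg2: (n2, d); returns an index into KG2 for each KG1 entity."""
    a = emb_kg1 / np.linalg.norm(emb_kg1, axis=1, keepdims=True)
    b = emb_kg2 / np.linalg.norm(emb_kg2, axis=1, keepdims=True)
    sim = a @ b.T              # (n1, n2) cosine-similarity matrix
    return sim.argmax(axis=1)  # greedy nearest-neighbour alignment
```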
Graph contrastive learning (GCL) has been widely applied to text classification tasks due to its ability to generate self-supervised signals from unlabeled data, thus facilitating model training. However, existing GCL-based text classification methods…
External link:
http://arxiv.org/abs/2410.18130
Multi-modal entity alignment (MMEA) is essential for enhancing knowledge graphs and improving information retrieval and question-answering systems. Existing methods often focus on integrating modalities through their complementarity but overlook the…
External link:
http://arxiv.org/abs/2410.14584
Author:
Meng, Tao, Mehrabi, Ninareh, Goyal, Palash, Ramakrishna, Anil, Galstyan, Aram, Zemel, Richard, Chang, Kai-Wei, Gupta, Rahul, Peris, Charith
We propose a constraint learning schema for fine-tuning Large Language Models (LLMs) with attribute control. Given a training corpus and control criteria formulated as a sequence-level constraint on model outputs, our method fine-tunes the LLM on the… (a generic constrained-loss sketch follows the link below)
External link:
http://arxiv.org/abs/2410.05559
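The last snippet describes fine-tuning with a sequence-level constraint on model outputs. As a loose illustration of how such a constraint is commonly folded into a training objective as a penalty term (a generic formulation, not the constraint learning schema of the linked paper), here is a hypothetical sketch; the score function, threshold, and weight are assumptions.

```python
# Generic sketch: combine the usual LM loss with a penalty on a sequence-level
# attribute score of model outputs. A loose illustration of constrained
# fine-tuning in general, not the constraint learning schema of the linked paper.
import torch

def constrained_loss(lm_loss: torch.Tensor,
                     attribute_scores: torch.Tensor,  # hypothetical per-sequence scores in [0, 1]
                     threshold: float = 0.8,
                     penalty_weight: float = 1.0) -> torch.Tensor:
    # Penalize sequences whose attribute score falls below the required threshold.
    violation = torch.clamp(threshold - attribute_scores, min=0.0)
    return lm_loss + penalty_weight * violation.mean()
```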