Enhancing Cross-Modal Contextual Congruence for Crowdfunding Success using Knowledge-infused Learning
Author: | Padhi, Trilok, Kursuncu, Ugur, Kumar, Yaman, Shalin, Valerie L., Fronczek, Lane Peterson |
Publication year: | 2024 |
Source: | IEEE International Conference on Big Data 2024 (IEEE BigData 2024) |
Document type: | Working Paper |
Description: | The digital landscape continually evolves with multimodality, enriching the online experience for users. Creators and marketers aim to weave subtle contextual cues from various modalities into congruent content that engages users with a harmonious message. This interplay of multimodal cues is often a crucial factor in attracting users' attention. However, this richness of multimodality presents a challenge for computational modeling, as the semantic contextual cues spanning modalities must be unified to capture the true holistic meaning of the multimodal content. This contextual meaning is critical to attracting user engagement, as it conveys the intended message of the brand or organization. In this work, we incorporate external commonsense knowledge from knowledge graphs to enhance the representation of multimodal data using compact Visual Language Models (VLMs) and to predict the success of multimodal crowdfunding campaigns. Our results show that external commonsense knowledge bridges the semantic gap between the text and image modalities, and that the enhanced knowledge-infused representations improve models' predictive performance for campaign success over baselines without knowledge. Our findings highlight the significance of contextual congruence in online multimodal content for engaging and successful crowdfunding campaigns. Comment: Accepted at IEEE International Conference on Big Data 2024 (IEEE BigData 2024) |
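The knowledge-infusion idea described above can be sketched in a minimal form: embeddings for the text and image modalities (e.g., from a compact VLM) are fused with a commonsense knowledge-graph embedding, and the fused vector feeds a scorer that predicts campaign success. This is a hedged illustration only; the fusion-by-concatenation, the dimensions, and the linear scorer are assumptions for the sketch, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)


def fuse_representations(text_emb, image_emb, kg_emb):
    """Fuse modality embeddings with a knowledge-graph embedding.

    Simple concatenation; the paper's actual fusion may differ.
    """
    return np.concatenate([text_emb, image_emb, kg_emb])


def predict_success(fused, weights, bias=0.0):
    """Toy linear scorer with a sigmoid, yielding a success probability."""
    z = fused @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))


# Hypothetical dimensions: text/image embeddings from a compact VLM,
# plus a smaller commonsense knowledge-graph embedding.
d_text, d_img, d_kg = 8, 8, 4
text_emb = rng.normal(size=d_text)
image_emb = rng.normal(size=d_img)
kg_emb = rng.normal(size=d_kg)

fused = fuse_representations(text_emb, image_emb, kg_emb)
weights = rng.normal(size=fused.shape[0])
prob = predict_success(fused, weights)
```

In practice the fused representation would be trained end to end on labeled campaigns; the sketch only shows how the knowledge-graph embedding extends the multimodal feature vector before prediction.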
Database: | arXiv |