Showing 1 - 10 of 667 for search: '"Wu, Xintao"'
The widespread popularity of Large Language Models (LLMs), partly due to their unique ability to perform in-context learning, has also brought to light the importance of ethical and safety considerations when deploying these pre-trained models. In th…
External link:
http://arxiv.org/abs/2406.12038
Vision language models (VLMs) have recently emerged and gained the spotlight for their ability to comprehend the dual modality of image and textual data. VLMs such as LLaVA, ChatGPT-4, and Gemini have recently shown impressive performance on tasks su…
External link:
http://arxiv.org/abs/2405.00876
Diffusion probabilistic models (DPMs) have become the state-of-the-art in high-quality image generation. However, DPMs have an arbitrary noisy latent space with no interpretable or controllable semantics. Although there has been significant research…
External link:
http://arxiv.org/abs/2404.17735
Author:
Edemacu, Kennedy, Wu, Xintao
Pre-trained language models (PLMs) have demonstrated significant proficiency in solving a wide range of general natural language processing (NLP) tasks. Researchers have observed a direct correlation between the performance of these models and their…
External link:
http://arxiv.org/abs/2404.06001
Correctly classifying brain tumors is imperative to the prompt and accurate treatment of a patient. While several classification algorithms based on classical image processing or deep learning methods have been proposed to rapidly classify tumors in…
External link:
http://arxiv.org/abs/2403.10698
In-context learning (ICL) enables large language models (LLMs) to adapt to new tasks by conditioning on demonstrations of question-answer pairs, and it has been shown to have comparable performance to costly model retraining and fine-tuning. Recently,…
External link:
http://arxiv.org/abs/2403.05681
Recently, large language models (LLMs) have taken the spotlight in natural language processing. Further, integrating LLMs with vision enables users to explore emergent abilities with multimodal data. Visual language models (VLMs), such as LLaVA,…
External link:
http://arxiv.org/abs/2402.14162
The fairness-aware online learning framework has emerged as a potent tool within the context of continuous lifelong learning. In this scenario, the learner's objective is to progressively acquire new tasks as they arrive over time, while also guarant…
External link:
http://arxiv.org/abs/2402.12319
Large Language Models (LLMs) have showcased their In-Context Learning (ICL) capabilities, enabling few-shot learning without the need for gradient updates. Despite its advantages, the effectiveness of ICL heavily depends on the choice of demonstratio…
External link:
http://arxiv.org/abs/2402.11750
Supervised fairness-aware machine learning under distribution shifts is an emerging field that addresses the challenge of maintaining equitable and unbiased predictions when faced with changes in data distributions from source to target domains. In r…
External link:
http://arxiv.org/abs/2402.01327