Showing 1 - 10 of 404
for search: '"CHEN Kaifeng"'
Author:
Stone, Austin, Soltau, Hagen, Geirhos, Robert, Yi, Xi, Xia, Ye, Cao, Bingyi, Chen, Kaifeng, Ogale, Abhijit, Shlens, Jonathon
Visual imagery does not consist of solitary objects, but instead reflects the composition of a multitude of fluid concepts. While there have been great advances in visual representation learning, such advances have focused on building better representations …
External link:
http://arxiv.org/abs/2412.15396
Learning from noisy data has become essential for adapting deep learning models to real-world applications. Traditional methods often involve first evaluating the noise and then applying strategies such as discarding noisy samples, re-weighting, or …
External link:
http://arxiv.org/abs/2411.11924
Author:
Maninis, Kevis-Kokitsi, Chen, Kaifeng, Ghosh, Soham, Karpur, Arjun, Chen, Koert, Xia, Ye, Cao, Bingyi, Salz, Daniel, Han, Guangxing, Dlabal, Jan, Gnanapragasam, Dan, Seyedhosseini, Mojtaba, Zhou, Howard, Araujo, Andre
While image-text representation learning has become very popular in recent years, existing models tend to lack spatial awareness and have limited direct applicability for dense understanding tasks. For this reason, self-supervised image-only pretraining …
External link:
http://arxiv.org/abs/2410.16512
Universal image representations are critical in enabling real-world fine-grained and instance-level recognition applications, where objects and entities from any domain must be identified at large scale. Despite recent advances, existing methods fail …
External link:
http://arxiv.org/abs/2406.08332
The design optimization of ship hull form based on hydrodynamics theory and simulation-based design (SBD) technologies generally considers ship performance and energy efficiency performance as the design objective, which plays an important role in …
External link:
http://arxiv.org/abs/2403.05832
We introduce SynCLR, a novel approach for learning visual representations exclusively from synthetic images and synthetic captions, without any real data. We synthesize a large dataset of image captions using LLMs, then use an off-the-shelf text-to-image …
External link:
http://arxiv.org/abs/2312.17742
Recent significant advances in text-to-image models unlock the possibility of training vision systems using synthetic images, potentially overcoming the difficulty of collecting curated data at scale. It is unclear, however, how these models behave …
External link:
http://arxiv.org/abs/2312.04567
Author:
Chen, Kaifeng, Salz, Daniel, Chang, Huiwen, Sohn, Kihyuk, Krishnan, Dilip, Seyedhosseini, Mojtaba
Training visual embeddings with labeled data supervision has been the de facto setup for representation learning in computer vision. Inspired by recent success of adopting masked image modeling (MIM) in self-supervised representation learning, we propose …
External link:
http://arxiv.org/abs/2312.00950
Author:
Devvrit, Kudugunta, Sneha, Kusupati, Aditya, Dettmers, Tim, Chen, Kaifeng, Dhillon, Inderjit, Tsvetkov, Yulia, Hajishirzi, Hannaneh, Kakade, Sham, Farhadi, Ali, Jain, Prateek
Foundation models are applied in a broad spectrum of settings with different inference constraints, from massive multi-accelerator clusters to resource-constrained standalone mobile devices. However, the substantial costs associated with training the …
External link:
http://arxiv.org/abs/2310.07707
Author:
Ypsilantis, Nikolaos-Antonios, Chen, Kaifeng, Cao, Bingyi, Lipovský, Mário, Dogan-Schönberger, Pelin, Makosa, Grzegorz, Bluntschli, Boris, Seyedhosseini, Mojtaba, Chum, Ondřej, Araujo, André
Fine-grained and instance-level recognition methods are commonly trained and evaluated on specific domains, in a model per domain scenario. Such an approach, however, is impractical in real large-scale applications. In this work, we address the problem …
External link:
http://arxiv.org/abs/2309.01858