Showing 1 - 10 of 50 for search: '"Fuxman, Ariel"'
Recent studies show that pretraining a deep neural network with fine-grained labeled data, followed by fine-tuning on coarse-labeled data for downstream tasks, often yields better generalization than pretraining with coarse-labeled data. While there
External link:
http://arxiv.org/abs/2410.23129
Author:
Toubal, Imad Eddine, Avinash, Aditya, Alldrin, Neil Gordon, Dlabal, Jan, Zhou, Wenlei, Luo, Enming, Stretcu, Otilia, Xiong, Hao, Lu, Chun-Ta, Zhou, Howard, Krishna, Ranjay, Fuxman, Ariel, Duerig, Tom
From content moderation to wildlife conservation, the number of applications that require models to recognize nuanced or subjective visual concepts is growing. Traditionally, developing classifiers for such concepts requires substantial manual effort
External link:
http://arxiv.org/abs/2403.02626
Author:
Qiao, Wei, Dogra, Tushar, Stretcu, Otilia, Lyu, Yu-Han, Fang, Tiantian, Kwon, Dongjin, Lu, Chun-Ta, Luo, Enming, Wang, Yuan, Chia, Chih-Chun, Fuxman, Ariel, Wang, Fangzhou, Krishna, Ranjay, Tek, Mehmet
Large language models (LLMs) are powerful tools for content moderation, but their inference costs and latency make them prohibitive for casual use on large datasets, such as the Google Ads repository. This study proposes a method for scaling up LLM r
External link:
http://arxiv.org/abs/2402.14590
Visual Program Distillation: Distilling Tools and Programmatic Reasoning into Vision-Language Models
Author:
Hu, Yushi, Stretcu, Otilia, Lu, Chun-Ta, Viswanathan, Krishnamurthy, Hata, Kenji, Luo, Enming, Krishna, Ranjay, Fuxman, Ariel
Solving complex visual tasks such as "Who invented the musical instrument on the right?" involves a composition of skills: understanding space, recognizing instruments, and also retrieving prior knowledge. Recent work shows promise by decomposing suc
External link:
http://arxiv.org/abs/2312.03052
In this paper, we study how the granularity of pretraining labels affects the generalization of deep neural networks in image classification tasks. We focus on the "fine-to-coarse" transfer learning setting, where the pretraining label space is more
External link:
http://arxiv.org/abs/2303.16887
Author:
Stretcu, Otilia, Vendrow, Edward, Hata, Kenji, Viswanathan, Krishnamurthy, Ferrari, Vittorio, Tavakkol, Sasan, Zhou, Wenlei, Avinash, Aditya, Luo, Enming, Alldrin, Neil Gordon, Bateni, MohammadHossein, Berger, Gabriel, Bunner, Andrew, Lu, Chun-Ta, Rey, Javier A, DeSalvo, Giulia, Krishna, Ranjay, Fuxman, Ariel
The application of computer vision to nuanced subjective use cases is growing. While crowdsourcing has served the vision community well for most objective tasks (such as labeling a "zebra"), it now falters on tasks where there is substantial subjecti
External link:
http://arxiv.org/abs/2302.12948
Author:
Stimberg, Florian, Chakrabarti, Ayan, Lu, Chun-Ta, Hazimeh, Hussein, Stretcu, Otilia, Qiao, Wei, Liu, Yintao, Kaya, Merve, Rashtchian, Cyrus, Fuxman, Ariel, Tek, Mehmet, Gowal, Sven
Automated content filtering and moderation is an important tool that allows online platforms to build thriving user communities that facilitate cooperation and prevent abuse. Unfortunately, resourceful actors try to bypass automated filters in a bid
External link:
http://arxiv.org/abs/2301.12993
Author:
Lu, Chun-Ta, Zeng, Yun, Juan, Da-Cheng, Fan, Yicheng, Li, Zhe, Dlabal, Jan, Chen, Yi-Ting, Gopalan, Arjun, Heydon, Allan, Ferng, Chun-Sung, Miyara, Reah, Fuxman, Ariel, Peng, Futang, Li, Zhen, Duerig, Tom, Tomkins, Andrew
In this work, we propose CARLS, a novel framework for augmenting the capacity of existing deep learning frameworks by enabling multiple components -- model trainers, knowledge makers and knowledge banks -- to concertedly work together in an asynchron
External link:
http://arxiv.org/abs/2105.12849
Author:
Fuxman, Ariel Damian.
Thesis (Ph.D.)--University of Toronto, 2007.
Source: Dissertation Abstracts International, Volume: 68-06, Section: B, page: 3892.