Showing 1 - 10 of 521 for the search: '"LARSON, MARTHA"'
Author:
Liang, Mingliang, Larson, Martha
Vision Language Models (VLMs) can be trained more efficiently if training sets can be reduced in size. Recent work has shown the benefits of masking text during VLM training using a variety of approaches: truncation, random masking, block masking and…
External link:
http://arxiv.org/abs/2412.16148
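The entry above lists several text-masking strategies (truncation, random masking, block masking). As a rough illustration only — not the paper's implementation, and the function names and parameters are assumptions — these strategies can be sketched on token lists:

```python
import random

def random_mask(tokens, ratio=0.3, rng=None):
    """Drop a random subset of tokens (illustrative random masking)."""
    rng = rng or random.Random()
    n = max(1, int(len(tokens) * ratio))
    drop = set(rng.sample(range(len(tokens)), n))
    return [t for i, t in enumerate(tokens) if i not in drop]

def block_mask(tokens, ratio=0.3, rng=None):
    """Drop one contiguous block of tokens (illustrative block masking)."""
    rng = rng or random.Random()
    n = max(1, int(len(tokens) * ratio))
    start = rng.randrange(len(tokens) - n + 1)
    return tokens[:start] + tokens[start + n:]

def truncate(tokens, max_len=8):
    """Keep only the first max_len tokens (illustrative truncation)."""
    return tokens[:max_len]
```

Each variant shortens the text side of an image-text pair, which is what reduces the training cost.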
Author:
Liang, Mingliang, Larson, Martha
We propose Word-Frequency-based Image-Text Pair Pruning (WFPP), a novel data pruning method that improves the efficiency of VLMs. Unlike MetaCLIP, our method does not need metadata for pruning, but selects text-image pairs to prune based on the conte…
External link:
http://arxiv.org/abs/2410.10879
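WFPP, as described above, prunes image-text pairs based on word frequency in the captions. A minimal sketch of one frequency-based scoring scheme — the function names and exact scoring rule are assumptions, not the paper's method — that favors pairs whose captions contain rarer words:

```python
from collections import Counter

def wfpp_scores(captions):
    """Score captions by mean inverse word frequency (rarer words -> higher score)."""
    freq = Counter(w for c in captions for w in c.lower().split())
    total = sum(freq.values())
    def score(caption):
        words = caption.lower().split()
        return sum(1.0 - freq[w] / total for w in words) / len(words)
    return [score(c) for c in captions]

def prune(pairs, keep_fraction=0.5):
    """Keep the image-text pairs whose captions score highest."""
    scores = wfpp_scores([text for _, text in pairs])
    ranked = sorted(zip(pairs, scores), key=lambda x: x[1], reverse=True)
    k = max(1, int(len(pairs) * keep_fraction))
    return [pair for pair, _ in ranked[:k]]
```

Captions dominated by very common words score low and are pruned first, shrinking the training set while keeping more informative pairs.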
Published in:
Proc. 4th Symposium on Security and Privacy in Speech Communication (SPSC) at Interspeech 2024, pages 21-25
Speech recordings are being more frequently used to detect and monitor disease, leading to privacy concerns. Beyond cryptography, protection of speech can be addressed by approaches, such as perturbation, disentanglement, and re-synthesis, that elimi…
External link:
http://arxiv.org/abs/2409.16106
Author:
Liang, Mingliang, Larson, Martha
We introduce Gaussian masking for Language-Image Pre-Training (GLIP), a novel, straightforward, and effective technique for masking image patches during pre-training of a vision-language model. GLIP builds on Fast Language-Image Pre-Training (FLIP), w…
External link:
http://arxiv.org/abs/2403.15837
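GLIP masks image patches using a Gaussian during pre-training. One plausible illustrative sketch — assuming a 2D Gaussian over patch positions that makes central patches (which often contain the subject) less likely to be masked; the paper's exact scheme may differ — is:

```python
import numpy as np

def gaussian_mask(grid=14, mask_ratio=0.5, sigma=0.35, rng=None):
    """Sample a boolean patch mask where masking probability is lower near
    the image centre, following a 2D Gaussian over patch positions.
    Illustrative sketch only; parameter names are assumptions."""
    rng = np.random.default_rng(rng)
    ys, xs = np.mgrid[0:grid, 0:grid]
    cy = cx = (grid - 1) / 2.0
    # Weight is ~0 at the centre and ~1 near the borders, so border
    # patches are masked preferentially.
    w = 1.0 - np.exp(-(((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * (sigma * grid) ** 2)))
    probs = (w / w.sum()).ravel()
    n_mask = int(mask_ratio * grid * grid)
    idx = rng.choice(grid * grid, size=n_mask, replace=False, p=probs)
    mask = np.zeros(grid * grid, dtype=bool)
    mask[idx] = True
    return mask.reshape(grid, grid)
```

Like FLIP's random masking, this discards a fixed fraction of patches per image to cut compute, but biases which patches are dropped rather than choosing uniformly.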
Author:
Pleiter, Bart, Tajalli, Behrad, Koffas, Stefanos, Abad, Gorka, Xu, Jing, Larson, Martha, Picek, Stjepan
Deep Neural Networks (DNNs) have shown great promise in various domains. Alongside these developments, vulnerabilities associated with DNN training, such as backdoor attacks, are a significant concern. These attacks involve the subtle insertion of tr…
External link:
http://arxiv.org/abs/2311.07550
We investigate an attack on a machine learning model that predicts whether a person or household will relocate in the next two years, i.e., a propensity-to-move classifier. The attack assumes that the attacker can query the model to obtain prediction…
External link:
http://arxiv.org/abs/2310.08775
Published in:
2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023)
Recent research has proposed approaches that modify speech to defend against gender inference attacks. The goal of these protection algorithms is to control the availability of information about a speaker's gender, a privacy-sensitive attribute. Curr…
External link:
http://arxiv.org/abs/2306.17700
Perturbative availability poisons (PAPs) add small changes to images to prevent their use for model training. Current research adopts the belief that practical and effective approaches to countering PAPs do not exist. In this paper, we argue that it…
External link:
http://arxiv.org/abs/2301.13838
We introduce ShortcutGen, a new data poisoning attack that generates sample-dependent, error-minimizing perturbations by learning a generator. The key novelty of ShortcutGen is the use of a randomly-initialized discriminator, which provides spurious…
External link:
http://arxiv.org/abs/2211.01086