Showing 1 - 10 of 14 for search: '"Kaya, Yigitcan"'
Author:
Kaya, Yigitcan, Chen, Yizheng, Saha, Shoumik, Pierazzi, Fabio, Cavallaro, Lorenzo, Wagner, David, Dumitras, Tudor
Machine learning is widely used for malware detection in practice. Prior behavior-based detectors most commonly rely on traces of programs executed in controlled sandboxes. However, sandbox traces are unavailable to the last line of defense offered by …
External link:
http://arxiv.org/abs/2405.06124
Machine Learning (ML) models have been utilized for malware detection for over two decades. Consequently, this ignited an ongoing arms race between malware authors and antivirus systems, compelling researchers to propose defenses for malware-detection …
External link:
http://arxiv.org/abs/2303.13372
Quantization is a popular technique that transforms the parameter representation of a neural network from floating-point numbers into lower-precision ones (e.g., 8-bit integers). It reduces the memory footprint and the computational cost at inference …
External link:
http://arxiv.org/abs/2110.13541
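For readers unfamiliar with the technique the abstract describes, the sketch below illustrates affine (scale/zero-point) quantization of a float32 weight tensor to 8-bit integers in NumPy. It is a generic illustration under common conventions, not code from the paper; the helper names quantize_int8 and dequantize are hypothetical.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Affine (asymmetric) quantization of a float32 tensor to int8.

    Maps the observed value range [w.min(), w.max()] onto [-128, 127]
    with a scale and zero-point, as commonly done for 8-bit inference.
    """
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover an approximate float32 tensor from its int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

# Example: quantize a random weight matrix and measure the rounding error.
w = np.random.randn(256, 256).astype(np.float32)
q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)
print("max abs error:", np.abs(w - w_hat).max())
```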
Recent increases in the computational demands of deep neural networks (DNNs), combined with the observation that most input samples require only simple models, have sparked interest in input-adaptive multi-exit architectures, such as MSDNets or Shallow-Deep Networks …
External link:
http://arxiv.org/abs/2010.02432
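As a generic illustration of the input-adaptive, multi-exit idea (not the MSDNet or Shallow-Deep Network architectures themselves), the PyTorch sketch below attaches an internal classifier after each block and stops at the first exit whose softmax confidence clears an assumed threshold; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMultiExitNet(nn.Module):
    """Toy multi-exit classifier: each block is followed by an internal
    classifier ("exit") so easy inputs can stop early at inference time."""

    def __init__(self, num_classes: int = 10, width: int = 64):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(width, width), nn.ReLU()) for _ in range(3)])
        self.exits = nn.ModuleList(
            [nn.Linear(width, num_classes) for _ in range(3)])

    def forward(self, x: torch.Tensor, threshold: float = 0.9):
        """Return (logits, exit_index); stop at the first exit whose softmax
        confidence exceeds `threshold` (an assumed, tunable stopping rule)."""
        for i, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = block(x)
            logits = exit_head(x)
            confidence = F.softmax(logits, dim=-1).max(dim=-1).values
            if bool((confidence >= threshold).all()):
                return logits, i
        return logits, len(self.blocks) - 1

model = TinyMultiExitNet()
logits, used_exit = model(torch.randn(1, 64))
print("stopped at exit", used_exit)
```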
Deep learning models often raise privacy concerns as they leak information about their training data. This enables an adversary to determine whether a data point was in a model's training set by conducting a membership inference attack (MIA). Prior work …
External link:
http://arxiv.org/abs/2006.05336
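To make the attack setting concrete, the sketch below shows the simplest form of membership inference, a confidence-threshold attack: predict "member" when the target model is unusually confident on a sample. This is a generic illustration with hypothetical numbers, not the specific attack or defense studied in the paper.

```python
import numpy as np

def confidence_mia(confidences: np.ndarray, threshold: float = 0.95) -> np.ndarray:
    """Threshold-based membership inference: predict 'member' (1) when the
    model's confidence on its predicted class exceeds a threshold, exploiting
    that models tend to be more confident on their training samples."""
    return (confidences >= threshold).astype(int)

# Hypothetical top-class confidences the target model assigned.
member_conf = np.array([0.99, 0.97, 0.88, 0.999])    # training points
nonmember_conf = np.array([0.71, 0.93, 0.60, 0.85])  # held-out points

guesses = confidence_mia(np.concatenate([member_conf, nonmember_conf]))
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print("attack accuracy:", (guesses == truth).mean())
```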
Machine learning algorithms are vulnerable to data poisoning attacks. Prior taxonomies that focus on specific scenarios, e.g., indiscriminate or targeted, have enabled defenses for the corresponding subset of known attacks. Yet, this introduces an …
External link:
http://arxiv.org/abs/2002.11497
New data processing pipelines and novel network architectures increasingly drive the success of deep learning. In consequence, the industry considers top-performing architectures as intellectual property and devotes considerable computational resources …
External link:
http://arxiv.org/abs/2002.06776
Deep neural networks (DNNs) have been shown to tolerate "brain damage": cumulative changes to the network's parameters (e.g., pruning, numerical perturbations) typically result in a graceful degradation of classification accuracy. However, the limits …
External link:
http://arxiv.org/abs/1906.01017
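As a small illustration of why individual numerical perturbations can matter, the sketch below flips one bit in the IEEE-754 encoding of a float32 weight: a low mantissa bit barely changes the value, while a high exponent bit makes it explode. It is a generic example, not the paper's fault-injection methodology; flip_bit is a hypothetical helper.

```python
import struct
import numpy as np

def flip_bit(value: np.float32, bit: int) -> np.float32:
    """Flip one bit (0 = least significant) in the IEEE-754 float32 encoding."""
    as_int = struct.unpack("<I", struct.pack("<f", float(value)))[0]
    flipped = as_int ^ (1 << bit)
    return np.float32(struct.unpack("<f", struct.pack("<I", flipped))[0])

w = np.float32(0.05)       # a typical small DNN weight
print(flip_bit(w, 3))      # low mantissa bit: value barely changes
print(flip_bit(w, 30))     # high exponent bit: value blows up by many orders of magnitude
```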
We characterize a prevalent weakness of deep neural networks (DNNs), overthinking, which occurs when a DNN can reach correct predictions before its final layer. Overthinking is computationally wasteful, and it can also be destructive when, by the final layer, …
External link:
http://arxiv.org/abs/1810.07052
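A simplified way to measure overthinking, assuming per-layer probe predictions are available, is sketched below: count samples already classified correctly before the final layer (wasteful) and those where an early correct prediction turns into a final-layer error (destructive). The helper and the data are hypothetical and the definitions are a simplification, not the paper's exact metrics.

```python
import numpy as np

def analyze_overthinking(layer_preds: np.ndarray, labels: np.ndarray):
    """Given per-layer predictions for each sample (shape [n_samples, n_layers],
    last column = final layer) and true labels, count samples that were already
    correct at an earlier layer yet still correct at the end (wasteful) and
    those whose early correct prediction becomes a final error (destructive)."""
    early_correct = (layer_preds[:, :-1] == labels[:, None]).any(axis=1)
    final_correct = layer_preds[:, -1] == labels
    wasteful = int(np.sum(early_correct & final_correct))
    destructive = int(np.sum(early_correct & ~final_correct))
    return wasteful, destructive

# Hypothetical predictions from three internal probes plus the final layer.
layer_preds = np.array([[3, 3, 3, 3],   # correct early and at the end (wasteful)
                        [7, 7, 2, 2],   # correct early, wrong at the end (destructive)
                        [1, 2, 2, 4]])  # only correct at the final layer
labels = np.array([3, 7, 4])
print(analyze_overthinking(layer_preds, labels))  # -> (1, 1)
```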
Author:
Hong, Sanghyun, Davinroy, Michael, Kaya, Yiğitcan, Locke, Stuart Nevans, Rackow, Ian, Kulda, Kevin, Dachman-Soled, Dana, Dumitraş, Tudor
Recent work has introduced attacks that extract the architecture information of deep neural networks (DNN), as this knowledge enhances an adversary's capability to conduct black-box attacks against the model. This paper presents the first in-depth security analysis …
External link:
http://arxiv.org/abs/1810.03487