Showing 1 - 10 of 15 for the search: '"Sajedi, Ahmad"'
Graph distillation has emerged as a solution for reducing large graph datasets to smaller, more manageable, and informative ones. Existing methods primarily target node classification, involve computationally intensive processes, and fail to capture…
External link:
http://arxiv.org/abs/2408.16871
Author:
Li, Zekai, Guo, Ziyao, Zhao, Wangbo, Zhang, Tianle, Cheng, Zhi-Qi, Khaki, Samir, Zhang, Kaipeng, Sajedi, Ahmad, Plataniotis, Konstantinos N., Wang, Kai, You, Yang
Dataset Distillation aims to compress a large dataset into a significantly more compact, synthetic one without compromising the performance of the trained models. To achieve this, existing methods use the agent model to extract information from the t…
External link:
http://arxiv.org/abs/2408.03360
Author:
Khaki, Samir, Sajedi, Ahmad, Wang, Kai, Liu, Lucy Z., Lawryshyn, Yuri A., Plataniotis, Konstantinos N.
Recent works in dataset distillation seek to minimize training expenses by generating a condensed synthetic dataset that encapsulates the information present in a larger real dataset. These approaches ultimately aim to attain test accuracy levels aki…
External link:
http://arxiv.org/abs/2405.01373
Multi-label image classification presents a challenging task in many domains, including computer vision and medical imaging. Recent advancements have introduced graph-based and transformer-based methods to improve performance and capture label depend…
External link:
http://arxiv.org/abs/2401.01448
Author:
Sajedi, Ahmad, Khaki, Samir, Amjadian, Ehsan, Liu, Lucy Z., Lawryshyn, Yuri A., Plataniotis, Konstantinos N.
Published in:
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2023, pp. 17097-17107
Researchers have long tried to minimize training costs in deep learning while maintaining strong generalization across diverse datasets. Emerging research on dataset distillation aims to reduce training costs by creating a small synthetic set that co…
External link:
http://arxiv.org/abs/2310.00093
Multilabel representation learning is recognized as a challenging problem that can be associated with either label dependencies between object categories or data-related issues such as the inherent imbalance of positive/negative samples. Recent advan…
External link:
http://arxiv.org/abs/2307.03967
This paper presents a new distance metric to compare two continuous probability density functions. The main advantage of this metric is that, unlike other statistical measurements, it can provide an analytic, closed-form expression for a mixture of G…
External link:
http://arxiv.org/abs/2306.07309
This work introduces a novel knowledge distillation framework for classification tasks where information on existing subclasses is available and taken into consideration. In classification tasks with a small number of classes or binary detection, the…
External link:
http://arxiv.org/abs/2207.08063
This work introduces a novel knowledge distillation framework for classification tasks where information on existing subclasses is available and taken into consideration. In classification tasks with a small number of classes or binary detection (two…
External link:
http://arxiv.org/abs/2109.05587
Deploying deep Convolutional Neural Networks (CNNs) is impacted by their memory footprint and speed requirements, which mainly come from convolution. Widely used convolution algorithms, im2col and MEC, produce a lowered matrix from an activation map…
External link:
http://arxiv.org/abs/2104.08314
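The last entry refers to im2col, the standard lowering that turns convolution into a single matrix multiplication (GEMM). A minimal single-channel sketch of that idea, with illustrative names and shapes (this is generic im2col, not the paper's own algorithm):

```python
import numpy as np

def im2col(activation, kh, kw, stride=1):
    """Lower a single-channel activation map into a matrix whose columns
    are flattened kh x kw receptive fields, so convolution becomes a GEMM."""
    h, w = activation.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    cols = np.empty((kh * kw, out_h * out_w))
    col = 0
    for i in range(0, h - kh + 1, stride):
        for j in range(0, w - kw + 1, stride):
            # Each overlapping patch is copied into its own column.
            cols[:, col] = activation[i:i + kh, j:j + kw].ravel()
            col += 1
    return cols

# Convolution as GEMM: flatten the kernel and multiply by the lowered matrix.
x = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((3, 3))  # a 3x3 box filter, for illustration
out = (k.ravel() @ im2col(x, 3, 3)).reshape(2, 2)
print(out)  # each output is the sum of one 3x3 window of x
```

Note that overlapping patches are copied into separate columns, so the lowered matrix duplicates values; this duplication is the memory-footprint cost of im2col that the abstract alludes to.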