Showing 1 - 10 of 94
for search: '"MEYER, BRETT H."'
Author:
Tayaranian, Mohammadreza, Mozafari, Seyyed Hasan, Meyer, Brett H., Clark, James J., Gross, Warren J.
Transformer-based language models have shown state-of-the-art performance on a variety of natural language understanding tasks. To achieve this performance, these models are first pre-trained on a general corpus and then fine-tuned on downstream tasks.
External link:
http://arxiv.org/abs/2407.08887
We present SSS3D, a fast multi-objective NAS framework designed to find computationally efficient 3D semantic scene segmentation networks. It uses RandLA-Net, an off-the-shelf point-based network, as a super-network to enable weight sharing and reduce …
External link:
http://arxiv.org/abs/2304.11207
We present FMAS, a fast multi-objective neural architecture search framework for semantic segmentation. FMAS subsamples the structure and pre-trained parameters of DeepLabV3+, without fine-tuning, dramatically reducing training time during search. …
External link:
http://arxiv.org/abs/2303.16322
Knowledge distillation (KD) has gained a lot of attention in the field of model compression for edge devices thanks to its effectiveness in compressing large powerful networks into smaller lower-capacity models. Online distillation, in which both the …
External link:
http://arxiv.org/abs/2212.12965
Knowledge distillation (KD) is an effective tool for compressing deep classification models for edge devices. However, the performance of KD is affected by the large capacity gap between the teacher and student networks. Recent methods have resorted to …
External link:
http://arxiv.org/abs/2209.07606
Author:
Vucetic, Danilo, Tayaranian, Mohammadreza, Ziaeefard, Maryam, Clark, James J., Meyer, Brett H., Gross, Warren J.
Fine-tuning BERT-based models is resource-intensive in memory, computation, and time. While many prior works aim to improve inference efficiency via compression techniques, e.g., pruning, these works do not explicitly address the computational challenges …
External link:
http://arxiv.org/abs/2208.02070
Author:
Vucetic, Danilo, Tayaranian, Mohammadreza, Ziaeefard, Maryam, Clark, James J., Meyer, Brett H., Gross, Warren J.
Resource-constrained devices are increasingly the deployment targets of machine learning applications. Static models, however, do not always suffice for dynamic environments. On-device training of models allows for quick adaptability to new scenarios …
External link:
http://arxiv.org/abs/2205.01541
Published in:
ACM Transactions on Embedded Computing Systems, Volume 20, Issue 6, November 2021
Runtime monitoring plays a key role in the assurance of modern intelligent cyber-physical systems, which are frequently data-intensive and safety-critical. While graph queries can serve as an expressive yet formally precise specification language to …
External link:
http://arxiv.org/abs/2102.03116
Autoregressive neural network models have been used successfully for sequence generation, feature extraction, and hypothesis scoring. This paper presents yet another use for these models: allocating more computation to more difficult inputs. …
External link:
http://arxiv.org/abs/2006.01659
Published in:
Energy and AI, Volume 14, October 2023