Showing 1 - 10 of 47 for search: '"Chin Ting Wu"'
Author:
Inci, Ahmet; Virupaksha, Siri Garudanagiri; Jain, Aman; Chin, Ting-Wu; Thallam, Venkata Vivek; Ding, Ruizhou; Marculescu, Diana
As the machine learning and systems communities strive to achieve higher energy efficiency through custom deep neural network (DNN) accelerators, varied precision or quantization levels, and model compression techniques, there is a need for design space…
External link:
http://arxiv.org/abs/2206.15463
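The kind of design-space sweep such tools automate can be pictured with a toy sketch. The cost and error models below are invented placeholders for illustration only, not the estimators from the paper:

```python
# Toy design-space sweep over precision (bit-width) and channel-width
# settings. energy_proxy and error_proxy are made-up stand-ins for the
# analytical models a real design-space-exploration tool would provide.
from itertools import product

def energy_proxy(bits: int, width: float) -> float:
    # Assume MAC energy grows roughly with bit-width squared and
    # compute grows with the square of the channel-width multiplier.
    return (bits ** 2) * (width ** 2)

def error_proxy(bits: int, width: float) -> float:
    # Assume error shrinks with more bits and with wider layers.
    return 1.0 / bits + 0.1 * (1.0 - width)

configs = product([4, 8, 16], [0.25, 0.5, 1.0])
ranked = sorted(configs, key=lambda c: (error_proxy(*c), energy_proxy(*c)))
for bits, width in ranked[:3]:
    print(f"bits={bits} width={width} "
          f"err~{error_proxy(bits, width):.3f} energy~{energy_proxy(bits, width):.1f}")
```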
Machine learning (ML) has entered the mobile era, where an enormous number of ML models are deployed on edge devices. However, running common ML models on edge devices continuously may generate excessive heat from the computation, forcing the device to…
External link:
http://arxiv.org/abs/2206.10849
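As a rough illustration of heat-aware execution (a generic sketch, not this paper's method), a duty-cycling loop can pause inference whenever an on-device temperature sensor runs hot. The sysfs path and thresholds are Linux-specific assumptions, and run_inference() is a hypothetical placeholder:

```python
# Minimal duty-cycling loop: back off when the SoC runs hot.
import time

THERMAL_ZONE = "/sys/class/thermal/thermal_zone0/temp"  # millidegrees C; varies by device
HOT_C, COOL_C = 70.0, 55.0  # illustrative thresholds

def read_temp_c() -> float:
    with open(THERMAL_ZONE) as f:
        return int(f.read().strip()) / 1000.0

def run_inference() -> None:
    time.sleep(0.01)  # stand-in for one model invocation

for _ in range(1000):  # bounded demo loop
    if read_temp_c() >= HOT_C:
        # Sleep until the device cools below the lower threshold.
        while read_temp_c() > COOL_C:
            time.sleep(1.0)
    run_inference()
```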
Author:
Chin-Ting Wu, Amy Spallone, Sherry Cantu, Todd Treangen, William Shropshire, Micah Bhatti, Israel Glover, Xiaojun Liu, Samuel Shelburne, Awdhesh Kalia
Published in:
Antimicrobial Stewardship & Healthcare Epidemiology, Vol 4, Pp s106-s106 (2024)
Background: Current epidemiological methods have limitations in identifying transmission of bacteria causing healthcare-associated infections (HAIs). Recent whole genome sequencing (WGS) studies found that genetically related strains can cause HAIs w…
External link:
https://doaj.org/article/6bdcaafb38224010a4f465a94667aaaf
Author:
Sung-Ching Pan, Kuan-Yin Lin, Ying-Chieh Liu, Chin-Ting Wu, Ling Ting, Shu-Yuan Ho, Yu-Shan Huang, Yee-Chun Chen, Jia-Horng Kao
Published in:
Journal of the Formosan Medical Association, Vol 123, Iss 1, Pp 45-54 (2024)
Background: The role of environmental contamination in COVID-19 transmission within hospitals is still of interest due to the significant impact of outbreaks globally. However, there is a scarcity of data regarding the utilization of environmental sampling…
External link:
https://doaj.org/article/81528607c25d42c2ae9fe845a375b166
Optimizing the channel counts for different layers of a CNN has shown great promise in improving the efficiency of CNNs at test time. However, these methods often introduce large computational overhead (e.g., an additional 2x FLOPs of standard training)…
External link:
http://arxiv.org/abs/2104.13255
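For context, a common low-overhead heuristic for choosing per-layer channel counts ranks channels by the magnitude of their BatchNorm scales (gamma) and keeps the top fraction. The PyTorch sketch below shows that generic idea, not necessarily the linked paper's method:

```python
# Rank channels per BatchNorm layer by |gamma| and build keep-masks.
import torch
import torch.nn as nn

def channel_keep_masks(model: nn.Module, keep_ratio: float = 0.5) -> dict:
    masks = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            gamma = m.weight.detach().abs()
            k = max(1, int(keep_ratio * gamma.numel()))
            mask = torch.zeros_like(gamma, dtype=torch.bool)
            mask[torch.topk(gamma, k).indices] = True
            masks[name] = mask  # True = keep this channel
    return masks

net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
print(channel_keep_masks(net, keep_ratio=0.25))
```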
Weight quantization for deep ConvNets has shown promising results for applications such as image classification and semantic segmentation, and is especially important for applications where memory storage is limited. However, when aiming for quantization…
External link:
http://arxiv.org/abs/2008.09916
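A minimal sketch of uniform, symmetric, per-tensor weight quantization; the bit-widths and rounding scheme are generic choices rather than the paper's:

```python
# Fake-quantize a weight tensor to a given bit-width and report the error.
import torch

def quantize_weights(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
    scale = w.abs().max() / qmax        # per-tensor scale
    q = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return q * scale                    # dequantized ("fake quantized") weights

w = torch.randn(64, 64)
for b in (8, 4, 2):
    err = (quantize_weights(w, b) - w).abs().mean()
    print(f"{b}-bit mean abs weight error: {err:.4f}")
```

As expected, the reconstruction error grows as the bit-width shrinks, which is why low-bit quantization usually needs retraining or calibration.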
Slimmable neural networks provide a flexible trade-off front between prediction error and computational requirement (such as the number of floating-point operations or FLOPs) with the same storage requirement as a single model. They are useful for re…
External link:
http://arxiv.org/abs/2007.11752
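The core slimmable idea, one stored weight tensor evaluated at several widths by slicing its leading channels, can be sketched as follows. The layer is deliberately simplified relative to the slimmable-networks literature (e.g., no switchable BatchNorm):

```python
# One weight tensor, many widths: narrower settings use fewer rows.
import torch
import torch.nn as nn

class SlimmableLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor, width: float = 1.0) -> torch.Tensor:
        out = max(1, int(width * self.weight.shape[0]))
        # Same storage as the full model; a narrower width just
        # uses the first `out` output channels.
        return x @ self.weight[:out].t() + self.bias[:out]

layer = SlimmableLinear(16, 8)
x = torch.randn(2, 16)
for w in (0.25, 0.5, 1.0):
    print(w, layer(x, width=w).shape)  # FLOPs scale with width, storage does not
```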
Fine-tuning through knowledge transfer from a model pre-trained on a large-scale dataset is a widespread approach to effectively building models on small-scale datasets. In this work, we show that a recent adversarial attack designed for transfer learning…
External link:
http://arxiv.org/abs/2002.02998
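A standard fine-tuning recipe of the kind this abstract builds on, sketched with PyTorch/torchvision (weights API as of torchvision >= 0.13): freeze an ImageNet-pretrained backbone and train only a new classification head on the small target dataset:

```python
# Transfer learning: reuse pretrained features, retrain only the head.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # pretrained backbone
for p in model.parameters():
    p.requires_grad = False                       # freeze transferred features
model.fc = nn.Linear(model.fc.in_features, 10)    # new 10-class head

trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```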
Pruning convolutional filters has demonstrated its effectiveness in compressing ConvNets. Prior art in filter pruning requires users to specify a target model complexity (e.g., model size or FLOP count) for the resulting architecture. However, determining…
External link:
http://arxiv.org/abs/1904.12368
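The conventional setup this abstract contrasts with can be sketched as L1-norm filter pruning under a user-specified keep ratio. The sketch shows that baseline practice; the linked paper's point is precisely that choosing this target is hard:

```python
# Prune conv filters with the smallest L1 norms, keeping a fixed ratio.
import torch
import torch.nn as nn

def prune_filters_l1(conv: nn.Conv2d, keep_ratio: float) -> nn.Conv2d:
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # L1 norm per filter
    k = max(1, int(keep_ratio * conv.out_channels))
    keep = torch.topk(scores, k).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, k, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

conv = nn.Conv2d(3, 32, 3)
print(prune_filters_l1(conv, keep_ratio=0.5))  # 16 filters remain
```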
To improve the throughput and energy efficiency of Deep Neural Networks (DNNs) on customized hardware, lightweight neural networks constrain the weights of DNNs to be a limited combination (denoted as $k\in\{1,2\}$) of powers of 2. In such networks, …
External link:
http://arxiv.org/abs/1904.02835
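A minimal sketch of projecting weights onto sums of at most k powers of 2, in the spirit of the lightweight networks described here. The greedy projection and the exponent range are illustrative choices, not the paper's exact scheme:

```python
# Greedily approximate each weight by up to k signed powers of 2.
import torch

def to_powers_of_two(w: torch.Tensor, k: int = 2,
                     min_exp: int = -8, max_exp: int = 0) -> torch.Tensor:
    out = torch.zeros_like(w)
    residual = w.clone()
    for _ in range(k):  # peel off one signed power of 2 per round
        sign = torch.sign(residual)
        mag = residual.abs().clamp_min(2.0 ** (min_exp - 1))
        exp = torch.clamp(torch.round(torch.log2(mag)), min_exp, max_exp)
        term = sign * (2.0 ** exp)  # sign is 0 where the residual is 0
        out = out + term
        residual = residual - term
    return out

w = torch.randn(5) * 0.5
print(w)
print(to_powers_of_two(w, k=1))  # single power of 2 per weight
print(to_powers_of_two(w, k=2))  # sum of two powers, finer approximation
```

Such weights are hardware-friendly because multiplying by a power of 2 is a bit shift, so a k-term weight costs k shifts and adds instead of a full multiplication.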