Showing 1 - 10
of 559
for search: '"Chen Yu Hua"'
Published in:
Australasian Orthodontic Journal, Vol 39, Iss 1, Pp 96-108 (2023)
This case report describes the lingual orthodontic treatment of a 28-year-old female patient who presented with a bimaxillary protrusion malocclusion, a hyperdivergent facial pattern, mentalis strain, and a ‘gummy’ smile. To achieve favourable oc
External link:
https://doaj.org/article/e3a64a7c389d4b7ba649def6c8de6dea
Author:
Yeh, Yen-Tung, Chen, Yu-Hua, Cheng, Yuan-Chiao, Wu, Jui-Te, Fu, Jun-Jie, Yeh, Yi-Fan, Yang, Yi-Hsuan
Neural network models for guitar amplifier emulation, while being effective, often demand high computational cost and lack interpretability. Drawing ideas from physical amplifier design, this paper aims to address these issues with a new differentiab
External link:
http://arxiv.org/abs/2408.11405
Author:
Chen, Yu-Hua, Yeh, Yen-Tung, Cheng, Yuan-Chiao, Wu, Jui-Te, Ho, Yu-Hsiang, Jang, Jyh-Shing Roger, Yang, Yi-Hsuan
Replicating analog device circuits through neural audio effect modeling has garnered increasing interest in recent years. Existing work has predominantly focused on a one-to-one emulation strategy, modeling specific devices individually. In this pape
External link:
http://arxiv.org/abs/2407.10646
Author:
Chen, Yu-Hua, Choi, Woosung, Liao, Wei-Hsiang, Martínez-Ramírez, Marco, Cheuk, Kin Wai, Mitsufuji, Yuki, Jang, Jyh-Shing Roger, Yang, Yi-Hsuan
Recent years have seen increasing interest in applying deep learning methods to the modeling of guitar amplifiers or effect pedals. Existing methods are mainly based on the supervised approach, requiring temporally-aligned data pairs of unprocessed a
External link:
http://arxiv.org/abs/2406.15751
Published in:
EvoMUSART: International Conference on Computational Intelligence in Music, Sound, Art and Design (Part of EvoStar) 2023
Recently, symbolic music generation with deep learning techniques has witnessed steady improvements. Most works on this topic focus on MIDI representations, but less attention has been paid to symbolic music generation using guitar tablatures (tabs)
External link:
http://arxiv.org/abs/2302.05393
In this paper, we propose a new dataset named EGDB, that contains transcriptions of the electric guitar performance of 240 tablatures rendered with different tones. Moreover, we benchmark the performance of two well-known transcription models propos
External link:
http://arxiv.org/abs/2202.09907
Published in:
In Dyes and Pigments August 2024 227
Author:
Sun, Guo, Zhang, Ren-Wei-Yang, Chen, Xu-Yang, Chen, Yu-Hua, Zou, Liang-Hua, Zhang, Jian, Li, Ping-Gui, Wang, Kai, Hu, Zhi-Gang
Published in:
In Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy 15 November 2024 321
Due to advances in deep learning, the performance of automatic beat and downbeat tracking in musical audio signals has seen great improvement in recent years. In training such deep learning based models, data augmentation has been found an important
External link:
http://arxiv.org/abs/2106.08703
Author:
Wan, Hao, Yang, Yan-di, Zhang, Qi, Chen, Yu-hua, Hu, Xi-min, Huang, Yan-xia, Shang, Lei, Xiong, Kun
Published in:
In Heliyon 15 January 2024 10(1)