Showing 1 - 10 of 10
for the search: '"Nattapong Kurpukdee"'
Author:
Nattapong Kurpukdee, Kwanchiva Thangthai, Vataya Chunwijitra, Patcharika Chootrakool, Sawit Kasuriya
Published in:
2022 25th Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA).
Published in:
JCSSE
We propose a new automatic speech recognition (ASR) service architecture that is extendable to a medium-scale ASR service and more flexible than the previous architecture. The improvement aims to substitute the distributed processing approach with an asyn
Published in:
2021 18th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON).
Due to various channel distortions and the limited amount of real call-center data, simulated data is an essential resource for training an appropriate acoustic model for automatic call transcription. In this work, in the case that in-domain telephony data are
Published in:
2019 14th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP).
The performance of voice activity detection (VAD) degrades drastically when the observed speech signals come from unseen noisy environments. In this paper, we propose denoising-based VAD to cope with unseen noises. The proposed VAD system mainly con
Author:
Phuttapong Sertsi, Surasak Boonkla, Vataya Chunwijitra, Nattapong Kurpukdee, Sawit Kasuriya, Sila Chunwijitra
Published in:
O-COCOSDA
Deploying an automatic speech recognition (ASR) system in real scenarios reveals many difficulties in two main areas: processing time and resource demands. These obstructions are major issues in deploying an ASR system. This pap
Author:
Takao Kobayashi, Nattapong Kurpukdee, Tomoki Koriyama, Sawit Kasuriya, Chai Wutiwiwatchai, Poonlap Lamsrichan
Published in:
APSIPA
In this paper, we propose a speech emotion recognition technique using a convolutional long short-term memory (LSTM) recurrent neural network (ConvLSTM-RNN) as a phoneme-based feature extractor operating on the raw input speech signal. In the proposed technique, C
Author:
Vataya Chunwijitra, Surasak Boonkla, Phuttapong Sertsi, Nattapong Kurpukdee, Chai Wutiwiwatchai
Published in:
APSIPA
Voice activity detection (VAD), used to classify speech/non-speech sections of a speech signal, still suffers in noisy environments. In this paper, we combine the modulation spectrum (MS) and the long short-term memory recurrent neural network
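The frame-level speech/non-speech decision that VAD makes can be sketched with a minimal short-time-energy classifier; this is a simplified stand-in for the MS + LSTM-RNN approach the abstract describes, with all signal values and the threshold chosen for illustration only:

```python
import numpy as np

def energy_vad(signal, frame_len=400, threshold_db=-30.0):
    """Label each frame as speech (True) or non-speech (False)
    by thresholding its short-time log energy. A toy stand-in for
    the MS + LSTM-RNN classifier in the abstract."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    # Mean-square energy per frame, converted to dB (epsilon avoids log(0)).
    energy = np.sum(frames ** 2, axis=1) / frame_len
    energy_db = 10.0 * np.log10(energy + 1e-12)
    return energy_db > threshold_db

# Toy input: one near-silent frame followed by one frame of a loud tone.
rng = np.random.default_rng(0)
silence = 0.001 * rng.standard_normal(400)
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(400) / 16000)
decisions = energy_vad(np.concatenate([silence, tone]))
print(decisions)  # → [False  True]
```

A real system replaces the energy threshold with a learned classifier precisely because this simple rule fails under the noisy conditions the paper targets.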
Author:
Poonlap Lamsrichan, Vataya Chunwijitra, Sawit Kasuriya, Chai Wutiwiwatchai, Nattapong Kurpukdee
Published in:
2017 8th International Conference of Information and Communication Technology for Embedded Systems (IC-ICTES).
In this paper, the efficiency of Support Vector Machine (SVM) and Binary Support Vector Machine (BSVM) techniques in utterance-based emotion recognition is compared. Acoustic features including energy, Mel-frequency cepstral coefficients (MFC
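The utterance-based SVM setup can be sketched as follows; the feature values are synthetic stand-ins for the energy/MFCC statistics the abstract mentions, and scikit-learn's `SVC` is an assumed implementation, not the one used in the paper:

```python
import numpy as np
from sklearn.svm import SVC

# Toy utterance-level features: rows are utterances, columns stand in for
# energy / MFCC summary statistics (hypothetical, well-separated clusters).
rng = np.random.default_rng(1)
neutral = rng.normal(loc=0.0, scale=0.3, size=(20, 4))
angry = rng.normal(loc=2.0, scale=0.3, size=(20, 4))
X = np.vstack([neutral, angry])
y = np.array([0] * 20 + [1] * 20)  # 0 = neutral, 1 = angry

# An RBF-kernel SVM trained on whole-utterance feature vectors.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([[0.1, -0.2, 0.0, 0.3], [1.9, 2.1, 2.0, 1.8]]))  # → [0 1]
```

A "binary" SVM variant (BSVM) would decompose a multi-class emotion problem into per-class two-way decisions; with only two classes, as here, the two formulations coincide.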
Author:
Chai Wutiwiwatchai, Nattapong Kurpukdee, Vataya Chunwijitra, Phuttapong Sertsi, Sila Chunwijitra, Ananlada Chotimongkol
Published in:
2015 International Computer Science and Engineering Conference (ICSEC).
This paper presents an improvement of a distributed Thai speech recognizer, aiming to reduce system response time as measured by the real-time factor (RTF) for a better user experience. The system is designed based on a collaborative multi-agent and
Author:
Sumonmas Thatphithakkul, Nattapong Kurpukdee, Vataya Chunwijitra, Ananlada Chotimongkol, Chai Wutiwiwatchai
Published in:
O-COCOSDA/CASLRE
We explore the use of social media data to reduce the effort in developing a conversational speech corpus. The LOTUSSOC corpus is created by recording Twitter messages through a mobile application. In the first phase, which took around one month, 172