A Crowdsourced Open-Source Kazakh Speech Corpus and Initial Speech Recognition Baseline
Author: | Mukhamet Nurpeiissov, Yerbolat Khassanov, Saida Mussakhojayeva, Alen Adiyev, Almas Mirzakhmetov, Huseyin Atakan Varol |
Year of publication: | 2020 |
Subject: | FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering; Computer Science - Computation and Language (cs.CL); Computer Science - Sound (cs.SD); Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS); Speech recognition; Speech corpus; Kazakh language; Word error rate; Character (computing); Test set; Data collection; Preprocessor; Quality (business); Computer science; language |
Source: | EACL |
DOI: | 10.48550/arxiv.2009.10334 |
Description: | We present an open-source speech corpus for the Kazakh language. The Kazakh speech corpus (KSC) contains around 332 hours of transcribed audio comprising over 153,000 utterances spoken by participants of both genders and from different regions and age groups. It was carefully inspected by native Kazakh speakers to ensure high quality. The KSC is the largest publicly available database developed to advance various Kazakh speech and language processing applications. In this paper, we first describe the data collection and preprocessing procedures, followed by the database specifications. We also share our experience and the challenges faced during the database construction, which might benefit other researchers planning to build a speech corpus for a low-resource language. To demonstrate the reliability of the database, we performed preliminary speech recognition experiments. The experimental results imply that the quality of the audio and transcripts is promising (2.8% character error rate and 8.7% word error rate on the test set; an illustrative sketch of how such rates are computed follows the record below). To enable experiment reproducibility and ease corpus usage, we also released an ESPnet recipe for our speech recognition models. Comment: 10 pages, 5 figures, 4 tables, accepted at EACL 2021 |
Database: | OpenAIRE |
External link: |
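The description reports a 2.8% character error rate (CER) and an 8.7% word error rate (WER) on the test set. Both are standard edit-distance metrics; the minimal Python sketch below only illustrates how such rates are conventionally computed. It is not taken from the paper or from the released ESPnet recipe, the function names are our own, and the example transcripts are invented.

```python
# Minimal sketch (not from the paper or its ESPnet recipe): computing WER and CER
# from a reference transcript and an ASR hypothesis via Levenshtein (edit) distance.

def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (single-row DP)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, start=1):
            cur = min(d[j] + 1,            # deletion of a reference token
                      d[j - 1] + 1,        # insertion of a hypothesis token
                      prev + (r != h))     # substitution (0 cost if tokens match)
            prev, d[j] = d[j], cur
    return d[-1]

def error_rate(reference, hypothesis, unit="word"):
    """WER (unit='word') or CER (unit='char'), as edits per reference token."""
    tokenize = str.split if unit == "word" else list
    ref, hyp = tokenize(reference), tokenize(hypothesis)
    return edit_distance(ref, hyp) / len(ref)

# Invented example transcripts (not drawn from the KSC):
reference  = "біз қазақ тілінде сөйлейміз"
hypothesis = "біз қазақ тілінде сөйлейді"
print(f"WER = {error_rate(reference, hypothesis, 'word'):.2%}")  # 25.00%: 1 of 4 words wrong
print(f"CER = {error_rate(reference, hypothesis, 'char'):.2%}")
```

In practice, toolkits such as ESPnet report these scores as part of a recipe's evaluation stage; the sketch only spells out the underlying arithmetic.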