Showing 1 - 10 of 43 for search: '"Lin, Yuke"'
This paper proposes a novel Sequence-to-Sequence Neural Diarization (SSND) framework to perform online and offline speaker diarization. It is developed from the sequence-to-sequence architecture of our previous target-speaker voice activity detection…
External link:
http://arxiv.org/abs/2411.13849
In this paper, we provide a large audio-visual speaker recognition dataset, VoxBlink2, which includes approximately 10M utterances with videos from 110K+ speakers in the wild. This dataset represents a significant expansion over the VoxBlink dataset…
External link:
http://arxiv.org/abs/2407.11510
Author:
Li, Ze, Lin, Yuke, Yao, Tian, Suo, Hongbin, Zhang, Pengyuan, Ren, Yanzhen, Cai, Zexin, Nishizaki, Hiromitsu, Li, Ming
Voice conversion (VC) systems can transform audio to mimic another speaker's voice, thereby attacking speaker verification (SV) systems. However, ongoing studies on source speaker verification (SSV) are hindered by limited data availability and metho…
External link:
http://arxiv.org/abs/2406.04951
This work aims to promote Chinese opera research in both musical and speech domains, with a primary focus on overcoming the data limitations. We introduce KunquDB, a relatively large-scale, well-annotated audio-visual dataset comprising 339 speakers…
External link:
http://arxiv.org/abs/2403.13356
Multi-objective Progressive Clustering for Semi-supervised Domain Adaptation in Speaker Verification
Utilizing the pseudo-labeling algorithm with large-scale unlabeled data becomes crucial for semi-supervised domain adaptation in speaker verification tasks. In this paper, we propose a novel pseudo-labeling method named Multi-objective Progressive Clustering…
External link:
http://arxiv.org/abs/2310.04760
It is widely acknowledged that discriminative representations for speaker verification can be extracted from verbal speech. However, how much speaker information non-verbal vocalization carries is still a puzzle. This paper explores speaker verification…
External link:
http://arxiv.org/abs/2309.14109
This paper is the system description of the DKU-MSXF system for Track 1, Track 2, and Track 3 of the VoxCeleb Speaker Recognition Challenge 2023 (VoxSRC-23). For Track 1, we utilize a network structure based on ResNet for training. By constructing a…
External link:
http://arxiv.org/abs/2308.08766
This paper describes the DKU-MSXF submission to Track 4 of the VoxCeleb Speaker Recognition Challenge 2023 (VoxSRC-23). Our system pipeline contains voice activity detection, clustering-based diarization, overlapped speech detection, and target-speaker…
External link:
http://arxiv.org/abs/2308.07595
In this paper, we introduce a large-scale and high-quality audio-visual speaker verification dataset, named VoxBlink. We propose an innovative and robust automatic audio-visual data mining pipeline to curate this dataset, which contains 1.45M utterances…
External link:
http://arxiv.org/abs/2308.07056
The success of automatic speaker verification shows that discriminative speaker representations can be extracted from neutral speech. However, as a kind of non-verbal voice, laughter should intuitively also carry speaker information. Thus, this paper…
External link:
http://arxiv.org/abs/2210.16028