Showing 1 - 8 of 8 for query: '"Sasan Asadiabadi"'
Published in:
HKIE Transactions. 30:1-14
Worldwide there are plenty of aged Reinforced Concrete (RC) buildings in need of thorough inspections. Cracks, delamination, stains, leakages, debonding and moisture ingressions are common defects found in RC structures. Such problems are typically …
Published in:
Proceedings, Vol 27, Iss 1, p 18 (2019)
In Hong Kong, there is a great abundance of aged buildings and infrastructures for which a re-assessment of the current status is needed. Water exfiltrations/infiltrations, deteriorating insulations, thermal bridges and regions of failure are among the …
External link:
https://doaj.org/article/33a839b6d2df481da83fadce462a9b70
Author:
Engin Erzin, Sasan Asadiabadi
Published in:
IEEE/ACM Transactions on Audio, Speech, and Language Processing
Recent advances in real-time Magnetic Resonance Imaging (rtMRI) provide an invaluable tool to study speech articulation. In this paper, we present an effective deep learning approach for supervised detection and tracking of vocal tract contours in a …
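The excerpt gives no architectural detail, so the following is only a rough, hypothetical sketch of how a supervised contour detector of this kind can be set up: a small fully convolutional encoder-decoder (PyTorch) that maps an rtMRI frame to per-landmark heatmaps. The layer sizes, landmark count, and input resolution are illustrative assumptions, not the authors' published model.

```python
# Hypothetical sketch: a small fully convolutional encoder-decoder that maps a
# single-channel rtMRI frame to K per-landmark heatmaps. Layer sizes, K, and the
# input resolution are illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

class ContourHeatmapNet(nn.Module):
    def __init__(self, num_landmarks: int = 30):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 1/2 resolution
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 1/4 resolution
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_landmarks, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) rtMRI frame -> (batch, K, H, W) landmark heatmaps
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    net = ContourHeatmapNet(num_landmarks=30)
    frame = torch.randn(1, 1, 84, 84)          # dummy 84x84 rtMRI frame
    heatmaps = net(frame)
    # A landmark estimate is the argmax location of each heatmap channel.
    coords = [divmod(int(h.argmax()), heatmaps.shape[-1]) for h in heatmaps[0]]
    print(heatmaps.shape, coords[0])
```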
Published in:
2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)
In human-to-computer interaction, facial animation in synchrony with affective speech can deliver more naturalistic conversational agents. In this paper, we present a two-stage deep learning approach for affective speech driven facial shape animation …
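The two stages are not spelled out in this excerpt; purely as an illustration, the sketch below shows one plausible reading of a two-stage pipeline: a first network infers an utterance-level affect embedding from acoustic features, and a second network maps the features, conditioned on that embedding, to facial shape parameters. All module names, dimensions, and the conditioning scheme are assumptions, not the paper's architecture.

```python
# Hypothetical sketch of a generic two-stage pipeline for speech-driven facial
# shape animation: stage 1 infers an affect embedding from acoustic features,
# stage 2 maps the features plus that embedding to facial shape parameters.
# Feature/shape dimensions and layer sizes are assumptions, not the paper's model.
import torch
import torch.nn as nn

class AffectEncoder(nn.Module):
    """Stage 1: utterance-level affect embedding from a sequence of acoustic features."""
    def __init__(self, feat_dim=40, affect_dim=16):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, 64, batch_first=True)
        self.proj = nn.Linear(64, affect_dim)

    def forward(self, feats):                      # feats: (B, T, feat_dim)
        _, h = self.rnn(feats)                     # h: (1, B, 64)
        return self.proj(h[-1])                    # (B, affect_dim)

class ShapeDecoder(nn.Module):
    """Stage 2: frame-wise facial shape parameters conditioned on affect."""
    def __init__(self, feat_dim=40, affect_dim=16, shape_dim=68 * 2):
        super().__init__()
        self.rnn = nn.GRU(feat_dim + affect_dim, 128, batch_first=True)
        self.out = nn.Linear(128, shape_dim)

    def forward(self, feats, affect):              # affect: (B, affect_dim)
        cond = affect.unsqueeze(1).expand(-1, feats.shape[1], -1)
        y, _ = self.rnn(torch.cat([feats, cond], dim=-1))
        return self.out(y)                         # (B, T, shape_dim)

if __name__ == "__main__":
    feats = torch.randn(2, 100, 40)                # dummy 100-frame acoustic features
    affect = AffectEncoder()(feats)
    shapes = ShapeDecoder()(feats, affect)
    print(affect.shape, shapes.shape)              # (2, 16) (2, 100, 136)
```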
Automatic Vocal Tract Landmark Tracking in rtMRI Using Fully Convolutional Networks and Kalman Filter
Author:
Sasan Asadiabadi, Engin Erzin
Published in:
2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Vocal tract (VT) contour detection in real time MRI is a pre-stage to many speech production related applications such as articulatory analysis and synthesis. In this work, we present an algorithm for robust detection of keypoints on the vocal tract …
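The title above names the two ingredients: per-frame detections from a fully convolutional network and temporal smoothing with a Kalman filter. As a minimal illustration of the second ingredient only, the sketch below smooths noisy per-frame (x, y) detections of a single landmark with a constant-velocity Kalman filter; the motion model and noise covariances are assumed values, not the paper's settings.

```python
# Hypothetical sketch: temporal smoothing of per-frame keypoint detections with
# a constant-velocity Kalman filter, run independently per landmark.
# Noise covariances and the motion model are illustrative assumptions.
import numpy as np

def kalman_track(measurements, q=1e-2, r=1.0):
    """measurements: (T, 2) noisy (x, y) detections of one landmark per frame."""
    dt = 1.0
    F = np.array([[1, 0, dt, 0],                 # state: [x, y, vx, vy]
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    H = np.array([[1, 0, 0, 0],                  # we observe position only
                  [0, 1, 0, 0]])
    Q = q * np.eye(4)                            # process noise
    R = r * np.eye(2)                            # measurement noise
    x = np.array([*measurements[0], 0.0, 0.0])   # initial state from first detection
    P = np.eye(4)
    smoothed = []
    for z in measurements:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new detection
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        smoothed.append(x[:2].copy())
    return np.array(smoothed)

if __name__ == "__main__":
    t = np.linspace(0, 1, 50)
    truth = np.stack([10 * t, 5 * t], axis=1)            # a landmark drifting linearly
    noisy = truth + np.random.randn(50, 2) * 0.5          # simulated detector jitter
    print(kalman_track(noisy)[-1], truth[-1])
```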
Published in:
2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)
In this paper we present a deep learning multimodal approach for speech driven generation of face animations. Training a speaker independent model, capable of generating different emotions of the speaker, is crucial for realistic animations. …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::17ea49fc6ec2c41926af0712fe67e363
http://cdm21054.contentdm.oclc.org/cdm/ref/collection/IR/id/8562
Author:
Engin Erzin, Sasan Asadiabadi
Published in:
2018 IEEE Spoken Language Technology Workshop (SLT)
In this paper we present a data driven vocal tract area function (VTAF) estimation using Deep Neural Networks (DNN). We approach the VTAF estimation problem based on sequence to sequence learning neural networks, where regression over a sliding window … (an illustrative sketch of sliding-window regression follows after this record)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::044ee45ebcf5effac1c078402c51b2df
http://cdm21054.contentdm.oclc.org/cdm/ref/collection/IR/id/8568
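As an illustration of sliding-window regression in general, not of the published DNN, the sketch below predicts a vocal tract area value per section for every frame from a window of surrounding acoustic feature frames. Window length, feature and section counts, and the network itself are assumptions.

```python
# Hypothetical sketch of sliding-window regression from acoustic features to a
# vocal tract area function (VTAF): each output frame is predicted from a window
# of surrounding feature frames. Window size, feature and VTAF dimensions, and
# the network are illustrative assumptions, not the published system.
import torch
import torch.nn as nn

class WindowedVTAFRegressor(nn.Module):
    def __init__(self, feat_dim=40, window=11, n_sections=32):
        super().__init__()
        self.window = window
        self.net = nn.Sequential(
            nn.Linear(feat_dim * window, 256), nn.ReLU(),
            nn.Linear(256, n_sections),        # one area value per vocal tract section
        )

    def forward(self, feats):                  # feats: (T, feat_dim)
        half = self.window // 2
        padded = torch.cat([feats[:1].repeat(half, 1), feats,
                            feats[-1:].repeat(half, 1)])      # edge padding
        windows = padded.unfold(0, self.window, 1)             # (T, feat_dim, window)
        return self.net(windows.reshape(feats.shape[0], -1))   # (T, n_sections)

if __name__ == "__main__":
    feats = torch.randn(200, 40)               # dummy acoustic feature sequence
    vtaf = WindowedVTAFRegressor()(feats)
    print(vtaf.shape)                          # torch.Size([200, 32])
```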
Author:
Engin Erzin, Sasan Asadiabadi
Published in:
INTERSPEECH