Showing 1 - 10 of 22 for search: '"Trần Quốc Huy"'
Author:
Hyder, Syed Waleed, Usama, Muhammad, Zafar, Anas, Naufil, Muhammad, Fateh, Fawad Javed, Konin, Andrey, Zia, M. Zeeshan, Tran, Quoc-Huy
This paper presents a 2D skeleton-based action segmentation method with applications in fine-grained human activity recognition. In contrast with state-of-the-art methods which directly take sequences of 3D skeleton coordinates as inputs and apply Graph Convolutional Networks (GCNs) …
External link:
http://arxiv.org/abs/2309.06462
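The contrast drawn in this abstract is between models that consume raw 3D joint coordinates and a 2D skeleton-based alternative. As a hedged illustration only (the shapes, sigma, and function name are assumptions, not this paper's pipeline), the sketch below rasterizes 2D keypoints into per-joint Gaussian heatmaps, one common way to turn skeletons into image-like inputs for convolutional models:

import numpy as np

def keypoints_to_heatmaps(kpts, H=64, W=64, sigma=2.0):
    # Rasterize 2D keypoints (J, 2), normalized to [0, 1], into (J, H, W) Gaussian heatmaps.
    ys, xs = np.mgrid[0:H, 0:W]
    maps = np.zeros((len(kpts), H, W), dtype=np.float32)
    for j, (x, y) in enumerate(kpts):
        cx, cy = x * (W - 1), y * (H - 1)
        maps[j] = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return maps

heatmaps = keypoints_to_heatmaps(np.random.rand(17, 2))  # e.g. 17 COCO-style joints
print(heatmaps.shape)  # (17, 64, 64)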
Author:
Phan Anh Tú, Trần Quốc Huy
Published in:
Tạp chí Khoa học Đại học Cần Thơ (Can Tho University Journal of Science), Iss 48 (2017)
The objective of the study is to analyze the factors influencing the entrepreneurial intention of 166 students at Can Tho University of Technology. Extending the theory of planned behavior …
External link:
https://doaj.org/article/3108c69e23654f7ab2ab9147ea651319
Author:
Tran, Quoc-Huy, Mehmood, Ahmed, Ahmed, Muhammad, Naufil, Muhammad, Zafar, Anas, Konin, Andrey, Zia, M. Zeeshan
This paper presents an unsupervised transformer-based framework for temporal activity segmentation which leverages not only frame-level cues but also segment-level cues. This is in contrast with previous methods which often rely on frame-level information …
External link:
http://arxiv.org/abs/2305.19478
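To make the frame-level versus segment-level distinction concrete, here is a minimal sketch, assuming frame embeddings are already available; KMeans pseudo-labels stand in for segments, and all shapes and names are illustrative rather than the paper's transformer-based design:

import numpy as np
from sklearn.cluster import KMeans

T, D, K = 300, 128, 5                        # frames, feature dim, pseudo-actions (assumed)
frames = np.random.randn(T, D).astype(np.float32)

# Frame-level cue: per-frame pseudo-labels from clustering.
labels = KMeans(n_clusters=K, n_init=10).fit_predict(frames)

# Segment-level cue: one prototype per pseudo-action, pooled over its frames.
prototypes = np.stack([frames[labels == k].mean(axis=0) for k in range(K)])
print(prototypes.shape)  # (5, 128)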
Author:
Tran, Quoc-Huy, Ahmed, Muhammad, Popattia, Murad, Ahmed, M. Hassan, Konin, Andrey, Zia, M. Zeeshan
This paper presents a self-supervised temporal video alignment framework which is useful for several fine-grained human activity understanding applications. In contrast with the state-of-the-art method of CASA, where sequences of 3D skeleton coordinates …
External link:
http://arxiv.org/abs/2305.19480
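Temporal video alignment, the core task here, can be illustrated with plain dynamic time warping between two feature sequences. This is a generic baseline sketch, not CASA or this paper's loss, and the sequence shapes are assumed:

import numpy as np

def dtw_cost(a, b):
    # Plain dynamic time warping between feature sequences a: (Ta, D) and b: (Tb, D).
    dist = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    acc = np.full((len(a) + 1, len(b) + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[len(a), len(b)]

print(dtw_cost(np.random.randn(20, 32), np.random.randn(25, 32)))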
We present a novel method for few-shot video classification, which performs appearance and temporal alignments. In particular, given a pair of query and support videos, we conduct appearance alignment via frame-level feature matching to achieve the appearance …
External link:
http://arxiv.org/abs/2207.10785
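The abstract states that appearance alignment is done via frame-level feature matching. A minimal sketch of that idea, assuming L2-normalized per-frame features and treating the best-matching support frame per query frame as the appearance score (the exact scoring rule is an assumption):

import torch
import torch.nn.functional as F

q = F.normalize(torch.randn(16, 256), dim=-1)    # query frame features (Tq, D), assumed shapes
s = F.normalize(torch.randn(16, 256), dim=-1)    # support frame features (Ts, D)

sim = q @ s.T                                    # (Tq, Ts) cosine similarities
appearance_score = sim.max(dim=1).values.mean()  # best support match per query frame, averaged
print(appearance_score.item())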
Author:
Khan, Hamza, Haresh, Sanjay, Ahmed, Awais, Siddiqui, Shakeeb, Konin, Andrey, Zia, M. Zeeshan, Tran, Quoc-Huy
We introduce a novel approach for temporal activity segmentation with timestamp supervision. Our main contribution is a graph convolutional network, which is learned in an end-to-end manner to exploit both frame features and connections between neighboring frames …
External link:
http://arxiv.org/abs/2206.15031
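The stated contribution is a graph convolutional network over frame features and connections between neighboring frames. Below is a minimal single-layer sketch over a temporal chain graph; the adjacency, normalization, and layer width are assumptions, not the paper's architecture:

import torch

T, D = 100, 64                      # frames and feature width (assumed)
x = torch.randn(T, D)               # per-frame features

A = torch.eye(T)                    # chain graph: self-loops plus temporal neighbours
idx = torch.arange(T - 1)
A[idx, idx + 1] = 1.0
A[idx + 1, idx] = 1.0
A = A / A.sum(dim=1, keepdim=True)  # row-normalize the adjacency

layer = torch.nn.Linear(D, D)
h = torch.relu(layer(A @ x))        # one graph-convolution step over the frame chain
print(h.shape)                      # torch.Size([100, 64])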
In this work, we propose to use out-of-distribution samples, i.e., unlabeled samples coming from outside the target classes, to improve few-shot learning. Specifically, we exploit the easily available out-of-distribution samples to drive the classifier …
External link:
http://arxiv.org/abs/2206.04679
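One hedged reading of using out-of-distribution samples to "drive the classifier" is to treat them as negatives for the class prototypes. The sketch below is only that reading, with every shape and the loss form assumed:

import torch
import torch.nn.functional as F

protos = F.normalize(torch.randn(5, 128), dim=-1)  # 5-way class prototypes (assumed)
ood = F.normalize(torch.randn(20, 128), dim=-1)    # unlabeled out-of-distribution features

# Penalize OOD features that land close to any class prototype,
# pushing the decision regions away from non-target data.
ood_loss = (ood @ protos.T).max(dim=1).values.clamp(min=0).mean()
print(ood_loss.item())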
We present a novel approach for unsupervised activity segmentation which uses video frame clustering as a pretext task and simultaneously performs representation learning and online clustering. This is in contrast with prior works where representation learning and clustering are often performed sequentially …
External link:
http://arxiv.org/abs/2105.13353
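"Online clustering" here means centroids are updated as frame features stream in, rather than re-clustering after representation learning finishes. A minimal sketch of such an online k-means update (learning rate, dimensions, and the update rule are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
D, K, lr = 64, 4, 0.1                        # feature dim, clusters, step size (assumed)
centroids = rng.standard_normal((K, D))

for _ in range(1000):                        # frames arrive one at a time
    f = rng.standard_normal(D)               # a frame embedding from the encoder
    k = np.argmin(((centroids - f) ** 2).sum(axis=1))
    centroids[k] += lr * (f - centroids[k])  # nudge the winning centroid toward it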
Author:
Haresh, Sanjay, Kumar, Sateesh, Coskun, Huseyin, Syed, Shahram Najam, Konin, Andrey, Zia, Muhammad Zeeshan, Tran, Quoc-Huy
We present a self-supervised approach for learning video representations using temporal video alignment as a pretext task, while exploiting both frame-level and video-level information. We leverage a novel combination of temporal alignment loss and temporal regularization …
External link:
http://arxiv.org/abs/2103.17260
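A common form of temporal alignment loss, offered here purely as a hedged example of the pretext task (the paper's actual combination with temporal regularization is not shown in this snippet), is soft nearest-neighbour matching between two frame-feature sequences:

import torch
import torch.nn.functional as F

def soft_nn_alignment(u, v, tau=0.1):
    # For each frame of u, form a softmax-weighted match in v and pull it toward u.
    sim = F.normalize(u, dim=-1) @ F.normalize(v, dim=-1).T / tau
    v_hat = sim.softmax(dim=1) @ v          # soft nearest neighbour of every u-frame
    return F.mse_loss(v_hat, u)

print(soft_nn_alignment(torch.randn(32, 128), torch.randn(40, 128)).item())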
Author:
Zhuang, Bingbing, Tran, Quoc-Huy
In this paper, we derive a new differential homography that can account for the scanline-varying camera poses in Rolling Shutter (RS) cameras, and demonstrate its application to carry out RS-aware image stitching and rectification at one stroke. Despite …
External link:
http://arxiv.org/abs/2008.09229
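The key idea is that a rolling-shutter image has a different camera pose per scanline. The sketch below only illustrates that idea by linearly blending two homographies down the image; the paper instead derives a differential formulation, so everything here (H0, H1, the blend) is an assumption for intuition:

import numpy as np

H0 = np.eye(3)                               # assumed warp at the first scanline
H1 = np.array([[1.0, 0.01, 2.0],             # assumed warp at the last scanline
               [0.0, 1.00, 0.0],
               [0.0, 0.00, 1.0]])

def scanline_homography(y, height):
    # Each image row y gets its own warp as the shutter sweeps downward.
    t = y / (height - 1)
    return (1 - t) * H0 + t * H1

p = scanline_homography(240, 480) @ np.array([100.0, 240.0, 1.0])
print(p[:2] / p[2])                          # warped pixel for scanline y = 240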