Showing 1 - 10 of 13 for search: '"Jianxiong Yin"'
Author:
Haozhi Cao, Yuecong Xu, Kezhi Mao, Lihua Xie, Jianxiong Yin, Simon See, Qianwen Xu, Jianfei Yang
Published in:
IEEE Transactions on Cybernetics, pp. 1-13
Published in:
Communications in Computer and Information Science, ISBN: 9789811605741
The task of action recognition in dark videos is useful in various scenarios, e.g., night surveillance and self-driving at night. Though progress has been made in action recognition for videos under normal illumination, few have studied action recognition …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::3510e277e926c2c81892278e3d00cf61
https://doi.org/10.1007/978-981-16-0575-8_6
Temporal feature extraction is an essential technique in video-based action recognition. Key points have been utilized in skeleton-based action recognition methods, but they require costly key point annotation. In this paper, we propose a novel temporal …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e50e41ac601080647ac84acdf4ac223e
Published in:
ACM Multimedia
MLModelCI provides multimedia researchers and developers with a one-stop platform for efficient machine learning (ML) services. The system leverages DevOps techniques to optimize, test, and manage models. It also containerizes and deploys these optimized models …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d6356bb2fd84c724ebd6abc6796ff140
https://hdl.handle.net/10356/152994
Author:
Simon See, Frédéric Pinel, Sébastien Varrette, Pascal Bouvry, Emmanuel Kieffer, Jianxiong Yin, Christian Hundt
Published in:
Communications in Computer and Information Science, ISBN: 9783030419127
OLA
We present a procedure for the design of a Deep Neural Network (DNN) that estimates the per-batch execution time for training a deep neural network on GPU accelerators. The estimator is intended to be embedded in the scheduler of a shared GPU infrastructure …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e888251743617a8eb3de159fce4504f5
https://doi.org/10.1007/978-3-030-41913-4_2
Capturing long-range spatiotemporal dependencies plays an essential role in improving video features for action recognition. The non-local block, inspired by non-local means, is designed to address this challenge and has shown excellent performance …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::1d060510fe9da75b31596af23f144c2e
Published in:
Expert Systems with Applications, 178:114829
Temporal feature extraction is an important issue in video-based action recognition. Optical flow is a popular method for extracting temporal features, and it produces excellent performance thanks to its capacity to capture pixel-level correlation information …
Author:
Lei Ma, Hongxu Chen, Jianxiong Yin, Simon See, Felix Juefei-Xu, Jianjun Zhao, Bo Li, Xiaofei Xie, Minhui Xue, Yang Liu
Published in:
ISSTA
The past decade has seen the great potential of applying deep neural network (DNN) based software to safety-critical scenarios, such as autonomous driving. Similar to traditional software, DNNs can exhibit incorrect behaviors caused by hidden defects …
Author:
Jianxiong Yin
Published in:
VLSI-DAT
With the huge breakthrough of deep learning in terms of performance, robustness, and scalability on a few fundamental machine learning tasks, we have seen rapid growth in the adoption of AI processors across many industries, in both production and research scenarios …
Published in:
Lecture Notes in Computer Science ISBN: 9783030322250
MICCAI (6)
Lesion detection from computed tomography (CT) scans is challenging compared to natural object detection for two major reasons: small lesion size and small inter-class variation. Firstly, the lesions usually occupy only a small region in the CT scan …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::b9766e949acca7abf0981e09fc121379
https://doi.org/10.1007/978-3-030-32226-7_21