Showing 1 - 6 of 6 for search: '"Chun Chet Ng"'
Author:
Po-Hao Hsu, Che-Tsung Lin, Chun Chet Ng, Jie Long Kew, Mei Yih Tan, Shang-Hong Lai, Chee Seng Chan, Christopher Zach
Deep learning-based methods have made impressive progress in enhancing extremely low-light images - the image quality of the reconstructed images has generally improved. However, we found that most of these methods could not sufficiently recover ...
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::3ce59f5729ac298edb7a06b804692517
Author:
Akmalul Khairi Bin Nazaruddin, Chun Chet Ng, Yuliang Liu, Chee Seng Chan, Yeong Khang Lee, Xinyu Wang, Yipeng Sun, Lixin Fan, Lianwen Jin
Published in:
Document Analysis and Recognition – ICDAR 2021 ISBN: 9783030863364
ICDAR (4)
With hundreds of thousands of electronic chip components being manufactured every day, chip manufacturers have seen increasing demand for a more efficient and effective way of inspecting the quality of printed texts on chip components.
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::b1bb284d4025480ef198ebcf721dcfdf
https://doi.org/10.1007/978-3-030-86337-1_44
Author:
Anton van den Hengel, Liangwei Wang, Chun Chet Ng, Canjie Luo, Chee Seng Chan, Lianwen Jin, Xinyu Wang, Yuliang Liu, Chunhua Shen
Published in:
CVPR
Visual Question Answering (VQA) methods have made incredible progress, but suffer from a failure to generalize. This is visible in the fact that they are vulnerable to learning coincidental correlations in the data rather than deeper relations between ...
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::00d0b870e68f298457eda5e29c3ef9b8
Author:
Junyu Han, Chee-Kheng Chng, Chee Seng Chan, Canjie Luo, Ni Zihan, Jingtuo Liu, Yipeng Sun, Yuliang Liu, Errui Ding, Chun Chet Ng, Lianwen Jin, Dimosthenis Karatzas
Published in:
ICDAR
Robust text reading from street view images provides valuable information for various applications. Performance improvement of existing methods in such a challenging scenario heavily relies on the amount of fully annotated training data, which is costly ...
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::8ccc294bcd8dfac69416302f92ab826b
Author:
Jingtuo Liu, Chee-Kheng Chng, Chee Seng Chan, Shuaitao Zhang, Junyu Han, Ni Zihan, ChuanMing Fang, Yipeng Sun, Dimosthenis Karatzas, Yuliang Liu, Canjie Luo, Chun Chet Ng, Lianwen Jin, Errui Ding
Published in:
ICDAR
This paper reports the ICDAR2019 Robust Reading Challenge on Arbitrary-Shaped Text (RRC-ArT), which consists of three major challenges: i) scene text detection, ii) scene text recognition, and iii) scene text spotting. A total of 78 submissions from 46 ...
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::6db385ca68d637bc79d1a07d82f76d9f
Published in:
ACM Multimedia
How does a pre-trained Convolutional Neural Network (CNN) model perform on beauty and personal care items (i.e., Perfect-500K)? This is the question we attempt to answer in this paper by adopting several well-known deep learning models pre-trained on ImageNet ...