Showing 1 - 10 of 114 for search: '"Tae-Hyoung Park"'
Author:
Taek-Lim Kim, Tae-Hyoung Park
Published in:
Remote Sensing, Vol 16, Iss 13, p 2287 (2024)
Cameras and LiDAR sensors have been used in sensor fusion for robust object detection in autonomous driving. Object detection networks for autonomous driving are often trained again by adding or changing datasets aimed at robust performance. Repeat t…
External link:
https://doaj.org/article/2368314e9ba44198b7352d1c7ef3455f
Author:
Saba Arshad, Tae-Hyoung Park
Published in:
Sensors, Vol 24, Iss 3, p 906 (2024)
Robust visual place recognition (VPR) enables mobile robots to identify previously visited locations. For this purpose, the extracted visual information and place matching method plays a significant role. In this paper, we critically review the exist…
External link:
https://doaj.org/article/60f4f372c65947f68256d2512c53d707
Published in:
Sensors, Vol 23, Iss 21, p 8808 (2023)
Sensors on autonomous vehicles have inherent physical constraints. To address these limitations, several studies have been conducted to enhance sensing capabilities by establishing wireless communication between infrastructure and autonomous vehicles…
External link:
https://doaj.org/article/91fd0f71a17e420dbdad0d355acfaf03
Author:
Jae-Seol Lee, Tae-Hyoung Park
Published in:
IEEE Access, Vol 10, Pp 125102-125111 (2022)
LiDAR semantic segmentation is essential in autonomous vehicle safety. A rotating 3D LiDAR projects more laser points onto nearby objects and fewer points onto farther objects. Therefore, when projecting points onto a 2D image, such as spherical coor…
External link:
https://doaj.org/article/45e7da061c794d6e871ac65b53862460
Published in:
Remote Sensing, Vol 15, Iss 16, p 3992 (2023)
Object detection is one of the vital components used for autonomous navigation in dynamic environments. Camera and lidar sensors have been widely used for efficient object detection by mobile robots. However, they suffer from adverse weather conditio…
External link:
https://doaj.org/article/aca85d545b7d42b7815618fd7c99f9ef
Published in:
Applied Sciences, Vol 13, Iss 7, p 4540 (2023)
A stain defect is difficult to detect with the human eye because its brightness differs only minimally from the local area of the surface. Recently, with the development of deep learning, the Convolutional Neural Net…
External link:
https://doaj.org/article/f1a082fe3369472ab5f5907e65178c0a
Author:
Young-Gyu Kim, Tae-Hyoung Park
Published in:
IEEE Access, Vol 9, Pp 73808-73817 (2021)
Anomaly detection uses various machine learning techniques to identify and classify defective data on the production line. The autoencoder-based anomaly detection method is an unsupervised method that classifies abnormal samples using an autoencoder…
External link:
https://doaj.org/article/e3e22c3da5714e1dbbf48dafd1206b21
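The autoencoder-based anomaly detection mentioned in this abstract can be sketched minimally as follows. This is an illustrative example only, not the paper's implementation: the linear autoencoder, the synthetic 2-D data, and the mean-plus-three-sigma threshold rule are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" data: 2-D points near the line y = x (a hypothetical
# stand-in for normal production-line samples).
t = rng.normal(size=(200, 1))
X = np.hstack([t, t]) + 0.05 * rng.normal(size=(200, 2))

# A linear autoencoder with a 1-D bottleneck, trained by plain gradient
# descent to minimise the reconstruction error ||X - X W_enc W_dec||^2.
W_enc = rng.normal(scale=0.1, size=(2, 1))
W_dec = rng.normal(scale=0.1, size=(1, 2))
lr = 0.01
for _ in range(500):
    Z = X @ W_enc                      # encode into the bottleneck
    err = Z @ W_dec - X                # decode and compare to the input
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

def recon_error(x):
    """Per-sample reconstruction error; large values signal anomalies."""
    x = np.atleast_2d(x)
    return np.linalg.norm(x @ W_enc @ W_dec - x, axis=1)

# Threshold derived from the training data (an assumption; the actual
# decision rule in the paper may differ): mean + 3 * std of normal errors.
errs = recon_error(X)
threshold = errs.mean() + 3 * errs.std()

normal_sample = np.array([1.0, 1.0])   # lies on the learned structure
anomaly = np.array([1.0, -1.0])        # violates it

print(recon_error(normal_sample)[0] < threshold)
print(recon_error(anomaly)[0] > threshold)
```

Because training data contains only normal samples, the autoencoder learns to reconstruct them well; anomalous inputs fall outside the learned subspace and reconstruct poorly, which is what the thresholded error detects.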
Author:
Gyuho Eoh, Tae-Hyoung Park
Published in:
IEEE Access, Vol 9, Pp 137281-137294 (2021)
This paper presents an automatic curriculum learning (ACL) method for object transportation based on deep reinforcement learning (DRL). Previous studies on object transportation using DRL have a sparse reward problem in which an agent receives a rare rew…
External link:
https://doaj.org/article/48b25bc474df4944a494878f90bef907
Author:
Taek-Lim Kim, Tae-Hyoung Park
Published in:
Sensors, Vol 22, Iss 19, p 7163 (2022)
Object detection is an important factor in the autonomous driving industry. Object detection for autonomous vehicles requires robust results, because various situations and environments must be considered. A sensor fusion method is used to implement…
External link:
https://doaj.org/article/7f15592dcd234ab4a296ca4e4797fefa
Author:
Gyuho Eoh, Tae-Hyoung Park
Published in:
Sensors, Vol 21, Iss 14, p 4780 (2021)
This paper presents a cooperative object transportation technique using deep reinforcement learning (DRL) based on curricula. Previous studies on object transportation highly depended on complex and intractable controls, such as grasping, pushing, an…
Externí odkaz:
https://doaj.org/article/34a3c4dee0354e19b8b2b7d79579af67