Showing 1 - 10
of 240
for search: '"Seoung Bum Kim"'
Author:
Sumit Kumar Singh, Jinsoo Bae, Yu Zhang, Saerin Lim, Jongkook Heo, Seoung Bum Kim, Weon Gyu Shin
Published in:
Nuclear Engineering and Technology, Vol 56, Iss 9, Pp 3717-3729 (2024)
Accurately predicting evacuation time in a ventilated main control room (MCR) during fire emergencies is crucial for ensuring the safety of personnel at nuclear power plants. This study proposes to use neural networks alongside consolidated fire and …
External link:
https://doaj.org/article/e4d6a241f3dd4537a60aa7990370db17
Author:
Jongwon Choi, Seoung Bum Kim
Published in:
IEEE Access, Vol 12, Pp 39495-39504 (2024)
The semiconductor industry, driven by technological advancements, is continuously undergoing process micronization. This micronization has led to an increased complexity in the wafer fabrication process and equipment. Inevitably, this change leads to …
External link:
https://doaj.org/article/129f01d118a24a28a7b90b4e40cb714b
Published in:
IEEE Access, Vol 12, Pp 60-72 (2024)
The rapid advancement of artificial intelligence has led to its increased application in predicting vehicle interior noise levels within the automotive industry. However, the collection of labeled data for training models in this context involves signi…
External link:
https://doaj.org/article/85776834ced2428f862076aa8e402b61
Author:
Minjae Baek, Seoung Bum Kim
Published in:
IEEE Access, Vol 11, Pp 54363-54372 (2023)
Downtime caused by equipment failure is the biggest productivity problem in the 24-hours-a-day operations of the semiconductor industry. Although some equipment failures are inevitable, increases in productivity can be gained if the causes of failures …
External link:
https://doaj.org/article/3fa45e1cbae546b686319df98958dea9
Author:
Changhyun Kim, Jinsoo Bae, Insung Baek, Jaeyoon Jeong, Young Jae Lee, Kiwoong Park, Sang Heun Shim, Seoung Bum Kim
Published in:
IEEE Access, Vol 11, Pp 46504-46512 (2023)
In real-time strategy (RTS) games, to defeat their opponents, players need to choose and implement the correct sequential actions. Because RTS games like StarCraft II are real-time, players have a very limited time to choose how to develop their stra…
External link:
https://doaj.org/article/7b1dbfa679184a6084f7f59ff9cc110f
Author:
Hyeryeong Oh, Seoung Bum Kim
Published in:
IEEE Access, Vol 10, Pp 120063-120073 (2022)
In real-world classification tasks, deep neural networks show innovative performance in various fields. However, traditional classification methods are constructed based on a set of predefined classes and force unknown classes that determine their ca…
External link:
https://doaj.org/article/8d640c72041d44a6ab29888f4366e25b
Author:
Minjung Lee, Seoung Bum Kim
Published in:
IEEE Access, Vol 10, Pp 119333-119344 (2022)
The main objective of sensor-based human activity recognition (HAR) is to classify predefined human physical activities with multichannel signals acquired from wearable sensors. In a real-world scenario, signal data is changing over time and undefine…
External link:
https://doaj.org/article/369d9cda32864d41bc469a0f12fb5d19
Author:
Jung In Kim, Young Jae Lee, Jongkook Heo, Jinhyeok Park, Jaehoon Kim, Sae Rin Lim, Jinyong Jeong, Seoung Bum Kim
Published in:
PLoS ONE, Vol 18, Iss 9, p e0291545 (2023)
Deep reinforcement learning (DRL) is a powerful approach that combines reinforcement learning (RL) and deep learning to address complex decision-making problems in high-dimensional environments. Although DRL has been remarkably successful, its low sa…
External link:
https://doaj.org/article/5555aee3bfff4f57b641045fb0070cb6
Author:
Mingu Kwak, Seoung Bum Kim
Published in:
IEEE Access, Vol 9, Pp 39995-40007 (2021)
Detecting an anomaly in multichannel signal data is a challenging task in various domains. It should take into account the cross-channel relationship and the temporal relationship within each channel. Moreover, the signal data is high-dimensional and mak…
External link:
https://doaj.org/article/ed2901b76d6145be8ee1fbad78273229
Published in:
IEEE Access, Vol 8, Pp 125389-125400 (2020)
In multi-agent reinforcement learning, it is essential for agents to learn communication protocols to optimize collaboration policies and to solve unstable learning problems. Existing methods based on actor-critic networks solve the communication prob…
External link:
https://doaj.org/article/bdfbf5c114e74e639e433cd1bc262c55