Showing 1 - 10 of 59 for search: '"ALAGHBAND, GITA"'
Author:
Ghebrechristos, Henok, Stence, Nicholas, Mirsky, David, Alaghband, Gita, Huynh, Manh, Kromer, Zackary, Batista, Ligia, O'Neill, Brent, Moulton, Steven, Lindberg, Daniel M.
This paper presents a deep learning framework for image classification aimed at increasing predictive performance for Cytotoxic Edema (CE) diagnosis in infants and children. The proposed framework includes two 3D network architectures optimized to …
External link:
http://arxiv.org/abs/2210.04767
Author:
Huynh, Manh, Alaghband, Gita
Prediction with high accuracy is essential for various applications such as autonomous driving. Existing prediction models are prone to errors in real-world settings, where observations (e.g., human poses and locations) are often noisy. To address …
External link:
http://arxiv.org/abs/2103.14113
Author:
Alghamdi, Thoria, Alaghband, Gita
Published in:
16th International Conference on Applied Computing 2019, 113-122
We present four high-performance hybrid sorting methods developed for various parallel platforms: shared-memory multiprocessors, distributed multiprocessors, and clusters that take advantage of both shared and distributed memory. Merge sort …
External link:
http://arxiv.org/abs/2003.01216
Author:
Takano, Nao, Alaghband, Gita
Applications that involve supervised training require paired images. Researchers in single-image super-resolution (SISR) create such images by artificially generating blurry input images from the corresponding ground truth. Similarly, we can create …
External link:
http://arxiv.org/abs/2002.06682
Author:
Huynh, Manh, Alaghband, Gita
We present a novel adaptive online learning (AOL) framework to predict human movement trajectories in dynamic video scenes. Our framework learns and adapts to changes in the scene environment and generates the best network weights for different scenarios …
External link:
http://arxiv.org/abs/2002.06666
Author:
Huynh, Manh, Alaghband, Gita
We develop a novel human trajectory prediction system that incorporates scene information (Scene-LSTM) as well as individual pedestrian movement (Pedestrian-LSTM), trained simultaneously within static crowded scenes. We superimpose a two-level grid …
External link:
http://arxiv.org/abs/1908.08908
Author:
Takano, Nao, Alaghband, Gita
Generative Adversarial Networks (GANs) in supervised settings can generate photo-realistic output corresponding to low-definition input (SRGAN). Using the architecture presented in the original SRGAN paper [2], we explore how selecting a dataset …
External link:
http://arxiv.org/abs/1903.09922
Author:
Huynh, Manh, Alaghband, Gita
We develop a human movement trajectory prediction system that incorporates scene information (Scene-LSTM) as well as human movement trajectories (Pedestrian Movement LSTM) in the prediction process within static crowded scenes. We superimpose a …
External link:
http://arxiv.org/abs/1808.04018
Author:
Huynh, Manh, Alaghband, Gita
In this paper, we present a new spatial discriminative KSVD dictionary algorithm (STKSVD) for learning target appearance in online multi-target tracking. Unlike other classification/recognition tasks (e.g., face or image recognition), learning target …
External link:
http://arxiv.org/abs/1807.02143
Author:
McCarthy, Shawn, Alaghband, Gita
Published in:
Journal of Risk & Financial Management, Dec 2024, Vol. 17, Issue 12, p. 537, 21 pp.