Showing 1 - 10 of 34 for the search: '"Meshia Cédric Oveneke"'
Published in:
PLoS ONE, Vol 19, Iss 4, p e0302197 (2024)
Our study investigates the interdependence between international stock markets and sentiment from financial news in stock forecasting. We adopt the Temporal Fusion Transformer (TFT) to incorporate intra- and inter-market correlations and the…
External link:
https://doaj.org/article/1264c279b06e42d895add8d8acd7cd5d
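The Temporal Fusion Transformer named above is built around gated residual networks. As an illustration only, not the paper's code, here is a minimal NumPy sketch of the gated linear unit (GLU) such networks use to suppress uninformative inputs, e.g. a flat sentiment channel (all weights and sizes are arbitrary stand-ins):

```python
import numpy as np

# Illustrative only: random weights stand in for parameters a real TFT learns.
rng = np.random.default_rng(0)

def glu(x, W, V, b, c):
    """GLU(x) = (xW + b) * sigmoid(xV + c); the sigmoid gate lets the model
    damp an uninformative input channel (e.g. flat news sentiment)."""
    gate = 1.0 / (1.0 + np.exp(-(x @ V + c)))
    return (x @ W + b) * gate

d = 8                                    # hidden width (arbitrary for the sketch)
x = rng.normal(size=(5, d))              # 5 time steps of fused market/sentiment features
W, V = rng.normal(size=(d, d)), rng.normal(size=(d, d))
b = c = np.zeros(d)
print(glu(x, W, V, b, c).shape)          # -> (5, 8)
```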
Author:
Hichem Sahli, Le Yang, Meshia Cédric Oveneke, Longfei Li, Dongmei Jiang, Ercheng Pei, Yong Zhao, Mitchel Alioscha-Perez
Published in:
Computer Graphics Forum. 40:47-61
Published in:
IEEE Transactions on Neural Networks and Learning Systems. 31:1710-1723
In this paper, we present a novel strategy for combining a set of compact descriptors to leverage an associated recognition task. We formulate the problem from a multiple kernel learning (MKL) perspective and solve it following a stochastic variance-reduced…
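As a sketch of the MKL formulation this abstract names, with hypothetical descriptors and uniform, unlearned kernel weights (the paper's stochastic variance-reduced solver is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
descriptors = [rng.normal(size=(6, d)) for d in (4, 16, 32)]  # 3 compact descriptors, 6 samples

def rbf(A, gamma=0.5):
    sq = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)       # pairwise squared distances
    return np.exp(-gamma * sq)

base_kernels = [rbf(X) for X in descriptors]
mu = np.full(len(base_kernels), 1.0 / len(base_kernels))      # simplex weights (here uniform)
K = sum(m * k for m, k in zip(mu, base_kernels))              # combined 6 x 6 kernel
print(K.shape)
```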
Author:
Andre Bourdoux, Habib-Ur-Rehman Khalid, Abel Diaz Berenguer, Mitchel Alioscha-Perez, Hichem Sahli, Meshia Cédric Oveneke
Published in:
IEEE Access, Vol 7, Pp 137122-137135 (2019)
In this paper, we propose a novel framework for processing Doppler-radar signals for hand gesture recognition. Doppler-radar sensors provide many advantages over other emerging sensing modalities, including low development costs and high sensitivity to ca…
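A common first step in Doppler-radar gesture pipelines, and presumably part of the framework above, is turning the raw return into a micro-Doppler spectrogram; a minimal sketch, with a synthetic chirp standing in for real sensor data:

```python
import numpy as np

fs = 2000                                           # toy sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
echo = np.exp(2j * np.pi * (100 * t + 200 * t**2))  # synthetic stand-in for a radar return

def stft_mag(x, win=128, hop=64):
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.fft(frames, axis=1))       # time frames x Doppler bins

spectrogram = stft_mag(echo)                        # micro-Doppler map a classifier would consume
print(spectrogram.shape)
```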
Published in:
Multimedia Tools and Applications. 78:16389-16410
Generating dynamic 2D image-based facial expressions is a challenging task for facial animation. Much research has focused on performance-driven facial animation from given videos or images of a target face, while animating a single face image driven…
Understanding human-contextual interaction to predict human trajectories is a challenging problem. Most previous trajectory prediction approaches focused on modeling human-human interaction within a near neighborhood and neglected the influence…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::3684912d2487e76f4ca146a2f7eb4e47
https://hdl.handle.net/20.500.14017/c2e0a2d5-c49b-42bb-a700-339928c7ff9e
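To make the "near neighborhood" interaction term concrete: a toy constant-velocity step with local repulsion between nearby agents (illustrative numbers only; the paper's point is that such local terms alone neglect scene context):

```python
import numpy as np

pos = np.array([[0.0, 0.0], [1.0, 0.2], [5.0, 5.0]])   # three agents' positions
vel = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])  # their current velocities

def step(pos, vel, dt=0.4, radius=2.0, push=0.3):
    diff = pos[:, None, :] - pos[None, :, :]                 # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1) + np.eye(len(pos))  # guard self-distance
    near = (dist < radius)[..., None]                        # only nearby agents interact
    repulse = (diff / dist[..., None] * near).sum(axis=1)
    return pos + dt * vel + push * repulse                   # constant velocity + local repulsion

print(step(pos, vel))
```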
Author:
Hichem Sahli, Ercheng Pei, Mitchel Alioscha-Perez, Meshia Cédric Oveneke, Abel Diaz Berenguer
Published in:
ICAIIC
In this paper, we address the problem of neural architecture search (NAS) in a context where the optimality policy is driven by a black-box Oracle $\mathcal{O}$ with unknown form and derivatives. In this scenario, $\mathcal{O}(A_{C})$ typically provides…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::93aa0bcff16d03be6b8803b910a31265
https://doi.org/10.1109/icaiic48513.2020.9065031
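A minimal sketch of the black-box setting the abstract describes: a stand-in oracle scores a candidate architecture $A_{C}$ without exposing gradients, probed here by plain random search (the search space and oracle below are hypothetical; the paper's strategy is more sophisticated):

```python
import random

random.seed(0)
SPACE = {"depth": [2, 4, 8], "width": [32, 64, 128], "act": ["relu", "gelu"]}

def oracle(arch):
    # Stand-in for O(A_C): returns only a scalar score, no gradients.
    return -abs(arch["depth"] - 4) - abs(arch["width"] - 64) / 32

def sample():
    return {k: random.choice(v) for k, v in SPACE.items()}

best = max((sample() for _ in range(50)), key=oracle)
print(best, oracle(best))
```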
Automated facial expression analysis from image sequences for continuous emotion recognition is a very challenging task due to the loss of three-dimensional information during the image formation process. The state of the art has relied on estimating dyn…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::f7bebe76665ed1f7d01c17cde8b0e510
https://doi.org/10.1109/tmm.2020.3026894
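The depth loss this abstract refers to can be shown in a few lines: under a pinhole projection, 3D landmarks that differ only by a depth scaling map to the same image point, which is what makes monocular recovery ill-posed (toy coordinates only):

```python
import numpy as np

def project(P, f=1.0):
    # Pinhole camera: (X, Y, Z) -> f * (X/Z, Y/Z); depth Z is divided away.
    return f * P[:, :2] / P[:, 2:3]

near = np.array([[0.1, 0.2, 1.0]])       # landmark at depth 1
far = np.array([[0.2, 0.4, 2.0]])        # scaled landmark at depth 2
print(project(near), project(far))       # identical 2D points: depth is unrecoverable
```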
Author:
Hichem Sahli, Dongmei Jiang, Fengna Wang, Diana Torres-Boza, Werner Verhelst, Meshia Cédric Oveneke
Published in:
Speech Communication. 99:80-89
Finding an appropriate feature representation for audio data is central to speech emotion recognition. Most existing audio features rely on hand-crafted feature encoding techniques, such as the AVEC challenge feature set. An alternative approach is to…
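To make the hand-crafted side of this contrast concrete, a sketch of two classic frame-level descriptors (log energy and zero-crossing rate) of the kind collected in challenge feature sets; the learned alternative would replace this fixed recipe with features trained end to end:

```python
import numpy as np

rng = np.random.default_rng(2)
audio = rng.normal(size=16000)                   # 1 s of toy audio at 16 kHz

def frame_features(x, win=400, hop=160):         # 25 ms windows, 10 ms hop
    feats = []
    for i in range(0, len(x) - win + 1, hop):
        f = x[i:i + win]
        log_energy = np.log(np.sum(f**2) + 1e-10)
        zcr = np.mean(np.abs(np.diff(np.sign(f)))) / 2   # zero-crossing rate
        feats.append((log_energy, zcr))
    return np.array(feats)                       # frames x 2 descriptors

print(frame_features(audio).shape)
```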
Author:
Ercheng Pei, Yong Zhao, Hichem Sahli, Dongmei Jiang, Abel Diaz Berenguer, Meshia Cédric Oveneke
Continuous affect estimation from facial expressions has attracted increasing attention in the affective computing research community. This paper presents a principled framework for estimating continuous affect from video sequences. Based on recent deep learning…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::35bb159e1f52b120ef31d0bbb675162e
https://biblio.vub.ac.be/vubir/leveraging-the-deep-learning-paradigm-for-continuous-affect-estimation-from-facial-expressions(46d1dfd0-de31-4d74-bec5-b8939ce947a2).html
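As a toy illustration of the continuous-affect setting (random stand-in predictions, not the paper's model): per-frame valence/arousal estimates smoothed with a moving average to enforce the temporal continuity such frameworks target:

```python
import numpy as np

rng = np.random.default_rng(3)
frame_pred = rng.uniform(-1, 1, size=(100, 2))   # per-frame (valence, arousal) stand-ins

def smooth(y, k=5):
    kernel = np.ones(k) / k                      # simple moving average
    return np.stack([np.convolve(y[:, d], kernel, mode="same")
                     for d in range(y.shape[1])], axis=1)

print(smooth(frame_pred).shape)                  # (100, 2), temporally smoother trace
```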