Showing 1 - 10 of 10
for the search: '"Jocelynn Cu"'
Author:
Earl Jeffrey Capistrano, Kristen Ann Raphaelle Espiritu, Marybelle Tandoc, Johanna Koon Gan Lim, Jocelynn Cu
Published in:
2022 10th International Conference on Affective Computing and Intelligent Interaction (ACII).
Author:
Jocelynn Cu, Cedric Jose Herrera, Jennifer C. Ureta, Klint John Poliquit, Judith Azcarraga, Sean Latrelle Bravo, Joanna Pauline Rivera, Edward Carlo Valdez
Published in:
ICAART (2)
Published in:
Lecture Notes in Computer Science ISBN: 9783319606743
PRICAI Workshops
The main goal of this study is to classify affective laughter expressions from body movements. Using a non-intrusive Kinect sensor, body movement data from laughing participants were collected, annotated, and segmented. A set of features that include…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::c3d0a17c0516dd551245dfaac6e67d78
https://doi.org/10.1007/978-3-319-60675-0_12
Published in:
Journal on Multimodal User Interfaces. 7:135-142
This paper describes the Filipino multimodal emotion database (FilMED). FilMED was built with the purpose of developing affective systems for TALA, which is an ambient intelligent empathic space. We collected a total of 11,430 audio–video clips showing…
Author:
Hal Gino Avisado, Joshua Alexei Gaverza, Rafael Cabredo, Jocelynn Cu, John Vincent Cocjin, Merlin Teodosia Suarez
Published in:
Proceedings in Information and Communications Technology ISBN: 9784431541059
Music emotion research has identified timbre as a feature that influences human affect. This work constructs a user-specific affect model that identifies music-induced emotion using several timbre features. A corpus of music-emotion data was collected…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::55d46b5e50c3d2f8bf8ddf13f88b694f
https://doi.org/10.1007/978-4-431-54106-6_3
Published in:
Lecture Notes in Computer Science ISBN: 9783642326943
PRICAI
This study focuses on the development of a real-time automatic affect recognition system. It adopts a multimodal approach, where affect information taken from two modalities is combined to arrive at an emotion label represented in a valence-arousal…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::0ab5bb883dd0018ab67f8df456574f4d
https://doi.org/10.1007/978-3-642-32695-0_23
Published in:
Lecture Notes in Computer Science ISBN: 9783642326943
PRICAI
Rhythm is one of the most essential elements of music and can easily capture the attention of the listener. In this study, we explored various rhythm features and used them to build emotion models. The emotion labels used are based on Thayer's Model…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::b37c09dcd8cda998bc35c75a7ad63272
https://doi.org/10.1007/978-3-642-32695-0_85
Published in:
KSE
Laughter has been identified as an important social signal that can predict users' emotional information. This paper presents an extension of a previous study that uncovers underlying affect in Filipino laughter using audio features; a posed laugh…
Author:
Rafael Cabredo, Gregory Cu, Paul Salvador Inventado, Roberto Legaspi, Jocelynn Cu, Rhia Trogo, Merlin Teodosia Suarez
Published in:
2010 3rd International Conference on Human-Centric Computing.
Advancement in ambient intelligence is driving the trend towards innovative interaction with computing systems. In this paper, we present our efforts towards the development of the ambient intelligent space TALA, which has the concept of empathy in c…
Author:
Paul Patrick V. Go, Ivan Vener L. Espinosa, Jocelynn Cu, Marc Lanze Ivan C. Dy, Charles Martin M. Mendez
Published in:
2010 3rd International Conference on Human-Centric Computing.
Human-computer interaction is moving towards giving computers the ability to adapt and give feedback in accordance with a user's emotion. Studies on emotion recognition show that combining face and voice signals produces higher recognition rates compared…