Showing 1 - 10 of 37 for search: '"Diego Martinez Plasencia"'
Published in:
Sensors, Vol 24, Iss 3, p 935 (2024)
Most haptic actuators available on the market today can generate only a single modality of stimuli. This ultimately limits the capacity of a kinaesthetic haptic controller to deliver more expressive feedback, requiring a haptic controller to integrat…
External link:
https://doaj.org/article/b99b6f31741b4da5a36798dd54a602d2
Published in:
ACM Transactions on Graphics. 42:1-13
Phased arrays of transducers have been quickly evolving in terms of software and hardware with applications in haptics (acoustic vibrations), display (levitation), and audio. Most recently, Multimodal Particle-based Displays (MPDs) have even demonstr…
Author:
Lei Gao, Pourang Irani, Sriram Subramanian, Gowdham Prabhakar, Diego Martinez Plasencia, Ryuji Hirayama
Published in:
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.
Author:
Lei Gao
Published in:
CHI Conference on Human Factors in Computing Systems Extended Abstracts.
Author:
Eimontas Jankauskis, Sonia Elizondo, Roberto Montano Murillo, Asier Marzo, Diego Martinez Plasencia
Acoustic levitation has emerged as a promising approach for mid-air displays, by using multiple levitated particles as 3D voxels, cloth and thread props, or high-speed tracer particles, under the promise of creating 3D displays that users can see, he…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::87ebdee35e1d294466d777be8d7814c8
https://hdl.handle.net/2454/45517
Author:
Katsuhiro Suzuki, Sriram Subramanian, Katsutoshi Masai, Akino Umezawa, Yutaka Tokuda, Maki Sugimoto, Diego Martinez Plasencia, Keiji Hirata, Yoshinari Takegawa, Yuta Sugiura
Published in:
ISMAR
This paper presents a thin digital full-face mask display that can reflect an entire facial expression of a user onto an avatar to support augmented face-to-face communication in real environments. Although camera-based facial expression recognition…
Published in:
Optical Trapping and Optical Micromanipulation XVII.
Current display approaches, such as VR, allow us to get a glimpse of multimodal 3D experiences, but users need to wear headsets as well as other devices in order to trick our brains into believing that the content we are seeing, hearing or feeling is…
Author:
Maki Sugimoto, Yuta Sugiura, Diego Martinez Plasencia, Katsuhiro Suzuki, Yutaka Tokuda, Hiroaki Taka, Masafumi Takahashi, Keiji Hirata, Akino Umezawa, Yoshinari Takegawa, Sriram Subramanian, Katsutoshi Masai
Published in:
AHs
The goal of this research is to propose the e2-MaskZ, a mask-type display that changes the user's face to the face of an avatar. The e2-MaskZ is composed of a face-capture mask to recognize the facial expression, and a face-display mask to present th…