Description: |
This chapter focuses on recent advances in social robots that are capable of sensing their users and supporting them through social interactions, with the ultimate goal of fostering their cognitive and socio-emotional wellbeing. Designing social robots with socio-emotional skills is a challenging research topic still in its infancy. These skills are important for robots to provide physical and social support to human users, and to engage in and sustain long-term interactions with them across application domains that require human–robot interaction, including healthcare, education, entertainment, and manufacturing, among many others. The availability of commercial robotic platforms and developments in collaborative academic research provide a positive outlook; however, the capabilities of current social robots remain quite limited. The main challenge is understanding the underlying mechanisms by which humans respond to and interact with real-life situations, and how to model these mechanisms so that naturalistic, human-inspired behavior can be embodied in robots. Addressing this challenge successfully requires an understanding of the essential components of social interaction, including nonverbal behavioral cues such as interpersonal distance, body position, body posture, arm and hand gestures, head and facial gestures, gaze, silences, and vocal outbursts, together with their dynamics. To create truly intelligent social robots, these nonverbal cues need to be interpreted to form an understanding of higher-level phenomena, including first-impression formation, social roles, interpersonal relationships, focus of attention, synchrony, affective states, emotions, personality, and engagement, and, in turn, used to define optimal protocols and behaviors for expressing these phenomena through robotic platforms in an appropriate and timely manner. This chapter explores the automatic analysis of social phenomena commonly studied in the fields of affective computing and social signal processing, together with an overview of recent vision-based approaches used by social robots. It then describes two case studies that demonstrate how emotions and personality, two key phenomena for enabling effective and engaging interactions with robots, can be automatically predicted from visual cues during human–robot interaction. The chapter concludes by summarizing the open problems in the field and discussing potential future directions.