Abstract: |
Purpose – Mobile handheld devices, such as cellular phones and personal digital assistants, are inherently small and lack an intuitive and natural user interface. Speech recognition and synthesis technology can be used in mobile handheld devices to improve the user experience. The purpose of this paper is to describe a prototype system that supports multiple speech-enabled applications on a mobile handheld device.

Design/methodology/approach – The main component of the system, the Program Manager, coordinates and controls the speech-enabled applications. Speech requests to, and responses from, these applications are processed on the mobile handheld device itself, with the goal of human-like interaction between the user and the device. In addition to speech, the system supports graphics and text, i.e., multimodal input and output, for greater usability, flexibility, adaptivity, accuracy, and robustness. The paper presents a qualitative and quantitative evaluation of the prototype system. The Program Manager is currently designed to handle the specific speech-enabled applications that the authors developed.

Findings – Many user interactions involve not a single application but multiple applications working together in possibly unanticipated ways.

Research limitations/implications – Future work includes generalizing the Program Manager so that it supports arbitrary applications and the dynamic addition of new applications. Future work also includes deploying the Program Manager and the applications on cellular phones running the Android Platform or the Openmoko Framework.

Originality/value – This paper presents a first step towards a future human interface for mobile handheld devices and for speech-enabled applications operating on those devices.