Author:
Zi Wang, Yili Ren, Yingying Chen, Jie Yang
Year of publication:
2022
Subject:
Source:
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 6:1-24
ISSN:
2474-9567
DOI:
10.1145/3534606
Description:
Earables (ear wearables) are rapidly emerging as a new platform for a diverse range of personal applications. Traditional authentication methods, however, are less applicable and often inconvenient for earables due to their limited input interface. Nevertheless, earables often feature rich around-the-head sensing capabilities that can be leveraged to capture new types of biometrics. In this work, we propose ToothSonic, which leverages the toothprint-induced sonic effect produced when a user performs teeth gestures for earable authentication. In particular, we design representative teeth gestures that produce effective sonic waves carrying information about the toothprint. To reliably capture the acoustic toothprint, ToothSonic leverages the occlusion effect of the ear canal and the inward-facing microphone of the earable. It then extracts multi-level acoustic features that reflect the intrinsic toothprint information for authentication. The key advantages of ToothSonic are that it is well suited to earables and is resistant to various spoofing attacks, since the acoustic toothprint is captured via the user's private teeth-ear channel, which modulates and effectively encrypts the sonic waves. Our experimental study with 25 participants shows that ToothSonic achieves up to 95% accuracy using only one of a user's teeth gestures.
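The abstract does not detail the feature pipeline, but the idea of extracting multi-level acoustic features from an in-ear recording can be illustrated. Below is a minimal sketch in Python, assuming librosa is available and that a teeth-gesture clip has been recorded by the inward-facing microphone; the specific features (MFCCs, onset strength) and the function extract_toothprint_features are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only: the paper's actual multi-level features are
# not specified in this abstract. Assumes librosa and a recorded clip.
import numpy as np
import librosa


def extract_toothprint_features(wav_path: str, sr: int = 16000) -> np.ndarray:
    """Build a fixed-length feature vector from one teeth-gesture clip
    recorded by the earable's inward-facing microphone (hypothetical)."""
    y, sr = librosa.load(wav_path, sr=sr, mono=True)
    # Spectral level: MFCCs summarize the spectral envelope shaped by
    # the teeth and the occluded ear-canal channel.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Temporal level: onset strength captures the timing pattern of
    # tooth contacts during the gesture.
    onset = librosa.onset.onset_strength(y=y, sr=sr)
    # Pool each level over time into one vector for a per-user classifier.
    return np.concatenate([
        mfcc.mean(axis=1),
        mfcc.std(axis=1),
        [onset.mean(), onset.std()],
    ])
```

In a complete system, such vectors from enrollment gestures would train a per-user classifier (e.g., an SVM), and authentication would score a new gesture's vector against that model.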
Database:
OpenAIRE
External link: