Author
Abstract

We demonstrate an ear-worn technology that recognizes unvoiced human commands by tracking jaw motion. The ear-worn system is designed to achieve continuous unvoiced command recognition for robust human-computer interaction (HCI) applications. First, the system reliably extracts the jaw-motion signals buried under the noise caused by head motion, walking, and other motion artifacts, tracking a single secondary voice articulator, the jaw. Then, drawing on linguistics and human speech anatomy, we design a novel algorithm that localizes the phonemes in the command and reconstructs the word. We evaluate the proposed system in real-world experiments with 15 volunteers. Our preliminary results show that the proposed system achieves a word recognition accuracy of 95.6% in noise-free conditions, and of 93.2% and 91.6% during head nodding and walking, respectively.
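The abstract does not specify how the jaw-motion signal is separated from head-motion and walking artifacts. Purely as an illustrative sketch of one plausible approach (not the authors' method), the snippet below band-pass filters a single IMU channel to keep a hypothetical jaw-motion band while attenuating slower whole-head motion; the sampling rate and cutoff frequencies are assumptions for illustration only.

```python
# Illustrative sketch only: isolate a jaw-motion frequency band from one IMU
# channel. FS, LOW, and HIGH are assumed values, not reported in the paper.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0            # assumed IMU sampling rate (Hz)
LOW, HIGH = 2.0, 10.0 # assumed jaw-motion band (Hz); head motion/walking sit lower

def extract_jaw_motion(imu_channel: np.ndarray) -> np.ndarray:
    """Zero-phase band-pass filter a single IMU channel (illustrative)."""
    b, a = butter(4, [LOW, HIGH], btype="bandpass", fs=FS)
    return filtfilt(b, a, imu_channel)

# Example: filter one second of synthetic data.
raw = np.random.randn(int(FS))
jaw = extract_jaw_motion(raw)
```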

Year of Publication

2022
Conference Name

MobiSys Demo
Publisher

Association for Computing Machinery
Conference Location

New York, NY, USA
ISBN Number

9781450391856
URL

https://doi.org/10.1145/3498361.3538665
DOI

10.1145/3498361.3538665