Abstract

Textual data entry is an increasingly important part of Human-Computer Interaction (HCI), but there is room for improvement in this domain. First, the keyboard, a foundational text-entry device, presents ergonomic challenges in terms of comfort and accuracy even for well-trained typists. Second, touch-screen smartphones, among the most ubiquitous mobile devices, lack the physical space required to implement a full-size physical keyboard and settle for a reduced input method that can be slow and inaccurate. This thesis proposes and examines "DeepType," a fully virtual keyboard that begins to address both of these problems. DeepType is realized through a deep recurrent neural network (DRNN) trained to recognize skeletal movement during typing, which enables typing data to be extracted without a physical keyboard: a user types on a flat surface as though on a keyboard, and the movement of their fingers (recorded by a monocular camera and estimated using a pre-trained hand pose model) is fed into the DeepType network, which produces output equivalent to that of a physical keyboard with 91.2% accuracy and no autocorrection. We show that this architecture is computationally feasible and sufficiently accurate for use when tailored to a specific subject, and we suggest optimizations that may enable generalization. We also present a novel data capture system used to generate the DeepType training dataset, including effective hand pose data normalization techniques.
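
To make the described pipeline concrete, below is a minimal sketch of a DeepType-style model, not the thesis's implementation. It assumes MediaPipe-style 21-landmark hand pose input for two hands, a wrist-relative normalization step, and an LSTM recurrent core with per-frame key-class logits; the layer sizes, the 92-way key vocabulary, and the normalization scheme are illustrative assumptions.

```python
# Hypothetical DeepType-style sketch: pose frames in, per-frame key logits out.
import torch
import torch.nn as nn

NUM_LANDMARKS = 21 * 2      # 21 (x, y, z) landmarks per hand, two hands (assumed)
FEATURE_DIM = NUM_LANDMARKS * 3
NUM_KEYS = 92               # key classes incl. a "no keypress" class (assumed)

def normalize_pose(frames: torch.Tensor) -> torch.Tensor:
    """Center each frame on the wrist landmark and scale to unit spread.

    frames: (batch, time, NUM_LANDMARKS, 3) raw pose-estimator output.
    This wrist-relative scheme is one plausible normalization, not the
    thesis's documented technique.
    """
    wrist = frames[:, :, :1, :]                 # landmark 0 = wrist in MediaPipe's convention
    centered = frames - wrist
    scale = centered.norm(dim=-1, keepdim=True).amax(dim=2, keepdim=True)
    return centered / (scale + 1e-6)

class DeepTypeSketch(nn.Module):
    def __init__(self, hidden: int = 256, layers: int = 2):
        super().__init__()
        self.rnn = nn.LSTM(FEATURE_DIM, hidden, layers, batch_first=True)
        self.head = nn.Linear(hidden, NUM_KEYS)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        x = normalize_pose(frames).flatten(2)   # (batch, time, FEATURE_DIM)
        out, _ = self.rnn(x)
        return self.head(out)                   # per-frame key logits

# Usage: 30 frames of two-hand pose -> per-frame keystroke predictions.
poses = torch.randn(1, 30, NUM_LANDMARKS, 3)
logits = DeepTypeSketch()(poses)
print(logits.shape)  # torch.Size([1, 30, 92])
```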

Degree

MS

College and Department

Ira A. Fulton College of Engineering; Electrical and Computer Engineering

Rights

https://lib.byu.edu/about/copyright/

Date Submitted

2023-02-23

Document Type

Thesis

Handle

http://hdl.lib.byu.edu/1877/etd13110

Keywords

machine learning, keyboard, type, hand pose, text entry, data entry, mobile, computer vision, deeptype

Language

English

Included in

Engineering Commons
