This thesis presents a novel profile shape matching stereo vision algorithm that obtains 3D information in real time from a pair of stereo images. The algorithm produces the 3D information by matching intensity profile shapes along corresponding rows of the two images of a stereo pair. Its advantage is that correspondence detection relies on the shape of the intensity profile rather than on raw intensity values, which are subject to lighting variations. The user can choose a disparity interval so that an object within a desired distance range is segmented out from the background; in other words, the algorithm detects an object according to its distance from the cameras. Based on the resulting 3D information, the movements and gestures of control agents (in our test cases, the human body and fingers) within the desired distance range can be determined. These body movements and gestures can then be analyzed for human-computer interface purposes. In this thesis, the algorithm was applied to human pose and hand gesture estimation. To demonstrate its performance, the estimation results were interpreted as inputs and sent to a smartphone to control its functions. Although the algorithm involves a trade-off between accuracy and processing speed, we found a balance that produces results in real time with sufficient accuracy for the practical recognition of human poses and hand gestures. The experimental results show that the proposed algorithm has higher accuracy and is 1.14× faster than the original version on the tested stereo image pairs.
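The row-wise matching idea described above can be sketched as follows. This is an illustrative simplification, not the thesis implementation: the function name, window size, and the use of zero-mean normalized correlation as the shape-similarity measure are assumptions. Zero-mean normalization is one common way to make a match depend on profile shape rather than absolute intensity, and restricting the search to a disparity interval [d_min, d_max] mirrors the distance-range segmentation described in the abstract.

```python
import numpy as np

def row_profile_disparity(left, right, d_min, d_max, win=7):
    """Illustrative row-wise profile matching (hypothetical sketch).

    For each pixel, a 1D intensity segment on the same row of the left
    image is compared against segments of the right image shifted by each
    disparity in [d_min, d_max]. Zero-mean normalized correlation makes
    the comparison sensitive to profile shape, not intensity level.
    """
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(half + d_max, w - half):
            # Profile segment around (y, x) in the left image, zero-meaned.
            seg_l = left[y, x - half:x + half + 1].astype(np.float64)
            seg_l = seg_l - seg_l.mean()
            best_score, best_d = -np.inf, 0
            for d in range(d_min, d_max + 1):
                # Candidate segment in the right image, shifted by d.
                seg_r = right[y, x - d - half:x - d + half + 1].astype(np.float64)
                seg_r = seg_r - seg_r.mean()
                denom = np.linalg.norm(seg_l) * np.linalg.norm(seg_r)
                score = seg_l @ seg_r / denom if denom > 0 else 0.0
                if score > best_score:
                    best_score, best_d = score, d
            disp[y, x] = best_d
    return disp
```

Pixels whose best match lies inside the chosen interval receive a disparity; everything else stays zero, so thresholding the map segments out objects in the corresponding distance range.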
College and Department
Ira A. Fulton College of Engineering and Technology; Electrical and Computer Engineering
BYU ScholarsArchive Citation
Chang, Yung Ping, "Gesture Analysis for Human-Computer Interface Using Profile-Matching Stereo Vision" (2013). All Theses and Dissertations. 3729.
Keywords
stereo vision, human-computer interface, row profile matching