Abstract

The long white cane offers many benefits for the blind and visually impaired. However, many users report being injured both indoors and outdoors while using the long white cane. One frequent cause of injury is that the long white cane cannot detect obstacles above the user's waist. This thesis presents a system that attempts to augment the capabilities of the long white cane by sensing the environment around the user, creating a map of obstacles within the environment, and providing simple haptic feedback to the user. The proposed augmented cane system uses the Asus Xtion Pro Live infrared depth sensor to capture the user's environment as a point cloud. The open-source Point Cloud Library (PCL) and Robot Operating System (ROS) are used to process the point cloud. The points representing the ground plane are extracted to more clearly define potential obstacles. The system determines the nearest point for each 1-degree increment across the horizontal field of view. These nearest points are recorded in a ROS LaserScan message and used in a simple haptic feedback system in which the rumble feedback is based on two different cost functions. Twenty-two volunteers participated in a user demonstration that showed the augmented cane system can successfully communicate the presence of obstacles to blindfolded users. The users reported a sense of safety and confidence in the system's abilities. Obstacles above waist height are detected and communicated to the user. The system requires additional development before it can be considered a viable product for the visually impaired.
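
The abstract's nearest-point-per-degree step can be illustrated with a minimal C++ sketch. This is not the thesis's code: the function name, sensor frame, field-of-view limits, and range limits are assumptions chosen for illustration. It bins obstacle points (ground already removed) by horizontal angle, keeps the nearest range in each 1-degree bin, and packs the result into a sensor_msgs/LaserScan message.

```cpp
// Hypothetical sketch: convert an obstacle point cloud (ground plane removed)
// into one nearest range per 1-degree horizontal bin, stored in a ROS
// sensor_msgs/LaserScan message. Frame name, FOV, and range limits are
// assumed values, not taken from the thesis.
#include <cmath>
#include <limits>

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <sensor_msgs/LaserScan.h>

sensor_msgs::LaserScan cloudToScan(const pcl::PointCloud<pcl::PointXYZ>& cloud)
{
  sensor_msgs::LaserScan scan;
  scan.header.frame_id = "camera_link";         // assumed sensor frame
  scan.angle_min       = -29.0f * M_PI / 180.0f; // Xtion horizontal FOV ~58 deg
  scan.angle_max       =  29.0f * M_PI / 180.0f;
  scan.angle_increment =   1.0f * M_PI / 180.0f; // one bin per degree
  scan.range_min       = 0.5f;                   // approximate sensor limits
  scan.range_max       = 3.5f;

  const std::size_t bins = static_cast<std::size_t>(
      (scan.angle_max - scan.angle_min) / scan.angle_increment) + 1;
  scan.ranges.assign(bins, std::numeric_limits<float>::infinity());

  for (const pcl::PointXYZ& p : cloud)
  {
    // Optical-frame convention assumed: x right, y down, z forward.
    const float range = std::hypot(p.x, p.z);
    const float angle = std::atan2(p.x, p.z);
    if (angle < scan.angle_min || angle > scan.angle_max ||
        range < scan.range_min || range > scan.range_max)
      continue;

    const std::size_t bin = static_cast<std::size_t>(
        (angle - scan.angle_min) / scan.angle_increment);
    if (range < scan.ranges[bin])
      scan.ranges[bin] = range;  // keep only the nearest obstacle in each bin
  }
  return scan;
}
```

The resulting LaserScan could then feed a haptic feedback node that maps the nearest ranges to rumble intensity through a cost function, as the abstract describes.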

Degree

MS

College and Department

Ira A. Fulton College of Engineering and Technology; Mechanical Engineering

Rights

http://lib.byu.edu/about/copyright/

Date Submitted

2017-12-01

Document Type

Thesis

Handle

http://hdl.lib.byu.edu/1877/etd9677

Keywords

mobility aid, RGB-D camera, point cloud processing, obstacle avoidance, visually impaired

Language

English
