Abstract

In order to reduce bicycle-vehicle collisions, we design and implement a cost-effective embedded system to warn cyclists of approaching vehicles. The system uses an Odroid C2 single-board computer (SBC) to perform vehicle and lane detection in real time using only vision. Cyclists are warned of approaching cars through both a smartphone app and an LED indicator. Due to the limited performance of the Odroid C2 and other low-power, low-cost SBCs, we found that existing detection algorithms either run too slowly or lack sufficient accuracy to be practical. Our solution to these limitations is a custom fully convolutional network (FCN) that is small enough to run at real-time speeds on the Odroid C2 yet robust enough to achieve reasonable accuracy. We show that this FCN runs significantly faster than Tiny YOLOv3 and MobileNetV2 while achieving similar accuracy when all three are trained on a limited dataset.

Since no existing dataset separates the fronts of vehicles from other poses in the context of city and country roads, we create our own. Creating a dataset to train a detector has traditionally been time consuming. We present and implement a way to do this efficiently with minimal hand annotation by generating semi-synthetic images: relatively few positive crops are composited into many background images. This creates a wider background-class variance than would otherwise be possible.
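As an illustration of the semi-synthetic generation idea described above, the following minimal sketch (not the thesis code; file paths, directory layout, and parameters are assumptions) pastes a small set of hand-cropped vehicle images at random scales and positions into many background photos, recording the resulting bounding boxes for training:

import random
from pathlib import Path
from PIL import Image

def compose(background_path, crop_paths, out_path, max_crops=3):
    """Paste up to `max_crops` randomly scaled vehicle crops onto one
    background image and return the pasted bounding boxes (x, y, w, h)."""
    bg = Image.open(background_path).convert("RGB")
    boxes = []
    for crop_path in random.sample(crop_paths, k=min(max_crops, len(crop_paths))):
        crop = Image.open(crop_path).convert("RGB")
        # Random scale so the same few positives yield many variations.
        scale = random.uniform(0.3, 1.0)
        w = max(1, int(crop.width * scale))
        h = max(1, int(crop.height * scale))
        crop = crop.resize((w, h))
        x = random.randint(0, max(0, bg.width - w))
        y = random.randint(0, max(0, bg.height - h))
        bg.paste(crop, (x, y))
        boxes.append((x, y, w, h))
    bg.save(out_path)
    return boxes

if __name__ == "__main__":
    crops = sorted(Path("crops").glob("*.jpg"))               # few hand-annotated positives (assumed layout)
    backgrounds = sorted(Path("backgrounds").glob("*.jpg"))   # many unlabeled road scenes (assumed layout)
    for i, bg_path in enumerate(backgrounds):
        boxes = compose(bg_path, crops, f"synthetic/{i:05d}.jpg")
        print(bg_path.name, boxes)

Because every background photo can host different crops at different scales, a small annotated set expands into a much larger and more varied training set, which is the source of the wider background-class variance noted above.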

Degree

MS

College and Department

Ira A. Fulton College of Engineering and Technology; Electrical and Computer Engineering

Date Submitted

2019-04-01

Document Type

Thesis

Handle

http://hdl.lib.byu.edu/1877/etd10620

Keywords

Detection, FCN, Real-Time, Machine Learning, Embedded

Language

English
