CAR NAVIGATION THROUGH COMPUTER VISION METHODS
WITH RUDIMENTARY IMPLEMENTATION UNDER ANDROID
The increasingly common problem of navigation past obstacles, e.g.
on roads and motorways, is typically addressed by a top-down approach
(an overview or bird's-eye paradigm) that relies on GPS, or an equivalent,
and remotely stored maps, persistently accessed over a network.
This is not always sufficient, and contingency measures based on
real-time data can be employed. The situation is further complicated
when no human operator, e.g. a driver, is involved in the process
to exercise complex judgement. Factors such as traffic in
motion go unaccounted for, so automated steering based on obstacle
detection cannot be performed reliably. This project undertakes the task
of tackling car navigation by observing objects through an in-car camera
(or cameras), utilising a mobile device (or devices) mounted on the
dashboard (or another part of the vehicle). The end goal is to produce
a semi-autonomous (or computer-assisted) driving experience which
exploits off-the-shelf hardware such as Android-based mobile computers
(smartphones and tablets). Android is initially chosen as the target
platform because, according to various US-centric market surveys,
it now holds a majority market share. Results suggest that the average
frame rate of the existing framework permits information of real use
to a driver to be collected and delivered in audio/visual form; however,
owing to time constraints, I was unable to explore refinement of the
framework and its subsequent adaptation to real-world applications.
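The per-frame observation loop alluded to above can be illustrated with a minimal, hypothetical sketch: naive frame differencing over grayscale camera buffers, of the kind Android's camera preview callback delivers (in NV21 format, the first width x height bytes are the luminance plane). The class name, method, and threshold below are illustrative assumptions, not the project's actual framework.

```java
// Hypothetical sketch: naive frame differencing over grayscale buffers.
// Not the project's actual implementation; names and thresholds are
// assumptions chosen for illustration only.
public final class MotionDetector {
    private byte[] previous;  // luminance plane of the last frame seen

    // Returns the fraction of pixels whose luminance changed by more
    // than `threshold` since the previous frame; 0.0 for the first frame.
    public double changedFraction(byte[] luminance, int threshold) {
        if (previous == null || previous.length != luminance.length) {
            previous = luminance.clone();
            return 0.0;
        }
        int changed = 0;
        for (int i = 0; i < luminance.length; i++) {
            // Java bytes are signed; mask to recover the 0..255 value.
            int diff = Math.abs((luminance[i] & 0xFF) - (previous[i] & 0xFF));
            if (diff > threshold) changed++;
        }
        previous = luminance.clone();
        return (double) changed / luminance.length;
    }
}
```

A large changed fraction between consecutive frames would be one cheap cue that something is moving in front of the vehicle, which could then trigger the audio/visual notification to the driver; a real framework would refine this with proper object detection.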