Problem Description

The ability to hover and to fly laterally at low speeds makes UAVs an optimal platform for a variety of military and civilian tasks, such as reconnaissance support in hazardous zones, visual surveillance and inspection. In addition, some major industries are starting to use drones for tasks beyond surveillance (e.g. Amazon's "Prime Air"). The most important step towards full UAV autonomy is autonomous navigation. In the near future, this capability may prove useful for tasks in indoor environments such as indoor transportation, object retrieval (e.g. a missing part on an assembly line), monitoring misplaced books in a library and autonomously reporting sports events.

Although years of research on GPS-based positioning and tracking have improved outdoor navigation and localization, in environments where maps are unavailable and the GPS signal is weak, such as indoors or in dense urban areas, a UAV operates under highly hazardous conditions, running the risk of becoming lost and colliding with obstacles.

Since the gist of this project consists of enabling a UAV to navigate autonomously in an unknown environment without resorting to GPS localization, the main problem is to develop localization, estimation and planning algorithms, based on visual odometry and the on-board IMU, that achieve autonomous navigation with collision and obstacle avoidance. The vast majority of UAVs depend on GPS for navigation, which makes this project all the more challenging in a GPS-denied setting.

This project is therefore focused on exploring methods that may allow autonomous flight indoors, without GPS or prior knowledge of the environment, while using only on-board sensors.


Problem Refinement

Given the complexity of the stated problem, this project needs to be split into simpler sub-problems:

  • Acquisition and quantification of sensory data: Processing and evaluating the readings from the UAV sensors is of the utmost importance. These readings will prove to be a valuable asset when the development of the autonomous navigation algorithm requires sensory data to assess the state of the UAV.
  • Adjustment of the UAV navigation parameters: In order to create a navigation algorithm, fully functional communication between the UAV and the external processing unit (a laptop) must be established. Prior to the creation of the navigation algorithm, it must be ensured that the laptop is fully capable of sending the desired commands to the UAV. These include adjustments to parameters like yaw, pitch and roll.
  • Image data acquisition from the UAV camera: All the image frames captured by the UAV camera will be used to compute the data needed by the navigation algorithm.
  • Image processing: The previously acquired images should undergo a series of processing steps aimed at detecting points and regions of interest relevant to the UAV's obstacle-avoidance navigation. Such steps may include binarization, segmentation, noise reduction and perspective transformations.
  • Image feature detection and extraction: All the frames acquired from the camera should be analyzed in order to detect features of interest to be used in the classification stage. The techniques to be used may include the Canny edge detector, SURF [2], HOG and the Hough transform.
  • Feature matching and classification: The extracted features will be used to train a classification model to be used by the UAV.
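
As an illustration of the image-processing and feature-detection stages above, the following sketch implements, with NumPy only, the gradient-magnitude computation that detectors such as Canny build upon (Canny additionally applies smoothing, non-maximum suppression and hysteresis thresholding). The synthetic image and the threshold value are illustrative choices, not part of the project; a real pipeline would typically use a library such as OpenCV.

```python
import numpy as np

def sobel_edges(image, threshold=0.25):
    """Detect edges by thresholding the Sobel gradient magnitude.

    This is the gradient step underlying edge detectors such as
    Canny; it marks pixels where image intensity changes sharply.
    """
    # Sobel kernels approximating horizontal/vertical derivatives.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T

    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Valid (no-padding) 2-D correlation, written out explicitly.
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)

    magnitude = np.hypot(gx, gy)
    if magnitude.max() > 0:
        magnitude /= magnitude.max()  # normalize to [0, 1]
    return magnitude > threshold      # boolean edge map

# Synthetic test image: dark background with a bright square,
# whose border should be marked as edges.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = sobel_edges(img)
```

Edges are detected only along the square's border; flat regions (both background and the square's interior) have zero gradient and are left unmarked.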
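
The feature-matching step can likewise be sketched as a brute-force nearest-neighbour search over descriptor vectors with a ratio test to discard ambiguous matches. The random descriptors below are a stand-in for real SURF or HOG output, and the ratio value is an illustrative choice; in practice a library matcher (e.g. OpenCV's BFMatcher) would be used.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Brute-force descriptor matching with a ratio test.

    For each descriptor in desc_a, find its two nearest neighbours
    in desc_b (Euclidean distance) and accept the match only when
    the closest one is clearly better than the runner-up, which
    filters out ambiguous correspondences.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches

# Toy example: desc_b is desc_a plus small noise, so descriptor i
# in one set should be matched to descriptor i in the other.
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(10, 64))  # 10 descriptors, 64-dimensional
desc_b = desc_a + rng.normal(scale=0.01, size=desc_a.shape)
matches = match_descriptors(desc_a, desc_b)
```

The accepted matches can then feed the classification stage, either directly or after geometric verification.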