Visual Navigation System for Mobile Robots

Bibliographic Details
Main Authors: Safdar, Wasim, Bădăluță, Vlad
Format: Others
Language: English
Published: Högskolan i Halmstad 2011
Online Access:http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-15595
Description
Summary: We present two different methods based on visual odometry for pose estimation (x, y, θ) of a robot: an appearance-based method that computes similarity measures between consecutive images, and a method that computes the visual flow of particular features, i.e., spotlights on the ceiling. Both techniques are used to correct the pose (x, y, θ) of the robot by measuring the heading change between consecutive images. A simple Kalman filter, an extended Kalman filter, and a simple averaging filter are used to fuse the heading estimated by the visual odometry methods with odometry data from the wheel encoders. Both techniques are evaluated on three different datasets of images obtained from a warehouse, and the results show that both methods reduce the drift in heading compared to using wheel odometry alone.
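
The fusion step described in the summary can be illustrated with a minimal sketch: a one-dimensional Kalman filter over the robot heading, where the wheel-encoder heading change drives the prediction and the heading estimated from consecutive images serves as the measurement. This sketch is not taken from the thesis; all class names, function names, and noise parameters are illustrative assumptions.

```python
# Minimal sketch (assumed, not from the thesis): a 1-D Kalman filter fusing
# heading change from wheel encoders (prediction) with heading estimated
# from visual odometry (measurement). Noise values are purely illustrative.
import math


def wrap_angle(a):
    """Wrap an angle to the interval [-pi, pi)."""
    return (a + math.pi) % (2.0 * math.pi) - math.pi


class HeadingKalmanFilter:
    def __init__(self, theta0=0.0, var0=0.1, q=0.01, r=0.05):
        self.theta = theta0  # heading estimate (rad)
        self.var = var0      # estimate variance
        self.q = q           # process noise: wheel-odometry drift per step
        self.r = r           # measurement noise: visual heading estimate

    def predict(self, dtheta_wheel):
        """Propagate the heading with the wheel-encoder heading change."""
        self.theta = wrap_angle(self.theta + dtheta_wheel)
        self.var += self.q

    def update(self, theta_visual):
        """Correct the prediction with a visual-odometry heading measurement."""
        innovation = wrap_angle(theta_visual - self.theta)
        k = self.var / (self.var + self.r)      # Kalman gain
        self.theta = wrap_angle(self.theta + k * innovation)
        self.var = (1.0 - k) * self.var


# Usage: one prediction/update cycle per pair of consecutive images.
kf = HeadingKalmanFilter()
kf.predict(dtheta_wheel=0.05)   # heading change reported by the encoders
kf.update(theta_visual=0.04)    # heading estimated between consecutive images
print(kf.theta)
```

The same prediction/update structure applies whether the measurement comes from the appearance-based similarity method or from the visual flow of the ceiling spotlights; only the measurement noise assigned to each source would differ.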