Sistema de localización mediante odometría visual para vehículos inteligentes

Author:
  1. PARRA ALONSO, IGNACIO
Supervised by:
  1. Miguel Angel Sotelo Vázquez (Director)

Defending university: Universidad de Alcalá

Date of defense: 23 July 2010

Committee:
  1. Luis M. Bergasa Pascual (Chair)
  2. David Fernández Llorca (Secretary)
  3. José María Armingol Moreno (Member)
  4. Jose Eugenio Naranjo Hernández (Member)
  5. Domenico Giorgio Sorrenti (Member)
Department:
  1. Automática

Type: Doctoral thesis

Teseo: 298467

Abstract

Interest in autonomous vehicle guidance has increased in recent years, thanks to events such as the Defense Advanced Research Projects Agency (DARPA) Grand Challenge and, more recently, the Urban Challenge. Accurate global localization has become a key component of vehicle navigation, following the trend in robotics, which has seen significant progress over the last decade. Accordingly, our final goal is autonomous outdoor vehicle navigation in large-scale environments and the improvement of current navigation systems that rely only on standard GPS. The work proposed in this thesis is particularly effective in areas where the GPS signal is unreliable or not fully available (tunnels, urban areas with tall buildings, mountainous forested environments, etc.). Our research objective is to develop a robust localization system, based on a low-cost stereo camera, that assists a standard GPS sensor in vehicle navigation tasks. This thesis presents a new approach for estimating the vehicle's motion trajectory and global position in complex urban environments by means of visual odometry and a digital map.

First, the camera models and the stereo reconstruction are described, and a model of the uncertainty of the 3D reconstruction is presented. A new strategy for robust feature extraction and data post-processing is developed and tested on the road. Scale-Invariant Feature Transform (SIFT) features are used to cope with the complexity of urban environments, and comparisons with other feature extractors are performed. In the prototype system, the ego-motion of the vehicle is computed using a stereo-vision system mounted next to the rear-view mirror of the car. Feature points are matched between pairs of frames and linked into 3D trajectories, and the distance between estimations is dynamically adapted based on re-projection and estimation errors. Vehicle motion is estimated using a non-linear, photogrammetric approach based on RANSAC (RAndom SAmple Consensus). A weighted non-linear least-squares estimation and a RANSAC scheme based on the Mahalanobis distance are presented as a more suitable solution to the motion estimation problem. The performance of these methods has been checked on both synthetic and real data, showing an improvement over previous solutions.

A probabilistic map-matching algorithm is used together with a digital map to perform global localization during GPS outages. The last reliable position given by the GPS is used as the starting point for the map-matching algorithm, which performs the localization using the motion estimates from the visual odometry system. The final goal is to provide on-board driver assistance in navigation tasks, or a means of navigating a vehicle autonomously. The method has been tested in real traffic conditions without prior knowledge of the scene or the vehicle motion.
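As an illustration of the stereo back-projection and 3D-uncertainty modelling summarized in the abstract, the following minimal Python sketch triangulates a rectified stereo measurement and propagates pixel noise to a 3x3 point covariance by first-order error propagation. The symbols (focal length f, baseline b, principal point cu/cv) and the noise values are illustrative assumptions, not the calibration or the exact uncertainty model used in the thesis.

```python
import numpy as np

def triangulate_with_uncertainty(u, v, d, f, b, cu, cv,
                                 sigma_px=0.5, sigma_d=1.0):
    """Back-project a rectified stereo measurement (pixel u, v and
    disparity d) to a 3D point and propagate pixel noise to a 3x3
    covariance. Parameter values are illustrative only."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    X = (u - cu) * b / d
    Y = (v - cv) * b / d
    Z = f * b / d
    # Jacobian of (X, Y, Z) with respect to the measurement (u, v, d)
    J = np.array([[b / d, 0.0,   -(u - cu) * b / d**2],
                  [0.0,   b / d, -(v - cv) * b / d**2],
                  [0.0,   0.0,   -f * b / d**2]])
    Sigma_meas = np.diag([sigma_px**2, sigma_px**2, sigma_d**2])
    Sigma_3d = J @ Sigma_meas @ J.T   # first-order uncertainty propagation
    return np.array([X, Y, Z]), Sigma_3d
```

Note how the covariance grows quadratically with depth (through the 1/d**2 terms), which is why distant points contribute far noisier constraints to the motion estimate.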
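The motion-estimation step can likewise be sketched as a RANSAC loop over matched 3D points in which inliers are gated by the Mahalanobis distance of their transfer error under per-point covariances. This is a simplified sketch, not the thesis' implementation: it uses a closed-form SVD (Kabsch) alignment in place of the weighted non-linear least-squares refinement described in the abstract, and the function names, iteration count, and chi-square threshold are assumptions chosen for illustration.

```python
import numpy as np

def rigid_fit(P, Q):
    """Closed-form least-squares rigid transform (R, t) mapping P -> Q
    (Kabsch); stands in for the thesis' weighted non-linear solver."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cq - R @ cp

def ransac_motion(P, Q, covs, iters=200, chi2_thresh=7.81, seed=None):
    """RANSAC over matched 3D points P (frame k) and Q (frame k+1).
    covs holds an approximate 3x3 residual covariance per match;
    inliers are gated by the squared Mahalanobis distance of the
    transfer error (7.81 ~ 95% quantile of chi-square with 3 dof)."""
    rng = np.random.default_rng(seed)
    n = len(P)
    best_inl = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)      # minimal sample
        R, t = rigid_fit(P[idx], Q[idx])
        r = Q - (P @ R.T + t)                           # per-point residuals
        d2 = np.array([ri @ np.linalg.solve(Ci, ri)     # Mahalanobis gate
                       for ri, Ci in zip(r, covs)])
        inl = d2 < chi2_thresh
        if inl.sum() > best_inl.sum():
            best_inl = inl
    if best_inl.sum() < 3:
        raise RuntimeError("RANSAC found no consensus set")
    R, t = rigid_fit(P[best_inl], Q[best_inl])          # refit on inliers
    return R, t, best_inl
```

Gating on the Mahalanobis distance rather than a fixed Euclidean threshold lets each match be judged against its own reconstruction uncertainty, which is the usual motivation for a covariance-based formulation of the inlier test.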