Metric state space reinforcement learning for a vision-capable mobile robot

Viktor Zhumatiy*, Faustino Gomez, Marcus Hutter, Jürgen Schmidhuber

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Citations (Scopus)

Abstract

We address the problem of autonomously learning controllers for vision-capable mobile robots. We extend McCallum's (1995) Nearest-Sequence Memory algorithm to allow for general metrics over state-action trajectories. We demonstrate the feasibility of our approach by successfully running our algorithm on a real mobile robot. The algorithm is novel and unique in that it (a) explores the environment and learns directly on a mobile robot without using a hand-made computer model as an intermediate step, (b) does not require manual discretization of the sensor input space, (c) works in piecewise continuous perceptual spaces, and (d) copes with partial observability. Together this allows learning from much less experience compared to previous methods.
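The key idea described in the abstract is to replace Nearest-Sequence Memory's exact matching of recent history with a general metric over state-action trajectories, so that nearby suffixes of experience can be compared even in continuous perceptual spaces. The following is a minimal illustrative sketch of that idea, assuming a k-nearest-neighbour Q-estimate over stored experiences and a hypothetical `obs_metric` for comparing (e.g. vision-derived) observations; it is not the authors' implementation.

```python
import numpy as np

# Illustrative sketch of an NSM-style value estimate with a general metric
# over state-action trajectories. Names, the suffix depth, and the decay
# weighting are assumptions made for this example.

def suffix_distance(history, i, j, obs_metric, depth=4, decay=0.5):
    """Distance between the trajectory suffixes ending at steps i and j.

    Each history entry is (observation, action, reward). Earlier steps are
    down-weighted by `decay`, so recent experience dominates the match.
    """
    d = 0.0
    for k in range(depth):
        if i - k < 0 or j - k < 0:
            d += decay ** k  # penalize running out of history
            continue
        (o1, a1, _), (o2, a2, _) = history[i - k], history[j - k]
        step = obs_metric(o1, o2) + (0.0 if a1 == a2 else 1.0)
        d += decay ** k * step
    return d


def nsm_q_estimate(history, q_values, t, action, obs_metric, k_neighbors=5):
    """Estimate Q(current suffix, action) by averaging the stored Q-values of
    the k past time steps nearest under the trajectory metric where `action`
    was taken."""
    candidates = [i for i, (_, a, _) in enumerate(history[:-1]) if a == action]
    if not candidates:
        return 0.0
    candidates.sort(key=lambda i: suffix_distance(history, t, i, obs_metric))
    nearest = candidates[:k_neighbors]
    return float(np.mean([q_values[i] for i in nearest]))
```

In this sketch a continuous distance (e.g. Euclidean distance over image-derived feature vectors) would serve as `obs_metric`, which is what removes the need to discretize the sensor space, while matching over trajectory suffixes rather than single observations is what addresses partial observability.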

Original language: English
Title of host publication: Intelligent Autonomous Systems 9, IAS 2006
Pages: 272-281
Number of pages: 10
Publication status: Published - 2006
Externally published: Yes
Event: 9th International Conference on Intelligent Autonomous Systems, IAS 2006 - Tokyo, Japan
Duration: 7 Mar 2006 - 9 Mar 2006

Publication series

Name: Intelligent Autonomous Systems 9, IAS 2006

Conference

Conference: 9th International Conference on Intelligent Autonomous Systems, IAS 2006
Country/Territory: Japan
City: Tokyo
Period: 7/03/06 - 9/03/06
