Market-based reinforcement learning in partially observable worlds

Ivo Kwee, Marcus Hutter, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

11 Citations (Scopus)

Abstract

Unlike traditional reinforcement learning (RL), market-based RL is in principle applicable to worlds described by partially observable Markov decision processes (POMDPs), where an agent needs to learn short-term memories of relevant previous events in order to execute optimal actions. Most previous work, however, has focused on reactive settings (MDPs) instead of POMDPs. Here we reimplement a recent approach to market-based RL and for the first time evaluate it in a toy POMDP setting.
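The market-based mechanism the abstract alludes to can be illustrated with a small, self-contained sketch. Everything below (the two-step cue/recall task, the rule encoding, the 10% bid fraction) is a hypothetical toy construction in the spirit of Hayek-style bucket-brigade credit assignment, not the implementation evaluated in the paper: rules bid for control of the agent, the winner pays its bid to the previous winner, and external reward goes to the final winner, so wealth flows backwards along successful chains. A one-bit memory register lets rules bridge the aliased second observation.

```python
import random

random.seed(0)

# Toy POMDP: at t=0 the agent observes a cue (0 or 1); at t=1 the
# observation is always 2 (aliased), and reward is 1 only if the final
# action matches the cue. A purely reactive policy averages 0.5;
# a one-bit memory register makes 1.0 reachable.

# A "rule" is (obs, mem) -> (action, mem_write) with a wealth account.
# We simply enumerate all rules over this tiny space (an illustrative
# setup, not the paper's construction).
rules = []
for obs in (0, 1, 2):
    for mem in (0, 1):
        for action in (0, 1):
            for mem_write in (0, 1):
                rules.append({"obs": obs, "mem": mem, "action": action,
                              "mem_write": mem_write, "wealth": 1.0})

BID_FRACTION = 0.1   # each rule bids a fixed fraction of its wealth
EPSILON = 0.1        # exploration rate during learning

def run_episode(learn=True):
    cue = random.randint(0, 1)
    mem = 0
    prev_winner = None
    reward = 0.0
    for t in (0, 1):
        obs = cue if t == 0 else 2
        matching = [r for r in rules if r["obs"] == obs and r["mem"] == mem]
        if learn and random.random() < EPSILON:
            winner = random.choice(matching)
        else:
            winner = max(matching, key=lambda r: BID_FRACTION * r["wealth"])
        bid = BID_FRACTION * winner["wealth"]
        if learn:
            winner["wealth"] -= bid           # winner pays its bid ...
            if prev_winner is not None:
                prev_winner["wealth"] += bid  # ... to its predecessor
            # (the very first bid is simply forfeited in this sketch)
        mem = winner["mem_write"]
        prev_winner = winner
        if t == 1:
            reward = 1.0 if winner["action"] == cue else 0.0
            if learn:
                winner["wealth"] += reward    # external reward to last winner
    return reward

for _ in range(3000):
    run_episode()

# Evaluate greedily (no exploration, no wealth updates).
avg = sum(run_episode(learn=False) for _ in range(200)) / 200
print(f"average reward after training: {avg:.2f}")
```

With enough episodes a consistent memory code can emerge, pushing greedy reward above the 0.5 reactive ceiling, though convergence is not guaranteed in this simplified economy.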

Original language: English
Title of host publication: Artificial Neural Networks - ICANN 2001 - International Conference, Proceedings
Editors: Kurt Hornik, Georg Dorffner, Horst Bischof
Publisher: Springer Verlag
Pages: 865-873
Number of pages: 9
ISBN (Print): 3540424865, 9783540446682
DOIs
Publication status: Published - 2001
Externally published: Yes
Event: International Conference on Artificial Neural Networks, ICANN 2001 - Vienna, Austria
Duration: 21 Aug 2001 - 25 Aug 2001

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 2130
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: International Conference on Artificial Neural Networks, ICANN 2001
Country/Territory: Austria
City: Vienna
Period: 21/08/01 - 25/08/01
