Prediction with expert advice by following the perturbed leader for general weights

Marcus Hutter*, Jan Poland

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

23 Citations (Scopus)

Abstract

When applying aggregating strategies to Prediction with Expert Advice, the learning rate must be adaptively tuned. The natural choice of √(complexity / current loss) renders the analysis of Weighted Majority derivatives quite complicated. In particular, no results have been proven so far for arbitrary weights. The analysis of the alternative "Follow the Perturbed Leader" (FPL) algorithm from [KV03] (based on Hannan's algorithm) is easier. We derive loss bounds for adaptive learning rate for both finite expert classes with uniform weights and countable expert classes with arbitrary weights. For the former setup, our loss bounds match the best known results so far, while for the latter our results are new.
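The FPL idea sketched in the abstract can be illustrated with a minimal simulation. This is not the paper's exact algorithm: it assumes a finite expert class with uniform weights, an exponential perturbation, and the simplified adaptive rate η_t = √(ln n / t); the function name and loss format are invented for illustration.

```python
import math
import random

def fpl_simulation(loss_matrix, seed=0):
    """Minimal Follow-the-Perturbed-Leader sketch (finite expert class,
    uniform weights). Each round, follow the expert minimizing
    cumulative loss minus Exp(1) noise scaled by 1/eta_t, where the
    adaptive learning rate eta_t decays like sqrt(ln n / t)."""
    rng = random.Random(seed)
    n = len(loss_matrix[0])          # number of experts
    cum = [0.0] * n                  # cumulative loss of each expert
    learner_loss = 0.0
    for t, losses in enumerate(loss_matrix, start=1):
        eta = math.sqrt(math.log(n) / t)          # adaptive learning rate
        # Perturbed scores: past loss minus independent Exp(1) noise / eta.
        scores = [cum[i] - rng.expovariate(1.0) / eta for i in range(n)]
        choice = min(range(n), key=scores.__getitem__)
        learner_loss += losses[choice]
        for i in range(n):
            cum[i] += losses[i]
    return learner_loss, min(cum)

# Usage: expert 0 always suffers loss 0, expert 1 always loss 1.
# The learner's loss should stay close to that of the best expert.
learner, best = fpl_simulation([[0.0, 1.0]] * 200)
```

Because the perturbation scale 1/η_t grows with t, a constant loss gap between experts is eventually overwhelmed only with exponentially small probability, which is what keeps the regret sublinear.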

Original language: English
Pages (from-to): 279-293
Number of pages: 15
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3244
DOIs
Publication status: Published - 2004
Externally published: Yes
Event: 15th International Conference ALT 2004: Algorithmic Learning Theory - Padova, Italy
Duration: 2 Oct 2004 - 5 Oct 2004
