Bayesian treatment of incomplete discrete data applied to mutual information and feature selection

Marcus Hutter*, Marco Zaffalon

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Given the joint chances of a pair of random variables, one can compute quantities of interest such as the mutual information. The Bayesian treatment of unknown chances involves computing, from a second-order prior distribution and the data likelihood, a posterior distribution of the chances. A common treatment of incomplete data is to assume ignorability and to determine the chances by the expectation-maximization (EM) algorithm. The two approaches above are well established but typically kept separate. This paper joins them in the case of Dirichlet priors and derives efficient approximations for the mean, mode, and (co)variance of the chances and of the mutual information. Furthermore, we prove the unimodality of the posterior distribution, from which follows the important property that EM converges to the global maximum in the chosen framework. These results are applied to the problem of selecting features for incremental learning and naive Bayes classification. A fast filter based on the distribution of mutual information is shown to outperform the traditional filter based on empirical mutual information on a number of incomplete real data sets.
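The paper derives analytic approximations for the mean, mode, and (co)variance of the mutual information under a Dirichlet posterior. As a rough illustration of the underlying idea only, the following Python sketch estimates the posterior distribution of the mutual information by Monte Carlo for complete data; it does not reproduce the paper's closed-form approximations or its EM-based treatment of missing data. The function names, the prior strength `alpha`, the toy counts, and the threshold `eps` are illustrative assumptions, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mi_from_joint(p):
    # Mutual information I(X;Y) of a joint probability table p, in nats.
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log(p / (px * py))
    return np.nansum(terms)  # treats 0*log(0) terms as 0

def mi_posterior_samples(counts, n_samples=10_000, alpha=1.0):
    # Dirichlet posterior over the joint chances given complete-data
    # cell counts n_ij; alpha is an assumed symmetric prior strength.
    flat = counts.flatten().astype(float) + alpha
    draws = rng.dirichlet(flat, size=n_samples)
    return np.array([mi_from_joint(d.reshape(counts.shape)) for d in draws])

# Toy contingency table for one feature/class pair (hypothetical data).
counts = np.array([[30, 5],
                   [4, 25]])
samples = mi_posterior_samples(counts)
print("posterior mean MI:", samples.mean())
print("posterior std  MI:", samples.std())

# A distribution-based filter might keep the feature only if a lower
# credibility quantile of its MI posterior exceeds a small threshold
# eps (value assumed here for illustration).
eps = 0.01
print("keep feature:", np.quantile(samples, 0.05) > eps)
```

In the spirit of the abstract's filter, the decision uses a credibility statement about the distribution of the mutual information rather than the empirical point estimate alone.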

Original language: English
Pages (from-to): 396-406
Number of pages: 11
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 2821
Publication status: Published - 2003
Externally published: Yes
Event: 26th Annual German Conference on AI, KI 2003 - Hamburg, Germany
Duration: 15 Sept 2003 - 18 Sept 2003
