A kernel two-sample test

Arthur Gretton*, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, Alexander Smola

*Corresponding author for this work

    Research output: Contribution to journal › Review article › peer-review

    3813 Citations (Scopus)

    Abstract

    We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD). We present two distribution-free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
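
    As a minimal illustration of the statistic described in the abstract (not the authors' reference implementation): the squared MMD between samples from p and q can be estimated in quadratic time from pairwise kernel evaluations. The sketch below assumes a Gaussian RBF kernel with a hand-picked bandwidth sigma; the function names rbf_kernel and mmd2_unbiased are illustrative choices, not taken from the paper.

    # Sketch of a quadratic-time unbiased MMD^2 estimate, assuming a
    # Gaussian RBF kernel (bandwidth sigma is a free parameter here).
    import numpy as np

    def rbf_kernel(a, b, sigma=1.0):
        # Kernel matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2)).
        sq_dists = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-sq_dists / (2 * sigma**2))

    def mmd2_unbiased(x, y, sigma=1.0):
        # Unbiased estimate of MMD^2 between samples x ~ p and y ~ q.
        m, n = len(x), len(y)
        k_xx = rbf_kernel(x, x, sigma)
        k_yy = rbf_kernel(y, y, sigma)
        k_xy = rbf_kernel(x, y, sigma)
        # Drop diagonal terms so each within-sample expectation is unbiased.
        term_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
        term_yy = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
        term_xy = 2 * k_xy.mean()
        return term_xx + term_yy - term_xy

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = rng.normal(0.0, 1.0, size=(200, 2))   # sample from p
        y = rng.normal(0.5, 1.0, size=(200, 2))   # sample from q (shifted mean)
        print("MMD^2 estimate:", mmd2_unbiased(x, y, sigma=1.0))

    A large value of this estimate relative to its null distribution (e.g. obtained by permuting the pooled samples) indicates that the two samples are unlikely to come from the same distribution.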

    Original language: English
    Pages (from-to): 723-773
    Number of pages: 51
    Journal: Journal of Machine Learning Research
    Volume: 13
    Publication status: Published - Mar 2012
