A convex formulation for learning scale-free networks via submodular relaxation

Aaron J. Defazio, Tiberio S. Caetano

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    23 Citations (Scopus)

    Abstract

    A key problem in statistics and machine learning is the determination of network structure from data. We consider the case where the structure of the graph to be reconstructed is known to be scale-free. We show that in such cases it is natural to formulate structured sparsity inducing priors using submodular functions, and we use their Lovász extension to obtain a convex relaxation. For tractable classes such as Gaussian graphical models, this leads to a convex optimization problem that can be efficiently solved. We show that our method results in an improvement in the accuracy of reconstructed networks for synthetic data. We also show how our prior encourages scale-free reconstructions on a bioinformatics dataset.
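
    The abstract's core device is the Lovász extension, which turns a submodular set function into a convex function that can serve as a penalty. Below is a minimal sketch of that construction for the simple cardinality-based case F(A) = h(|A|) with h concave; the function name, the choice h(k) = log(1 + k), and the toy weights are illustrative assumptions, not the paper's exact degree-weighted prior.

```python
import numpy as np

def lovasz_extension_cardinality(w, h):
    """Lovasz extension of the cardinality-based submodular
    function F(A) = h(|A|), evaluated at non-negative w.

    For such F the extension reduces to a weighted sum of the
    sorted entries: f(w) = sum_i (h(i) - h(i-1)) * w_(i),
    where w_(i) is the i-th largest entry of w.
    """
    w_sorted = np.sort(w)[::-1]      # entries in decreasing order
    i = np.arange(1, len(w) + 1)
    gains = h(i) - h(i - 1)          # marginal gains of F; decreasing when h is concave
    return float(np.dot(gains, w_sorted))

# Illustrative use: a concave h such as h(k) = log(1 + k) discounts
# each additional edge at a node, encouraging the heavy-tailed degree
# distributions characteristic of scale-free graphs.
w = np.abs(np.array([0.9, 0.1, 0.4]))  # e.g. |edge weights| incident to one node
penalty = lovasz_extension_cardinality(w, lambda k: np.log1p(k))
print(penalty)
```

    Because the resulting penalty is convex (it is the Lovász extension of a submodular function), it can be combined with a convex likelihood such as the Gaussian graphical model log-likelihood and minimized with standard convex optimization methods, which is the tractability claim made in the abstract.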

    Original language: English
    Title of host publication: Advances in Neural Information Processing Systems 25
    Subtitle of host publication: 26th Annual Conference on Neural Information Processing Systems 2012, NIPS 2012
    Pages: 1250-1258
    Number of pages: 9
    Publication status: Published - 2012
    Event: 26th Annual Conference on Neural Information Processing Systems 2012, NIPS 2012 - Lake Tahoe, NV, United States
    Duration: 3 Dec 2012 - 6 Dec 2012

    Publication series

    Name: Advances in Neural Information Processing Systems
    Volume: 2
    ISSN (Print): 1049-5258

    Conference

    Conference: 26th Annual Conference on Neural Information Processing Systems 2012, NIPS 2012
    Country/Territory: United States
    City: Lake Tahoe, NV
    Period: 3/12/12 - 6/12/12
