Gaussian Processes with Monotonicity Constraints for Preference Learning from Pairwise Comparisons

Robert Chin, Chris Manzie, Alex S. Ira, Dragan Nesic, Iman Shames

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    Abstract

    In preference learning, it is beneficial to incorporate monotonicity constraints when learning utility functions for which there is prior knowledge of monotonicity. We present a novel method for learning utility functions with monotonicity constraints using Gaussian process regression. Data are provided in the form of pairwise comparisons between items. Using conditions for monotonicity of the predictive function, an algorithm is proposed that takes a weighted average of the prior linear and maximum a posteriori (MAP) utility estimates. This algorithm is formally shown to guarantee monotonicity of the learned utility function in the desired dimensions. The algorithm is tested in a Monte Carlo simulation case study, whose results suggest that the utility learned by the proposed algorithm predicts better than the standalone linear estimate, and enforces monotonicity, unlike the MAP estimate.
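The abstract only outlines the ingredients of the method. As a rough illustration, here is a minimal sketch (not the authors' implementation) of the two estimates the abstract mentions: a MAP utility estimate from pairwise comparisons under a GP prior with a probit likelihood, and a convex blend of that estimate with a linear prior estimate. The squared-exponential kernel, the probit noise model, and the fixed blend weight `alpha` are all assumptions for illustration; the paper's contribution is the rule for choosing the weights so that monotonicity is guaranteed, which is not reproduced here.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def normal_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel on the item features X (n x d);
    # an assumed choice, the paper does not fix a kernel here.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def map_utility(X, comparisons, noise=0.5, iters=1500, lr=0.05):
    # MAP estimate of latent utilities f under a zero-mean GP prior and
    # a probit likelihood for comparisons (i, j) read as "item i is
    # preferred over item j". Damped, kernel-preconditioned ascent: the
    # fixed point satisfies the stationarity condition f = K @ grad_loglik(f).
    n = len(X)
    K = rbf_kernel(X) + 1e-6 * np.eye(n)
    f = np.zeros(n)
    s = sqrt(2.0) * noise
    for _ in range(iters):
        g = np.zeros(n)  # gradient of the probit log-likelihood w.r.t. f
        for i, j in comparisons:
            z = (f[i] - f[j]) / s
            r = normal_pdf(z) / max(normal_cdf(z), 1e-12) / s
            g[i] += r
            g[j] -= r
        f = (1.0 - lr) * f + lr * (K @ g)
    return f

def blended_utility(f_map, f_linear, alpha):
    # Weighted average of a (monotone) linear prior estimate and the MAP
    # estimate. In the paper the weight is chosen to certify monotonicity;
    # here alpha is simply a user-supplied constant for illustration.
    return alpha * f_linear + (1.0 - alpha) * f_map
```

For example, on one-dimensional items with comparisons consistent with an increasing true utility, the blended estimate with a strictly increasing linear term remains increasing, while the MAP term alone carries the information from the comparisons.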
    Original language: English
    Title of host publication: Proceedings of the IEEE Conference on Decision and Control
    Place of Publication: Piscataway, United States
    Publisher: IEEE
    Pages: 1150-1155
    ISBN (Print): 9781538613955
    Publication status: Published - 2019
    Event: 57th IEEE Conference on Decision and Control, CDC 2018 - Miami, USA
    Duration: 1 Jan 2018 → …

    Conference

    Conference: 57th IEEE Conference on Decision and Control, CDC 2018
    Period: 1/01/18 → …
    Other: December 17-19, 2018

