User-driven lossy compression for images and video

Nathan Brewer*, Lei Wang, Nianjun Liu, Li Cheng

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    2 Citations (Scopus)

    Abstract

    In any given scene, a human observer is typically more interested in some objects than others, and will pay more attention to those objects. This paper aims to capture this attention-focusing behavior by selectively merging a fine-scale oversegmentation of a frame, so that interesting regions are partitioned into smaller segments than uninteresting ones. The result is a new type of image partitioning that reflects the amount of attention paid to each region of the image. This is achieved using a novel, interactive method for learning merging rules for images and videos, based on a weighted distance metric defined between adjacent oversegments. As an example application of this technique, we present a new lossy image and video stream compression method that attempts to minimize the loss in areas of interest.
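
    The snippet below is a minimal illustrative sketch of the general idea described in the abstract (attention-weighted merging of an oversegmentation), not the authors' implementation: the region features, adjacency list, per-feature weights `w`, and `interest` scores are all invented for the example, and the paper instead learns the weighted distance metric interactively.

```python
# Sketch: greedily merge adjacent oversegments using a weighted feature
# distance, with a merge threshold that shrinks in regions of high interest
# so that interesting areas keep a finer partition. All names are illustrative.
import numpy as np

def weighted_distance(f_a, f_b, w):
    """Weighted L2 distance between two region feature vectors."""
    diff = f_a - f_b
    return np.sqrt(np.sum(w * diff * diff))

def merge_regions(features, adjacency, interest, w, base_threshold=0.5):
    """Greedily merge adjacent oversegments (union-find over region labels)."""
    n = len(features)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in adjacency:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        d = weighted_distance(features[a], features[b], w)
        # Scale the threshold down where the user's interest is high.
        thr = base_threshold * (1.0 - 0.8 * max(interest[a], interest[b]))
        if d < thr:
            parent[rb] = ra

    return [find(i) for i in range(n)]

# Toy usage: four regions with two levels of interest.
features = np.array([[0.1, 0.2], [0.12, 0.22], [0.5, 0.5], [0.52, 0.51]])
adjacency = [(0, 1), (1, 2), (2, 3)]
interest = [0.0, 0.0, 0.9, 0.9]
w = np.array([1.0, 1.0])  # per-feature weights (learned interactively in the paper)
print(merge_regions(features, adjacency, interest, w))  # -> [0, 0, 2, 2]
```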

    Original language: English
    Title of host publication: 2009 24th International Conference Image and Vision Computing New Zealand, IVCNZ 2009 - Conference Proceedings
    Pages: 346-351
    Number of pages: 6
    Publication status: Published - 2009
    Event: 2009 24th International Conference Image and Vision Computing New Zealand, IVCNZ 2009 - Wellington, New Zealand
    Duration: 23 Nov 2009 - 25 Nov 2009

    Publication series

    Name: 2009 24th International Conference Image and Vision Computing New Zealand, IVCNZ 2009 - Conference Proceedings

    Conference

    Conference: 2009 24th International Conference Image and Vision Computing New Zealand, IVCNZ 2009
    Country/Territory: New Zealand
    City: Wellington
    Period: 23/11/09 - 25/11/09
