A composite framework for affective sensing

Gordon McIntyre*, Roland Goecke

*Corresponding author for this work

    Research output: Contribution to journal › Conference article › peer-review

    Abstract

    A system capable of interpreting affect from a speaking face must recognise and fuse signals from multiple cues. Building such a system requires the integration of software components to perform tasks such as image registration, video segmentation, speech recognition and classification. Such software components tend to be idiosyncratic, purpose-built, and driven by scripts and textual configuration files. Integrating components to achieve the necessary degree of flexibility to perform full multimodal affective recognition is challenging. We discuss the key requirements and describe a system to perform multimodal affect sensing which integrates such software components and meets these requirements.
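
    The record does not include source code; purely as an illustration of the kind of composite, pluggable design the abstract alludes to, the sketch below (in Python, with all class and field names hypothetical, not taken from the paper) shows cue-specific components hidden behind a common interface and fused by a simple pipeline.

from abc import ABC, abstractmethod
from typing import Any, Dict, List


class SensingComponent(ABC):
    """A pluggable analysis component (e.g. face tracker, speech recogniser)."""

    @abstractmethod
    def process(self, frame: Dict[str, Any]) -> Dict[str, Any]:
        """Return cue-level features extracted from one input frame or segment."""


class AudioPitchComponent(SensingComponent):
    def process(self, frame):
        # Placeholder: a real component would invoke an external audio analysis tool here.
        return {"pitch_hz": frame.get("audio_pitch", 0.0)}


class FacialExpressionComponent(SensingComponent):
    def process(self, frame):
        # Placeholder: a real component would run image registration / face tracking here.
        return {"smile_score": frame.get("smile", 0.0)}


class AffectPipeline:
    """Composite that runs all registered components and fuses their outputs."""

    def __init__(self, components: List[SensingComponent]):
        self.components = components

    def sense(self, frame: Dict[str, Any]) -> Dict[str, Any]:
        # Naive feature-level fusion: merge every component's output into one dict.
        fused: Dict[str, Any] = {}
        for component in self.components:
            fused.update(component.process(frame))
        return fused


if __name__ == "__main__":
    pipeline = AffectPipeline([AudioPitchComponent(), FacialExpressionComponent()])
    print(pipeline.sense({"audio_pitch": 210.0, "smile": 0.7}))

    The point of the pattern is that new cues (gesture, lexical content, physiological signals) can be added without changing the pipeline itself; the paper's actual framework and fusion strategy should be consulted in the full text.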

    Original language: English
    Pages (from-to): 2767-2770
    Number of pages: 4
    Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
    Publication status: Published - 2008
    Event: INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association - Brisbane, QLD, Australia
    Duration: 22 Sept 2008 - 26 Sept 2008
