From joyous to clinically depressed: Mood detection using multimodal analysis of a person's appearance and speech

Sharifa Alghowinem*

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    15 Citations (Scopus)

    Abstract

    Clinical depression is a critical public health problem, with high costs to a person's functioning, mortality, and social relationships, as well as to the economy overall. Currently, there is no dedicated objective method for diagnosing depression; rather, diagnosis depends on patient self-report and clinician observation, risking a range of subjective biases. Our aim is to develop an objective affective sensing system that supports clinicians in diagnosing and monitoring clinical depression. In this PhD work, my approach is based on multimodal analysis, i.e., combining vocal affect, head pose, and eye movement extracted from real-world, clinically validated audio-video data. In addition, this work will investigate the cross-cultural generalisation of depression characteristics across different languages and countries.
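    A common way to realise the multimodal analysis the abstract describes is feature-level ("early") fusion: per-subject feature vectors from each modality are concatenated and fed to a single classifier. The sketch below illustrates this idea only; the feature dimensions, the SVM classifier, and the synthetic data are illustrative assumptions, not the paper's actual pipeline.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_subjects = 60

    # Hypothetical per-subject statistical functionals for each modality.
    vocal = rng.normal(size=(n_subjects, 40))      # e.g. prosodic/voice-quality statistics
    head_pose = rng.normal(size=(n_subjects, 12))  # e.g. pitch/yaw/roll movement statistics
    eye_move = rng.normal(size=(n_subjects, 8))    # e.g. blink rate, gaze-shift statistics
    labels = rng.integers(0, 2, size=n_subjects)   # 0 = control, 1 = depressed (synthetic)

    # Early fusion: concatenate modality features into one vector per subject.
    fused = np.hstack([vocal, head_pose, eye_move])

    # Standardise features, then classify with an RBF-kernel SVM.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, fused, labels, cv=5)
    print(f"5-fold accuracy on synthetic data: {scores.mean():.2f}")

    Decision-level ("late") fusion is the usual alternative: train one classifier per modality and combine their outputs, e.g. by majority vote or weighted scores.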

    Original language: English
    Title of host publication: Proceedings - 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, ACII 2013
    Pages: 648-653
    Number of pages: 6
    DOIs
    Publication status: Published - 2013
    Event: 2013 5th Humaine Association Conference on Affective Computing and Intelligent Interaction, ACII 2013 - Geneva, Switzerland
    Duration: 2 Sept 2013 - 5 Sept 2013

    Publication series

    Name: Proceedings - 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, ACII 2013

    Conference

    Conference: 2013 5th Humaine Association Conference on Affective Computing and Intelligent Interaction, ACII 2013
    Country/Territory: Switzerland
    City: Geneva
    Period: 2/09/13 - 5/09/13
