TY - GEN
T1 - From joyous to clinically depressed: Mood detection using spontaneous speech
T2 - 25th International Florida Artificial Intelligence Research Society Conference, FLAIRS-25
AU - Alghowinem, Sharifa
AU - Goecke, Roland
AU - Wagner, Michael
AU - Epps, Julien
AU - Breakspear, Michael
AU - Parker, Gordon
PY - 2012
Y1 - 2012
AB - Depression and other mood disorders are common and disabling disorders. We present work towards an objective diagnostic aid supporting clinicians using affective sensing technology, with a focus on acoustic and statistical features from spontaneous speech. This work investigates differences in expressing positive and negative emotions in depressed and healthy control subjects, as well as whether initial gender classification increases the recognition rate. To this end, spontaneous speech from interviews of 30 subjects each from the depressed and control groups was analysed, with a focus on questions eliciting positive and negative emotions. Using HMMs with GMMs for classification with 30-fold cross-validation, we found that MFCC, energy and intensity features gave the highest recognition rates when female and male subjects were analysed together. When the dataset was first split by gender, log energy and shimmer features, respectively, were found to give the highest recognition rates in females, while it was loudness for males. Overall, correct recognition rates from acoustic features for depressed female subjects were higher than for male subjects. Using temporal features, we found that the response time and average syllable duration were longer in depressed subjects, while the interaction involvement and articulation rate were higher in control subjects.
UR - http://www.scopus.com/inward/record.url?scp=84865055679&partnerID=8YFLogxK
M3 - Conference contribution
SN - 9781577355588
T3 - Proceedings of the 25th International Florida Artificial Intelligence Research Society Conference, FLAIRS-25
SP - 141
EP - 146
BT - Proceedings of the 25th International Florida Artificial Intelligence Research Society Conference, FLAIRS-25
Y2 - 23 May 2012 through 25 May 2012
ER -