Abstract
Modality is a key facet in medical image retrieval, as a user is likely interested in only one of, e.g., radiology images, flowcharts, or pathology photos. While assessing image modality is trivial for humans, reliable automatic methods are required to deal with large unannotated image collections, such as figures taken from the millions of scientific publications. We present a multi-disciplinary approach to the classification problem that combines image features, meta-data, and textual and referential information. Our system achieved an accuracy of 96.86% in cross-validation on the ImageCLEF 2011 training dataset, which has 18 imbalanced modality classes, and an accuracy of 90.2% on the ImageCLEF 2010 dataset, which has 8 well-balanced modality classes. We evaluate the importance of the individual feature sets in detail and provide an error analysis pointing at weaknesses of our method and obstacles in the classification task. For the benefit of the image classification community, we make the results of our feature extraction methods publicly available at http://categorizer.tmit.bme.hu/illes/imageclef2011modality.

Keywords: image classification, image feature extraction, image modality, text mining.
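The fusion idea described in the abstract — combining feature vectors from several extractors (image, meta-data, text) into one representation before classification — can be illustrated with a minimal sketch. The function names, the nearest-centroid rule, and the toy data below are illustrative assumptions, not the authors' actual implementation or feature sets.

```python
# Hypothetical sketch of multi-feature fusion for modality classification:
# concatenate per-extractor feature vectors into one vector, then assign
# the class whose centroid is closest. All names and numbers are toy
# examples, not the system described in the paper.

def fuse(*feature_vectors):
    """Concatenate feature vectors produced by different extractors."""
    fused = []
    for v in feature_vectors:
        fused.extend(v)
    return fused

def nearest_centroid(sample, centroids):
    """Return the label whose centroid is closest in squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Toy class centroids built from fused (image-feature + text-feature) vectors.
centroids = {
    "radiology": fuse([0.9, 0.1], [0.8]),
    "flowchart": fuse([0.1, 0.9], [0.2]),
}

# A fused query vector close to the "radiology" centroid.
print(nearest_centroid(fuse([0.85, 0.2], [0.7]), centroids))  # -> radiology
```

In practice, a fused representation like this would feed a trained classifier evaluated by cross-validation, as in the accuracy figures the abstract reports.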
| Original language | English |
|---|---|
| Journal | CEUR Workshop Proceedings |
| Volume | 1177 |
| Publication status | Published - 2011 |
| Event | 2011 Cross Language Evaluation Forum Conference, CLEF 2011 - Amsterdam, Netherlands |
| Duration | 19 Sept 2011 → 22 Sept 2011 |