Classifying very high-dimensional data with random forests built from small subspaces

Baoxun Xu*, Joshua Zhexue Huang, Graham Williams, Qiang Wang, Yunming Ye

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

71 Citations (Scopus)

Abstract

The selection of feature subspaces for growing decision trees is a key step in building random forest models. However, the common approach of randomly sampling a few features for each subspace is not suitable for high-dimensional data consisting of thousands of features, because such data often contain many features that are uninformative for classification, and random sampling frequently fails to include informative features in the selected subspaces. Consequently, the classification performance of the random forest model is significantly affected. In this paper, the authors propose an improved random forest method that uses a novel feature weighting method for subspace selection and thereby enhances classification performance on high-dimensional data. A series of experiments on 9 real-life high-dimensional datasets demonstrated that, using a subspace size of ⌊log2(M) + 1⌋ features, where M is the total number of features in the dataset, our random forest model significantly outperforms existing random forest models.
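The abstract does not specify how the feature weights are computed, only that weighted rather than uniform sampling is used to pick each tree's subspace of ⌊log2(M) + 1⌋ features. The sketch below illustrates the general idea under that assumption, using a simple absolute feature-class correlation as a stand-in informativeness score; the paper's actual weighting measure may differ.

```python
import numpy as np

def subspace_size(M):
    # Subspace size from the abstract: floor(log2(M) + 1) features.
    return int(np.log2(M)) + 1

def feature_weights(X, y):
    # Stand-in informativeness score (assumption, not the paper's measure):
    # absolute correlation between each feature and the class label,
    # normalized to a probability distribution over features.
    M = X.shape[1]
    w = np.empty(M)
    for j in range(M):
        c = np.corrcoef(X[:, j], y)[0, 1]
        w[j] = abs(c) if np.isfinite(c) else 0.0
    total = w.sum()
    return w / total if total > 0 else np.full(M, 1.0 / M)

def weighted_subspace(weights, size, rng):
    # Sample a subspace without replacement, with probability proportional
    # to the weights, so informative features are more likely to appear in
    # each tree's subspace than under plain random sampling.
    return rng.choice(len(weights), size=size, replace=False, p=weights)

# Usage: 1000-dimensional data where only features 3 and 7 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))
y = (X[:, 3] + X[:, 7] > 0).astype(int)
w = feature_weights(X, y)
S = weighted_subspace(w, subspace_size(X.shape[1]), rng)  # 10 features
```

With uniform sampling, each 10-of-1000 draw includes an informative feature with probability under 2%; weighting the draw toward informative features is what recovers tree quality in this regime.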

Original language: English
Pages (from-to): 44-63
Number of pages: 20
Journal: International Journal of Data Warehousing and Mining
Volume: 8
Issue number: 2
DOIs
Publication status: Published - Apr 2012
Externally published: Yes

