Extensions to quantile regression forests for very high-dimensional data

Nguyen Thanh Tung, Joshua Zhexue Huang, Imran Khan, Mark Junjie Li, Graham Williams

Research output: Contribution to journal › Conference article › peer-review


Abstract

This paper describes new extensions to Quantile Regression Forests (QRF), a state-of-the-art regression random forests model, for applications to high-dimensional data with thousands of features. We propose a new subspace sampling method that randomly samples a subset of features from two separate feature sets, one containing important features and the other containing less important features. The two feature sets partition the input features according to their importance measures. The partition is generated by first using feature permutation to produce raw importance scores and then applying a p-value assessment to separate the important features from the less important ones. The new subspace sampling method enables trees grown from bagged sample data to achieve smaller regression errors. For point regression, we choose the prediction value of Y from the range between the two quantiles Q0.05 and Q0.95 instead of using the conditional mean as in regression random forests. Our experimental results show that random forests with these extensions outperform regression random forests and quantile regression forests in reducing root mean squared residuals.
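
To make the first extension concrete, below is a minimal sketch in Python of the importance-based feature partition and the two-group subspace sampling, assuming scikit-learn's permutation importance. The function names, the one-sided t-test used to obtain p-values, and the 80/20 split between the two groups are illustrative assumptions, not the paper's exact procedure; the quantile-range point prediction is not shown.

```python
# Illustrative sketch only: partition_features, sample_subspace, and the
# frac_important parameter are hypothetical names, not from the paper.
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

def partition_features(X, y, n_repeats=30, alpha=0.05, random_state=0):
    """Split feature indices into 'important' and 'less important' groups
    using permutation importance followed by a p-value assessment."""
    rf = RandomForestRegressor(n_estimators=100,
                               random_state=random_state).fit(X, y)
    imp = permutation_importance(rf, X, y, n_repeats=n_repeats,
                                 random_state=random_state)
    # One-sided t-test per feature: is the mean importance drop > 0?
    # (Assumed test; the paper does not specify this exact statistic.)
    t = imp.importances_mean / (imp.importances_std / np.sqrt(n_repeats) + 1e-12)
    pvals = stats.t.sf(t, df=n_repeats - 1)
    important = np.where(pvals < alpha)[0]
    less_important = np.where(pvals >= alpha)[0]
    return important, less_important

def sample_subspace(important, less_important, mtry, frac_important=0.8,
                    rng=np.random.default_rng(0)):
    """Draw one candidate feature subspace of size mtry, taking most
    candidates from the important group and the rest from the other group."""
    k_imp = min(int(round(mtry * frac_important)), len(important))
    k_rest = min(mtry - k_imp, len(less_important))
    return np.concatenate([rng.choice(important, k_imp, replace=False),
                           rng.choice(less_important, k_rest, replace=False)])
```

In this sketch, each tree node would call sample_subspace instead of the usual uniform draw of mtry features, biasing the candidate set toward features that passed the p-value test while still giving less important features a chance to be selected.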

Original language: English
Pages (from-to): 247-258
Number of pages: 12
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 8444 LNAI
Issue number: PART 2
DOIs
Publication status: Published - 2014
Externally published: Yes
Event: 18th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining, PAKDD 2014 - Tainan, Taiwan
Duration: 13 May 2014 – 16 May 2014
