TY - JOUR
T1 - Extensions to quantile regression forests for very high-dimensional data
AU - Tung, Nguyen Thanh
AU - Huang, Joshua Zhexue
AU - Khan, Imran
AU - Li, Mark Junjie
AU - Williams, Graham
PY - 2014
N2 - This paper describes new extensions to Quantile Regression Forests (QRF), a state-of-the-art regression random forests model, for applications to high-dimensional data with thousands of features. We propose a new subspace sampling method that randomly samples a subset of features from two separate feature sets, one containing important features and the other containing less important features. The two feature sets partition the input features according to their importance measures. The partition is generated by first using feature permutation to produce raw feature importance scores and then applying a p-value assessment to separate the important features from the less important ones. The new subspace sampling method enables trees to be generated from bagged samples with smaller regression errors. For point regression, we choose the predicted value of Y from the range between the two quantiles Q0.05 and Q0.95, instead of using the conditional mean as in regression random forests. Our experimental results show that random forests with these extensions outperform both regression random forests and quantile regression forests in reducing root mean squared residuals.
KW - Data Mining
KW - High-dimensional Data
KW - Quantile Regression Forests
KW - Regression Random Forests
UR - http://www.scopus.com/inward/record.url?scp=84901260865&partnerID=8YFLogxK
DO - 10.1007/978-3-319-06605-9_21
M3 - Conference article
AN - SCOPUS:84901260865
SN - 0302-9743
VL - 8444 LNAI
SP - 247
EP - 258
JF - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
IS - PART 2
T2 - 18th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining, PAKDD 2014
Y2 - 13 May 2014 through 16 May 2014
ER -
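
Below is a minimal, illustrative Python sketch of the two extensions summarized in the abstract: subspace sampling from separate important/less-important feature groups obtained via permutation importance plus a p-value assessment, and point prediction taken from within the [Q0.05, Q0.95] quantile range rather than from the conditional mean alone. The use of scikit-learn's RandomForestRegressor and permutation_importance, the normal-tail p-value test, and the 80/20 mixing ratio are assumptions made for illustration only, not the authors' implementation.

import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic data in the high-dimensional spirit of the paper:
# only the first 3 of 50 features carry signal.
X = rng.normal(size=(200, 50))
y = 3 * X[:, 0] + X[:, 1] - 2 * X[:, 2] + rng.normal(scale=0.5, size=200)

# Step 1: raw feature importance scores via feature permutation.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)

# Step 2: p-value assessment to separate important from less important
# features (here a one-sided normal-tail test on the repeated permutation
# scores; the paper's exact test may differ).
z = imp.importances_mean / (imp.importances_std + 1e-12)
p_values = norm.sf(z)
important = np.where(p_values < 0.05)[0]
less_important = np.where(p_values >= 0.05)[0]

# Step 3: sample a per-tree feature subspace from both groups, biased
# toward the important group (the 80/20 ratio is an illustrative choice).
def sample_subspace(n_features, frac_important=0.8):
    k_imp = min(int(round(n_features * frac_important)), len(important))
    k_less = min(n_features - k_imp, len(less_important))
    return np.concatenate([
        rng.choice(important, size=k_imp, replace=False),
        rng.choice(less_important, size=k_less, replace=False),
    ])

print("subspace for one tree:", sample_subspace(10))

# Step 4: point prediction confined to [Q0.05, Q0.95] of the per-tree
# predictions (a stand-in for QRF's conditional quantiles) instead of
# using the unconstrained conditional mean.
tree_preds = np.stack([t.predict(X[:5]) for t in rf.estimators_])
q05, q95 = np.quantile(tree_preds, [0.05, 0.95], axis=0)
print("point predictions:", np.clip(tree_preds.mean(axis=0), q05, q95))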