TY - JOUR
T1 - Representing and reducing error in natural-resource classification using model combination
AU - Huang, Z.
AU - Lees, B.
PY - 2005/5
Y1 - 2005/5
N2 - Artificial Intelligence (AI) models such as Artificial Neural Networks (ANNs), Decision Trees and Dempster-Shafer's Theory of Evidence have long been claimed to be more error-tolerant than conventional statistical models, but the way error propagates through these models is unclear. Two sources of error were identified in this study: sampling error and attribute error. The results show that these errors propagate differently through the three AI models: the Decision Tree was the most affected by error, the Artificial Neural Network was less affected, and the Theory of Evidence model was not affected at all. The study indicates that AI models handle errors in very different ways. In this case, the machine-learning models, including ANNs and Decision Trees, are more sensitive to input errors, whereas Dempster-Shafer's Theory of Evidence demonstrated greater potential for dealing with input errors when multisource data sets are involved. The study suggests a strategy of combining AI models to improve classification accuracy. Several combination approaches were applied, based on a 'majority voting system', a simple average, Dempster-Shafer's Theory of Evidence, and fuzzy-set theory. All of these approaches increased classification accuracy to some extent, and two of them also demonstrated good performance in handling input errors. Second-stage combination approaches that use statistical evaluation of the initial combinations can further improve classification results. One such second-stage approach increased the overall classification accuracy on forest types from the Decision Tree model's original 46.5% to 54%, and the resulting map is also visually much closer to the ground data. By combining models, it becomes possible to calculate quantitative confidence measures for the classification results, which then serve as a better representation of error. The final classification products include not only the predicted hard class for each cell, but also estimates of the probability and confidence of the prediction.
KW - Model combination
KW - Natural-resource classification
KW - Reducing error
KW - Representing error
UR - http://www.scopus.com/inward/record.url?scp=20444484128&partnerID=8YFLogxK
DO - 10.1080/13658810500032446
M3 - Article
SN - 1365-8816
VL - 19
SP - 603
EP - 621
JO - International Journal of Geographical Information Science
JF - International Journal of Geographical Information Science
IS - 5
ER -